K_t, making exhaustive prompting infeasible. This motivated a retrieval-based approach: selecting a small set of candidate buckets from the current codebook against which an LLM can assess a new idea. Our next prototype achieved this by adding a semantic similarity-based candidate bucket set retrieval mechanism. We used a fully LLM-managed pipeline for retrieval, decision-making, and codebook updating. This proved brittle as pipeline management errors compounded, especially by smaller LLMs (e.g., phi4). We therefore offloaded retrieval and codebook management to external components to stabilize the system, letting the LLM focus solely on the subjective bucketing decisions.

6.2 MUSERAG System Architecture

Algorithm 1 summarizes MUSERAG's workflow. The LLM processes one idea at a time and assigns it to a semantically equivalent bucket or creates a new one. A dynamic codebook is initialized for each ideation task and updated as new ideas arrive. For each idea x ∈ X, a dictionary of candidate buckets D_x is constructed via K-NN-based semantic search over the current codebook (Khandelwal et al., 2020). D_x has a maximum size of K_c. When the number of existing buckets is smaller than K_c, all of those buckets are taken in D_x. Each candidate in the dictionary D_x = {(k_j, d_j)}_{j=1}^{K_c} maps bucket IDs to representative descriptions.

Algorithm 1 MUSERAG: LLM-Based Incremental Bucketing for a Single Creativity Task
Require: Idea set X = {x_1, x_2, ..., x_|X|}, LLM, candidate dictionary size K_c
Ensure: Partition B = {B_1, ..., B_K}, assignment map k(x)
 1: Initialize empty codebook C ← ∅
 2: Initialize bucket index K ← 0
 3: for all ideas x ∈ X do
 4:   if |C| ≤ K_c then
 5:     D_x ← C
 6:   else
 7:     Use K-NN search to find the top-K_c closest entries in C to x
 8:     D_x ← {(k_j, d_j)}_{j=1}^{K_c}
 9:   end if
10:   Query LLM: "Is x a rephrasing of any d_j ∈ D_x? Return k_j or −1." (In CoT prompting, also return a justification sentence)
11:   if LLM returns k* ≠ −1 then
12:     Assign k(x) ← k*
13:     B_{k*} ← B_{k*} ∪ {x}
14:   else
15:     K ← K + 1
16:     Create new bucket B_K ← {x}
17:     Update codebook C ← C ∪ {(K, x)}
18:     Assign k(x) ← K
19:   end if
20: end for
21: return B = {B_1, ..., B_K}, k(x) ∀ x ∈ X

The LLM is prompted to determine whether x is a rephrasing of any d_j (baseline prompting). If so, it returns the corresponding key k_j; otherwise, it returns −1, signaling the creation of a new bucket with x as its description. We also explore Chain-of-Thought (CoT) prompting, where the LLM additionally provides a one-sentence reasoning (Wei et al., 2022). The codebook and bucket assignment are updated accordingly.

We experiment with a factorial combination of LLM model variants, sentence embeddings, and prompting strategies (see Appendix Section A.1). We fix K_c = 10 to keep prompt length manageable while leaving sufficient margin for retrieval noise, and test robustness against other K_c choices.

7 Results and Discussion

7.1 Computational Baselines

We use unsupervised clustering to establish a computational baseline for MUSERAG. We require algorithms that (i) allow clusters of vastly different sizes, including fat-tail distributed ones, and (ii) preserve singleton and rare buckets without
https://arxiv.org/abs/2505.16232v1
dropping them as noise or outliers (§5.4).

These constraints discourage us from using algorithms like DBScan (singleton and rare buckets are likely to be marked as noise) (Ester et al., 1996) and HDBScan (minimum cluster size is 2) (Campello et al., 2013), and our experiments also corroborate their poor performance. K-means clustering is poor at handling imbalanced cluster sizes or shapes, and requires the number of clusters to be close to the number of datapoints to allow many singleton or rare buckets (MacQueen, 1967). Agglomerative hierarchical clustering is a reasonable choice for our constraints (Ward Jr, 1963). We report results with K-means and agglomerative algorithms.

For each algorithm, we automatically search for the optimal number of buckets K_t over the full region of K_t = 1 to |X_t|. We evaluate structural and semantic criteria using (i) Silhouette Score, which assesses cluster quality based on geometric compactness and separation, with higher values indicating better-defined clusters (Rousseeuw, 1987); and (ii) Semantic Score, which is the geometric mean of coherence (intra-cluster similarity) and exclusivity (inter-cluster distinctiveness), encouraging clusters that are both internally consistent and mutually distinct (Mimno et al., 2011). We experiment with the same sentence embeddings as MUSERAG.

7.2 Distributional Properties of Computationally-labeled Idea Buckets

We find K-means and agglomerative algorithms to produce an exorbitantly high K_t: 588 and 838 buckets by agglomerative (based on Silhouette and Semantic scores), and 831 and 797 buckets by K-means. For reference, |X_t| ≈ 1141 per task in socialmuse24. These bucket counts are significantly higher than H1 and H2's (P < 0.001). In contrast, the MUSERAG models produce K_t in the range of 255 to 465, overlapping those of the humans.
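The silhouette-tuned agglomerative baseline described in §7.1 can be sketched with scikit-learn as follows. This is a minimal illustration, not the paper's exact implementation: the function name and the synthetic data are our own, and the search starts at K_t = 2 rather than 1 because the silhouette score is undefined for a single cluster.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

def best_agglomerative_k(embeddings, k_min=2, k_max=None):
    """Search K over [k_min, k_max] and keep the silhouette-optimal clustering."""
    n = len(embeddings)
    k_max = k_max or n - 1  # silhouette needs 2 <= K <= n - 1
    best = (-1.0, None, None)  # (silhouette, K, labels)
    for k in range(k_min, k_max + 1):
        labels = AgglomerativeClustering(n_clusters=k).fit_predict(embeddings)
        s = silhouette_score(embeddings, labels)
        if s > best[0]:
            best = (s, k, labels)
    return best

# Toy demo: two tight, well-separated blobs of "idea embeddings".
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.05, (20, 8)),
               rng.normal(1.0, 0.05, (20, 8))])
score, k, labels = best_agglomerative_k(X, k_max=6)
```

On such clearly separated data the search recovers K = 2; on real idea embeddings with fat-tailed bucket sizes, the same criterion drives K far higher, as reported above.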
The scaling exponents of K-means and agglomerative are systematically higher than the human baseline (P < 0.001), but the MUSERAG models align with the humans (Table A4).

Table 2: Agreement metrics comparing computational models to H1's ground truths. Values are means ± half-width of the 95% C.I. (N = 109). See Table A5 for results based on H2's annotations.

Model              | AMI       | NMI       | Pearson's r | Spearman's ρ | ICC(3,1)
llama3.3 CoT       | 0.59±0.05 | 0.88±0.02 | 0.88±0.04   | 0.87±0.05    | 0.88±0.04
qwen3 CoT          | 0.56±0.05 | 0.87±0.02 | 0.79±0.07   | 0.78±0.07    | 0.77±0.08
phi4 CoT           | 0.54±0.01 | 0.83±0.01 | 0.78±0.08   | 0.76±0.08    | 0.72±0.09
llama3.3 Baseline  | 0.59±0.03 | 0.86±0.02 | 0.83±0.06   | 0.79±0.07    | 0.81±0.06
phi4 Baseline      | 0.53±0.02 | 0.83±0.01 | 0.80±0.07   | 0.78±0.08    | 0.75±0.08
K-means Silhouette | 0.32±0.09 | 0.86±0.02 | 0.65±0.11   | 0.67±0.11    | 0.62±0.12
K-means Semantic   | 0.35±0.06 | 0.87±0.02 | 0.71±0.10   | 0.70±0.10    | 0.67±0.10
Aggl. Silhouette   | 0.39±0.02 | 0.85±0.02 | 0.73±0.09   | 0.68±0.10    | 0.69±0.10
Aggl. Semantic     | 0.31±0.05 | 0.86±0.02 | 0.65±0.11   | 0.65±0.11    | 0.61±0.12

7.3 Construct Validity of Idea-level Bucketing

Table 2 and Figure A2 show the AMI and NMI agreements between H1 and machine bucketing. The results are robust to taking H2 as the reference (see Table A5). Interestingly, all methods score highly in the less conservative NMI metric and match the H1-H2 agreement, showing reasonable preservation of semantic grouping. However, when we correct for random chance and penalize mismatch in structure and granularity using the AMI
metric, the MUSERAG models sustain human-like performance while the K-means and agglomerative algorithms suffer dramatically and systematically. Specifically, against a human-human AMI of 0.66 [0.64, 0.68], the llama3.3 LLM with CoT prompting achieves the best AMI among the MUSERAG models at 0.59 [0.55, 0.64], while the silhouette-tuned agglomerative algorithm manages the best AMI among the baseline models at a poor 0.39 [0.36, 0.41]. This is unsurprising, since a drop in AMI implies deviation from the structure and resolution of the human bucketing, which is corroborated by the systematically larger number of buckets K-means and agglomerative algorithms produce. In contrast, the MUSERAG models preserve more of the mutual structures, semantic coherence, and resolution, capturing up to 89% of the fine-grained patterns humans see.

Overall, MUSERAG shows strong idea-bucketing alignment with the humans, surpassing the performances of clustering-based baselines.

7.4 Construct Validity of Participant-level Originality Scoring

Table 2 and Figure A3 show the participant-level {O^thresh_i} score agreements based on H1 and machine bucketing. The results are robust to taking H2 as the reference (Table A5).

[Figure 2: Bland-Altman visualization for bias detection. Axes: difference between the two judges (y) vs. mean of originality scores based on H1 and MuseRAG's bucketings (x).]

MUSERAG with llama3.3 and CoT prompting once again shows the best correlation (r = 0.89 [0.83, 0.92], P < 0.001). The baselines perform significantly worse, with the silhouette-tuned agglomerative algorithm achieving the best baseline correlation (r = 0.73 [0.63, 0.81], P < 0.001).

MUSERAG with llama3.3 and CoT prompting also shows the best ICC(3,1) = 0.88 [0.83, 0.92], P < 0.001. The clustering baselines reach a maximum of ICC(3,1) = 0.69 [0.57, 0.77], P < 0.001, with the silhouette-tuned agglomerative model, remaining significantly lower than llama3.3's performance (P < 0.001).
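The gap between NMI and the chance-corrected AMI reported above can be illustrated with scikit-learn, which we assume here as a standard implementation of both metrics. The toy labelings below are illustrative only, not our data: the "machine" partition merges two of the human singleton buckets, which NMI barely penalizes but AMI does.

```python
from sklearn.metrics import (adjusted_mutual_info_score,
                             normalized_mutual_info_score)

# Hypothetical bucketings of six ideas by a human and a machine annotator.
human = [0, 0, 1, 1, 2, 3]
machine = [0, 0, 1, 1, 2, 2]  # merges the two singleton buckets into one

nmi = normalized_mutual_info_score(human, machine)
ami = adjusted_mutual_info_score(human, machine)  # corrects for chance
ami_identity = adjusted_mutual_info_score(human, human)  # perfect agreement
```

Because the expected mutual information under random labeling is subtracted out, AMI is strictly below NMI for such partially matching partitions, which is exactly why the granularity-inflating clustering baselines lose far more AMI than NMI.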
Based on the above evidence, we pick llama3.3 with CoT prompting as the default configuration for MUSERAG and use it for the remaining analysis.

We next visualize a Bland-Altman plot to identify systematic biases between H1- and MUSERAG-derived originality scores (Figure 2). 94.5% of the points fall within the limits of agreement (LoA) of ±1.96 SDs, and so does the mean difference (bias). This shows that MUSERAG-derived scores stay strongly in line with human scores across the originality spectrum. Although the proportional bias regression slope is slightly positive (0.09), the effect is not statistically significant (P > 0.05), suggesting no systematic trend where the machine over- or under-scores ideas as originality increases. This supports the conclusion that MUSERAG provides stable, human-comparable originality assessments.

Taken together, MUSERAG shows strong originality scoring validity against human ground truth.

7.5 Convergent and External Validity

MUSERAG's {O^thresh_i} scores correlate strongly with participant-level Creativity Quotient (CQ) scores in the socialmuse24 dataset (r = 0.4 [0.23, 0.55], P < 0.001). CQ is a flexibility measure that captures the diversity of semantic categories. However, CQ is unnormalized and confounded by idea fluency. Unsurprisingly, unnormalized {R^thresh_i} shows a stronger correlation with CQ (r = 0.48 [0.32, 0.62], P < 0.001).

MUSERAG's {O^thresh_i} scores correlate strongly with person-level average creative
quality ratings (beaty18: r = 0.77 [0.71, 0.83], P < 0.001; silvia17: r = 0.54 [0.41, 0.65], P < 0.001; beaty12: r = 0.42 [0.27, 0.55], P < 0.001). The mohr16 dataset contains rating-based originality scores, which correlate strongly with our frequency-based originality scores (r = 0.42 [0.35, 0.49], P < 0.001). This dataset also contains manually annotated flexibility scores, which do not account for fluency. Unsurprisingly, these flexibility scores correlate strongly with unnormalized {R^thresh_i} (r = 0.76 [0.73, 0.80], P < 0.001). Overall, MUSERAG demonstrates excellent convergent validity.

In terms of external validity, we find MUSERAG's {O^thresh_i} scores to correlate significantly with the person-level average creative metaphor generation ratings (beaty18: r = 0.17 [0.02, 0.32], P < 0.05; beaty12: r = 0.25 [0.08, 0.40], P < 0.01). {O^thresh_i} correlates well with the openness personality trait (beaty18: ρ = 0.16 [0.01, 0.30], P < 0.05; beaty12: r = 0.30 [0.14, 0.45], P < 0.001; silvia17: ρ = 0.14 [−0.02, 0.30], marginal P = 0.09). We find systematic correlations with creative self-identity (r = 0.34 [0.19, 0.48], P < 0.001) and self-efficacy (r = 0.29 [0.14, 0.44], P < 0.001). We did not find any correlation with fluid intelligence or other personality traits. Our results largely corroborate previous insights (Beaty and Johnson, 2021), establishing strong external validity.

7.6 Robustness

The results depend on LLM, sentence embedding, and prompting strategy choices. We obtain the best results for the llama3.3:70b LLM (Meta AI, 2024), e5-large-v2 sentence embedding (Wang et al., 2022), and Chain-of-Thought prompting (Wei et al., 2022) combination (§A.1). We further probe this configuration's robustness across K_c ∈ {5, 15}, and find results statistically similar to the default K_c = 10. To assess ordering effects, we run the configuration with randomly ordered X_t across 3 seeds. We find the results stable within the bounds reported in Table 2.
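The Bland-Altman quantities used in the Figure 2 analysis (mean bias, ±1.96 SD limits of agreement, and the proportional-bias regression slope) can be sketched in a few lines of numpy. This is a minimal illustration under our own naming and synthetic data, not the paper's analysis code.

```python
import numpy as np

def bland_altman(scores_a, scores_b):
    """Bias, 1.96-SD limits of agreement, proportional-bias slope, and the
    fraction of points falling inside the limits."""
    a = np.asarray(scores_a, dtype=float)
    b = np.asarray(scores_b, dtype=float)
    diff = a - b                      # judge disagreement per participant
    mean = (a + b) / 2.0              # pooled score per participant
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    slope = np.polyfit(mean, diff, 1)[0]  # proportional bias estimate
    inside = float(np.mean((diff >= loa[0]) & (diff <= loa[1])))
    return bias, loa, slope, inside

# Synthetic example: a second judge equal to the first plus small noise.
rng = np.random.default_rng(1)
a = rng.uniform(4.0, 10.0, 200)
b = a + rng.normal(0.0, 0.3, 200)
bias, loa, slope, inside = bland_altman(a, b)
```

For unbiased scorers, roughly 95% of differences land inside the limits and the slope hovers near zero, which is the pattern we observe for MUSERAG against H1.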
The main results with the threshold metric are largely reproduced by the other three. But we find that rarity shows proportional bias in the Bland-Altman plot (slope = 0.2, P < 0.01), while shapley and uniqueness show no correlation with openness in the silvia17 dataset, losing some external validity. The threshold metric thus emerges as the most robust choice.

8 Conclusion

This work presents a scalable, zero-shot LLM-based system for scoring the originality of creative ideas, addressing long-standing challenges in the automation of divergent thinking assessment. By leveraging the LLM-as-a-judge paradigm with externally orchestrated retrieval, our method provides psychometrically aligned, intent-sensitive judgments without the need for task-specific fine-tuning or training data.

The proposed system robustly handled all five distinct datasets used in our evaluation, demonstrating consistent performance across varying task structures and idea distributions. Unlike opaque embedding-based approaches, our use of chain-of-thought (CoT) prompting enables interpretable outputs, offering justifications for each originality score and increasing transparency in the system's subjective decision-making.

Our approach is well-suited to support large-scale, high-throughput creativity studies, such as the growing body of human-AI creativity assessment literature (Doshi and Hauser, 2024; Tanveer et al., 2018). By combining reliability, interpretability, and scale, this system expands the arsenal of automated, complementary assessment tools for creativity researchers.

Limitations

Although not the focus of this
paper, future applications of frequency-based originality scoring systems should carefully consider demographic fairness and accessibility. Differences in language use across cultural or educational backgrounds may affect bucketing judgments, particularly in non-English settings, potentially introducing unintended bias if not monitored.

Our validation is confined to the AUT and similar text-based divergent thinking tasks. It remains to be seen how well our approach generalizes to other domains of creative production (e.g., design, visual arts).

The effectiveness of our approach depends on carefully curated prompts. Although we use externally orchestrated RAG to control the context injected into the LLM, the system may still be sensitive to prompt length or phrasing (Liu et al., 2023). Subtle changes in prompt format can positively or negatively affect judgment outcomes, which remains to be explored further.

The system has room for improvement in terms of efficiency. We loop one idea at a time through the LLM. Future research can explore multi-idea batching to enhance efficiency. However, we observe simple and focused LLM assignments to stabilize the system, especially for small-sized LLMs. Demanding more out of the LLM in each prompt can make the system brittle.

The bucketing reasoning performance can be improved by adding multi-step thinking approaches. However, that might also increase computation cost.

We kept the candidate dictionary size, K_c, small at {5, 10, 15}. Whether increasing the size further improves performance remains to be seen. However, any performance improvement mechanism must be justified against the associated token usage and computation cost increases.

Our most successful threshold metric applies a heuristic-based scoring function borrowed from prior literature. The robustness of the tiering choices of the scoring function remains to be examined.
Ethical Considerations

We reanalyzed datasets from prior works and did not collect any new human data for this research. Given the nature of the research in creative assessment, we do not readily foresee potential harm.

References

Marah Abdin, Jyoti Aneja, Harkirat Behl, Sébastien Bubeck, Ronen Eldan, Suriya Gunasekar, Michael Harrison, Russell J. Hewett, Mojan Javaheripi, Piero Kauffmann, James R. Lee, Yin Tat Lee, Yuanzhi Li, Weishung Liu, Caio C. T. Mendes, Anh Nguyen, Eric Price, Gustavo de Rosa, Olli Saarikivi, and 8 others. 2024. Phi-4 technical report. Preprint, arXiv:2412.08905.

Selcuk Acar and Mark A Runco. 2014. Assessing associative distance among ideas elicited by tests of divergent thinking. Creativity Research Journal, 26(2):229–238.

Raiyan Abdul Baten, Richard N Aslin, Gourab Ghoshal, and Ehsan Hoque. 2021. Cues to gender and racial identity reduce creativity in diverse social networks. Scientific Reports, 11(1):10261.

Raiyan Abdul Baten, Richard N Aslin, Gourab Ghoshal, and Ehsan Hoque. 2022. Novel idea generation in social networks is optimized by exposure to a "Goldilocks" level of idea-variability. PNAS Nexus, 1(5):pgac255.

Raiyan Abdul Baten, Daryl Bagley, Ashely Tenesaca, Famous Clark, James P Bagrow, Gourab Ghoshal, and Ehsan Hoque. 2020. Creativity in temporal social networks: How divergent thinking is impacted by one's choice of peers. Journal of the Royal Society Interface, 17(171):20200667.

Raiyan Abdul Baten, Ali Sarosh Bangash, Krish Veera, Gourab Ghoshal, and Ehsan Hoque. 2024. AI can
enhance creativity in social networks. arXiv preprint arXiv:2410.15264.

Roger E Beaty and Dan R Johnson. 2021. Automating creativity assessment with SemDis: An open platform for computing semantic distance. Behavior Research Methods, 53(2):757–780.

Roger E Beaty, Yoed N Kenett, Alexander P Christensen, Monica D Rosenberg, Mathias Benedek, Qunlin Chen, Andreas Fink, Jiang Qiu, Thomas R Kwapil, Michael J Kane, and 1 others. 2018. Robust prediction of individual creative ability from brain functional connectivity. Proceedings of the National Academy of Sciences, 115(5):1087–1092.

Roger E Beaty and Paul J Silvia. 2012. Why do ideas get more creative across time? An executive interpretation of the serial order effect in divergent thinking tasks. Psychology of Aesthetics, Creativity, and the Arts, 6(4):309.

Roger E Beaty and Paul J Silvia. 2013. Metaphorically speaking: Cognitive abilities and the production of figurative language. Memory & Cognition, 41:255–267.

Kenes Beketayev and Mark A Runco. 2016. Scoring divergent thinking tests by computer with a semantics-based algorithm. Europe's Journal of Psychology, 12(2):210.

J. Martin Bland and Douglas G. Altman. 1986. Statistical methods for assessing agreement between two methods of clinical measurement. The Lancet, 327(8476):307–310.

Terry Bossomaier, Mike Harré, Anthony Knittel, and Allan Snyder. 2009. A semantic network approach to the Creativity Quotient (CQ). Creativity Research Journal, 21(1):64–71.

Thomas J Bouchard Jr and Melana Hare. 1970. Size, performance, and potential in brainstorming groups. Journal of Applied Psychology, 54(1p1):51.

Philip Buczak, He Huang, Boris Forthmann, and Philipp Doebler. 2023. The machines take over: A comparison of various supervised learning approaches for automated scoring of divergent thinking tasks. The Journal of Creative Behavior, 57(1):17–36.

Ricardo J. G. B. Campello, Davoud Moulavi, and Jörg Sander. 2013.
Density-based clustering based on hierarchical density estimates. In Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD), pages 160–172. Springer.

Raymond Bernard Cattell and Alberta KS Cattell. 1960. Measuring intelligence with the culture fair tests. Institute for Personality and Ability Testing.

Robert Chew, John Bollenbacher, Michael Wenger, Jessica Speer, and Annice Kim. 2023. LLM-assisted content analysis: Using large language models to support deductive coding. arXiv preprint arXiv:2306.14924.

Aaron Clauset, Cosma Rohilla Shalizi, and Mark EJ Newman. 2009. Power-law distributions in empirical data. SIAM Review, 51(4):661–703.

Shih-Chieh Dai, Aiping Xiong, and Lun-Wei Ku. 2023. LLM-in-the-loop: Leveraging large language model for thematic analysis. arXiv preprint arXiv:2310.15100.

Colin G DeYoung, Joseph L Flanders, and Jordan B Peterson. 2008. Cognitive abilities involved in insight problem solving: An individual differences model. Creativity Research Journal, 20(3):278–290.

Anil R Doshi and Oliver P Hauser. 2024. Generative AI enhances individual creativity but reduces the collective diversity of novel content. Science Advances, 10(28):eadn5290.

Denis Dumas and Kevin N Dunbar. 2014. Understanding fluency and originality: A latent variable perspective. Thinking Skills and Creativity, 14:56–67.

Denis Dumas, Peter Organisciak, and Michael Doherty. 2021. Measuring divergent thinking originality with human raters and text-mining models: A psychometric comparison of methods. Psychology of Aesthetics, Creativity, and the
Arts, 15(4):645.

Ruth B Ekstrom, John W French, Harry H Harman, and D Dermen. 1976. Manual for kit of factor-referenced tests. Princeton, NJ: Educational Testing Service, 586:1989–1995.

Martin Ester, Hans-Peter Kriegel, Jörg Sander, and Xiaowei Xu. 1996. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD-96), pages 226–231.

Boris Forthmann, Heinz Holling, Pınar Çelik, Martin Storme, and Todd Lubart. 2017. Typing speed as a confounding variable and the measurement of quality in divergent thinking. Creativity Research Journal, 29(3):257–269.

Boris Forthmann, Sue Hyeon Paek, Denis Dumas, Baptiste Barbot, and Heinz Holling. 2020. Scrutinizing the basis of originality in divergent thinking tests: On the measurement precision of response propensity estimates. British Journal of Educational Psychology, 90(3):683–699.

Mingqi Gao, Jie Ruan, Renliang Sun, Xunjian Yin, Shiping Yang, and Xiaojun Wan. 2023. Human-like summarization evaluation with ChatGPT. arXiv preprint arXiv:2304.02554.

Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, and 542 others. 2024. The llama 3 herd of models. Preprint, arXiv:2407.21783.

Joy Paul Guilford. 1967. The Nature of Human Intelligence. McGraw-Hill.

JP Guilford, PR Christensen, PR Merrifield, and RC Wilson. 1978. Alternate Uses: Manual of Instructions and Interpretation. Orange, CA: Sheridan Psychological Services.

Alicia Hofelich Mohr, Andrew Sell, and Thomas Lindsay. 2016. Thinking inside the box: Visual design of the response box affects creative divergent thinking in an online survey. Social Science Computer Review, 34(3):347–359.
Gautier Izacard and Edouard Grave. 2020. Distilling knowledge from reader to retriever for question answering. arXiv preprint arXiv:2012.04584.

Maciej Karwowski. 2014. Creative mindsets: Measurement, correlates, consequences. Psychology of Aesthetics, Creativity, and the Arts, 8(1):62.

Sean Kelty, Raiyan Abdul Baten, Adiba Mahbub Proma, Ehsan Hoque, Johan Bollen, and Gourab Ghoshal. 2023. Don't follow the leader: Independent thinkers create scientific innovation. arXiv preprint arXiv:2301.02396.

Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. Generalization through memorization: Nearest neighbor language models. In Proceedings of the International Conference on Learning Representations (ICLR).

Kibeom Lee and Michael C Ashton. 2004. Psychometric properties of the HEXACO personality inventory. Multivariate Behavioral Research, 39(2):329–358.

Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33:9459–9474.

Dawei Li, Bohan Jiang, Liangjie Huang, Alimohammad Beigi, Chengshuai Zhao, Zhen Tan, Amrita Bhattacharjee, Yuxuan Jiang, Canyu Chen, Tianhao Wu, and 1 others. 2024a. From generation to judgment: Opportunities and challenges of LLM-as-a-judge. arXiv preprint arXiv:2411.16594.

Dawei Li, Shu Yang, Zhen Tan, Jae Young Baik, Sukwon Yun, Joseph Lee, Aaron Chacko, Bojian Hou, Duy Duong-Tran, Ying
Ding, and 1 others. 2024b. DALK: Dynamic co-augmentation of LLMs and KG to answer Alzheimer's disease questions with scientific literature. arXiv preprint arXiv:2405.04819.

Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Shuming Shi, and Zhaopeng Tu. 2023. Encouraging divergent thinking in large language models through multi-agent debate. arXiv preprint arXiv:2305.19118.

Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2023. Lost in the middle: How language models use long contexts. Preprint, arXiv:2307.03172.

J. MacQueen. 1967. Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, volume 1, pages 281–297. University of California Press.

Robert R McCrae, Paul T Costa, Jr, and Thomas A Martin. 2005. The NEO-PI-3: A more readable revised NEO personality inventory. Journal of Personality Assessment, 84(3):261–270.

Meta AI. 2024. LLaMA 3.3-70B-Instruct. https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct. Accessed: 2025-05-18.

David Mimno, Hanna Wallach, Edmund Talley, Miriam Leenders, and Andrew McCallum. 2011. Optimizing semantic coherence in topic models. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 262–272.

Mark Newman. 2018. Networks. Oxford University Press.

Jay A Olson, Johnny Nahas, Denis Chmoulevitch, Simon J Cropper, and Margaret E Webb. 2021. Naming unrelated words predicts creativity. Proceedings of the National Academy of Sciences, 118(25):e2022340118.

Peter Organisciak, Selcuk Acar, Denis Dumas, and Kelly Berthiaume. 2023. Beyond semantic distance: Automated scoring of divergent thinking greatly improves with large language models. Thinking Skills and Creativity, 49:101356.

Peter Organisciak and Denis Dumas. 2020. Open creativity scoring. https://openscoring.du.edu.
[Computer software].

Scott E. Page. 2018. The Model Thinker: What You Need to Know to Make Data Work for You. Basic Books, New York.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318.

Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.

Roni Reiter-Palmon, Boris Forthmann, and Baptiste Barbot. 2019. Scoring divergent thinking tests: A review and systematic framework. Psychology of Aesthetics, Creativity, and the Arts, 13(2):144.

Andrew Rosenberg and Julia Hirschberg. 2007. V-measure: A conditional entropy-based external cluster evaluation measure. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 410–420, Prague, Czech Republic. Association for Computational Linguistics.

Peter J Rousseeuw. 1987. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics, 20:53–65.

Mark A Runco and Garrett J Jaeger. 2012. The standard definition of creativity. Creativity Research Journal, 24(1):92–96.

Mark A Runco and Wayne Mraz. 1992. Scoring divergent thinking tests using total ideational output and a creativity index. Educational and Psychological Measurement
, 52(1):213–221.

Yanxin Shen, Lun Wang, Chuanqi Shi, Shaoshuai Du, Yiyi Tao, Yixian Shen, and Hang Zhang. 2024. Comparative analysis of listwise reranking with large language models in limited-resource language contexts. arXiv preprint arXiv:2412.20061.

Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. 2023. Reflexion: Language agents with verbal reinforcement learning. Advances in Neural Information Processing Systems, 36:8634–8652.

Patrick E Shrout and Joseph L Fleiss. 1979. Intraclass correlations: Uses in assessing rater reliability. Psychological Bulletin, 86(2):420.

Paul J Silvia, Emily C Nusbaum, and Roger E Beaty. 2017. Old or new? Evaluating the old/new scoring method for divergent thinking tasks. The Journal of Creative Behavior, 51(3):216–224.

Paul J Silvia, Beate P Winterstein, John T Willse, Christopher M Barona, Joshua T Cram, Karl I Hess, Jenna L Martinez, and Crystal A Richard. 2008. Assessing creativity with divergent thinking tasks: Exploring the reliability and validity of new subjective scoring methods. Psychology of Aesthetics, Creativity, and the Arts, 2(2):68.

Allan Snyder, John Mitchell, Terry Bossomaier, and Gerry Pallier. 2004. The Creativity Quotient: An objective scoring of ideational fluency. Creativity Research Journal, 16(4):415–419.

C. Stevenson, I. Smal, M. Baas, M. Dahrendorf, R. Grasman, C. Tanis, E. Scheurs, D. Sleiffer, and H. van der Maas. 2020. Automated AUT scoring using a big data variant of the consensual assessment technique. Final Technical Report, Modeling Creativity Project, Universiteit van Amsterdam, Amsterdam. Faculty of Social and Behavioural Sciences (FMG), Psychology Research Institute (PsyRes).

M Iftekhar Tanveer, Samiha Samrose, Raiyan Abdul Baten, and M Ehsan Hoque. 2018. Awe the audience: How the narrative trajectories affect audience perception in public speaking.
In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pages 1–12.

L. L. Thurstone. 1938. Primary mental abilities. The Mathematical Gazette, 22(251):411–412.

Nguyen Xuan Vinh, Julien Epps, and James Bailey. 2010. Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance. Journal of Machine Learning Research, 11(95):2837–2854.

Vijay Viswanathan, Kiril Gashteovski, Carolin Lawrence, Tongshuang Wu, and Graham Neubig. 2024. Large language models enable few-shot clustering. Transactions of the Association for Computational Linguistics, 12:321–333.

Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, and Furu Wei. 2022. Text embeddings by weakly-supervised contrastive pre-training. arXiv preprint arXiv:2212.03533.

Joe H. Ward Jr. 1963. Hierarchical grouping to optimize an objective function. Journal of the American Statistical Association, 58(301):236–244.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems (NeurIPS).

Shitao Xiao, Zheng Liu, Peitian Zhang, and Niklas Muennighoff. 2023a. C-Pack: Packaged resources to advance general Chinese embedding. Preprint, arXiv:2309.07597.

Ziang Xiao, Xingdi Yuan, Q Vera Liao, Rania Abdelghani, and Pierre-Yves Oudeyer. 2023b. Supporting qualitative analysis with large language models: Combining codebook with GPT-3 for deductive coding. In Companion Proceedings of the
28th International Conference on Intelligent User Interfaces, pages 75–78.

An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, Chujie Zheng, Dayiheng Liu, Fan Zhou, Fei Huang, Feng Hu, Hao Ge, Haoran Wei, Huan Lin, Jialong Tang, and 41 others. 2025. Qwen3 technical report. Preprint, arXiv:2505.09388.

Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. 2023. Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural Information Processing Systems, 36:11809–11822.

Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. arXiv preprint arXiv:1904.09675.

Lirui Zhao, Yue Yang, Kaipeng Zhang, Wenqi Shao, Yuxin Zhang, Yu Qiao, Ping Luo, and Rongrong Ji. 2024. DiffAgent: Fast and accurate text-to-image API selection with large language model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6390–6399.

A Supplementary Materials

A.1 System Component Choices

We experiment with the following system component alternatives:

(i) Large language models: M = {llama3.3:70b-Instruct (Meta AI, 2024; Grattafiori et al., 2024), qwen3:32b (Yang et al., 2025), phi4:14b (Abdin et al., 2024)}. We pick these mid-sized, open-source models for their cost and computation efficiencies.

(ii) Sentence embedding models: E = {all-mpnet-base-v2 (Reimers and Gurevych, 2019), bge-large-en-v1.5 (Xiao et al., 2023a), e5-large-v2 (Wang et al., 2022)}. These models are freely available on Huggingface and have been widely used in recent technological developments.

(iii) Prompting strategies: P = {baseline_prompting, cot_prompting (Wei et al., 2022)}.

In our experiments, we found the combination of llama3.3:70b-Instruct, e5-large-v2, and cot_prompting to give the best performance.
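The candidate retrieval over these sentence embeddings reduces to a cosine-similarity K-NN over the codebook. The sketch below is a generic numpy illustration with our own function and variable names; it stands in for, and does not reproduce, the actual embedding pipeline (e.g., e5-large-v2 with its model-specific preprocessing).

```python
import numpy as np

def top_k_candidates(idea_vec, codebook_vecs, codebook_ids, k=10):
    """Return up to k (bucket_id, cosine_similarity) pairs, most similar first."""
    C = np.asarray(codebook_vecs, dtype=float)   # one row per codebook bucket
    q = np.asarray(idea_vec, dtype=float)        # embedding of the new idea
    sims = C @ q / (np.linalg.norm(C, axis=1) * np.linalg.norm(q) + 1e-12)
    order = np.argsort(-sims)[:k]                # highest similarity first
    return [(codebook_ids[i], float(sims[i])) for i in order]

# Toy 2-D codebook: bucket 1 points along x, bucket 2 along y, bucket 3 diagonal.
cands = top_k_candidates([1.0, 0.1],
                         [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]],
                         [1, 2, 3], k=2)
```

When the codebook holds fewer than k buckets, all of them are returned, mirroring the |C| ≤ K_c branch of Algorithm 1.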
A.2 Experimentation Setup and GPU Usage

We conducted all experiments using (i) an Intel Core i7-based computer with 64 GB RAM and an RTX 3070 Ti graphics card, and (ii) three MacBook Pro laptops. All our code and data are available on GitHub. The R&D and final result generation took roughly 100 GPU-days.

A.3 LLM Prompts

System Prompt (Baseline Condition)

You are an idea bucket annotator for ideas generated for the object {object_name} in Guilford's Alternative Uses Test. You will be given an input_idea to annotate against up to {comparison_k} comparison_ideas, given to you in a dictionary format with key-value pairs of comparison_idea_ID: comparison_idea_description. The keys are integers, and the values are strings. Your goal is to determine if the input_idea is a very obviously rephrased version of one of those comparison_idea_description, or if it is slightly different.

if input_idea is a very obviously rephrased version of a certain comparison_idea_description:
    your_annotation_ID = comparison_idea_ID key of that comparison_idea_description value
elif input_idea is a slightly different one:
    your_annotation_ID = -1

Your response must be a text string containing exactly: <your_annotation_ID>. For example: if your_annotation_ID is 6 since the input idea is a very obviously rephrased version of comparison_idea_ID 6, your response string should be "6". Another example: if your_annotation_ID is -1 because the input idea is not an obvious rephrasing of any comparison_idea_ID, your response string should be "-1". Absolutely do not provide any extra text.

System Prompt (CoT Condition)

You are an idea bucket annotator for ideas generated for the object {object_name} in Guilford's Alternative Uses Test. You will be given an input_idea to annotate against up to {comparison_k} comparison_ideas, given to you in a dictionary format with key-value pairs of comparison_idea_ID: comparison_idea_description. The keys are integers, and the values are strings. Your goal is to determine if the input_idea is a very obviously rephrased version of one of those comparison_idea_description, or if it is slightly different.

if input_idea is a very obviously rephrased version of a certain comparison_idea_description:
    your_annotation_ID = comparison_idea_ID key of that comparison_idea_description value
elif input_idea is a slightly different one:
    your_annotation_ID = -1

You will also provide a reason string containing a single sentence explaining why you gave the input_idea that specific your_annotation_ID. Your response must be a text string containing exactly: <your_annotation_ID><SPACE><reason>. For example: if your_annotation_ID is 6 and the reason is "The input idea is a very obviously rephrased version of comparison_idea_ID 6", your response string should be "6 The input idea is a very obviously rephrased version of comparison_idea_ID 6". Another example: if your_annotation_ID is -1 and the reason is "The input idea is not an obvious rephrasing of any comparison_idea_ID", your response string should be "-1 The input idea is not an obvious rephrasing of any comparison_idea_ID". Absolutely do not provide any extra text.

User Prompt Per Idea (Both Conditions)

input_idea: {idea_text}
comparison_ideas: {repr(comparison_ideas)}

A.4 AI Usage

We used Grammarly AI to improve the grammatical accuracy of the manuscript, and ChatGPT to speed up the implementation of standard statistical analysis code.

A.5 Supplementary Tables and Figures

Table A1: Inter-human annotator agreement on idea bucketing in socialmuse24.

Metric         Mean [95% C.I.]
AMI            0.66 [0.64, 0.68]
NMI            0.85 [0.84, 0.88]
V-measure      0.85 [0.84, 0.87]
Homogeneity    0.80 [0.77, 0.82]
Completeness   0.92 [0.89, 0.95]

Table A2: Pearson and Spearman correlations between participant-level normalized O_metric scores based on H1's and H2's bucketing. N = 109 in all cases.

Scoring Metric  Correlation Type  Estimate  95% C.I.      P-value
threshold       Pearson's r       0.77      [0.69, 0.84]  P < 0.001
                Spearman's ρ      0.75      [0.65, 0.82]  P < 0.001
shapley         Pearson's r       0.79      [0.70, 0.85]  P < 0.001
                Spearman's ρ      0.74      [0.64, 0.82]  P < 0.001
rarity          Pearson's r       0.72      [0.61, 0.80]  P < 0.001
                Spearman's ρ      0.64      [0.51, 0.74]  P < 0.001
uniqueness      Pearson's r       0.73      [0.63, 0.81]  P < 0.001
                Spearman's ρ      0.66      [0.54, 0.76]  P < 0.001

Table A3: ICC reliability of the participants' normalized originality O_metric scores based on H1's and H2's bucketing.

Scoring Metric  ICC(3,k)  F     df1  df2  P-value    95% C.I.
threshold       0.85      6.79  108  108  P < 0.001  [0.78, 0.90]
shapley         0.85      6.67  108  108  P < 0.001  [0.78, 0.90]
rarity          0.83      5.73  108  108  P < 0.001  [0.75, 0.88]
uniqueness      0.80      4.97  108  108  P < 0.001  [0.71, 0.86]

Table A4: Cluster count K and power-law exponent α for various computational scoring methods.

Model                     K [95% C.I.]           α [95% C.I.]
llama3.3 CoT              465.4 [426.8, 504.0]   2.28 [2.14, 2.42]
qwen3 CoT                 462.4 [432.7, 492.1]   2.43 [2.20, 2.67]
phi4 CoT                  255.0 [207.3, 302.7]   2.39 [1.72, 3.05]
llama3.3 Baseline         367.8 [333.3, 402.3]   2.29 [1.97, 2.61]
phi4 Baseline             275.6 [229.5, 321.7]   2.51 [2.23, 2.78]
K-means Silhouette        830.6 [729.2, 932.0]   3.12 [2.82, 3.43]
K-means Semantic          797.4 [757.8, 837.0]   3.12 [2.67, 3.57]
Agglomerative Silhouette  588.0 [524.9, 651.1]   5.68 [1.26, 10.09]
Agglomerative Semantic    838.0 [815.9, 860.1]   3.80 [2.63, 4.97]

Table A5: Agreement metrics comparing computational models to H2's ground
truths. Values denote mean ± half-width of the 95% C.I. (N = 109).

Model               AMI          NMI          Pearson's r  Spearman's ρ  ICC(3,1)
llama3.3 CoT        0.57 ± 0.04  0.84 ± 0.02  0.76 ± 0.08  0.74 ± 0.09   0.74 ± 0.09
qwen3 CoT           0.54 ± 0.04  0.83 ± 0.02  0.74 ± 0.09  0.73 ± 0.09   0.74 ± 0.09
phi4 CoT            0.56 ± 0.03  0.79 ± 0.01  0.67 ± 0.10  0.68 ± 0.10   0.67 ± 0.10
llama3.3 Baseline   0.59 ± 0.03  0.83 ± 0.01  0.76 ± 0.08  0.74 ± 0.09   0.75 ± 0.08
phi4 Baseline       0.55 ± 0.04  0.80 ± 0.01  0.73 ± 0.09  0.71 ± 0.10   0.73 ± 0.09
K-means Silhouette  0.28 ± 0.07  0.80 ± 0.02  0.59 ± 0.12  0.62 ± 0.12   0.59 ± 0.12
K-means Semantic    0.30 ± 0.05  0.80 ± 0.02  0.66 ± 0.11  0.68 ± 0.10   0.66 ± 0.11
Aggl. Silhouette    0.36 ± 0.03  0.80 ± 0.02  0.65 ± 0.11  0.60 ± 0.12   0.64 ± 0.11
Aggl. Semantic      0.26 ± 0.05  0.80 ± 0.02  0.60 ± 0.12  0.64 ± 0.11   0.60 ± 0.12

Figure A1: Idea bucket size distribution based on annotator H2's bucketing.

Figure A2: AMI and NMI performance comparison against annotator H1.

Figure A3: Pearson's r and ICC performance comparison against annotator H1.
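The bucketing-agreement scores reported in Tables A1 and A5 (AMI, NMI) are standard clustering-comparison metrics (Vinh et al., 2010); scikit-learn provides them as adjusted_mutual_info_score and normalized_mutual_info_score. As a self-contained sketch, NMI with the arithmetic-mean normalization can be computed as follows (AMI additionally corrects the mutual information for chance, which is omitted here):

```python
import math
from collections import Counter

def nmi(labels_a, labels_b):
    """Normalized Mutual Information between two bucketings of the same ideas,
    normalized by the arithmetic mean of the two entropies."""
    n = len(labels_a)
    ca, cb = Counter(labels_a), Counter(labels_b)
    joint = Counter(zip(labels_a, labels_b))
    # Mutual information from the joint and marginal bucket frequencies
    mi = sum((nij / n) * math.log((n * nij) / (ca[i] * cb[j]))
             for (i, j), nij in joint.items())
    entropy = lambda c: -sum((m / n) * math.log(m / n) for m in c.values())
    ha, hb = entropy(ca), entropy(cb)
    return mi / ((ha + hb) / 2) if (ha + hb) > 0 else 1.0
```

Identical bucketings give NMI = 1.0; statistically independent bucketings give 0.0.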
arXiv:2505.16234v1 [cs.CL] 22 May 2025

LIFEBENCH: Evaluating Length Instruction Following in Large Language Models

Wei Zhang1*, Zhenhong Zhou1*, Junfeng Fang3*, Rongwu Xu4*, Kun Wang2*, Yuanhe Zhang1, Rui Wang1, Ge Zhang5, Xinfeng Li2, Li Sun6, Lingjuan Lyu7, Yang Liu2, Sen Su1†
1BUPT, 2NTU, 3NUS, 4THU, 5M-A-P, 6NCEPU, 7Sony AI

Abstract

While large language models (LLMs) can solve PhD-level reasoning problems over long-context inputs, they still struggle with a seemingly simpler task: following explicit length instructions, e.g., "write a 10,000-word novel". Moreover, models often produce outputs that are far too short, terminate prematurely, or even refuse the request. Existing benchmarks focus primarily on evaluating generation quality and often overlook whether generations meet length constraints. To this end, we introduce the Length Instruction Following Evaluation Benchmark (LIFEBENCH) to comprehensively evaluate LLMs' ability to follow length instructions across diverse tasks and a wide range of specified lengths. LIFEBENCH consists of 10,800 instances across 4 task categories in both English and Chinese, covering length constraints ranging from 16 to 8,192 words. We evaluate 26 widely used LLMs and find that most models reasonably follow short length instructions but deteriorate sharply beyond a certain threshold. Surprisingly, almost all models fail to reach the vendor-claimed maximum output lengths in practice, as further confirmed by our evaluations extending up to 32K words. Even long-context LLMs, despite their extended input-output windows, counterintuitively fail to improve at length-instruction following. Notably, reasoning LLMs outperform even specialized long-text generation models, achieving state-of-the-art length following. Overall, LIFEBENCH uncovers fundamental limitations in current LLMs' length-instruction-following ability, offering critical insights for future progress.
Data & Code: github.com/LIFEBench/LIFEBench
Data & Dataset Card: huggingface.co/datasets/LIFEBench/LIFEBench
Homepage: ydyjya.github.io/LIFEBench

1 Introduction

Large language models (LLMs) [40, 13] demonstrate remarkable capabilities in sophisticated tasks such as long-context understanding, planning, and complex reasoning [46, 100, 40, 48]. Unexpectedly, LLMs also often fail at a seemingly trivial and explicit task [119, 58, 50]: precisely following length instructions. Concretely, LLMs tend to terminate generation prematurely, especially when long lengths are instructed [8, 85, 83]. This counterintuitive observation highlights an underexplored but important problem: off-the-shelf language models exhibit shortcomings both in following explicit length instructions and in generating long-text content [105, 111]. Understanding and quantifying this limitation is critical, as accurate length control and long-text generation underpin numerous real-world LLM applications [29, 70, 98].

* indicates equal contribution. † indicates corresponding author.

Preprint.

Figure 1: Overview of LIFEBENCH. Top Left. LIFEBENCH data types. Top Right.
Length Instruction Following Leaderboard based on LIFEBENCH. Bottom. LIFEBENCH evaluates length generalization capabilities across bilingual content (Chinese/English), incorporating three distinct control methodologies and ten granular length constraints.

To systematically evaluate LLMs' length instruction following and explore why they fail to
https://arxiv.org/abs/2505.16234v1
follow longer length instructions, we formulate our investigation around three core research questions (RQs):

•RQ1: To what extent do current LLMs accurately follow explicit length instructions, and what factors may lead to length-instruction-following failures?
•RQ2: How capable are current LLMs of producing long outputs under specific length instructions, and how reliably can they reach their claimed maximum output length?
•RQ3: How profoundly do input characteristics impact LLMs' length-instruction following?

To address these questions comprehensively, we introduce the Length Instruction Following Evaluation Benchmark (LIFEBENCH). LIFEBENCH is the first full-range length instruction evaluation benchmark, comprising 10,800 instances across four representative natural language generation (NLG) tasks (Question Answering [30], Summarization [28], Reasoning [69], and Creative Generation [108]) and covering a wide spectrum of length constraints, ranging from 16 to 8,192 bilingual (English & Chinese) words (Figure 1). Distinct from existing benchmarks that primarily assess generation quality [86, 83, 8, 85], LIFEBENCH focuses on evaluating explicit length-instruction-following capabilities, using two metrics, Length Deviation and Length Score, to quantify model performance. Compared to simple word-count matching, our metrics offer a more analytic and robust evaluation: Length Deviation captures the direction and magnitude of deviation, while Length Score ensures robust aggregation.

We conduct extensive experiments on 26 prevalent LLMs, revealing multiple intriguing insights. For RQ1, results indicate that current LLMs typically follow short length instructions but struggle with long ones. Further analysis reveals a core bottleneck: LLMs are unable to accurately recognize how many words they have generated, which might contribute to length-instruction-following failures.
We find that reasoning models partially address this by calibrating output length during intermediate steps; o3-mini [80] achieves the highest Length Score of 75.4, while most models score below 60. For RQ2, we show that current models typically cannot approach their vendor-claimed maximum output length under realistic length constraints. Our further analysis reveals that some models fall short due to inherently limited long-text generation capabilities, while others appear to underperform through lazy strategies (e.g., falsely claiming inability to follow instructions). For RQ3, we find that input characteristics critically impact length-instruction fidelity. For example, summarization tasks exhibit the largest Length Score drop of 18.8, and increasing input length (1k → 8k words) reduces the score by 13.6. Notably, models overextend outputs in Chinese contexts, revealing language-specific biases.

Our contributions are as follows: (I) We create LIFEBENCH, the first benchmark to systematically evaluate length instruction following across a full range of tasks and lengths; (II) We reveal that off-the-shelf LLMs have shortcomings in length-instruction following (Figure 1, top left), especially on the long-text generation task, demonstrating that LLMs fall short of their vendor-claimed maximum output length in practice; (III) We comprehensively analyze how input factors, such as task type, language, and input length, influence length-following fidelity. Our findings uncover fundamental limitations in current LLMs and provide actionable insights for future development.

2 Related Works

Long Context Capabilities of LLMs. Long-context capabilities refer to the ability of LLMs to effectively comprehend and
generate extended texts [52, 68]. Various benchmarks have been proposed to evaluate comprehension over long contexts, including those based on synthetic data [99, 56, 65, 43] and human-annotated real-world tasks [6, 7, 59, 92, 3]. Long-text generation, a classic task [10, 19, 104] in natural language generation (NLG), also remains a core focus of long-context research. Existing evaluations of long-text generation mainly fall into five categories: question answering [71, 97, 69, 115], summarization [23], instruction following [105, 111], creative writing [81, 85, 106], and multi-dimensional assessment [47, 86]. In addition to evaluation, numerous studies have proposed methods to improve long-text generation [8, 101, 109, 83, 85]. Our benchmark covers both long-context comprehension and long-text generation capabilities.

Length Instruction Following Capabilities of LLMs. The capability to follow length instructions is an important aspect of instruction following. Several previous works have incorporated length constraints into instruction-following evaluations [121, 114, 103, 116, 84, 17]. MT-Bench-LI and AlpacaEval-LI [113], curated by augmenting MT-Bench [119] and AlpacaEval 2 [27] with length constraints, reveal that models often fail to comply with such constraints. To mitigate this issue, both training-based methods [113, 88, 58, 11] and inference-time approaches using control frameworks or external tools [39, 112] have been proposed. However, these efforts primarily target short length constraints rather than full-range length instruction following.

3 LIFEBENCH

In this section, we introduce LIFEBENCH, a benchmark designed to evaluate the length-instruction-following capability of LLMs across the full range. The remainder of this section provides an overview of the dataset; a detailed comparison with existing benchmarks is provided in Appendix A, and a correlation analysis with leaderboards is included in Appendix L.
LIFEBENCH enjoys the following features:

•Diverse Dataset: We ensure dataset diversity through the following key features. (1) Various Tasks: LIFEBENCH includes a broad range of NLG tasks, including question answering, summarization, creative generation, and reasoning, which are well suited for length instruction following [67]. (2) Long-context Inputs: LIFEBENCH includes both short inputs (<2,000 words) and long inputs (>2,000 words), allowing evaluation across different input scales, unlike most prior benchmarks. (3) Bilingual Queries: LIFEBENCH contains Chinese and English data collected separately from distinct datasets, enabling our evaluation to investigate language-specific bias.

•Extensive Length Constraints: LIFEBENCH is the first benchmark designed to systematically evaluate LLMs' full-range length-instruction-following capability, based on the following design choices. (1) Multiple Control Methods: LIFEBENCH adopts three common length control methods: Equal To, At Most, and At Least. (2) Length Constraints: We define ten distinct length constraints spanning short (<100 words), medium (100–2000 words), and long (>2000 words) ranges, more comprehensive than prior works. Detailed descriptions of the control methods and length constraints can be found in Section 4.2.

•Analytical Evaluation: We evaluate length-instruction-following capability using two metrics, Length Deviation (LD) and Length Score (LS), offering multi-dimensional analysis. Length Deviation intuitively measures the extent of deviation between the generation length and the length constraint, while Length Score quantifies the model's length-instruction-following capability. Compared to prior works that merely assess generated length, our
evaluation metrics allow comparison across models by quantifying both the magnitude and directionality of deviations. Section 4.3 provides a detailed discussion of these metrics.

Data Collection and Statistics. Selecting NLG tasks suited to evaluating length instruction following is crucial for constructing LIFEBENCH. To this end, we introduce the concept of Length Paradigms, which categorize NLG tasks based on their inherent length characteristics. We assign common NLG tasks to four paradigms: Length-Independent Paradigm, Fixed-Length Paradigm, Adaptive-Length Paradigm, and Flexible-Length Paradigm, ensuring that our benchmark targets scenarios where length control is meaningful. A detailed mapping of NLG tasks to these paradigms is provided in Appendix B.1. Among them, we focus on tasks within the Flexible-Length Paradigm, as they are the most suitable for evaluating length instruction following in LLMs.

We categorize tasks within the Flexible-Length Paradigm into four categories and collect 360 fundamental data units from 21 datasets in English and Chinese. Figure 2 illustrates the distribution of data categories and text lengths in LIFEBENCH. We introduce the data collection and refinement processes for these four categories below; details are provided in Appendices B.2 and B.3.

Figure 2: Left. Detailed task category distribution in LIFEBENCH, spanning more than 20 diverse subtypes. Right. Input length distribution for each task category.

•Question Answering (QA) Task: The QA task requires the model to generate answers to given questions. The answer length can vary flexibly depending on the level of detail required. We collect questions from six representative QA datasets [53, 1, 54, 118, 74], which cover nine different domains.
In total, we select 120 questions by filtering for open-ended, well-formed examples that allow responses of varying length, excluding factoid, binary, and duplicate questions, with an equal split of 60 in Chinese and 60 in English.

•Summarization Task: The Summarization task requires the model to condense a long text into a concise summary. A summary may extract only the key points or include more supporting detail, depending on the requirements. We collect data from seven summarization datasets [21, 55, 37, 120, 32, 74], spanning seven types of summarization tasks. To ensure the task suits longer length constraints, we select input samples ranging from 10,000 to 15,000 words in length. Furthermore, we manually filter out content containing excessive tables, numerical data, or irregular formatting to improve information density. In total, we select 40 samples, 20 in Chinese and 20 in English.

•Reasoning Task: The Reasoning task requires the model to perform logical inference based on given facts or background information. The output can be either a final conclusion or a step-by-step reasoning process with detailed justification. Existing reasoning datasets (e.g., GSM8K [20], CommonsenseQA [96]) are not suited to flexible-length generation due to their rigid structure and limited support for open-ended responses. To address this, we follow prior reasoning categories [44] and generate new reasoning problems using GPT-4o. These problems cover a range of reasoning depths, enabling better support for evaluating length-instruction following. We curate 80 reasoning problems, half Chinese and half English.
•Creative Generation Task: The Creative Generation task requires the model to produce creative text, such as academic papers, novels, and technical reports. The complexity and richness of the content can be adjusted to accommodate different length constraints. We collect data from seven datasets [82, 75, 31, 42, 51] and categorize the samples into six generation types based on their textual characteristics. In total, we select 120 samples by filtering out instances with explicit length or structural constraints, as well as those with ambiguous instructions or duplicated content, 50% Chinese and 50% English.

Finally, to ensure consistency and suitability for length instructions, we refine or formalize the data. Specifically, we design refinement templates tailored to each subtype. Each template includes an Instruction specifying the task type and original input, and a Requirement imposing the length constraint and control method. The resulting refined dataset constitutes the final benchmark data for LIFEBENCH. Representative examples of the four tasks above are provided in Appendix B.4.

4 Experimental Setup

4.1 Models

We evaluate over 26 powerful LLMs on LIFEBENCH, including nine proprietary models, eight open-source models, and three models enhanced for long-text generation. To ensure consistency in our experimental setup, we set the maximum output size to 65,536 tokens for all models, or to the maximum supported size if smaller. Additionally, we set the temperature to 0.8 for non-reasoning models and configure reasoning models with medium reasoning effort where this parameter is supported. See Appendix D.1 for the complete model list and all configurations.

4.2 Length Constraints

One of the core principles of LIFEBENCH is the provision of extensive length constraints, achieved through three control methods and ten constraint levels.
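Given the 360 fundamental data units, the benchmark instances are the cross product of units, control methods, and constraint levels (3 × 10 per unit, yielding 10,800 instances in total). A minimal sketch of this construction, with placeholder unit identifiers:

```python
from itertools import product

CONTROL_METHODS = ["Equal To", "At Most", "At Least"]
LENGTH_CONSTRAINTS = [16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192]

def build_instances(units):
    # Cross every fundamental data unit with every control method
    # and every length constraint level.
    return [
        {"unit": u, "control": m, "constraint": c}
        for u, m, c in product(units, CONTROL_METHODS, LENGTH_CONSTRAINTS)
    ]

units = [f"unit_{i}" for i in range(360)]  # placeholder IDs for the 360 units
instances = build_instances(units)
assert len(instances) == 10_800  # 360 units x 3 methods x 10 constraints
```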
The three control methods are Equal To, At Most, and At Least. Equal To requires the output length to match the length constraint exactly. At Most ensures that the output does not exceed the constraint, while At Least guarantees that the output is no shorter than the constraint. Building on these control methods, we define ten distinct length constraints: 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, and 8192. For English data, length is measured in words; for Chinese data, in characters. The minimum constraint of 16 ensures that models generate complete responses, while the maximum constraint is set to 8192, equivalent in words to the smallest maximum output length among all evaluated models. Applying all control methods and length constraints to the 360 fundamental data units yields a total of 10,800 instances.

4.3 Evaluation Metric

LIFEBENCH employs two metrics to analytically evaluate a model's length instruction following: Length Deviation and Length Score.

Length Deviation. Length Deviation (LD) measures the proportionate difference between the model's output length and the specified length constraint. This metric provides an intuitive assessment of the model's length instruction following, capturing both the direction (over- or under-generation) and the magnitude of deviation. Formally, it is defined as:

LD = (L_output − L_constraint) / L_constraint,   (1)

where L_constraint denotes the imposed length constraint, and L_output is the word
count of the output.

Length Score. The effectiveness of Length Deviation may be degraded when aggregating multiple samples, as positive and negative deviations can offset each other. Furthermore, Length Deviation is sensitive to outliers: a few extreme values can disproportionately skew the aggregated results. To address these limitations, we introduce Length Score (LS), which maps Length Deviation to a bounded score, eliminating both the offsetting effect of positive and negative deviations and the excessive influence of outliers, thus enabling a more reliable evaluation. Specifically, for the Equal To control method, LS is defined as:

LS = 100 × e^(k1·LD) if LD < 0;  LS = 100 × e^(−k2·LD) if LD ≥ 0,   (2)

where k1 = 5 and k2 = 2. This formulation ensures slower score decay for over-generation (LD ≥ 0), since exceeding the constraint is generally more acceptable than under-generation (excess text can be deleted). The scoring curve is steeper for small |LD|, allowing finer discrimination of subtle deviations; for larger |LD|, the decay moderates to avoid excessively penalizing poor cases. The score approaches zero as LD approaches −1 or +∞, indicating complete failure to follow the length instruction. For the At Most and At Least control methods, analogous scoring functions are defined according to their respective evaluation criteria. Full details are provided in Appendix D.2.

5 Results and Analysis

In this section, we present evaluation results and analysis of LLMs' ability to follow length instructions. In Section 5.1, we present evaluation results of current LLMs on LIFEBENCH and find that they exhibit widespread difficulty following length instructions, particularly under long length constraints. Given these deficiencies under long length constraints, in Section 5.2 we further investigate whether models can reliably achieve their claimed maximum output length.
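For reference, Length Deviation (Equation 1) and the Equal To variant of Length Score (Equation 2, with k1 = 5 and k2 = 2) reduce to a few lines; a minimal sketch:

```python
import math

def length_deviation(l_output, l_constraint):
    # LD = (L_output - L_constraint) / L_constraint  (Eq. 1)
    return (l_output - l_constraint) / l_constraint

def length_score_equal_to(l_output, l_constraint, k1=5.0, k2=2.0):
    # LS for the "Equal To" control method (Eq. 2): asymmetric exponential
    # decay that penalizes under-generation (LD < 0) more steeply than
    # over-generation (LD >= 0).
    ld = length_deviation(l_output, l_constraint)
    return 100 * math.exp(k1 * ld) if ld < 0 else 100 * math.exp(-k2 * ld)
```

An exact match scores 100; overshooting a 512-word constraint by 50% (LD = 0.5) scores about 37, while undershooting it by 50% (LD = −0.5) scores about 8, reflecting the milder penalty for over-generation.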
To gain deeper insight into the underlying causes of these limitations, in Section 5.3 we conduct a detailed diagnosis through a series of extended experiments, analyzing key sources of failure and characterizing model performance under varied conditions.

Table 1: Mean LS for each model, averaged over all length constraints, under the three control methods. For the Equal To control method, we additionally report LD, computed as the mean absolute deviation across all length constraints.

Models                      Params  Reasoning  Equal To: LD↓  LS↑   At Most: LS↑  At Least: LS↑

Proprietary Models
GPT-4o mini                 -       ✗          31%            51.9  90.3          74.0
GPT-4o                      -       ✗          31%            49.1  97.0          72.6
o1-mini                     -       ✓          35%            48.3  89.7          81.5
o3-mini                     -       ✓          13%            75.4  99.5          97.0
Claude-3.7-Sonnet           -       ✗          30%            55.4  96.8          90.1
Claude-3.7-Sonnet-Thinking  -       ✓          33%            61.3  96.5          93.3
Gemini-2.0-Flash            -       ✗          36%            48.4  95.2          84.7
Gemini-2.0-Flash-Thinking   -       ✓          53%            44.0  90.2          91.2
Gemini-2.5-Pro              -       ✓          28%            60.0  96.1          95.5
Doubao-1.5-Pro              -       ✗          23%            48.7  99.9          89.3
Doubao-1.5-Thinking-Pro     -       ✓          29%            50.6  97.8          85.7

Open-Source Models
DeepSeek-V3                 671B    ✗          27%            54.3  96.7          79.7
DeepSeek-R1                 671B    ✓          36%            47.7  93.8          74.1
Llama-3.1-8B-Instruct       8B      ✗          70%            38.1  82.3          71.8
Llama-3.1-70B-Instruct      70B     ✗          61%            42.4  88.8          69.0
Qwen2.5-7B-Instruct         7B      ✗          36%            37.3  97.6          71.0
Qwen2.5-72B-Instruct        72B     ✗          28%            50.6  93.8          84.2
Qwen3-32B                   32B     ✗          19%            57.6  97.2          87.4
Qwen3-32B-Thinking          32B     ✓          23%            57.8  93.0          87.8
Qwen3-235B-A22B             235B    ✗          22%            52.1  95.6          90.3
Qwen3-235B-A22B-Thinking    235B    ✓
                                               23%            57.7  89.9          89.6
GLM-4-9B-Chat               9B      ✗          40%            35.2  95.9          68.3
Mistral-7B-Instruct-v0.2    7B      ✗          84%            26.7  85.9          63.0

Long-Text Enhanced Models
LongWriter-Llama3.1-8B      8B      ✗          102%           31.9  84.1          79.0
LongWriter-GLM4-9B-Chat     9B      ✗          52%            37.4  90.9          86.8
Suri-I-ORPO                 7B      ✗          506%           27.4  79.7          75.2

Figure 3: Left. Length Score across all length constraints under the Equal To control method. Impact of task type (Middle) and language (Right) on the Length Score, separated by model type.

5.1 Main Results

Evaluation Protocol. Following the evaluation protocol described in Section 4, we evaluate all 26 models on LIFEBENCH, excluding approximately 0.1% of instances with empty outputs per model from the final results. Comprehensive results are provided in Appendix E.

Table 1 summarizes the overall performance of various LLMs on LIFEBENCH. Most models exhibit poor length instruction following under the Equal To control method: 23 of 26 models score below 60, with o3-mini, Claude-3.7-Sonnet-Thinking, and Gemini-2.5-Pro achieving 75.4, 61.3, and 60.0, respectively. Performance improves substantially under the At Most and At Least control methods, where 19 and 6 models, respectively, surpass a score of 90, owing to the looser constraints.

Figure 3 provides a comprehensive analysis of model performance across different length constraints and input characteristics, including task type and language. Model performance varies substantially across length constraints. Under all short constraints (≤128 words), o3-mini and Gemini-2.5-Pro consistently perform strongly, with scores above 80 and 70, respectively, while 8 of 26 models score below 60. For medium constraints (256–2048 words), o3-mini remains robust (>70), but Gemini-2.5-Pro drops sharply (81 → 37). Under long constraints (≥4096 words), no model consistently exceeds a score of 40.
Notably, LongWriter-Llama3.1-8B and LongWriter-GLM4-9B demonstrate notable improvements over their respective base models, with score increases of 22 and 32 points, respectively, on the longest constraint (i.e., 8,192 words). Regarding input characteristics, Summarization tasks yield the lowest Length Score for 19 of 26 models, while Creative Generation tasks achieve the highest Length Score for 14 models, indicating substantial task-dependent variation. In terms of language, nearly all models perform worse on Chinese than on English. Notably, in Appendix G, we observe a consistent tendency for most models to over-generate when instructions are given in Chinese, highlighting a potential language-specific bias.

Takeaways. Explicit length instruction following remains a significant challenge for current LLMs, with performance strongly influenced by control method, length constraint, task type, and language.

5.2 Discrepancies Between Claimed and Actual Maximum Output Length

Evaluation Protocol. Given the observed deficiencies of most models under longer length constraints, we are motivated to examine their behavior in even more challenging settings. Specifically, we investigate whether models can achieve the maximum output length claimed by their providers under extreme length constraints. We set the control method to At Least and specify a length constraint of 32,768 words. For models whose claimed maximum output length is less than 32,768 tokens, we set the word-based constraint to approximate each model's maximum token limit, ensuring that the
instruction requests outputs up to the model's capacity. We exclude summarization tasks from consideration, as they do not provide sufficiently long inputs for meaningful evaluation.

Figure 4: (Top) Comparison of the specified length constraints (in words) with the actual word counts of the top 10% and top 25% longest outputs for each model. (Bottom) Comparison of the claimed maximum output size (in tokens) with the actual token counts of the top 10% and top 25% longest outputs for each model.
Figure 5: Length Awareness Experiment: The real output word count and self-reported word count are shown for cases where the length constraint exceeds 128, with both axes in log scale. "Real Fit" and "Reported Fit" represent the least squares regression results for real and self-reported word counts.
The final dataset comprises 320 instances, with all outputs containing repeated content manually filtered out to ensure validity. Further results on extended length constraints are provided in Appendix E. Figure 4 compares each model's claimed maximum output length with the actual maximum output length achieved. Regarding word count, among the 26 models evaluated, only the Claude and Qwen series (seven models in total) consistently meet the length constraint in their top 10% longest outputs.
In the top 25% longest outputs, however, only Qwen2.5-72B-Instruct and Qwen3-235B-A22B satisfy the constraint. In terms of tokens, among all models that failed to meet the length constraints, only Gemini-2.0-Flash and the Qwen series were limited by their relatively small maximum output lengths. All other models fell significantly short of their respective maximums, indicating that their inability to satisfy the length constraints stems from inherent generation limitations rather than explicit length restrictions.
Takeaways. There exists a substantial discrepancy between vendor-claimed and actual model performance under extremely long length constraints, highlighting the need for more reliable evaluation and reporting of maximum output capabilities.
5.3 Diagnosis of Length Instruction Following Failures
To further investigate the causes underlying poor length instruction following, we conduct a series of distinct but inter-connected analyses.
Length Awareness Deficits. Given the widespread failures in following length instructions, we seek to understand these deficiencies from the perspective of the models' internal awareness of output length. To this end, we design the length awareness experiment, in which each model is prompted to append a self-reported word count to its generated text. We then compare the self-reported counts to the actual output lengths. Figure 5 presents four representative examples: models overshoot short length limits while underestimating their output length, and undershoot long limits while overestimating it. Notably, GLM-4-9B-Chat reports word counts that match the instructions, even when its actual outputs
do not, suggesting that it assumes compliance rather than measuring its true output length. Experimental details and comprehensive results for all models are provided in Appendix F.
Sensitivity to Input Length. Section 5.1 reveals that summarization tasks yield the lowest performance. Since the inputs of such tasks are typically much longer than those of other tasks, we further assess the impact of input length on models' length instruction following.
Figure 6: Impact of long input on the Length Score. Figure 7: Three representative lazy strategies.
Specifically, we select a subset of samples from the summarization task and construct three input versions for each by truncating the original text to short (<1000 words), medium (1000–5000 words), and long (>5000 words) lengths. We then prompt the models to generate a continuation for each version. As detailed in Figure 6, among the 26 models evaluated, 21 show marked declines in Length Score with increasing input length, with LongWriter-Llama3.1-8B exhibiting the largest Length Deviation reduction (↓13.6). These models also perform worst on summarization tasks, revealing a strong association between input length sensitivity and poor summarization fidelity. Detailed experimental settings and results are provided in Appendix G.2.
Prevalence of Lazy Generation Strategies. Through analysis of the generated outputs, we observe two distinct failure modes across all models: in some cases, models are fundamentally limited in producing long-form content, while in others, they adopt lazy strategies to circumvent length constraints, such as prematurely terminating the response or outright refusing to generate content. Figure 7 illustrates three representative types of lazy strategies.
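The "Real Fit" and "Reported Fit" lines of the length awareness experiment (Figure 5) are least-squares regressions with both axes in log scale. A minimal sketch of such a fit, using our own function name and the assumption of ordinary least squares on log-transformed values:

```python
import math

def loglog_fit(constraints, word_counts):
    """Least-squares fit of log(word count) against log(length
    constraint). Returns (slope, intercept) in log space; a slope of 1
    and intercept of 0 would correspond to perfect length following.
    """
    xs = [math.log(c) for c in constraints]
    ys = [math.log(w) for w in word_counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx
```

A fitted slope below 1 indicates the undershooting of long limits described above; the same fit applied to self-reported counts exposes the gap between what a model says it wrote and what it actually wrote.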
Notably, we observe that the prevalence of such strategies increases sharply when length constraints exceed 4096 words, and on average surpasses 10% for all models when the constraint is set to 8192 words. More details and quantitative analysis are provided in Appendix H. Interestingly, as shown in Appendix M.2, our experiments on base models reveal that the refusal lazy strategy is not only attributable to safety alignment, but also arises from pre-training.
Limitations of Intermediate Reasoning. Given the mechanistic differences between reasoning and standard models, we conduct a case study on Claude-3.7-Sonnet-Thinking under three representative length constraints: 16, 1024, and 8192 words. By analyzing intermediate reasoning traces, we observe that reasoning models can calibrate their output length during the reasoning process. Specifically, the model first generates a draft response and performs self-evaluation by counting the number of words in its reasoning process. If the generated length does not satisfy the instruction, the model iteratively revises or regenerates the response to better follow the specified constraint. However, this ability to dynamically adjust output length only partially alleviates the problem for short length constraints and still fails under longer constraints. Further illustrative examples and detailed analyses are provided in Appendix I.
Takeaways. Deficits in length awareness, sensitivity to input length, and the prevalence of lazy generation strategies collectively undermine effective length instruction following in current models. Although reasoning models attempt to calibrate output length during the intermediate reasoning process, this approach remains ineffective under longer length
constraints.
6 Conclusion, Limitations and Future Insights
In this paper, we introduce LIFEBENCH, a comprehensive benchmark for evaluating the ability of LLMs to follow length instructions across diverse tasks, languages, and a broad range of length constraints. Empirical analysis shows that current LLMs are generally unreliable at following length instructions, particularly under long constraints, often falling significantly short of their vendor-claimed lengths. We further show that model performance is substantially affected by input characteristics, including task type, language, and input length. These findings reveal a critical gap in LLMs' ability to follow length instructions, highlighting the need for more targeted evaluation and development to improve instruction following in future models. We acknowledge that this work focuses on benchmarking and does not offer solutions to the underlying causes, although Appendix M provides several promising insights. Addressing these causes is an important direction for future research. We hope LIFEBENCH and our analyses will facilitate further research in this important but underexplored area.
https://www.anthropic.com/news/ claude-3-7-sonnet , 2025. [5]G. Attardi. Ag’s corpus of news articles. http://groups.di.unipi.it/~gulli/AG_ corpus_of_news_articles.html , 2004. Accessed: 2025-05-09. [6]Y . Bai, X. Lv, J. Zhang, H. Lyu, J. Tang, Z. Huang, Z. Du, X. Liu, A. Zeng, L. Hou, et al. Longbench: A bilingual, multitask benchmark for long context understanding. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 3119–3137, 2024. [7]Y . Bai, S. Tu, J. Zhang, H. Peng, X. Wang, X. Lv, S. Cao, J. Xu, L. Hou, Y . Dong, et al. Long- bench v2: Towards deeper understanding and reasoning on realistic long-context multitasks. arXiv preprint arXiv:2412.15204 , 2024. [8]Y . Bai, J. Zhang, X. Lv, L. Zheng, S. Zhu, L. Hou, Y . Dong, J. Tang, and J. Li. Longwriter: Unleashing 10,000+ word generation from long context LLMs. In The Thirteenth International Conference on Learning Representations , 2025. [9]A. Bordes, Y .-L. Boureau, and J. Weston. Learning end-to-end goal-oriented dialog. In International Conference on Learning Representations , 2017. [10] A. Bosselut, A. Celikyilmaz, X. He, J. Gao, P.-S. Huang, and Y . Choi. Discourse-aware neural rewards for coherent text generation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers) , pages 173–184, 2018. [11] B. Butcher, M. O’Keefe, and J. Titchener. Precise length control for
https://arxiv.org/abs/2505.16234v1
large language models. Natural Language Processing Journal , page 100143, 2025. [12] ByteDance. Doubao-1.5-Pro. https://seed.bytedance.com/zh/special/doubao_1_ 5_pro/ , 2025. [13] Y . Chang, X. Wang, J. Wang, Y . Wu, L. Yang, K. Zhu, H. Chen, X. Yi, C. Wang, Y . Wang, et al. A survey on evaluation of large language models. ACM transactions on intelligent systems and technology , 15(3):1–45, 2024. 10 [14] D. S. Chaplot. Albert q. jiang, alexandre sablayrolles, arthur mensch, chris bamford, devendra singh chaplot, diego de las casas, florian bressand, gianna lengyel, guillaume lample, lucile saulnier, lélio renard lavaud, marie-anne lachaux, pierre stock, teven le scao, thibaut lavril, thomas wang, timothée lacroix, william el sayed. arXiv preprint arXiv:2310.06825 , 2023. [15] D. Chen, A. Fisch, J. Weston, and A. Bordes. Reading wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 1870–1879, 2017. [16] D. Chen and C. D. Manning. A fast and accurate dependency parser using neural networks. InProceedings of the 2014 conference on empirical methods in natural language processing (EMNLP) , pages 740–750, 2014. [17] Y . Chen, B. Xu, Q. Wang, Y . Liu, and Z. Mao. Benchmarking large language models on controllable generation under diversified instructions. In Proceedings of the AAAI Conference on Artificial Intelligence , volume 38, pages 17808–17816, 2024. [18] W.-L. Chiang, L. Zheng, Y . Sheng, A. N. Angelopoulos, T. Li, D. Li, H. Zhang, B. Zhu, M. Jordan, J. E. Gonzalez, and I. Stoica. Chatbot arena: An open platform for evaluating llms by human preference, 2024. [19] W. S. Cho, P. Zhang, Y . Zhang, X. Li, M. Galley, C. Brockett, M. Wang, and J. Gao. Towards coherent and cohesive long-form text generation. In Proceedings of the First Workshop on Narrative Understanding , pages 1–11, 2019. [20] K. Cobbe, V . Kosaraju, M. Bavarian, M. Chen, H. Jun, L. 
Kaiser, M. Plappert, J. Tworek, J. Hilton, R. Nakano, C. Hesse, and J. Schulman. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168 , 2021. [21] A. Cohan, F. Dernoncourt, D. S. Kim, T. Bui, S. Kim, W. Chang, and N. Goharian. A discourse- aware attention model for abstractive summarization of long documents. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers) , pages 615–621, 2018. [22] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. Natural language processing (almost) from scratch. Journal of Machine Learning Research , 12:2493–2537, 2011. [23] M. R. Costa-jussà, P. Andrews, M. C. Meglioli, J. Chen, J. Chuang, D. Dale, C. Ropers, A. Mourachko, E. Sánchez, H. Schwenk, et al. Lcfo: Long context and long form output dataset and benchmarking. arXiv preprint arXiv:2412.08268 , 2024. [24] T. Davidson, D. Warmsley, M. Macy, and I. Weber. Automated hate speech detection and the problem of offensive language. In Proceedings of the international AAAI conference on web and social media , volume 11, pages 512–515, 2017. [25] S. J. Delany, M. Buckley, and D.
https://arxiv.org/abs/2505.16234v1
Greene. Sms spam filtering: Methods and data. Expert Systems with Applications , 39(10):9899–9908, 2012. [26] L. Dong, J. Mallinson, S. Reddy, and M. Lapata. Learning to paraphrase for question answering. InProceedings of the 2017 Conference on Empirical Methods in Natural Language Processing , pages 875–886, 2017. [27] Y . Dubois, C. X. Li, R. Taori, T. Zhang, I. Gulrajani, J. Ba, C. Guestrin, P. S. Liang, and T. B. Hashimoto. Alpacafarm: A simulation framework for methods that learn from human feedback. Advances in Neural Information Processing Systems , 36:30039–30069, 2023. [28] A. R. Fabbri, I. Li, T. She, S. Li, and D. Radev. Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics , pages 1074–1084, 2019. [29] A. Fan, D. Grangier, and M. Auli. Controllable abstractive summarization. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation , pages 45–54, 2018. 11 [30] A. Fan, Y . Jernite, E. Perez, D. Grangier, J. Weston, and M. Auli. Eli5: Long form question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics , pages 3558–3567, 2019. [31] A. Fan, M. Lewis, and Y . Dauphin. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 889–898, 2018. [32] W. Foundation. Wikimedia downloads. https://dumps.wikimedia.org . Accessed: 2025- 03-11. [33] T. GLM, A. Zeng, B. Xu, B. Wang, C. Zhang, D. Yin, D. Zhang, D. Rojas, G. Feng, H. Zhao, et al. Chatglm: A family of large language models from glm-130b to glm-4 all tools. arXiv preprint arXiv:2406.12793 , 2024. [34] Google. Gemini 2.0 Flash. https://deepmind.google/technologies/gemini/flash/ , 2024. [35] Google. Gemini 2.5 Pro. https://deepmind.google/technologies/gemini/pro/ , 2025. [36] A. Grattafiori, A. 
Dubey, A. Jauhri, A. Pandey, A. Kadian, A. Al-Dahle, A. Letman, A. Mathur, A. Schelten, A. Vaughan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783 , 2024. [37] A. Gu, K. Goel, and C. Re. Efficiently modeling long sequences with structured state spaces. InInternational Conference on Learning Representations , 2022. [38] J. Gu, X. Jiang, Z. Shi, H. Tan, X. Zhai, C. Xu, W. Li, Y . Shen, S. Ma, H. Liu, et al. A survey on llm-as-a-judge. arXiv preprint arXiv:2411.15594 , 2024. [39] Y . Gu, W. Wang, X. Feng, W. Zhong, K. Zhu, L. Huang, T.-S. Chua, and B. Qin. Length controlled generation for black-box llms. arXiv preprint arXiv:2412.14656 , 2024. [40] D. Guo, D. Yang, H. Zhang, J. Song, R. Zhang, R. Xu, Q. Zhu, S. Ma, P. Wang, X. Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948 , 2025. [41] Y . He, D. Jin, C. Wang, C. Bi, K. Mandyam, H. Zhang, C. Zhu, N. Li, T. Xu, H. Lv, et al. Multi-if: Benchmarking llms on multi-turn and multilingual instructions following. arXiv preprint arXiv:2410.15553 , 2024. [42] K. M. Hermann, T. Kocisky, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blun- som. Teaching machines to
https://arxiv.org/abs/2505.16234v1
read and comprehend. Advances in neural information processing systems , 28, 2015. [43] C.-P. Hsieh, S. Sun, S. Kriman, S. Acharya, D. Rekesh, F. Jia, and B. Ginsburg. Ruler: What’s the real context size of your long-context language models? In First Conference on Language Modeling , 2024. [44] J. Huang and K. C.-C. Chang. Towards reasoning in large language models: A survey. In Findings of the Association for Computational Linguistics: ACL 2023 , pages 1049–1065, 2023. [45] P.-S. Huang, X. He, J. Gao, L. Deng, A. Acero, and L. Heck. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM international conference on Information & Knowledge Management , pages 2333–2338, 2013. [46] Y . Huang, K. Tang, M. Chen, and B. Wang. A comprehensive survey on evaluating large language model applications in the medical industry. arXiv preprint arXiv:2404.15777 , 2024. [47] A. Jacovi, A. Wang, C. Alberti, C. Tao, J. Lipovetz, K. Olszewska, L. Haas, M. Liu, N. Keating, A. Bloniarz, et al. The facts grounding leaderboard: Benchmarking llms’ ability to ground responses to long-form input. arXiv preprint arXiv:2501.03200 , 2025. 12 [48] A. Jaech, A. Kalai, A. Lerer, A. Richardson, A. El-Kishky, A. Low, A. Helyar, A. Madry, A. Beutel, A. Carney, et al. Openai o1 system card. arXiv preprint arXiv:2412.16720 , 2024. [49] H. Jhamtani, V . Gangal, E. Hovy, and E. Nyberg. Shakespearizing modern language using copy-enriched sequence to sequence models. In Proceedings of the Workshop on Stylistic Variation , pages 10–19, 2017. [50] R. Jie, X. Meng, L. Shang, X. Jiang, and Q. Liu. Prompt-based length controlled generation with multiple control types. In Findings of the Association for Computational Linguistics ACL 2024 , pages 1067–1085, 2024. [51] Jun. webnovel_cn (revision 745338c), 2023. [52] G. Kamradt. Needle in a haystack - pressure testing llms, 2023. Accessed: 2025-04-22. [53] Q. E. Kim. Question pairs dataset. 
https://www.kaggle.com/datasets/quora/ question-pairs-dataset/data , 2017. [54] M. Koupaee and W. Y . Wang. Wikihow: A large scale text summarization dataset, 2018. [55] W. Kry ´sci´nski, N. Rajani, D. Agarwal, C. Xiong, and D. Radev. Booksum: A collection of datasets for long-form narrative summarization. In Findings of the Association for Computa- tional Linguistics: EMNLP 2022 , pages 6536–6558, 2022. [56] Y . Kuratov, A. Bulatov, P. Anokhin, I. Rodkin, D. Sorokin, A. Sorokin, and M. Burtsev. Babilong: Testing the limits of llms with long context reasoning-in-a-haystack. Advances in Neural Information Processing Systems , 37:106519–106554, 2024. [57] G. Lample, M. Ballesteros, S. Subramanian, K. Kawakami, and C. Dyer. Neural architectures for named entity recognition. In Proceedings of NAACL-HLT , pages 260–270, 2016. [58] J. Li, L. Zhang, Y . Li, Z. Liu, Y . Bai, R. Luo, L. Chen, and M. Yang. Ruler: A model-agnostic method to control generated length for large language models. In Findings of the Association for Computational Linguistics: EMNLP 2024 , pages 3042–3059, 2024. [59] T. Li, G. Zhang, Q. D. Do, X. Yue, and W. Chen. Long-context llms struggle with long in-context learning. arXiv preprint arXiv:2404.02060 , 2024. [60] X. Li, H. Shi, R. Xu, and
https://arxiv.org/abs/2505.16234v1
W. Xu. Ai awareness. arXiv preprint arXiv:2504.20084 , 2025. [61] Z. Li, X. Jiang, L. Shang, and H. Li. Paraphrase generation with deep reinforcement learning. InProceedings of the 2018 Conference on Empirical Methods in Natural Language Processing , pages 3865–3878, 2018. [62] X. Liang, H. Wang, Y . Wang, S. Song, J. Yang, S. Niu, J. Hu, D. Liu, S. Yao, F. Xiong, et al. Controllable text generation for large language models: A survey. arXiv preprint arXiv:2408.12599 , 2024. [63] H. Lightman, V . Kosaraju, Y . Burda, H. Edwards, B. Baker, T. Lee, J. Leike, J. Schulman, I. Sutskever, and K. Cobbe. Let’s verify step by step. In The Twelfth International Conference on Learning Representations , 2023. [64] B. Y . Lin, A. Ravichander, X. Lu, N. Dziri, M. Sclar, K. Chandu, C. Bhagavatula, and Y . Choi. The unlocking spell on base LLMs: Rethinking alignment via in-context learning. In The Twelfth International Conference on Learning Representations , 2024. [65] Z. Ling, K. Liu, K. Yan, Y . Yang, W. Lin, T.-H. Fan, L. Shen, Z. Du, and J. Chen. Longrea- son: A synthetic long-context reasoning benchmark via context expansion. arXiv preprint arXiv:2501.15089 , 2025. [66] A. Liu, B. Feng, B. Xue, B. Wang, B. Wu, C. Lu, C. Zhao, C. Deng, C. Zhang, C. Ruan, et al. Deepseek-v3 technical report. arXiv preprint arXiv:2412.19437 , 2024. 13 [67] J. Liu, D. Zhu, Z. Bai, Y . He, H. Liao, H. Que, Z. Wang, C. Zhang, G. Zhang, J. Zhang, et al. A comprehensive survey on long context language modeling. arXiv preprint arXiv:2503.17407 , 2025. [68] N. F. Liu, K. Lin, J. Hewitt, A. Paranjape, M. Bevilacqua, F. Petroni, and P. Liang. Lost in the middle: How language models use long contexts. Transactions of the Association for Computational Linguistics , 12:157–173, 2024. [69] X. Liu, P. Dong, X. Hu, and X. Chu. Longgenbench: Long-context generation benchmark. InFindings of the Association for Computational Linguistics: EMNLP 2024 , pages 865–883, 2024. [70] Q. Luo, Y . Ye, S. Liang, Z. 
Zhang, Y . Qin, Y . Lu, Y . Wu, X. Cong, Y . Lin, Y . Zhang, et al. Repoagent: An llm-powered open-source framework for repository-level code documentation generation. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations , pages 436–464, 2024. [71] C. Malaviya, S. Lee, S. Chen, E. Sieber, M. Yatskar, and D. Roth. Expertqa: Expert-curated questions and attributed answers. arXiv preprint arXiv:2309.07852 , 2023. [72] J. J. McAuley and J. Leskovec. From amateurs to connoisseurs: modeling the evolution of user expertise through online reviews. In Proceedings of the 22nd international conference on World Wide Web , pages 897–908, 2013. [73] MOP-LIWU Community and MNBVC Team. Chinesenlpcorpus. https://github.com/ SophonPlus/ChineseNlpCorpus/tree/master?tab=readme-ov-file , 2018. [74] MOP-LIWU Community and MNBVC Team. Mnbvc: Massive never-ending bt vast chinese corpus. https://github.com/esbatmop/MNBVC , 2023. [75] N. Mostafazadeh, N. Chambers, X. He, D. Parikh, D. Batra, L. Vanderwende, P. Kohli, and J. Allen. A corpus and cloze evaluation for deeper understanding of commonsense stories. In Proceedings of the 2016 Conference of the North
https://arxiv.org/abs/2505.16234v1
American Chapter of the Association for Computational Linguistics: Human Language Technologies , pages 839–849, 2016. [76] R. Nallapati, B. Zhou, C. dos Santos, C. Gulcehre, and B. Xiang. Abstractive text summa- rization using sequence-to-sequence rnns and beyond. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning , pages 280–290, 2016. [77] OpenAI. GPT-4o mini: advancing cost-efficient intelligence. https://openai.com/index/ gpt-4o-mini-advancing-cost-efficient-intelligence/ , 2024. [78] OpenAI. Hello GPT-4o. https://openai.com/index/hello-gpt-4o/ , 2024. [79] OpenAI. OpenAI o1-mini: Advancing cost-efficient reasoning. https://openai.com/ index/openai-o1-mini-advancing-cost-efficient-reasoning/ , 2024. [80] OpenAI. OpenAI o3-mini: Pushing the frontier of cost-effective reasoning. https://openai. com/index/openai-o3-mini/ , 2025. [81] S. J. Paech. Eq-bench creative writing benchmark v3. https://github.com/EQ-bench/ creative-writing-bench , 2025. [82] P. Patil. The internet movie script dataset. https://www.kaggle.com/datasets/ prasad22/the-internet-movie-script-dataset , 2024. [83] C. Pham, S. Sun, and M. Iyyer. Suri: Multi-constraint instruction following in long-form text generation. In Findings of the Association for Computational Linguistics: EMNLP 2024 , pages 1722–1753, 2024. [84] Y . Qin, K. Song, Y . Hu, W. Yao, S. Cho, X. Wang, X. Wu, F. Liu, P. Liu, and D. Yu. Infobench: Evaluating instruction following ability in large language models. In Findings of the Association for Computational Linguistics ACL 2024 , pages 13025–13048, 2024. 14 [85] S. Quan, T. Tang, B. Yu, A. Yang, D. Liu, B. Gao, J. Tu, Y . Zhang, J. Zhou, and J. Lin. Language models can self-lengthen to generate long texts. arXiv preprint arXiv:2410.23933 , 2024. [86] H. Que, F. Duan, L. He, Y . Mou, W. Zhou, J. Liu, W. Rong, Z. M. Wang, J. Yang, G. Zhang, et al. Hellobench: Evaluating long text generation capabilities of large language models. 
arXiv preprint arXiv:2409.16191 , 2024. [87] A. Radford and K. Narasimhan. Improving language understanding by generative pre- training. https://cdn.openai.com/research-covers/language-unsupervised/ language_understanding_paper.pdf , 2018. [88] R. Rafailov, A. Sharma, E. Mitchell, C. D. Manning, S. Ermon, and C. Finn. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems , 36:53728–53741, 2023. [89] A. M. Rush, S. Chopra, and J. Weston. A neural attention model for sentence summarization. InConference on Empirical Methods in Natural Language Processing, EMNLP 2015 , pages 379–389. Association for Computational Linguistics (ACL), 2015. [90] H. Schmid. Part-of-speech tagging with neural networks. In Proceedings of the 15th conference on Computational linguistics-Volume 1 , pages 172–176, 1994. [91] R. Sennrich, B. Haddow, and A. Birch. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 1715–1725, 2016. [92] U. Shaham, M. Ivgi, A. Efrat, J. Berant, and O. Levy. Zeroscrolls: A zero-shot benchmark for long text understanding. In Findings of the Association for Computational Linguistics: EMNLP 2023 , pages 7977–7989, 2023. [93] R. Socher, A. Perelygin, J. Wu, J. Chuang, C. D. Manning, A. Y . Ng, and C. Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing , pages 1631–1642, 2013. [94] D. Srirag. Text classification on emails. https://www.kaggle.com/datasets/ dipankarsrirag/topic-modelling-on-emails ,
https://arxiv.org/abs/2505.16234v1
2020.
[95] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. Advances in Neural Information Processing Systems, 27, 2014.
[96] A. Talmor, J. Herzig, N. Lourie, and J. Berant. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149–4158, 2019.
[97] H. Tan, Z. Guo, Z. Shi, L. Xu, Z. Liu, Y. Feng, X. Li, Y. Wang, L. Shang, Q. Liu, et al. ProxyQA: An alternative framework for evaluating long-form text generation with large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6806–6827, 2024.
[98] S. Venkatraman, N. I. Tripto, and D. Lee. CollabStory: Multi-LLM collaborative story generation and authorship analysis. arXiv preprint arXiv:2406.12665, 2024.
[99] C. Wang, H. Duan, S. Zhang, D. Lin, and K. Chen. Ada-LEval: Evaluating long-context LLMs with length-adaptable benchmarks. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 3712–3724, 2024.
[100] K. Wang, G. Zhang, Z. Zhou, J. Wu, M. Yu, S. Zhao, C. Yin, J. Fu, Y. Yan, H. Luo, et al. A comprehensive survey in LLM(-agent) full stack safety: Data, training and deployment. arXiv preprint arXiv:2504.15585, 2025.
[101] Y. Wang, D. Ma, and D. Cai. With greater text comes greater necessity: Inference-time training helps long text generation. In First Conference on Language Modeling, 2024.
[102] J. Wei, X. Wang, D. Schuurmans, M. Bosma, F. Xia, E. Chi, Q. V. Le, D. Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.
[103] B. Wen, P. Ke, X.
Gu, L. Wu, H. Huang, J. Zhou, W. Li, B. Hu, W. Gao, J. Xu, et al. Benchmarking complex instruction-following with multiple constraints composition. Advances in Neural Information Processing Systems, 37:137610–137645, 2024.
[104] S. Wiseman, S. M. Shieber, and A. M. Rush. Challenges in data-to-document generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2253–2263, 2017.
[105] Y. Wu, M. S. Hee, Z. Hu, and R. K.-W. Lee. LongGenBench: Benchmarking long-form generation in long context LLMs. In The Thirteenth International Conference on Learning Representations, 2025.
[106] Y. Wu, J. Mei, M. Yan, C. Li, S. Lai, Y. Ren, Z. Wang, J. Zhang, M. Wu, Q. Jin, et al. WritingBench: A comprehensive benchmark for generative writing. arXiv preprint arXiv:2503.05244, 2025.
[107] A. Yang, B. Yang, B. Zhang, B. Hui, B. Zheng, B. Yu, C. Li, D. Liu, F. Huang, H. Wei, et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.
[108] K. Yang, Y. Tian, N. Peng, and D. Klein. Re3: Generating longer stories with recursive reprompting and revision. In Proceedings of the 2022 Conference on Empirical
Methods in Natural Language Processing, pages 4393–4479, 2022.
[109] Z. Yang, F. Xiong, P. Li, and Y. Liu. Rethinking long context generation from the continual learning perspective. In Proceedings of the 31st International Conference on Computational Linguistics, pages 1922–1933, 2025.
[110] S. Yao, H. Chen, A. W. Hanjie, R. Yang, and K. R. Narasimhan. COLLIE: Systematic construction of constrained text generation tasks. In The Twelfth International Conference on Learning Representations, 2024.
[111] X. Ye, F. Yin, Y. He, J. Zhang, H. Yen, T. Gao, G. Durrett, and D. Chen. LongProc: Benchmarking long-context language models on long procedural generation. arXiv preprint arXiv:2501.05414, 2025.
[112] P. Yuan, C. Tan, S. Feng, Y. Li, X. Wang, Y. Zhang, J. Shi, B. Pan, Y. Hu, and K. Li. From sub-ability diagnosis to human-aligned generation: Bridging the gap for text length control via MarkerGen. arXiv preprint arXiv:2502.13544, 2025.
[113] W. Yuan, I. Kulikov, P. Yu, K. Cho, S. Sukhbaatar, J. Weston, and J. Xu. Following length constraints in instructions. arXiv preprint arXiv:2406.17744, 2024.
[114] Z. Zeng, J. Yu, T. Gao, Y. Meng, T. Goyal, and D. Chen. Evaluating large language models at evaluating instruction following. In 12th International Conference on Learning Representations, ICLR 2024, 2024.
[115] J. Zhang, R. Zhang, F. Kong, Z. Miao, Y. Ye, and Y. Zheng. Lost-in-the-middle in long-text generation: Synthetic dataset, evaluation framework, and mitigation. arXiv preprint arXiv:2503.06868, 2025.
[116] T. Zhang, Y. Shen, W. Luo, Y. Zhang, H. Liang, F. Yang, M. Lin, Y. Qiao, W. Chen, B. Cui, et al. CFBench: A comprehensive constraints-following benchmark for LLMs. arXiv preprint arXiv:2408.01122, 2024.
[117] X. Zhang and M. Lapata. Chinese poetry generation with recurrent neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 670–680.
Association for Computational Linguistics, 2014.
[118] X. Zhang, J. Zhao, and Y. LeCun. Character-level convolutional networks for text classification. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015.
[119] L. Zheng, W.-L. Chiang, Y. Sheng, S. Zhuang, Z. Wu, Y. Zhuang, Z. Lin, Z. Li, D. Li, E. Xing, et al. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. Advances in Neural Information Processing Systems, 36:46595–46623, 2023.
[120] M. Zhong, D. Yin, T. Yu, A. Zaidi, M. Mutuma, R. Jha, A. Hassan, A. Celikyilmaz, Y. Liu, X. Qiu, et al. QMSum: A new benchmark for query-based multi-domain meeting summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5905–5921, 2021.
[121] J. Zhou, T. Lu, S. Mishra, S. Brahma, S. Basu, Y. Luan, D. Zhou, and L. Hou. Instruction-following evaluation for large language models. arXiv preprint arXiv:2311.07911, 2023.

Appendices

Appendix Table of Contents

A Comparison of LIFEBENCH with Existing Benchmarks
B Details of LIFEBENCH
B.1 Mapping
https://arxiv.org/abs/2505.16234v1
of NLP Tasks to Length Paradigms
B.2 Data Collection
B.3 Refinement Process
B.4 Examples of Tasks
C Details of LIFEBENCH-SUPPLEMENTARY
C.1 LIFEBENCH-LABEL
C.2 LIFEBENCH-REFACTOR
C.3 LIFEBENCH-LITE
D Detailed Experimental Setup
D.1 Model Setup
D.2 Evaluation Metrics
D.2.1 Output Word Count Computation
D.2.2 Length Deviation and Length Score
E Detailed Results of Main Experiments
E.1 Detailed Length Deviation and Length Score across all length constraints
E.2 Visualization of Model Output Lengths
E.3 Supplementary Results with Extended Length Constraints
F Length Awareness: Do LLMs Know How Long Their Generations Are?
F.1 Self-Reported Length Estimation
F.2 Marker-Based Length Monitoring
G Details of Input Characteristic Analysis
G.1 Task Type
G.2 Input Length
G.3 Language
H Lazy Strategy Analysis
H.1 Definition of Lazy Generation Strategies
H.2 Detection Methodology
H.3 Examples of Lazy Strategies
H.4 Distribution of Lazy Generation Strategies under Different Length Constraints
I Case Study: How Reasoning Models Handle Length Constraints via Stepwise Thinking
J Results under Supplementary Length Paradigms
K Generation Quality Evaluation by LLMs
K.1 Do LLMs Follow the Length Instruction While Remaining Compatible with the Original Instruction?
K.1.1 Instruction Following as Length Constraints Increase
K.1.2 Ablation Study: Text Quality and Length
K.2 Text Quality Comparison of Long-text Generation
L Comparison with Existing Leaderboards
M Future Directions and Potential Solutions
M.1 Interpretability of Length Instruction Following
M.2 Improving Length Instruction Following During Pre-training
M.3
Improving Length Instruction Following During Post-training
N Length Instruction Following under Output Format Constraints

A Comparison of LIFEBENCH with Existing Benchmarks

In this section, we review ten representative benchmarks on long-form text generation or long-context modeling and analyze the evaluation ranges they cover. Based on this analysis, we position our work, LIFEBENCH, relative to prior benchmarks, highlighting its unique strengths and comprehensive evaluation design. Specifically, LIFEBENCH differentiates itself by being bilingual (English and Chinese), covering four task categories, and comprising 360 fundamental data units. Each unit yields 30 data items across three length control methods and ten distinct length constraints, for a total of 10,800 instances. To facilitate multi-dimensional analysis of length instruction following, we introduce two dedicated evaluation metrics: Length Deviation and Length Score. Our benchmark thus features a Diverse Dataset, Extensive Length Constraints, and Analytical Evaluation, providing broader coverage and more fine-grained analysis than prior benchmarks. We summarize key differences between LIFEBENCH and prior benchmarks in Table 2 and provide detailed discussions below.

Table 2: Comparison of LIFEBENCH with Other LLM Benchmarks.
Benchmark            | Diverse Task | Long Input | Bilingual | Multiple Control Methods | <100 | 100–2000 | >2000 | Analytical Evaluation
IFEval [121]         | ✓ | ✗ | ✗ | ✓ | ✓ | ✓ | ✗ | ✗
AlpacaEval-LI [113]  | ✗ | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | ✓
Ruler [58]           | ✗ | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | ✓
Suri [83]            | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✓ | ✗
LongBench-Write [8]  | ✗ | ✗ | ✓ | ✗ | ✗ | ✓ | ✓ | ✓
HelloBench [86]      | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗
LongProc [111]       | ✗ | ✓ | ✗ | ✗ | ✗ | ✓ | ✓ | ✗
LonGenBench [85]     | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ | ✓ | ✓
LongGenBench-1 [105] | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗
LongGenBench-2 [69]  | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗
Ours                 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓

(The first four columns fall under Diverse Dataset, the three length-range columns under Extensive Length Constraints.)

We categorize existing benchmarks into two main groups for comparison: benchmarks explicitly targeting length instruction following, and those focused on general long-text generation.

Length Instruction Following Benchmarks. IFEval is a well-established benchmark for instruction-following evaluation, using length constraints as proxies for assessing general adherence to instructions. It includes a variety of tasks and control methods to measure a model's ability to follow instructions. However, IFEval primarily focuses on short-text scenarios, with no input or output instructions exceeding 2,000 words. Furthermore, because its primary aim is to evaluate general instruction following, it lacks detailed analytical evaluation of length control, making it difficult to directly assess a model's performance on length instruction following. In addition, several benchmarks have been specifically designed to evaluate length instruction following, such as AlpacaEval-LI and Ruler. Unlike traditional instruction following benchmarks, these datasets focus exclusively on length control. However, they tend to exhibit narrower scopes, typically constrained to short outputs, and often lack sufficient diversity in task domains and control methods, limiting their effectiveness for comprehensive length-adherence evaluation.

Long-Text Generation Benchmarks. Benchmarks in this category primarily assess generation quality within long-form
content scenarios, typically including explicit length instructions as part of their evaluation setup. We contrast LIFEBENCH with seven representative benchmarks in this area. Existing long-text benchmarks generally prioritize generation quality in extended contexts but neglect comprehensive evaluation across shorter lengths or varied control methodologies. Additionally, apart from LongBench-Write and LonGenBench, most benchmarks in this group lack analytical evaluation methods explicitly targeting length instruction following. Combined with the findings presented in this paper, this suggests that models still struggle to achieve strong performance in long-text generation. We believe LIFEBENCH is an important supplement to previous work, which may have overlooked the important factor of actual generation length.

In addition to the above two categories, we also compare LIFEBENCH with several benchmarks in the controlled text generation domain. Existing controlled generation benchmarks can be broadly categorized into content control and attribute control [62]. Notably, structure control, a subcategory of content control, often incorporates length constraints as part of its evaluation, exemplified by benchmarks such as COLLIE [110], CoDI-Eval [17], and CFBench [116]. However, these benchmarks typically involve multiple, complex control factors and rarely explore extended text-generation scenarios. Consequently, they are less suited for a dedicated and systematic analysis of length instruction adherence in isolation.

In summary, compared to previous benchmarks, LIFEBENCH provides a comprehensive benchmark specifically designed to evaluate length instruction following: it covers the most languages, the most comprehensive range of length constraints, a broad and diverse set of tasks, and multiple control methods, effectively addressing key limitations found in existing benchmarks.
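The instance arithmetic stated above (360 fundamental units, each expanded by three length control methods and ten length constraints into 30 items, for 10,800 instances) can be sketched in a few lines. The control-method names and constraint values below are illustrative placeholders, not the benchmark's actual labels:

```python
# Sketch of how LIFEBENCH's 10,800 instances arise from its 360 base units.
# Control-method names and constraint values are assumed placeholders.
from itertools import product

N_UNITS = 360                                        # fundamental data units
CONTROL_METHODS = ["equal_to", "at_most", "at_least"]  # 3 methods (names assumed)
LENGTH_CONSTRAINTS = [16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192]  # 10 values (assumed)

# One instance per (unit, control method, length constraint) combination.
instances = list(product(range(N_UNITS), CONTROL_METHODS, LENGTH_CONSTRAINTS))
assert len(instances) == 360 * 30 == 10_800
print(len(instances))  # 10800
```

Each unit therefore contributes 3 × 10 = 30 items, matching the "30 data items per unit" figure above.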
B Details of LIFEBENCH

B.1 Mapping of NLP Tasks to Length Paradigms

In Section 3, we present a task-to-paradigm mapping that categorizes standard NLP tasks according to the four Length Paradigms defined earlier: Length-Independent Paradigm, Fixed-Length Paradigm, Adaptive-Length Paradigm, and Flexible-Length Paradigm. In this section, we elaborate on these four paradigms:

• Length-Independent Paradigm: This paradigm comprises tasks where length constraints are inherently irrelevant to the task objective. Imposing a length restriction does not impact the fundamental goal or the correctness of the output. Such tasks typically involve structured outputs for which specifying a length is unnecessary. Representative examples include tokenization and part-of-speech (POS) tagging, which prioritize functional correctness over word or character counts.
• Fixed-Length Paradigm: This paradigm includes tasks where the output length remains approximately constant, irrespective of input variations. These tasks require a predefined output length that cannot be freely adjusted. Representative examples include text classification and sentiment analysis, where outputs are typically restricted to a single category or a small set of alternatives.
• Adaptive-Length Paradigm: This paradigm covers tasks where the output length naturally adapts to the input length, with longer inputs generally producing longer outputs. Representative examples include machine translation and text style transfer: longer input sentences generally yield longer translations, and the same holds for style transfer.
• Flexible-Length Paradigm: This paradigm encompasses tasks where the output length can be freely adjusted across a wide range, supporting both concise and detailed responses.
A defining characteristic of these tasks is their adaptability to different usage requirements. Representative examples include open-ended question answering and text completion. In open-ended question answering, models can extend answers with additional explanations, while in text writing, they can generate outputs ranging from short passages to extensive works spanning tens of thousands or even millions of words.

Given the diversity of NLP tasks, we focus on representative examples for each paradigm, accompanied by concise descriptions of their primary objectives. Table 3 summarizes the mapping of NLP tasks to the four length paradigms.

Table 3: Mapping of common NLP tasks to the Length Paradigms.

Length-Independent:
• Tokenization [91]: Splits text into tokens based on linguistic or subword units.
• POS Tagging [90]: Assigns part-of-speech labels to each token in the input sequence.
• Named Entity Recognition [57]: Identifies and labels named entities (e.g., people, locations) in a sentence.
• Dependency Parsing [16]: Analyzes syntactic structure by establishing relationships between words.
• Text Matching [45]: Measures semantic or lexical similarity between two pieces of text.

Fixed-Length:
• Text Classification [22]: Assigns a label (e.g., topic, category) to an entire input text.
• Sentiment Analysis [93]: Predicts sentiment polarity (e.g., positive, negative) of a given text.
• Spam Detection [25]: Classifies an email or message as either spam or not spam.
• Toxic Content Detection [24]: Detects the presence of toxic, offensive, or harmful language in text.
• Title Generation [76]: Generates a concise title for a given passage or document.
• Stereotype Detection: Identifies biased or stereotypical language in text.

Adaptive-Length:
• Machine Translation [95]: Converts text from one language to another.
• Text Style Transfer [49]: Alters text style (e.g., formal to informal) while keeping content intact.
• Paraphrase Generation [61]: Rewrites input text using different wording while preserving meaning.
• Question Rewriting [26]: Reformulates a question while preserving its original intent.
• Sentence Perturbation [2]: Modifies sentence form slightly while retaining its meaning.

Flexible-Length:
• Open-ended Question Answering [15]: Generates free-form answers to questions without a fixed format.
• Text Summarization [89]: Extracts and rephrases key content from the input text.
• Sentence Compression [89]: Compresses a sentence into a shorter version while preserving key meaning.
• Text Completion [87]: Generates plausible continuations for a given text.
• Dialogue Generation [9]: Produces context-aware responses in multi-turn conversations.
• Story Composition [31]: Generates narratives or stories based on input prompts.
• Poem Generation [117]: Creates poems in various styles and forms from given input.
• Sentence Expansion [87]: Extends a short sentence into a more detailed or informative one.
• Reasoning [102]: Performs complex inference or multi-step reasoning.

B.2 Data Collection

This section provides a detailed description of the data collection process under the Flexible-Length Paradigm and outlines the taxonomy of tasks included within this setting. Based on Table 3, and considering the variability in output length requirements, we select four representative categories from the Flexible-Length Paradigm as our base: Question Answering (QA), Summarization, Reasoning, and Creative Generation. These categories support variable-length instructions and are well suited for standardized evaluation. For each task category, we first define the task and explain its relevance to the Flexible-Length setting. We then introduce the
associated representative NLP tasks, followed by a comprehensive overview of the data collection process, including dataset composition and provenance, preprocessing and data cleaning strategies, and the subtypes represented within each category.

Question Answering (QA) Task. This task involves answering open-ended questions whose answer length varies according to the required level of detail, aligning naturally with the Flexible-Length Paradigm. The representative NLP task for this category is Open-ended Question Answering. To support our analysis of this task under the flexible-length setting, we manually collected a total of 120 QA samples from publicly available datasets, comprising 60 Chinese and 60 English examples. The selection criteria are: (1) Questions must be open-ended, allowing responses ranging from concise answers to elaborate explanations. When the length constraint is small, such as 16 words, the model can simply give the answer; for longer targets, it can add explanations and further detail. (2) Questions should be meaningful, context-rich, and well-formed, enabling various depths of response. Consider an open-ended historical or political discussion question such as "Who is your favorite American president?" Here, the model can simply name a specific president and their term, or explain its reasons from multiple perspectives to form a deeper answer. (3) We excluded factoid, binary, multiple-choice, duplicate, or grammatically incorrect entries, since the answer space for such questions is usually too narrow to support longer answers. For example, for "What is the answer to 3+5?", it is difficult for a model to generate a 2,000-word answer. To ensure linguistic and topical diversity, we sourced English QA data equally (15 questions each) from four datasets:

• Question Pairs Dataset [53]: Contains over 400k real-world questions from Quora (https://www.quora.com/).
• Stack Exchange Dataset [1]: Comprises over 80k community-driven QA threads from Stack Exchange (https://stackexchange.com/).
• WikiHow Dataset [54]: Includes over 160k how-to questions and procedural content from WikiHow (https://www.wikihow.com/).
• Yahoo Answers Dataset [118]: Consists of 10 different categories, each containing over 140k QA pairs.

The Chinese QA data were collected from the following two datasets:

• QA Wikihow [74]: A Chinese QA dataset derived from WikiHow, from which we selected 25 open-ended questions.
• QA Zhihu [74]: Sourced from Zhihu (https://www.zhihu.com/), a Chinese community QA platform, from which we selected 35 diverse and open-ended questions.

We then analyzed the topical distribution of the collected QA samples, categorizing questions into nine thematic domains: Knowledge (e.g., How does the finance credit score work?), Technology (e.g., How to protect data from internal data scientists?), Health (e.g., How do I lose weight without doing exercise?), Society (e.g., How can we improve the education system for high school and college?), Lifestyle (e.g., How to be positive?), Entertainment (e.g., How to play competitive Counter-Strike: Global Offensive?), Business (e.g., How to start investing?), Food (e.g., How to brew commercial beer?), and Sports (e.g., How to play basketball?). These categories comprehensively cover typical domains encountered in open-ended question design.

Summarization Task. The objective of the summarization task is to condense long-form texts into
shorter summaries whose lengths can vary considerably, from concise overviews to more detailed condensations, aligning naturally with the Flexible-Length Paradigm. Representative NLP tasks include Text Summarization and Sentence Compression, both aiming to preserve key information while reducing text length. To support our analysis of this task under the flexible-length setting, we selected a total of 40 long-form source documents from open-source datasets, with an equal split of 20 English and 20 Chinese texts. Given the need for variable-length summaries, we intentionally focused on documents with substantial content, ensuring sufficient information is available for both brief and detailed summarization. Specifically, we selected source texts within the range of 10,000 to 15,000 words (or characters for Chinese), striking a balance between the context window limitations of current language models and the need to support diverse summarization lengths in the Flexible-Length Paradigm. To improve text quality and informativeness, we manually cleaned the documents by removing low-informative elements such as extensive tables, blocks of numeric-only content, meta descriptions, and other structurally noisy parts.

The English summarization data were drawn from the following five datasets, with 4 documents each:

• Scientific Papers [21]: Structured academic documents sourced from arXiv (https://arxiv.org/) and PubMed (https://pubmed.ncbi.nlm.nih.gov/). Text type: Academic Article.
• BookSum [55]: Long-form literary narratives. Text type: Book.
• Gov Report [37]: Government reports from the Congressional Research Service (https://crsreports.congress.gov) and the U.S. Government Accountability Office (https://www.gao.gov/). Text type: Report.
• QMSUM [120]: Multi-domain meeting transcripts. Text type: Dialogue.
• Wikipedia [32]: High-quality encyclopedia-style articles from cleaned English Wikipedia dumps. Text type: Encyclopedia.
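The 10,000–15,000 word/character selection filter described above can be sketched as follows. This is a minimal sketch under stated assumptions: a naive whitespace word count for English and a raw character count for Chinese; the paper's exact counting procedure is not specified in this excerpt.

```python
# Hedged sketch of the summarization source-selection filter:
# English sources are measured in words, Chinese sources in characters,
# and only documents in the 10,000-15,000 range are retained.
def source_length(text: str, language: str) -> int:
    if language == "en":
        return len(text.split())  # naive whitespace word count (assumption)
    return len(text)              # character count for Chinese

def is_suitable_source(text: str, language: str,
                       lo: int = 10_000, hi: int = 15_000) -> bool:
    return lo <= source_length(text, language) <= hi

print(is_suitable_source("word " * 12_000, "en"))  # True  (12,000 words)
print(is_suitable_source("word " * 2_000, "en"))   # False (too short)
```

In practice a tokenizer-aware count would be more robust, but the filter itself is just a range check on document length.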
The Chinese summarization data were collected from the following three datasets:

• Co Ann Report [74]: A dataset of Chinese corporate annual reports. The corresponding text type is Report, and we selected 5 documents.
• Wikipedia [32]: A cleaned dump of Chinese Wikipedia articles. The corresponding text type is Encyclopedia, and we selected 5 documents.
• Gov XueXiQiangGuo [74]: A collection of public discourse materials from the Xuexi Qiangguo platform (https://www.xuexi.cn/), including political speeches, news reports, and commentary. The corresponding text type is Public Discourse, and we selected 10 documents.

Reasoning Task. Reasoning is a core capability distinguishing modern LLMs from traditional NLP systems. Unlike classification or span extraction tasks, which typically rely on surface-level textual patterns, the reasoning task requires multi-step logical inference and contextual understanding, making it uniquely aligned with LLM capabilities rather than conventional NLP pipelines. Compared to the general QA task, the reasoning task emphasizes not only the final answer but also the underlying logical steps toward the conclusion. Whereas open-ended QA may occasionally require explanations, it does not necessarily involve inference over structured facts or scenarios. In contrast, the reasoning task explicitly demands structured cognitive processes, such as deduction, analogy, or causal inference, often necessitating step-by-step justification. Thus, the representative NLP task for this category is simply Reasoning.

A unique challenge emerged when adapting reasoning tasks to the flexible-length setting. Existing reasoning datasets, such as GSM8K [20] (mathematical reasoning) and CommonsenseQA [96] (commonsense inference), are typically not well-suited for flexible-length evaluation. The former relies on mathematical notation and equations, which make it difficult to
control output length meaningfully, while the latter is designed as a multiple-choice task, constraining the response format and limiting the scope for open-ended explanations. Nonetheless, reasoning itself inherently suits the Flexible-Length Paradigm: some inferences can be expressed succinctly, while others benefit from elaborate justification. This makes it ideal for investigating models' length instruction following ability at varying complexity. To obtain high-quality, flexible-length reasoning samples, we employ GPT-4o [78] to generate 80 open-ended reasoning questions: 40 in Chinese and 40 semantically equivalent questions in English. Prompts are designed to allow concise responses or detailed justifications depending on the model instruction and question complexity. We follow the taxonomy proposed by Huang et al. [44], covering six distinct categories: Deductive (e.g., Assuming that all sexually dimorphic animals possess reproductive organs, analyze whether possessing reproductive organs can conversely indicate that an animal is sexually dimorphic.), Inductive (e.g., Based on historical data from successive generations of smartphones, predict the direction of innovation in the next generation.), Abductive (e.g., Determine the most plausible explanation for this observation: A country's currency experiences abnormal exchange rate fluctuations.), Analogical (e.g., How can the spread of computer viruses be compared to the spread of biological viruses?), Cause and Effect (e.g., Examine the causal links between high temperatures and urban power supply shortages.), and Probabilistic (e.g., Evaluate the probabilistic models used to assess risk diversification in financial investment portfolios.). The dataset consists of 10 Deductive reasoning items and 14 items each for the other five categories, ensuring balanced representation.
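The category composition described above (10 Deductive items plus 14 items in each of the other five categories, for 80 questions per language) can be checked with simple arithmetic:

```python
# Composition of the reasoning question set per language, as stated above.
category_counts = {
    "Deductive": 10,
    "Inductive": 14,
    "Abductive": 14,
    "Analogical": 14,
    "Cause and Effect": 14,
    "Probabilistic": 14,
}

total = sum(category_counts.values())
assert total == 80  # 10 + 5 * 14 = 80 questions per language
print(total)  # 80
```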
All items underwent rigorous manual review according to four quality criteria: (1) logical soundness and necessity of reasoning rather than factual recall; (2) clarity and absence of ambiguity or misleading premises; (3) support for variable elaboration levels, from brief conclusions to detailed explanations; and (4) cultural and linguistic appropriateness. This meticulous curation ensures suitability for our analysis of length instruction following capabilities under flexible-length reasoning settings.

Creative Generation Task. This task focuses on generating creative and imaginative texts, with output length inherently flexible and dependent on the context, genre, and intent of generation. Due to the wide variability in the expected length and structure of generated outputs, it aligns with the Flexible-Length Paradigm. Representative tasks include Text Completion, Dialogue Generation, Story Composition, Poem Generation, and Sentence Expansion, each requiring adaptation to various expressive and stylistic demands. For analysis under the flexible-length setting, we curated a total of 120 examples from open-source datasets, evenly divided into 60 English and 60 Chinese instances. Each example is selected to represent a distinct creative generation subtype, ensuring both linguistic diversity and task coverage. All samples are manually verified to confirm fluency, coherence, and contextual appropriateness for creative generation.

The English data are collected from the following 4 datasets, with 15 instances sampled from each:

• Internet Movie Script Dataset [82]: Film scripts from IMSDb (https://www.imsdb.com/); we use it for Script Expansion, prompting models to extend or elaborate scenes.
• ROCStories [75]: Five-sentence commonsense stories; we use it for Story Expansion, where models develop or extend narratives.
• WritingPrompts [31]: Imaginative writing prompts from Reddit's WRITINGPROMPTS forum (https://www.reddit.com/r/WritingPrompts/); we employ
it for Prompt-based Creative Writing.
• CNN/DailyMail [42]: News summaries from articles; we apply it to News Writing, prompting models to reconstruct or expand original news content.

The Chinese data are drawn from the following three datasets:

• WebNovel [51]: Online fiction excerpts and generation instructions; 15 examples are selected for Story Expansion, generating content based on given story segments.
• NLPCC 2017 (http://tcci.ccf.org.cn/conference/2017/taskdata.php): Reference summaries from the summarization track; instances are used for Social Commentary Writing and News Writing, totaling 20 examples.
• WeiXin Public Corpus (https://github.com/nonamestreet/weixin_public_corpus): Articles from WeChat public accounts; 25 examples are selected for Blog Writing, generating opinion or commentary articles based on titles.

QA Template
[Instruction] Answer this question: {{content}}
[Requirement] Your answer must be {{control_method}} {{length_constraint}} words long.
Table 4: Refinement Template for QA Task. All subtypes share the same template.

Summarization Template

Subtype: Encyclopedia
[Instruction] Summarize this encyclopedia article: {{content}}
[Requirement] Your summary must be {{control_method}} {{length_constraint}} words long.

Subtype: Report
[Instruction] Summarize this report: {{content}}
[Requirement] Your summary must be {{control_method}} {{length_constraint}} words long.

Subtype: Public Discourse
[Instruction] Summarize this public discourse: {{content}}
[Requirement] Your summary must be {{control_method}} {{length_constraint}} words long.

Subtype: Academic Article
[Instruction] Summarize this academic article: {{content}}
[Requirement] Your summary must be {{control_method}} {{length_constraint}} words long.

Subtype: Book
[Instruction] Summarize this book: {{content}}
[Requirement] Your summary must be {{control_method}} {{length_constraint}} words long.
Subtype: Dialogue [Instruction] Summarize this dialogue: {{content}} [Requirement] Your summary must be {{control_method}} {{length_constraint}} words long.
Table 5: Refinement Template for Summarization.
B.3 Refinement Process
Since the initial data are directly obtained from existing datasets or generated by LLMs, the raw samples lack explicit length constraints or specific task instructions, so these data cannot be used to evaluate the length instruction following capability. 12http://tcci.ccf.org.cn/conference/2017/taskdata.php 13https://github.com/nonamestreet/weixin_public_corpus
Reasoning Template
Subtype: Deductive [Instruction] Solve this deductive reasoning problem: {{content}} [Requirement] Your reasoning must be {{control_method}} {{length_constraint}} words long.
Subtype: Inductive [Instruction] Infer a general rule from this observed pattern: {{content}} [Requirement] Your reasoning must be {{control_method}} {{length_constraint}} words long.
Subtype: Abductive [Instruction] Determine the most plausible explanation for this observation: {{content}} [Requirement] Your reasoning must be {{control_method}} {{length_constraint}} words long.
Subtype: Analogical [Instruction] Draw an analogy to explain this question: {{content}} [Requirement] Your reasoning must be {{control_method}} {{length_constraint}} words long.
Subtype: Cause and Effect [Instruction] Analyze the causal relationship in this scenario: {{content}} [Requirement] Your reasoning must be {{control_method}} {{length_constraint}} words long.
Subtype: Probabilistic [Instruction] Evaluate the likelihood of this outcome based on probability: {{content}} [Requirement] Your reasoning must be {{control_method}} {{length_constraint}} words long.
Table 6: Refinement Template for Reasoning.
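The refinement templates above are simple placeholder substitutions; a minimal Python sketch of the process (the helper name and exact API are illustrative, not the authors' code):

```python
import re

def refine(template: str, content: str, control_method: str,
           length_constraint: int) -> str:
    """Fill a refinement template's {{...}} placeholders.

    Illustrative helper, not from the paper; it assumes the three
    components described in B.3: content, control_method, length_constraint.
    """
    values = {
        "content": content,
        "control_method": control_method,
        "length_constraint": str(length_constraint),
    }
    # Replace each {{name}} with the corresponding value.
    return re.sub(r"\{\{(\w+)\}\}", lambda m: values[m.group(1)], template)

# QA template from Table 4, instantiated with an "At Least" constraint.
qa_template = ("[Instruction] Answer this question: {{content}} "
               "[Requirement] Your answer must be {{control_method}} "
               "{{length_constraint}} words long.")
prompt = refine(qa_template, "How to Use Git Effectively.", "at least", 1024)
```

Instantiating the QA template this way yields a prompt matching the example in Appendix B.4.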
To align the collected data with the goals of the Flexible-Length Paradigm , i.e., to apply them to evaluations at different lengths, we implement a structured refinement process, augmenting samples with task-specific instructions and explicit length constraints. This approach ensures that the refined data adhere to the desired format without compromising task relevance. The refinement process allows us to better assess the model's length instruction following while excluding other length-independent conditions as much as possible. Refinement Methodology. We adopt a standardized template, [Instruction] + [Requirement] , to guide the refinement process. The [Instruction]
component specifies the task type and the desired model generation, while the [Requirement] sets the length constraints of the output. This dual-component framework enables precise control over both task appropriateness and output flexibility. For each task category or subtype, unique [Instruction] and [Requirement] templates are designed to better fit the nature of the task. The templates include the following key components: (1) content : The raw data collected from original sources, serving as task inputs. (2) control_method : It includes three possible options: •Equal To : Output length must exactly match the specified constraint. •At Most : Output length must not exceed the specified constraint. •At Least : Output length must meet or exceed the specified constraint. (3) length_constraint : Defines the target output length. In our benchmark, the values can be set to {16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768}. Templates for each task category and subtype are presented in Table 4, Table 5, Table 6 and Table 7. The Chinese dataset follows the same structural approach, with template components expressed in Chinese.
Creative Generation Template
Subtype: Social Commentary Writing [Instruction] Write a social commentary based on the following content: {{content}} [Requirement] Your commentary must be {{control_method}} {{length_constraint}} words long.
Subtype: Prompt-based Creative Writing [Instruction] Write a creative piece based on this prompt: {{content}} [Requirement] The piece must be {{control_method}} {{length_constraint}} words long.
Subtype: Script Expansion [Instruction] You are given an excerpt from a script: {{content}} [Requirement] The continuation must be {{control_method}} {{length_constraint}} words long.
Subtype: News Writing [Instruction] Write a news article based on the following content: {{content}} [Requirement] Your news article must be {{control_method}} {{length_constraint}} words long.
Subtype: Story Expansion [Instruction] Expand the following story: {{content}} [Requirement] Your expanded story must be {{control_method}} {{length_constraint}} words long.
Subtype: Blog Writing [Instruction] This is a title from a WeChat public account: {{content}} [Requirement] Write a full article based on this title. The article must be {{control_method}} {{length_constraint}} words long.
Table 7: Refinement Template for Creative Generation.
B.4 Examples of Tasks
Below we present illustrative examples drawn from each of the four Flexible-Length Paradigm tasks covered by LIFEBENCH.
An Example Data of QA Task [Instruction] Respond to this question: How to Use Git Effectively. [Requirement] Your response must be at least 1024 words long.
Question Answering (QA). The QA examples include open-ended queries enabling varying answer depths. In the example, we use "How to Use Git Effectively" as the basic question and "at least" as the control method, which are marked in blue. These evaluation questions closely mirror real application scenarios and are often encountered in practice. Summarization. Summarization tasks require condensing detailed content into a summary that can flexibly meet different length criteria. In the example provided, given a novel, our summarization task requires the model to summarize this rich text input, using Equal To as the control method. This is also a common task: when people use AI assistants, they often ask the model to summarize long texts such as technical reports, academic papers, and
novels. An Example Data of Summarization Task [Instruction] Summarize this book: I was born in Tuckahoe, near Hillsborough, and about twelve miles from Easton, in Talbot county, Maryland. I have no accurate knowledge of my age, never having seen any authentic record containing it. By far the larger part of the slaves know as little of their ages as horses know of theirs, and it is the wish of most masters within my knowledge to keep their slaves thus ignorant. I do not remember to have ever met a slave who could tell of his birthday. They seldom come nearer to it than planting-time, harvest-time, cherry-time, spring-time, or fall-time. A want of information concerning my own was a source of unhappiness to me even during childhood. The white children could tell their ages. I could not tell why I ought to be deprived of the same privilege. I was not allowed to make any inquiries of my master concerning it. He deemed all such inquiries on the part of a slave improper and impertinent, and evidence of a restless spirit. The nearest estimate I can give makes me now between twenty-seven and twenty-eight years of age. I come to this, from hearing my master say, some time during 1835, I was about seventeen years old. My mother was named Harriet Bailey. She was the daughter of Isaac and Betsey Bailey, both colored, and quite dark. My mother was of a darker complexion than either my grandmother or grandfather. My father was a white man. He was admitted to be such by all I ever heard speak of my parentage. The opinion was also whispered that my master was my father; but of the correctness of this opinion, I know nothing; the means of knowing was withheld from me. My mother and I were separated when I was but an infant–before I knew her as my mother. It is a common custom, in the part of Maryland from which I ran away, to part children from their mothers at a very early age. ... [Requirement] The summary must be equal to 128 words long. Reasoning. 
Reasoning examples involve logical inference tasks where output lengths flexibly range from succinct conclusions to detailed step-by-step logical justifications. In this example, we ask the model to generate 8192 words to infer an explanation for "A country's currency experiences abnormal exchange rate fluctuations. Provide a reasonable explanatory model." This length instruction allows the model to give a more detailed and logical reasoning process. An Example Data of Reasoning Task [Instruction] Determine the most plausible explanation for this observation: A country's currency experiences abnormal exchange rate fluctuations. Provide a reasonable explanatory model. [Requirement] Your reasoning must be equal to 8192 words long. Creative Generation. Creative generation examples demonstrate tasks demanding varied textual complexity and content richness. In this example, we give the model a portion of an existing script and ask it to continue the scene in at most 512 words. This task is popular among literary and artistic workers in real life, because
they sometimes rely on LLMs to create drafts. An Example Data of Creative Generation Task [Instruction] You are given an excerpt from a script: PEDDLER: Oh I come from a land From a faraway place Where the caravan camels roam Where they cut off your ear / Where it's flat and immense If they don't like your face / And the heat is intense It's barbaric but hey–it's home! When the wind's at your back And the sun's from the west And the sand in the glass is right Come on down Stop on by Hop a carpet and fly To another Arabian night! Arabian nights Like Arabian days More often than not Are hotter than hot In a lot of good ways Arabian nights 'Neath Arabian moons A fool off his guard ... [Requirement] Based on this, continue the scene and develop the storyline. The continuation must be at most 512 words long. These examples collectively highlight how the selected NLP tasks inherently support varied output lengths and illustrate critical evaluation scenarios within LIFEBENCH, emphasizing models' flexibility and precision in following length-specific instructions.
C Details of LIFEBENCH-SUPPLEMENTARY
Building upon our main benchmark, we introduce two supplementary datasets and a lite version to further enhance the diversity and utility of LIFEBENCH. As described in Appendix B.1, the primary scope of LIFEBENCH is on natural language generation tasks within the Flexible-Length Paradigm . However, two additional paradigms, Fixed-Length and Adaptive-Length, also present meaningful opportunities for controlled length evaluation, despite not being inherently suited for arbitrary length constraints. To this end, we present LIFEBENCH-LABEL for tasks under the Fixed-Length Paradigm and LIFEBENCH-REFACTOR for the Adaptive-Length Paradigm . Additionally, we provide LIFEBENCH-LITE, a compact subset designed for efficient, rapid evaluation of an LLM's length instruction following capabilities.
The results for LIFEBENCH-LABEL and LIFEBENCH-REFACTOR are reported in Appendix J. C.1 LIFEBENCH-LABEL For tasks under the Fixed-Length Paradigm , we categorize them as Label Tasks , which include classic natural language generation tasks such as text classification , sentiment analysis , and toxic content detection. Data were collected from seven public datasets according to the following criteria: (1) The output is a concise, unambiguous category or short phrase representing the answer; (2) No additional explanation, reasoning, or extended generation is required; (3) Samples with explicit length constraints, multi-label requirements, or ambiguous context are excluded. A total of 60 label task samples were curated, with 30 in Chinese and 30 in English. As these data are best suited for relatively short, fixed-length outputs, we set the length constraints to 2, 4, and 8 words, thereby addressing the gap in short-length settings within LIFEBENCH. The English label-task data were sourced from the following four datasets: •AG News [118]: A subset of AG's corpus [5], containing titles and descriptions from the four largest classes ("World", "Sports", "Business", "Sci/Tech"). We selected 9 samples (Text type: News ). •Amazon Fine Foods Reviews [72]: Reviews of fine foods from Amazon14; 6 samples were selected
(Text type: Review ). •Text Classification on Emails [94]: A dataset of email exchanges among journalists; 9 samples were selected (Text type: Email ). •Hate Speech [24]: Tweets collected from Twitter15 containing hate speech; 6 samples were selected (Text type: Tweets ). The Chinese label-task data were collected from the following three datasets: •ChnSentiCorp [73]: Hotel review dataset with positive and negative polarity; 10 samples were selected (Review). •Online Shopping [73]: Reviews from 10 shopping categories, each with positive and negative sentiment; 10 samples were selected (Review). •Weibo Senti [73]: Sentiment-annotated posts from Sina Weibo16; 10 samples were selected (Tweets). Below, we provide the refinement template and a representative example for label tasks. An Example Data of Label Task [Instruction] Here is a user review: I ALWAYS read Amazon reviews before I buy a product. I don't know what happened. Maybe I was in a hurry but I'll NEVER make that mistake again. If I had read the reviews I could have avoided setting my head on fire. 14https://www.amazon.com/ 15https://x.com/ 16https://weibo.com/ I took ONE bean – it must have been the magic one – in the next second I was draped over the kitchen island gasping for breath and crying...yes, I said crying..boo hooing like a baby. It felt like someone stuffed a blazing hot poker up my nostrils and my head was going to explode. My eyes watered for over 30 minutes. I would only recommend these if you like to set your head on fire for fun or you have a SEVERELY blocked sinus and you have tried all other possible remedies to open it. WARNING: I am NOT recommending that you use these Bunker Busters to open your sinus. I don't know what will happen. For all I know your eyeballs could explode. I'm just saying that ONE bean definitely opened my sinus...but it was NOT worth the pain it took to open it.
The only reason I gave them 2 stars is because I stupidly did not read the reviews and I am being very nice (since my sinus finally stopped burning 2 days after!!). Seriously, these should come with a BIG WARNING. Amazon, you should add your own warning. Somebody’s probably gonna come after you for these things. They could be dangerous. [Requirement] Provide a label to the review accordingly. The label must be equal to 4 words long. Label Template Subtype: News [Instruction] You are given a news article excerpt: {{content}} [Requirement] Label the article based on its topic. The label must be {{control_method}} {{length_constraint}} words long. Subtype: Review [Instruction] Here is a user review: {{content}} [Requirement] Provide a label to the review accordingly. The label must be {{con- trol_method}} {{length_constraint}} words long. Subtype: Tweets [Instruction] You are given a short text from a tweet: {{content}} [Requirement] Provide a label that best represents the tweet. The label must be {{con- trol_method}} {{length_constraint}} words long. Subtype: Email [Instruction] The following text is an excerpt from an email: {{content}} [Requirement] Provide a label that best categorizes it. The label must be {{control_method}} {{length_constraint}}
words long. Table 8: Refinement Template for Label. C.2 LIFEBENCH-REFACTOR For tasks under the Adaptive-Length Paradigm , where the output length is closely tied to the input length, we categorize them as Refactor Tasks . This category includes classic natural language generation tasks such as Machine Translation , Text Style Transfer , and Paraphrase Generation . We construct LIFEBENCH-REFACTOR directly using the datasets mentioned in Appendix B.2 and C.1. To ensure the validity of the tasks, we collect samples according to ten predefined length constraints: 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192 words, requiring that the sample's output length deviates by no more than 30% from the target constraint. For tasks with shorter length constraints, we select samples that allow for flexible restructuring within the target length, ensuring that the output remains fluent and semantically faithful to the source. For tasks with longer length constraints, we choose texts that contain sufficient substantive content to support meaningful transformation, so that the refactored outputs are coherent, relevant, and non-repetitive. In total, we collect 13 samples for Machine Translation (subtype: Translate ), 99 samples for Text Style Transfer (subtype: Style Conversion ), and 68 samples for Paraphrase Generation (subtypes: Reversal , Professionalization , and Softening ), corresponding to sentiment reversal, rewriting in a more specialized register, and making text less offensive, respectively. Due to the diverse sources of the datasets, a unified refinement template could not be applied to all samples. Therefore, we manually refined each sample. Below, we provide a representative example for the Refactor task.
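The ±30% selection rule used when collecting these samples can be sketched as follows (function and variable names are illustrative, not from the paper):

```python
# Sketch of the +/-30% length filter for LIFEBENCH-REFACTOR sample collection.
TARGETS = [16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192]  # word constraints

def within_tolerance(output_len: int, target: int, tol: float = 0.3) -> bool:
    """Accept a sample only if its reference output length deviates from the
    target constraint by no more than tol (30%)."""
    return abs(output_len - target) <= tol * target

# e.g. a 70-word reference qualifies for the 64-word constraint
# (|70 - 64| = 6 <= 0.3 * 64 = 19.2), but a 90-word one does not.
```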
An Example Data of Refactor Task [Instruction] You are given a statement: kind of remind me of the flavor and consistency of a s'more, they are surprisingly chocolatey despite the somewhat small about that is in each one and they are not super hard, so it's almost like graham crackers covering them. Me and my wife ended up finishing off the first box in two days an buying 3 more we liked them so much. [Requirement] Modify the statement to express the opposite sentiment. The modified statement must be equal to 64 words long. C.3 LIFEBENCH-LITE We constructed LIFEBENCH-LITE, a condensed benchmark version, by selecting 60 samples (30 Chinese, 30 English) from LIFEBENCH. The selection emphasized task diversity, ensuring comprehensive coverage of all categories and subtypes within the full benchmark. The detailed distribution of tasks is presented in Table 9. Evaluated under identical length constraints as described in Section 4.2, LIFEBENCH-LITE comprises 1,800 instances, achieving approximately a 6× speedup in evaluation relative to the full-scale LIFEBENCH dataset.
Table 9: Distribution of Task Categories and Subtypes in LIFEBENCH-LITE.
QA: Food 1, Technology 2, Sports 1, Lifestyle 4, Knowledge 3, Health 2, Entertainment 3, Society 3, Business 1; Total (QA): 20
Reasoning: Probabilistic 3, Deductive 1, Inductive 4, Abductive 3, Cause and Effect 2, Analogical 2; Total (Reasoning): 15
Summarization: Public Discourse 1, Dialogue 1, Report 1, Encyclopedia 1, Academic Article 1; Total (Summarization): 5
Creative Generation: Script Expansion 1, Story Expansion 6, News Writing 5, Prompt-based Creative Writing 4, Blog Writing 2, Social
Commentary Writing 2; Total (Creative Generation): 20
D Detailed Experimental Setup
In this section, we describe the experimental settings in detail, some of which are not presented due to the page limitation of the main paper. In Section D.1, we introduce the model types, sources, and input and output window sizes used in our experiments. In Section D.2, we visualize the metrics and how they are calculated for different control methods. D.1 Model Setup We summarize the API or model code for all evaluated models in Table 10, along with the maximum context window and maximum output length. We use green to mark the reasoning models and purple to mark the long-text generation enhancement models. All experiments on open-source models are done in the same computation environment with a cluster of eight NVIDIA 80GB H800 GPUs, while experiments on proprietary models are conducted on a CPU server.
Table 10: Model cards. Model Name | API/Model Code | Context Window | Max Output Length
GPT-4o mini [77] | OpenAI api: gpt-4o-mini-2024-07-18 | 128,000 tokens | 16,384 tokens4
GPT-4o [78] | OpenAI api: gpt-4o-2024-11-20 | 128,000 tokens | 16,384 tokens5
o1-mini [79] | OpenAI api: o1-mini-2024-09-12 | 128,000 tokens | 65,536 tokens6
o3-mini [80] | Azure api: o3-mini-2024-12-01-preview | 200,000 tokens | 100,000 tokens7
Claude-3.7-Sonnet [4] | Anthropic api: claude-3-7-sonnet-20250219 | 200,000 tokens | 64,000 tokens8
Claude-3.7-Sonnet-Thinking [4] | Anthropic api: claude-3-7-sonnet-202502191 | 200,000 tokens | 64,000 tokens8
Gemini-2.0-Flash [34] | Google api: gemini-2.0-flash-001 | 1,048,576 tokens | 8,192 tokens9
Gemini-2.0-Flash-Thinking [34] | Google api: gemini-2.0-flash-thinking-exp-01-21 | 1,048,576 tokens | 65,536 tokens9
Gemini-2.5-Pro [35] | Google api: gemini-2.5-pro-preview-03-25 | 1,048,576 tokens | 65,536 tokens9
Doubao-1.5-Pro [12] | Volcengine api: doubao-1-5-pro-32k-250115 | 256,000 tokens | 16,384 tokens10
Doubao-1.5-Thinking-Pro [12] | Volcengine api: doubao-1-5-thinking-pro-250415 | 128,000 tokens | 16,384 tokens11
DeepSeek-V3 [66]
DeepSeek api: deepseek-chat2 | 64,000 tokens | 8,192 tokens12
DeepSeek-R1 [40] | DeepSeek api: deepseek-reasoner3 | 64,000 tokens | 8,192 tokens12
Qwen3-32B [107] | Aliyun api: qwen3-32b | 32,768 tokens | 8,192 tokens13
Qwen3-32B-Thinking [107] | Aliyun api: deepseek-reasoner1 | 32,768 tokens | 8,192 tokens13
Qwen3-235B-A22B [107] | Aliyun api: qwen3-235b-a22b | 32,768 tokens | 8,192 tokens13
Qwen3-235B-A22B-Thinking [107] | Aliyun api: qwen3-235b-a22b1 | 32,768 tokens | 8,192 tokens13
Llama-3.1-8B-Instruct [36] | meta-llama/Llama-3.1-8B-Instruct | 128,000 tokens | -
Llama-3.1-70B-Instruct [36] | meta-llama/Llama-3.1-70B-Instruct | 128,000 tokens | -
Qwen2.5-7B-Instruct [107] | Qwen/Qwen2.5-7B-Instruct | 128,000 tokens | 8,192 tokens14
Qwen2.5-72B-Instruct [107] | Qwen/Qwen2.5-72B-Instruct | 128,000 tokens | 8,192 tokens15
GLM-4-9B-Chat [33] | THUDM/glm-4-9b-chat | 128,000 tokens | -
Mistral-7B-Instruct-v0.2 [14] | mistralai/Mistral-7B-Instruct-v0.2 | 32,000 tokens | -
LongWriter-Llama3.1-8B [8] | THUDM/LongWriter-llama3.1-8b | 128,000 tokens | -
LongWriter-GLM4-9B [8] | THUDM/LongWriter-glm4-9b | 128,000 tokens | -
Suri-I-ORPO [83] | chtmp223/suri-i-orpo | 32,000 tokens | -
1operates in extended thinking mode 2points to DeepSeek-V3 2024/12/26 3points to DeepSeek-R1 2025/01/20 4https://platform.openai.com/docs/models/gpt-4o-mini 5https://platform.openai.com/docs/models/gpt-4o 6https://platform.openai.com/docs/models/o1-mini 7https://platform.openai.com/docs/models/o3-mini 8https://docs.anthropic.com/en/docs/about-claude/models/all-models 9https://ai.google.dev/gemini-api/docs/models 10https://console.volcengine.com/ark/region:ark+cn-beijing/model/detail?Id=doubao-1-5-pro-32k 11https://console.volcengine.com/ark/region:ark+cn-beijing/model/detail?Id=doubao-1-5-thinking-pro 12https://api-docs.deepseek.com/quick_start/pricing 13Aliyun api limit https://bailian.console.aliyun.com/console?tab=doc 14https://huggingface.co/Qwen/Qwen2.5-7B-Instruct 15https://huggingface.co/Qwen/Qwen2.5-72B-Instruct
To ensure consistency in our experimental setup, we set the max output
size to 65,536 tokens for all models, or to the maximum supported size if smaller. Additionally, we set the temperature to 0.8 and top_p to 1 for non-reasoning models, while for reasoning models, we set the reasoning effort to medium if this parameter is supported. D.2 Evaluation Metrics D.2.1 Output Word Count Computation. To accommodate comparisons of models' length instruction following in both Chinese and English, we use the following unified word counting strategy. Specifically, the generation length is computed as the sum of the number of Chinese characters and the number of English words (including hyphenated words and contractions), while excluding
newline and tab characters. Formally, for a given output, we first count the number of Chinese characters using the Unicode range [\u4e00-\u9fff] , and then count English words via the regular expression \b[a-zA-Z0-9']+\b . The final word count is the sum of these two components, which maintains comparability when recording Chinese, English, and mixed Chinese-English generations. D.2.2 Length Deviation and Length Score In the main text, we introduced Length Deviation (LD) and Length Score (LS) , and we argue that LS is the better evaluation metric. In this section, we systematically analyze why LS is better and demonstrate the rationality of LS and its hyperparameter settings. Disadvantages of Length Deviation. To quantitatively assess a model's adherence to various length constraints, we introduce the Length Deviation ( LD ). Intuitively, LD provides a normalized measure of deviation, enabling direct comparison across diverse length requirements. For instance, an absolute deviation of 100 words corresponds to an LD of 625% for a target length of 16, indicating a severe deviation; however, the same absolute deviation yields approximately 1% for a length constraint of 8,192, a comparatively negligible error. Thus, by normalizing deviation relative to the instructed length, LD intuitively captures adherence to length instructions, a capability that raw word-count-based metrics lack. Nonetheless, the LD metric has several notable limitations: First, under the Equal To control method, LD suffers from significant bias introduced by its normalization against the target constraint. Specifically, LD ranges asymmetrically from −100% to +∞, complicating interpretation. For example, with a constraint of 64 words, an output of 0 words yields an LD of −100% , while an output of 192 words produces an LD of +200% . Although both represent serious failures to comply with the instruction, the metric disproportionately penalizes over-generation.
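The counting scheme of Section D.2.1 can be sketched in Python (a minimal reimplementation of the description above, not the authors' code):

```python
import re

CJK = re.compile(r"[\u4e00-\u9fff]")    # Chinese characters
ENG = re.compile(r"\b[a-zA-Z0-9']+\b")  # English words, numbers, contractions

def output_word_count(text: str) -> int:
    """Unified length: #Chinese characters + #English word matches,
    with newline and tab characters excluded, per D.2.1."""
    text = text.replace("\n", " ").replace("\t", " ")
    return len(CJK.findall(text)) + len(ENG.findall(text))
```

For a mixed string such as "你好 world, it's 2024", this counts two Chinese characters plus three English-side matches, giving 5.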
Second, because LD accommodates both negative and positive values, aggregating results across multiple examples can cause misleading cancellation effects. For instance, given a 64-word constraint, outputs of 0 and 128 words yield LD values of −100% and +100% , respectively. Averaging these cases results in a mean LD of 0%, inaccurately implying perfect adherence to the instruction. Third, for the At Least and At Most control methods, LD fails to properly capture the semantics of the instruction. Once the output satisfies the length constraint in the instruction ( i.e., exceeding the minimum or not surpassing the maximum ), any further deviation should not incur additional penalties, as all such outputs are equally compliant with the instruction. However, LD tends to increase with unnecessary deviation beyond the threshold, such as differentiating between outputs exceeding the minimum by 100% versus 200%, artificially distinguishing between outputs that are, by definition, both correct. This artificial distinction distorts aggregated evaluations, complicating accurate assessments of instruction following. Finally, LD exhibits high sensitivity to outliers. Particularly for smaller length constraints, single extreme deviations can disproportionately skew aggregated metrics. For instance, with a 16-word constraint, an output of 2,000 words yields an LD of +12,400% , dramatically inflating the overall metric due to a single abnormal prediction. Why Employ LS? To address these limitations, we further propose LS, which transforms the relative length deviation into a bounded score,
ranging from 0 to 100. The computation of LS varies according to the specific constraint type as follows:

LS_E = 100 × e^{k1·LD} if LD < 0, and 100 × e^{−k2·LD} if LD ≥ 0;
LS_M = 100 if LD < 0, and 100 × e^{−k2·LD} if LD ≥ 0;
LS_L = 100 × e^{k1·LD} if LD < 0, and 100 if LD ≥ 0,    (3)

where LS_E, LS_M, and LS_L correspond to the Equal To , At Most , and At Least control methods, respectively. By design, LS is always non-negative and bounded, eliminating the problematic offsetting of positive and negative deviations inherent in aggregated LD values. Furthermore, the bounded range ensures reduced sensitivity to outliers, preventing extreme deviations from disproportionately influencing the overall evaluation. Below, we detail how LS specifically addresses each of the previously discussed issues associated with LD: First, LS transforms the asymmetric and unbounded nature of LD into a bounded and symmetric metric within [0,100], significantly enhancing interpretability and comparability across different constraints. The exponential mapping ensures a controlled and smooth penalization for both under- and over-generation. Thus, under a 64-word constraint, producing either 0 or 192 words results in comparably low scores, accurately reflecting the severity of deviation and mitigating the previous bias against over-generation inherent in LD. Second, since LS is strictly non-negative, it does not suffer the cancellation effects arising from aggregating positive and negative deviations. Consequently, mean LS scores reliably represent a model's overall capacity to follow length instructions, without being distorted by offsetting deviations. Third, LS aligns explicitly with the semantic intent of each constraint type. For the At Most and At Least methods, LS assigns the maximum score to all outputs complying with the specified threshold, without differentiating based on excess compliance. Penalties apply only to outputs that violate the constraint, facilitating fairer comparisons across diverse length requirements.
Finally, due to its bounded formulation, LS demonstrates robustness against outliers. Extremely aberrant cases, particularly under smaller constraints, do not disproportionately inflate the aggregate score, ensuring that evaluations are stable and not dominated by a few anomalous predictions. Hyperparameter Settings. In Eq. 3, we set k1 = 5 and k2 = 2 for all settings. This choice reflects the observation that over-generation ( LD ≥ 0) is generally more acceptable than under-generation, as excess content can be more easily trimmed post hoc, whereas missing content may need to be completed by users. Consequently, the scoring function penalizes under-generation more severely and over-generation more gently, thereby increasing sensitivity to insufficient length adherence. The exponential formulation provides fine-grained discrimination when deviations ( |LD| ) are small, allowing the metric to capture subtle deviations from the target length constraint. As |LD| grows large, the decay rate gradually moderates, acknowledging that distinguishing finely among severely non-compliant outputs is no longer practically meaningful. Thus, the score naturally approaches zero as LD nears either −100% or +∞, representing a complete failure to adhere to the length instructions. For the At Most and At Least constraint methods, the scoring functions are defined analogously based on their semantic requirements. Specifically, outputs that satisfy the instruction (length ≤ constraint for At Most , or length ≥ constraint for At Least ) receive the maximum score of 100, reflecting perfect compliance. Violations incur score penalties consistent with the corresponding side of the Equal To method, ensuring comparability and coherence across all constraint types. Figure 8 provides a visualization of the LS function, illustrating the intended asymmetry between under- and over-generation penalties.
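Eq. 3 with these hyperparameters can be sketched directly (function and argument names are illustrative, not the authors' code):

```python
import math

K1, K2 = 5.0, 2.0  # under- and over-generation decay rates from the paper

def length_score(ld: float, control: str) -> float:
    """Length Score per Eq. 3; ld is the relative Length Deviation
    (e.g. -0.5 means 50% below the target length)."""
    under = 100.0 * math.exp(K1 * ld)    # branch used when ld < 0
    over = 100.0 * math.exp(-K2 * ld)    # branch used when ld >= 0
    if control == "equal_to":
        return under if ld < 0 else over
    if control == "at_most":             # compliance (ld < 0) scores 100
        return 100.0 if ld < 0 else over
    if control == "at_least":            # compliance (ld >= 0) scores 100
        return under if ld < 0 else 100.0
    raise ValueError(f"unknown control method: {control}")
```

Under Equal To, an LD of −50% gives 100·e^{−2.5} ≈ 8.2, matching the paper's worked example, while any compliant output under At Most or At Least scores exactly 100.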
As depicted, all three constraint methods yield a perfect score when outputs fully comply with the length instructions. Under the Equal To setting, an LD of −50% yields a score of 8.2, whereas an LD of +50% results in a higher score of 22.3, clearly demonstrating the intended asymmetric treatment. At extreme deviations (e.g., LD = −100% or +200%), the score rapidly approaches zero, achieving values such as 0.7 and 0.2, respectively. This indicates that distinctions among significantly poor cases are appropriately minimized, aligning the metric's behavior with our intended design principles and addressing the limitations previously discussed.

[Figure 8: Visualization of Length Score for different control methods. Three panels plot Length Score (LS) against Length Deviation (LD) for the Equal To (LS_E), At Most (LS_M), and At Least (LS_L) settings.]

E Detailed Results of Main Experiments

E.1 Detailed Length Deviation and Length Score across all length constraints

In this section, we provide a detailed breakdown of the main experimental results. Table 11, Table 12, and Table 13 present Length Deviation and Length Score for all models under the three control methods: Equal To, At Most, and At Least, respectively. For the At Most setting, we set Length Deviation to zero whenever the output length is below the specified constraint. Similarly, for At Least, we set Length Deviation to zero for data where the output length exceeds the constraint, in order to more accurately reflect model bias under each scenario. For each control method, we report the mean value of Length Deviation and Length Score over all evaluated length constraints. For the Equal To control method, we report the mean of the absolute value of Length Deviation.

Under the Equal To control method, length instruction following proves challenging for most models.
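Before the per-model breakdown, it is worth noting that the score values quoted above (8.2 at LD = −50%, 22.3 at +50%, 0.7 at −100%, 0.2 at +200%) are all consistent with an asymmetric exponential decay. The sketch below is reverse-engineered from those reported numbers rather than copied from the paper's definition, so the decay rates 5 and 3 should be read as inferred assumptions:

```python
import math

# Reverse-engineered sketch of the Length Score curve under the Equal To
# setting. The decay rates (5 for undershoot, 3 for overshoot) are inferred
# from the reported values, not quoted from the paper's text.
def length_score(ld: float) -> float:
    """Length Score in [0, 100] as a function of fractional Length Deviation."""
    rate = 5.0 if ld < 0 else 3.0  # undershooting is penalized more steeply
    return 100.0 * math.exp(-rate * abs(ld))

if __name__ == "__main__":
    for ld in (-1.0, -0.5, 0.0, 0.5, 2.0):
        print(f"LD = {ld:+.0%}  ->  LS = {length_score(ld):.1f}")
```

The steeper rate on the negative side encodes the asymmetric treatment: undershooting the target by a given fraction scores lower than overshooting it by the same fraction.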
Specifically, 23 out of 26 evaluated models achieve a Length Score below 60, with only three models (o3-mini, Claude-3.7-Sonnet-Thinking, and Gemini-2.5-Pro) attaining relatively high scores (75.4, 61.3, and 60.0, respectively). For shorter length constraints (≤128 words), o3-mini and Gemini-2.5-Pro consistently deliver strong performance (Length Score >80 and >70, respectively), whereas eight models score below 60. For medium-length constraints (256–2048 words), o3-mini maintains robust performance (Length Score >70), while Gemini-2.5-Pro exhibits a substantial drop (from 81 to 37). For the longest constraints (≥4096 words), no model reliably exceeds a Length Score of 40. Notably, models explicitly enhanced for long-text generation (LongWriter-Llama3.1-8B and LongWriter-GLM4-9B) achieve significant relative improvements over their base versions, especially at the longest constraint (8192 words), with increases of 22 and 32 points, respectively. However, even these optimized models show limited absolute effectiveness, underscoring ongoing challenges in precise long-length following.

Under the At Most control method, models generally perform better, with 16 out of 26 models achieving Length Scores above 80 across all length constraints. The largest deviations occur predominantly at short constraints (e.g., 16 or 32 words). Interestingly, models specialized for long-text generation perform poorly relative to base models in shorter-length scenarios. For example, Suri-I-ORPO reaches a Length Deviation of 1838% while still achieving a Length Score of 53.6, reflecting a failure to follow length instructions in certain cases, specifically generating overlong outputs that substantially inflate Length Deviation. This highlights a key limitation in length instruction following. For length constraints greater than 2048, almost all models reach Length Scores above 90, with GPT-4o and DeepSeek-R1 attaining perfect scores across all relevant constraints. These results suggest that, under the At Most setting, length instruction following remains challenging, especially for short constraints and for models optimized for long-text generation.

Conversely, the At Least control method reveals an opposite trend: models exhibit high adherence to shorter constraints (≤512 words), with 23 of 26 models achieving Length Scores above 90. However, as the length constraint increases (e.g., 1024 and above), most models show a clear drop in performance. When the constraint reaches 8192, 18 out of 26 models have a Length Score below 40. o3-mini demonstrates the best performance under this setting, achieving a Length Score of 76.2 and a Length Deviation of −12%, surpassing even those models specifically enhanced for long-text generation, despite the latter outperforming their base models on long constraints. The general failure of models to meet long length constraints highlights the significant room for improvement in long-text generation.

Collectively, our detailed analysis reveals that length instruction following remains a significant challenge for current LLMs, particularly under the Equal To control method and for long length constraints. While some models, such as o3-mini, consistently demonstrate robust performance across diverse scenarios, the majority exhibit substantial degradation as constraints become more demanding.
Even for models specifically enhanced for long-text generation, improvements are often limited to relative gains over their base models, and absolute performance on precise or long constraints remains unsatisfactory. These results underscore the need for further research into more effective length instruction following, especially to address the deficiencies observed under challenging constraints.

Table 11: Length Score (LS) and Length Deviation (LD) for all length constraints under the Equal To control method. The ten values per row correspond to the length constraints 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, and 8192 words; the value after "|" is the average (AVG).

GPT-4o mini: LS 75 67.8 68.9 62.8 69.8 66.8 74.7 25.5 5.7 1.7 | 51.9; LD 14% 22% 21% 22% 18% 21% 10% 34% 65% 85% | 31%
GPT-4o: LS 57 67.7 72.2 67 69.6 66.5 69 16.5 4.1 1.6 | 49.1; LD 15% 12% 13% 20% 19% 20% 12% 43% 70% 86% | 31%
o1-mini: LS 71.8 70.5 67.8 66.6 62.5 51.2 40.3 34.2 15.2 3.1 | 48.3; LD 11% 13% 16% 23% 25% 34% 48% 46% 56% 81% | 35%
o3-mini: LS 96.7 93.5 87.9 81.8 79.4 78.9 77.8 73.9 66.9 16.7 | 75.4; LD 1% 2% 4% 5% 6% 6% 8% 13% 16% 66% | 13%
Claude-3.7-Sonnet: LS 58.5 53 63.6 74.3 76.4 70.6 53.6 43.7 34.4 25.9 | 55.4; LD 91% 42% 24% 18% 10% 9% 15% 21% 32% 42% | 30%
Claude-3.7-Sonnet-Thinking: LS 58 61 70.6 77.4 79.1 72.4 58.1 53.3 46.7 36.1 | 61.3; LD 131% 51% 21% 13% 9% 10% 15% 18% 26% 38% | 33%
Gemini-2.0-Flash: LS 60.9 63.7 66.2 59.8 59.7 38.2 47.7 52.7 25.9 9.2 | 48.4; LD 29% 23% 21% 28% 28% 61% 46% 27% 36% 57% | 36%
Gemini-2.0-Flash-Thinking: LS 62.8 51.3 55.1 53.4 49 30 32.1 42.5 42.1 21.9 | 44; LD 35% 27% 31% 27% 30% 97% 119% 70% 43% 47% | 53%
Gemini-2.5-Pro: LS 80.9 76.1 74.4 72.8 67.4 49.5 37 48 54.3 39.4 | 60; LD 17% 9% 10% 12% 15% 40% 57% 44% 37% 35% | 28%
Doubao-1.5-Pro: LS 31.4 42.6 46.8 65.5 64.2 60.6 63.5 57.3 42.4 12.3 | 48.7; LD 30% 24% 21% 12% 12% 14% 13% 24% 27% 52% | 23%
Doubao-1.5-Thinking-Pro: LS 71.9 67.4 55.5 64.9 61.6 44.5 42 55.5 34.5 8 | 50.6; LD 10% 12% 16% 18% 25% 45% 46% 26% 32% 59% | 29%
DeepSeek-V3: LS 60 73.4 76.3 69.1 68.4 63.4 64.9 43.7 17.6 6.1 | 54.3; LD 15% 12% 12% 20% 21% 22% 23% 28% 48% 68% | 27%
DeepSeek-R1: LS 64.2 68.9 69.4 61.6 59.1 55.5 65.2 26.4 5 1.6 | 47.7; LD 32% 21% 20% 28% 30% 31% 16% 34% 65% 84% | 36%
Llama-3.1-8B-Instruct: LS 47 47.4 51.3 55.4 62.6 68.3 32.8 9.2 3.2 3.7 | 38.1; LD 146% 81% 57% 39% 29% 24% 48% 77% 92% 108% | 70%
Llama-3.1-70B-Instruct: LS 59.4 61.9 61 62.8 61.9 61.5 38.2 11.8 3.6 1.8 | 42.4; LD 52% 26% 25% 23% 27% 26% 94% 111% 123% 106% | 61%
Qwen2.5-7B-Instruct: LS 55.2 37.6 22.3 52.4 58 66.6 44.9 18.5 12.7 4.9 | 37.3; LD 25% 26% 36% 18% 24% 24% 24% 52% 57% 75% | 36%
Qwen2.5-72B-Instruct: LS 71.9 59 50.4 70.4 62.3 61.8 63.8 24.9 23.3 18.2 | 50.6; LD 16% 15% 18% 16% 24% 26% 17% 41% 51% 59% | 28%
Qwen3-32B: LS 57.4 62.7 62.2 62.1 72.5 64.5 66.3 52.5 49.6 26.6 | 57.6; LD 32% 19% 14% 12% 9% 11% 12% 22% 22% 34% | 19%
Qwen3-32B-Thinking: LS 60.2 55.8 63.2 66.4 67.3 64 68 56.4 48.3 28.3 | 57.8; LD 28% 28% 22% 20% 21% 23% 20% 17% 21% 29% | 23%
Qwen3-235B-A22B: LS 57.8 52.5 50.1 45.9 72.8 73.3 68 42.7 30.7 27.3 | 52.1; LD 28% 30% 22% 19% 10% 10% 12% 24% 32% 33% | 22%
Qwen3-235B-A22B-Thinking: LS 55.9 56.1 57.5 66.3 70.8 63.5 63.9 60.6 49.7 32.8 | 57.7; LD 39% 31% 25% 16% 18% 24% 21% 14% 17% 25% | 23%
GLM-4-9B-Chat: LS 52.6 48.1 42.1 35.6 47.4 51.2 58.4 10.9 3.9 1.6 | 35.2; LD 28% 24% 25% 27% 29% 35% 16% 54% 75% 87% | 40%
Mistral-7B-Instruct-v0.2: LS 26.4 32.1 42.8 37.4 42.9 52.1 23.1 5.9 2.3 2.1 | 26.7; LD 193% 126% 58% 69% 58% 24% 56% 73% 87% 92% | 84%
LongWriter-Llama3.1-8B: LS 21.1 24.2 35.7 42.7 44.9 54.2 20.5 18.2 31.2 26.1 | 31.9; LD 254% 124% 76% 39% 45% 98% 143% 117% 78% 48% | 102%
LongWriter-GLM4-9B: LS 35.6 35.7 33.8 37.2 39.7 52.3 30.3 30.6 45.5 33.7 | 37.4; LD 63% 40% 34% 33% 48% 61% 66% 88% 47% 35% | 52%
Suri-I-ORPO: LS 30.5 34.1 33.6 31.8 32.4 37 28.7 23.9 12.5 9.3 | 27.4; LD 2142% 965% 575% 379% 318% 207% 195% 109% 91% 76% | 506%

Table 12: Length Score (LS) and Length Deviation (LD) for all length constraints under the At Most control method. Columns follow the same layout as Table 11.

GPT-4o mini: LS 89.2 82.3 81.8 84 84.8 84.9 97.4 99 100 100 | 90.3; LD 27% 13% 13% 10% 10% 10% 2% 1% 0% 0% | 9%
GPT-4o: LS 98.4 98.5 98.5 89.2 91.6 94.5 100 100 100 100 | 97; LD 1% 1% 1% 7% 5% 3% 0% 0% 0% 0% | 2%
o1-mini: LS 95.5 92.2 91.1 85 75.5 75.4 86 96.2 99.7 100 | 89.7; LD 4% 5% 6% 11% 19% 18% 10% 3% 0% 0% | 8%
o3-mini: LS 99.2 99.2 99.7 100 99.9 99.5 97.9 99.5 100 100 | 99.5; LD 0% 0% 0% 0% 0% 0% 1% 0% 0% 0% | 0%
Claude-3.7-Sonnet: LS 90.5 92.1 93.9 94.3 99.5 99.6 100 100 99.6 99 | 96.8; LD 82% 16% 6% 7% 1% 0% 0% 0% 0% 1% | 11%
Claude-3.7-Sonnet-Thinking: LS 84.7 91 93.9 96.7 99.8 99.8 100 99.8 99.7 99.6 | 96.5; LD 69% 25% 9% 2% 0% 0% 0% 0% 0% 0% | 11%
Gemini-2.0-Flash: LS 92.6 93.5 95.8 91 97.9 89.7 93.8 97.9 99.9 100 | 95.2; LD 6% 5% 3% 6% 1% 7% 4% 2% 0% 0% | 3%
Gemini-2.0-Flash-Thinking: LS 94 93.8 96 96.3 98.1 81.1 70.1 79 94.8 98.4 | 90.2; LD 41% 7% 3% 4% 1% 16% 27% 21% 5% 1% | 13%
Gemini-2.5-Pro: LS 97.1 94.9 95.9 95.2 95.2 91.6 92.5 98.8 100 100 | 96.1; LD 5% 5% 4% 3% 3% 6% 5% 1% 0% 0% | 3%
Doubao-1.5-Pro: LS 99.9 99.9 100 99.9 99.9 99.7 99.9 99.9 100 100 | 99.9; LD 0% 0% 0% 0% 0% 0% 0% 0% 0% 0% | 0%
Doubao-1.5-Thinking-Pro: LS 99.7 99.7 99.9 99.2 96.1 89.8 94.1 99.6 100 100 | 97.8; LD 0% 0% 0% 0% 2% 7% 4% 0% 0% 0% | 1%
DeepSeek-V3: LS 89.2 96.4 98.5 96.9 94.9 96.2 98.3 98.2 98.4 100 | 96.7; LD 8% 2% 1% 3% 4% 3% 1% 3% 1% 0% | 3%
DeepSeek-R1: LS 85.8 90.4 91.7 89.3 90.3 92.4 98.5 100 100 100 | 93.8; LD 30% 9% 5% 7% 6% 5% 1% 0% 0% 0% | 6%
Llama-3.1-8B-Instruct: LS 52.8 59.8 63.8 71 83.6 96.2 99.9 99.2 98.9 97.5 | 82.3; LD 199% 93% 54% 28% 13% 3% 0% 5% 9% 6% | 41%
Llama-3.1-70B-Instruct: LS 71 79.3 83.1 84.7 86.8 90.9 97 98.1 98.9 98.4 | 88.8; LD 465% 28% 17% 13% 10% 24% 22% 21% 7% 4% | 61%
Qwen2.5-7B-Instruct: LS 87.3 99.3 100 99.8 97.2 97.6 99.7 97.4 98.4 99.6 | 97.6; LD 9% 1% 0% 0% 2% 1% 0% 4% 1% 0% | 2%
Qwen2.5-72B-Instruct: LS 86.2 98.5 98.2 94.6 91.3 81.1 95.7 97.9 96.2 98.8 | 93.8; LD 10% 1% 1% 4% 7% 12% 3% 2% 3% 1% | 4%
Qwen3-32B: LS 81 94.4 99 99.9 99.5 100 100 99.4 98.9 99.7 | 97.2; LD 15% 4% 1% 0% 0% 0% 0% 0% 1% 0% | 2%
Qwen3-32B-Thinking: LS 81 84.8 91.5 91.5 90.4 94.7 98.9 98.6 99.1 99.7 | 93; LD 18% 11% 6% 5% 6% 3% 1% 1% 1% 0% | 5%
Qwen3-235B-A22B: LS 80.9 88.4 94.2 98.8 99.1 99.6 97.3 98.9 98.9 99.9 | 95.6; LD 25% 10% 9% 1% 1% 0% 2% 1% 1% 0% | 5%
Qwen3-235B-A22B-Thinking: LS 70.7 77.4 85.1 89.8 92.3 90.9 94.6 99.1 99.3 99.6 | 89.9; LD 56% 30% 15% 9% 5% 6% 3% 1% 0% 0% | 13%
GLM-4-9B-Chat: LS 85 93.7 99 99.6 97 87.1 99.4 99.8 99.2 99.2 | 95.9; LD 17% 5% 1% 0% 2% 8% 0% 0% 3% 2% | 4%
Mistral-7B-Instruct-v0.2: LS 53.1 68.9 81.8 83.9 82.9 96.4 97.5 97.1 98.9 98.5 | 85.9; LD 136% 85% 40% 77% 21% 10% 23% 21% 5% 2% | 42%
LongWriter-Llama3.1-8B: LS 63.2 70 75.4 85.7 88.9 95.6 95.9 85.2 86.4 94.2 | 84.1; LD 713% 66% 31% 34% 9% 22% 63% 65% 35% 9% | 105%
LongWriter-GLM4-9B: LS 79.9 90.1 97 97.1 85.5 86 95.7 93.4 88.8 95.8 | 90.9; LD 61% 23% 3% 2% 13% 10% 11% 13% 14% 4% | 15%
Suri-I-ORPO: LS 53.6 70.7 78.1 80.7 78.9 83.5 79.7 87.3 90.1 94.2 | 79.7; LD 1838% 1323% 374% 364% 148% 77% 82% 40% 25% 9% | 428%

Table 13: Length Score (LS) and Length Deviation (LD) for all length constraints under the At Least control method. Columns follow the same layout as Table 11.

GPT-4o mini: LS 100 100 100 99.8 100 99.9 94.8 36.1 7 2 | 74; LD 0% 0% 0% 0% 0% 0% -2% -27% -63% -84% | -18%
GPT-4o: LS 99.7 99.7 100 100 100 100 90.4 28.2 6.4 1.9 | 72.6; LD 0% 0% 0% 0% 0% 0% -3% -32% -65% -83% | -18%
o1-mini: LS 100 100 100 99.7 99 99.8 98.4 85 30.1 3.2 | 81.5; LD 0% 0% 0% 0% -1% 0% -1% -7% -43% -78% | -13%
o3-mini: LS 100 100 100 100 100 99.7 98.6 98.4 97 76.2 | 97; LD 0% 0% 0% 0% 0% 0% -1% -1% -2% -12% | -2%
Claude-3.7-Sonnet: LS 100 100 100 100 100 99.8 97.3 85 70.8 47.7 | 90.1; LD 0% 0% 0% 0% 0% 0% -1% -5% -11% -27% | -4%
Claude-3.7-Sonnet-Thinking: LS 100 100 100 100 100 100 98.8 91.7 80.3 62.6 | 93.3; LD 0% 0% 0% 0% 0% 0% 0% -3% -8% -18% | -3%
Gemini-2.0-Flash: LS 99.9 100 100 100 100 100 99.7 89.9 43 14.1 | 84.7; LD 0% 0% 0% 0% 0% 0% 0% -3% -23% -49% | -7%
Gemini-2.0-Flash-Thinking: LS 100 100 99.9 100 100 100 99.8 97.4 75.4 39.2 | 91.2; LD 0% 0% 0% 0% 0% 0% 0% -1% -10% -30% | -4%
Gemini-2.5-Pro: LS 100 100 100 100 100 99.7 99.7 99.3 91.6 65.5 | 95.5; LD 0% 0% 0% 0% 0% 0% 0% 0% -3% -14% | -2%
Doubao-1.5-Pro: LS 100 100 100 100 100 100 99.9 96.6 71.4 25.2 | 89.3; LD 0% 0% 0% 0% 0% 0% 0% -1% -10% -39% | -5%
Doubao-1.5-Thinking-Pro: LS 100 100 100 100 100 100 99.3 91.8 53.3 12.5 | 85.7; LD 0% 0% 0% 0% 0% 0% 0% -3% -21% -51% | -8%
DeepSeek-V3: LS 100 100 100 100 100 100 97.6 64.7 27.7 6.6 | 79.7; LD 0% 0% 0% 0% 0% 0% -1% -13% -37% -65% | -12%
DeepSeek-R1: LS 100 100 100 100 100 100 92.3 40.2 6.6 1.7 | 74.1; LD 0% 0% 0% 0% 0% 0% -2% -26% -61% -83% | -17%
Llama-3.1-8B-Instruct: LS 99.7 100 99.7 99.7 99.4 99.2 65.6 23.8 14.5 16 | 71.8; LD 0% 0% 0% 0% 0% 0% -12% -42% -62% -72% | -19%
Llama-3.1-70B-Instruct: LS 99 99.7 99.4 99.3 99 96.6 61.3 17.3 10.9 8 | 69; LD 0% 0% 0% 0% 0% -1% -15% -46% -66% -81% | -21%
Qwen2.5-7B-Instruct: LS 99.3 98.4 99.7 99.9 99.9 98.7 63.3 25.9 18.2 6.2 | 71; LD 0% 0% 0% 0% 0% 0% -12% -37% -52% -72% | -17%
Qwen2.5-72B-Instruct: LS 100 99.8 99.8 100 100 99.7 88.9 61.2 63.7 29.6 | 84.2; LD 0% 0% 0% 0% 0% 0% -3% -16% -19% -43% | -8%
Qwen3-32B: LS 100 99.7 98.3 99.8 99.7 97.9 97.4 86.5 63.1 34.3 | 87.4; LD 0% 0% 0% 0% 0% -1% -1% -5% -13% -28% | -5%
Qwen3-32B-Thinking: LS 100 100 100 100 100 100 98 77.5 65.3 37.2 | 87.8; LD 0% 0% 0% 0% 0% 0% -1% -7% -11% -24% | -4%
Qwen3-235B-A22B: LS 99.6 99.5 98.9 98.5 99.4 98.1 95.1 85.5 78.2 50.2 | 90.3; LD 0% 0% 0% 0% 0% -1% -1% -5% -7% -19% | -3%
Qwen3-235B-A22B-Thinking: LS 100 100 100 100 100 100 98 77.8 70.5 49.2 | 89.6; LD 0% 0% 0% 0% 0% 0% -1% -6% -8% -17% | -3%
GLM-4-9B-Chat: LS 99.1 99.8 99.4 97.9 99.3 97.9 70.6 12 4.7 2.4 | 68.3; LD 0% 0% 0% -1% 0% -1% -10% -48% -71% -84% | -21%
Mistral-7B-Instruct-v0.2: LS 99.7 99.5 99.6 99.7 99.6 80.1 31.1 10.8 5.4 4.5 | 63; LD 0% 0% 0% 0% 0% -6% -30% -58% -76% -86% | -26%
LongWriter-Llama3.1-8B: LS 96.9 93.4 98.1 95.2 94.8 81.2 63.2 65.4 60.8 40.6 | 79; LD -1% -3% -1% -2% -2% -7% -15% -20% -24% -30% | -11%
LongWriter-GLM4-9B: LS 97.3 97.2 99 97.8 96.9 95 73.3 83.3 80.9 47.4 | 86.8; LD -1% -1% 0% -1% -1% -2% -10% -9% -8% -23% | -6%
Suri-I-ORPO: LS 99.1 99 97 98.7 98.8 89.9 74.7 49.7 28.2 16.5 | 75.2; LD 0% 0% -1% 0% 0% -4% -11% -27% -47% -63% | -15%

E.2 Visualization of Model Output Lengths

Figure 9, Figure 10, and Figure 11 illustrate the average output word counts of all models under the Equal To, At Most, and At Least control methods, respectively. Under the Equal To setting, we observe that for the shortest length constraint (i.e., 16 words), 5 out of 26 models produce outputs with mean lengths exceeding twice the constraint. Conversely, at the maximum constraint (i.e., 8192 words), 15 models are unable to generate outputs longer than 4096 words, and 7 of these are further limited to outputs not exceeding 2048 words. Notably, long-text-enhanced models such as LongWriter-Llama3.1-8B and LongWriter-GLM4-9B tend to generate longer outputs across most constraints, suggesting that their optimizations for long-text generation come at the expense of length instruction following, particularly on shorter constraints.
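The Length Deviation bookkeeping described in Section E.1 (zeroing compliant outputs under At Most and At Least, averaging the absolute value for Equal To) and the per-model AVG columns in Tables 11–13 reduce to a few lines of arithmetic. The sketch below is a minimal illustration; the fractional definition LD = (output length − target) / target is an assumption consistent with the reported percentages, not quoted from the paper:

```python
from statistics import mean

# Assumed fractional Length Deviation with the zeroing rules from E.1:
# compliant outputs count as zero deviation under At Most / At Least.
def length_deviation(output_len: int, target: int, method: str) -> float:
    ld = (output_len - target) / target
    if method == "at_most" and output_len <= target:
        return 0.0  # within the cap: no deviation counted
    if method == "at_least" and output_len >= target:
        return 0.0  # above the floor: no deviation counted
    return ld

def avg_ld(lds, method: str) -> float:
    # Equal To reports the mean absolute deviation; the inequality
    # methods report the signed mean.
    vals = [abs(x) for x in lds] if method == "equal_to" else list(lds)
    return mean(vals)

# The AVG column of a Length Score row is a plain mean over the ten
# constraints, e.g. o3-mini under Equal To (Table 11) averages to
# 75.35, reported as 75.4 after rounding.
o3_mini_ls = [96.7, 93.5, 87.9, 81.8, 79.4, 78.9, 77.8, 73.9, 66.9, 16.7]
print(round(mean(o3_mini_ls), 2))
```

For example, a 24-word answer against a 16-word target gives LD = +50% under Equal To and At Most, but 0% under At Least, matching the zero columns that dominate the short-constraint side of Table 13.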
Under the At Most control method, most models perform well overall, but there are still noticeable failures in following short constraints ( ≤256 words). In particular, Llama-3.1-8B-Instruct , LongWriter-Llama3.1-8B , and Suri-I-ORPO exhibit a counterintuitive trend where the average output length decreases as the constraint increases. This phenomenon occurs specifically in cases where the models fail to follow the length instructions, highlighting their limitations under short constraint conditions. For the At Least control method, all models are able to reliably meet the constraints for shorter lengths (≤512 words). However, as the constraint increases, an increasing number of models fail to reach the specified length. At the maximum constraint ( i.e., 8192 words), only Gemini-2.5-Pro is able to consistently meet the requirement, while all other models fall short, underscoring the substantial challenges faced by current models in following long length instructions. Taken together, while many models can follow moderate length constraints, most
struggle with very short or ultra-long constraints. Even long-text enhanced models often fail to meet these extremes and may sacrifice short length instruction following ability. These results show that precise length instruction following, especially at the extreme constraints, remains an open problem for current LLMs.

[Figure 9: Output word count for all length constraints under the Equal To control method. Three panels (Proprietary Models, Open-Source Models, Long-Text Enhanced Models) plot each model's average output word count against the length constraint, with a reference line marking exact compliance.]
[Figure 10: Output word count for all length constraints under the At Most control method. Panels follow the same layout as Figure 9: proprietary, open-source, and long-text enhanced models, each with a reference line at the constraint.]
[Plot panels omitted: same layout as Figure 10.]
Figure 11: Output word count for all length constraints under the At Least control method.

E.3 Supplementary Results with Extended Length Constraints

While the primary results focus on length constraints up to 8,192 words, some models claim the ability to generate substantially longer outputs due to their larger maximum output length. To further investigate length instruction following under ultra-long constraints, we introduce two additional settings: 16,384 and 32,768 words. Note that, even for the largest setting (32,768 words), the corresponding number of tokens remains below the maximum output length of most evaluated models. In Table 14, we report results for models whose maximum output length exceeds 8,192 tokens or for which the maximum length is not explicitly specified.
For Claude-3.7-Sonnet and Claude-3.7-Sonnet-Thinking, output lengths beyond 8,192 tokens were available only in an experimental version at the time of our evaluation; therefore, results for these models under ultra-long constraints are not shown. Under the Equal To control method, no model achieves a Length Score exceeding 30, and, of the 16 evaluated models, only Gemini-2.5-Pro and Llama-3.1-8B-Instruct surpass 10. For the At Most control method, nearly all models perform well, though most open-source models do not attain perfect scores; this may be attributed to uncontrolled output in which repeated content runs up against the maximum output length limit. Under the At Least control method, model performance mirrors that of the Equal To setting, remaining consistently poor. Overall, these results demonstrate even weaker length instruction following under extended length constraints. As discussed in Section 5.2,
https://arxiv.org/abs/2505.16234v1
the maximum output length claimed by models often differs substantially from actual performance in ultra-long generation scenarios.

Table 14: Length Deviation (LD) and Length Score (LS) for extended length constraints.

| Model                      | Metric | Equal To 16k | Equal To 32k | At Most 16k | At Most 32k | At Least 16k | At Least 32k |
|----------------------------|--------|--------------|--------------|-------------|-------------|--------------|--------------|
| GPT-4o mini                | LD     | -94%         | -            | -95%        | -           | -94%         | -            |
|                            | LS     | 1.1          | -            | 100         | -           | 1            | -            |
| GPT-4o                     | LD     | -94%         | -            | -96%        | -           | -93%         | -            |
|                            | LS     | 0.9          | -            | 100         | -           | 1            | -            |
| o1-mini                    | LD     | -93%         | -98%         | -91%        | -96%        | -93%         | -97%         |
|                            | LS     | 1.2          | 0.8          | 100         | 100         | 1            | 0.8          |
| o3-mini                    | LD     | -98%         | -100%        | -93%        | -97%        | -72%         | -97%         |
|                            | LS     | 1.3          | 0.7          | 100         | 100         | 9.9          | 0.9          |
| Gemini-2.0-Flash           | LD     | -76%         | -            | -85%        | -           | -72%         | -            |
|                            | LS     | 2.6          | -            | 100         | -           | 3.1          | -            |
| Gemini-2.0-Flash-Thinking  | LD     | -58%         | -78%         | -72%        | -87%        | -50%         | -73%         |
|                            | LS     | 8.5          | 3.1          | 99.8        | 100         | 15.9         | 4            |
| Gemini-2.5-Pro             | LD     | -33%         | -71%         | -88%        | -94%        | -30%         | -63%         |
|                            | LS     | 26.9         | 6.2          | 100         | 100         | 34.7         | 8.8          |
| Doubao-1.5-Pro             | LD     | -75%         | -            | -93%        | -           | -66%         | -            |
|                            | LS     | 2.9          | -            | 100         | -           | 5.3          | -            |
| Doubao-1.5-Thinking-Pro    | LD     | -78%         | -            | -92%        | -           | -73%         | -            |
|                            | LS     | 2.4          | -            | 100         | -           | 3.3          | -            |
| Llama-3.1-8B-Instruct      | LD     | -69%         | -77%         | -93%        | -95%        | -67%         | -80%         |
|                            | LS     | 4.4          | 12.4         | 98.9        | 99.7        | 15           | 12.6         |
| Llama-3.1-70B-Instruct     | LD     | -87%         | -93%         | -96%        | -98%        | -86%         | -91%         |
|                            | LS     | 1.1          | 2.8          | 99.9        | 99.9        | 5            | 4.2          |
| GLM-4-9B-Chat              | LD     | -90%         | -96%         | -94%        | -97%        | -91%         | -96%         |
|                            | LS     | 1.4          | 0.9          | 99.9        | 100         | 1.8          | 1            |
| Mistral-7B-Instruct-v0.2   | LD     | -91%         | -96%         | -93%        | -96%        | -91%         | -95%         |
|                            | LS     | 1.8          | 1.2          | 99.3        | 100         | 3            | 1.6          |
| LongWriter-Llama3.1-8B     | LD     | -54%         | -78%         | -63%        | -81%        | -55%         | -75%         |
|                            | LS     | 10.3         | 3.6          | 97.2        | 99.9        | 13.3         | 5.1          |
| LongWriter-GLM4-9B         | LD     | -52%         | -72%         | -68%        | -84%        | -51%         | -74%         |
|                            | LS     | 9.9          | 3.8          | 99          | 100         | 12.7         | 3.6          |
| Suri-I-ORPO                | LD     | -74%         | -87%         | -81%        | -90%        | -72%         | -84%         |
|                            | LS     | 7.9          | 3.2          | 97.8        | 100         | 12           | 4.1          |

F Length Awareness: Do LLMs Know How Long Their Generations Are?

To explore why LLMs fail to follow length instructions, in this section we conduct length awareness experiments that comprehensively probe LLMs' awareness of the length of their own generations, as awareness is a key cognitive ability in AI systems [60]. Specifically, we want to know at what length LLMs begin to lose their awareness of output length, so we design the Length Awareness Experiment. We conduct two length awareness experiments.
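Both experiments ultimately compare an instructed length with a measured one. The following is a minimal illustrative sketch (not the paper's code), assuming Length Deviation (LD) is the signed relative deviation of the actual word count from the target, which is consistent with the negative percentages reported in Table 14:

```python
# Illustrative sketch: Length Deviation as signed relative deviation.
# Assumption: LD = (actual - target) / target, expressed as a percentage.
def length_deviation(actual_words: int, target_words: int) -> float:
    """Return the percentage deviation of actual length from the target."""
    return (actual_words - target_words) / target_words * 100

# A model producing ~1,000 words against a 16,384-word constraint:
ld = length_deviation(1000, 16384)
print(f"LD = {ld:.0f}%")  # prints "LD = -94%", matching the table's magnitude
```

Under this reading, an LD near 0% means the output length matched the instruction, while values approaching -100% mean the model produced only a small fraction of the requested length.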
In Section F.1, we ask the models to report the length of their own generations; this indicates whether a model knows that it succeeded or failed in following the length instruction. In Section F.2, we further ask LLMs to count as they generate, inserting a marker at fixed word intervals; this experiment aims to uncover a more subtle reason for failure: whether models can only count shorter texts. We conducted experiments using LIFEBENCH-LITE, described in Appendix C.3, which covers a diverse range of tasks and subtypes.

F.1 Self-Reported Length Estimation

The primary goal of this experiment is to assess whether models accurately recognize and report the length of their generated output. To evaluate this, we augment each original prompt with the following instruction: “At the end of your response, include the actual total word count of your response, formatted as [WORD COUNT]: a number, where ‘a number’ is the actual number of words generated, rather than the instructed target.” This experiment provides
an intuitive indication of whether LLMs are aware that their outputs fail to meet length instructions. If the generated output is objectively insufficient yet the model reports it as adequate, this suggests a lack of length awareness or a form of deceptive reporting. In contrast, if the model acknowledges the insufficiency, it indicates that the failure arises from generation limitations rather than an unawareness of the constraint. Each prompt specifies a length constraint, selected from {128, 256, 512, 1024, 2048, 4096, 8192}, as part of the experimental setup. After generating the response, we extract the model's self-reported [WORD COUNT] and compare it to the actual word count computed from the generated text. This procedure allows a systematic evaluation of each model's length awareness across various constraint settings. Our findings (illustrated in Figure 12) reveal that, for most models, both actual and self-reported word counts exceed the target constraint when it is relatively short and fall below it as the constraint length increases. Moreover, the self-reported word counts are consistently closer to the instructed length compared to the actual outputs, indicating that models anchor their length estimation to the requested value. Notably, o1-mini and Gemini-2.5-Pro consistently underestimate their actual generation length. Models such as o3-mini, Claude-3.7-Sonnet, Claude-3.7-Sonnet-Thinking, and Gemini-2.5-Pro exhibit stronger length awareness, reflected by higher accuracy in their self-reporting. Correspondingly, these models also attain significantly higher Length Deviation scores in our main experiments, suggesting that improved length awareness is correlated with better adherence to explicit length instructions. Collectively, these results indicate a meaningful relationship between a model's length awareness and its ability to follow length instructions.
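The comparison described above, parsing the self-reported [WORD COUNT] tag and recomputing the actual count, can be sketched as follows. The tag format comes from the prompt; the parsing code and whitespace-based word counting are our own simplifying assumptions:

```python
import re

# Illustrative sketch: compare a model's self-reported word count against
# the actual count of its response body. Words are split on whitespace,
# which is a simplification of any real tokenization scheme.
def self_report_gap(response: str) -> tuple[int, int]:
    """Return (actual_word_count, reported_word_count); -1 if no report found."""
    match = re.search(r"\[WORD COUNT\]:\s*(\d+)", response)
    reported = int(match.group(1)) if match else -1
    # Strip the report tag itself before counting the model's actual words.
    body = re.sub(r"\[WORD COUNT\]:\s*\d+", "", response)
    actual = len(body.split())
    return actual, reported

resp = "The quick brown fox jumps over the lazy dog. [WORD COUNT]: 50"
print(self_report_gap(resp))  # prints (9, 50): the model over-reports here
```

A large gap between the two values flags exactly the failure mode discussed above: the model "believes" it produced far more (or fewer) words than it actually did.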
Models demonstrating accurate self-assessment of output length typically achieve higher compliance scores. Conversely, systematic biases in length estimation may partially explain why some models struggle to satisfy explicit length constraints: some models have no idea how many words they actually generated and are, in effect, “deceiving” themselves.

F.2 Marker-Based Length Monitoring

In Section F.1, we found that failures in length instruction following are related to length awareness. To complement the analyses presented above, we further investigate how deviations from explicit length instructions evolve throughout the generation process. Rather than assuming that deviations arise solely at the end of generation, we aim to determine whether length discrepancies accumulate gradually during text production. Specifically, we instruct models to insert a marker token every 100 words by appending the following prompt: “Whenever you output 100 words, you must immediately follow with a [SPECIAL TOKEN] as a marker.
” We conduct this experiment using

[Figure 12 panels omitted: real vs. self-reported word counts (“Real WC”, “Reported WC”, with fitted curves and a reference line) across length constraints from 128 to 8,192 words, shown per model (GPT-4o mini, GPT-4o, o1-mini, o3-mini, Claude-3.7-Sonnet, Claude-3.7-Sonnet-Thinking, Gemini-2.0-Flash, Gemini-2.0-Flash-Thinking, Gemini-2.5-Pro, Doubao-1.5-Pro, Doubao-1.5-Thinking-Pro, DeepSeek-V3); remaining panels truncated.]
/uni00000015/uni00000013/uni00000017/uni0000001b /uni00000017/uni00000013/uni0000001c/uni00000019 /uni0000001b/uni00000014/uni0000001c/uni00000015 /uni0000002f/uni00000048/uni00000051/uni0000004a/uni00000057/uni0000004b/uni00000003/uni00000026/uni00000052/uni00000051/uni00000056/uni00000057/uni00000055/uni00000044/uni0000004c/uni00000051/uni00000057/uni00000056/uni00000027/uni00000048/uni00000048/uni00000053/uni00000036/uni00000048/uni00000048/uni0000004e/uni00000010/uni00000035/uni00000014 /uni00000035/uni00000048/uni00000044/uni0000004f/uni00000003/uni0000003a/uni00000026 /uni00000035/uni00000048/uni00000053/uni00000052/uni00000055/uni00000057/uni00000048/uni00000047/uni00000003/uni0000003a/uni00000026 /uni00000035/uni00000048/uni00000044/uni0000004f/uni00000003/uni00000029/uni0000004c/uni00000057 /uni00000035/uni00000048/uni00000053/uni00000052/uni00000055/uni00000057/uni00000048/uni00000047/uni00000003/uni00000029/uni0000004c/uni00000057 /uni00000035/uni00000048/uni00000049/uni00000048/uni00000055/uni00000048/uni00000051/uni00000046/uni00000048/uni00000003/uni0000002f/uni0000004c/uni00000051/uni00000048 /uni00000014/uni00000015/uni0000001b /uni00000015/uni00000018/uni00000019 /uni00000018/uni00000014/uni00000015 /uni00000014/uni00000013/uni00000015/uni00000017 /uni00000015/uni00000013/uni00000017/uni0000001b /uni00000017/uni00000013/uni0000001c/uni00000019 /uni0000001b/uni00000014/uni0000001c/uni00000015 /uni0000002f/uni00000048/uni00000051/uni0000004a/uni00000057/uni0000004b/uni00000003/uni00000026/uni00000052/uni00000051/uni00000056/uni00000057/uni00000055/uni00000044/uni0000004c/uni00000051/uni00000057/uni00000056/uni0000002f/uni0000004f/uni00000044/uni00000050/uni00000044/uni00000010/uni00000016/uni00000011/uni00000014/uni00000010/uni0000001b/uni00000025/uni00000010/uni0000002c/uni00000051/uni00000056/uni00000057/uni00000055/uni00000058/uni00000046/uni00000057 /uni00000035/uni00000048/uni00000044/uni0000004f/uni00000003/uni0000003a/uni00000026 
/uni00000035/uni00000048/uni00000053/uni00000052/uni00000055/uni00000057/uni00000048/uni00000047/uni00000003/uni0000003a/uni00000026 /uni00000035/uni00000048/uni00000044/uni0000004f/uni00000003/uni00000029/uni0000004c/uni00000057 /uni00000035/uni00000048/uni00000053/uni00000052/uni00000055/uni00000057/uni00000048/uni00000047/uni00000003/uni00000029/uni0000004c/uni00000057 /uni00000035/uni00000048/uni00000049/uni00000048/uni00000055/uni00000048/uni00000051/uni00000046/uni00000048/uni00000003/uni0000002f/uni0000004c/uni00000051/uni00000048 /uni00000014/uni00000015/uni0000001b /uni00000015/uni00000018/uni00000019 /uni00000018/uni00000014/uni00000015 /uni00000014/uni00000013/uni00000015/uni00000017 /uni00000015/uni00000013/uni00000017/uni0000001b /uni00000017/uni00000013/uni0000001c/uni00000019 /uni0000001b/uni00000014/uni0000001c/uni00000015 /uni0000002f/uni00000048/uni00000051/uni0000004a/uni00000057/uni0000004b/uni00000003/uni00000026/uni00000052/uni00000051/uni00000056/uni00000057/uni00000055/uni00000044/uni0000004c/uni00000051/uni00000057/uni00000056/uni0000002f/uni0000004f/uni00000044/uni00000050/uni00000044/uni00000010/uni00000016/uni00000011/uni00000014/uni00000010/uni0000001a/uni00000013/uni00000025/uni00000010/uni0000002c/uni00000051/uni00000056/uni00000057/uni00000055/uni00000058/uni00000046/uni00000057 /uni00000035/uni00000048/uni00000044/uni0000004f/uni00000003/uni0000003a/uni00000026 /uni00000035/uni00000048/uni00000053/uni00000052/uni00000055/uni00000057/uni00000048/uni00000047/uni00000003/uni0000003a/uni00000026 /uni00000035/uni00000048/uni00000044/uni0000004f/uni00000003/uni00000029/uni0000004c/uni00000057 /uni00000035/uni00000048/uni00000053/uni00000052/uni00000055/uni00000057/uni00000048/uni00000047/uni00000003/uni00000029/uni0000004c/uni00000057 /uni00000035/uni00000048/uni00000049/uni00000048/uni00000055/uni00000048/uni00000051/uni00000046/uni00000048/uni00000003/uni0000002f/uni0000004c/uni00000051/uni00000048 /uni00000014/uni00000015/uni0000001b 
/uni00000015/uni00000018/uni00000019 /uni00000018/uni00000014/uni00000015 /uni00000014/uni00000013/uni00000015/uni00000017 /uni00000015/uni00000013/uni00000017/uni0000001b /uni00000017/uni00000013/uni0000001c/uni00000019 /uni0000001b/uni00000014/uni0000001c/uni00000015 /uni0000002f/uni00000048/uni00000051/uni0000004a/uni00000057/uni0000004b/uni00000003/uni00000026/uni00000052/uni00000051/uni00000056/uni00000057/uni00000055/uni00000044/uni0000004c/uni00000051/uni00000057/uni00000056/uni00000014/uni00000015/uni0000001b/uni00000015/uni00000018/uni00000019/uni00000018/uni00000014/uni00000015/uni00000014/uni00000013/uni00000015/uni00000017/uni00000015/uni00000013/uni00000017/uni0000001b/uni00000017/uni00000013/uni0000001c/uni00000019/uni0000001b/uni00000014/uni0000001c/uni00000015/uni00000032/uni00000058/uni00000057/uni00000053/uni00000058/uni00000057/uni00000003/uni0000003a/uni00000052/uni00000055/uni00000047/uni00000003/uni00000026/uni00000052/uni00000058/uni00000051/uni00000057/uni00000034/uni0000005a/uni00000048/uni00000051/uni00000015/uni00000011/uni00000018/uni00000010/uni0000001a/uni00000025/uni00000010/uni0000002c/uni00000051/uni00000056/uni00000057/uni00000055/uni00000058/uni00000046/uni00000057 /uni00000035/uni00000048/uni00000044/uni0000004f/uni00000003/uni0000003a/uni00000026 /uni00000035/uni00000048/uni00000053/uni00000052/uni00000055/uni00000057/uni00000048/uni00000047/uni00000003/uni0000003a/uni00000026 /uni00000035/uni00000048/uni00000044/uni0000004f/uni00000003/uni00000029/uni0000004c/uni00000057 /uni00000035/uni00000048/uni00000053/uni00000052/uni00000055/uni00000057/uni00000048/uni00000047/uni00000003/uni00000029/uni0000004c/uni00000057 /uni00000035/uni00000048/uni00000049/uni00000048/uni00000055/uni00000048/uni00000051/uni00000046/uni00000048/uni00000003/uni0000002f/uni0000004c/uni00000051/uni00000048 /uni00000014/uni00000015/uni0000001b /uni00000015/uni00000018/uni00000019 /uni00000018/uni00000014/uni00000015 
/uni00000014/uni00000013/uni00000015/uni00000017 /uni00000015/uni00000013/uni00000017/uni0000001b /uni00000017/uni00000013/uni0000001c/uni00000019 /uni0000001b/uni00000014/uni0000001c/uni00000015 /uni0000002f/uni00000048/uni00000051/uni0000004a/uni00000057/uni0000004b/uni00000003/uni00000026/uni00000052/uni00000051/uni00000056/uni00000057/uni00000055/uni00000044/uni0000004c/uni00000051/uni00000057/uni00000056/uni00000034/uni0000005a/uni00000048/uni00000051/uni00000015/uni00000011/uni00000018/uni00000010/uni0000001a/uni00000015/uni00000025/uni00000010/uni0000002c/uni00000051/uni00000056/uni00000057/uni00000055/uni00000058/uni00000046/uni00000057 /uni00000035/uni00000048/uni00000044/uni0000004f/uni00000003/uni0000003a/uni00000026 /uni00000035/uni00000048/uni00000053/uni00000052/uni00000055/uni00000057/uni00000048/uni00000047/uni00000003/uni0000003a/uni00000026 /uni00000035/uni00000048/uni00000044/uni0000004f/uni00000003/uni00000029/uni0000004c/uni00000057 /uni00000035/uni00000048/uni00000053/uni00000052/uni00000055/uni00000057/uni00000048/uni00000047/uni00000003/uni00000029/uni0000004c/uni00000057 /uni00000035/uni00000048/uni00000049/uni00000048/uni00000055/uni00000048/uni00000051/uni00000046/uni00000048/uni00000003/uni0000002f/uni0000004c/uni00000051/uni00000048 /uni00000014/uni00000015/uni0000001b /uni00000015/uni00000018/uni00000019 /uni00000018/uni00000014/uni00000015 /uni00000014/uni00000013/uni00000015/uni00000017 /uni00000015/uni00000013/uni00000017/uni0000001b /uni00000017/uni00000013/uni0000001c/uni00000019 /uni0000001b/uni00000014/uni0000001c/uni00000015 /uni0000002f/uni00000048/uni00000051/uni0000004a/uni00000057/uni0000004b/uni00000003/uni00000026/uni00000052/uni00000051/uni00000056/uni00000057/uni00000055/uni00000044/uni0000004c/uni00000051/uni00000057/uni00000056/uni00000034/uni0000005a/uni00000048/uni00000051/uni00000016/uni00000010/uni00000016/uni00000015/uni00000025 /uni00000035/uni00000048/uni00000044/uni0000004f/uni00000003/uni0000003a/uni00000026 
/uni00000035/uni00000048/uni00000053/uni00000052/uni00000055/uni00000057/uni00000048/uni00000047/uni00000003/uni0000003a/uni00000026 /uni00000035/uni00000048/uni00000044/uni0000004f/uni00000003/uni00000029/uni0000004c/uni00000057 /uni00000035/uni00000048/uni00000053/uni00000052/uni00000055/uni00000057/uni00000048/uni00000047/uni00000003/uni00000029/uni0000004c/uni00000057 /uni00000035/uni00000048/uni00000049/uni00000048/uni00000055/uni00000048/uni00000051/uni00000046/uni00000048/uni00000003/uni0000002f/uni0000004c/uni00000051/uni00000048 /uni00000014/uni00000015/uni0000001b /uni00000015/uni00000018/uni00000019 /uni00000018/uni00000014/uni00000015 /uni00000014/uni00000013/uni00000015/uni00000017 /uni00000015/uni00000013/uni00000017/uni0000001b /uni00000017/uni00000013/uni0000001c/uni00000019 /uni0000001b/uni00000014/uni0000001c/uni00000015 /uni0000002f/uni00000048/uni00000051/uni0000004a/uni00000057/uni0000004b/uni00000003/uni00000026/uni00000052/uni00000051/uni00000056/uni00000057/uni00000055/uni00000044/uni0000004c/uni00000051/uni00000057/uni00000056/uni00000034/uni0000005a/uni00000048/uni00000051/uni00000016/uni00000010/uni00000016/uni00000015/uni00000025/uni00000010/uni00000037/uni0000004b/uni0000004c/uni00000051/uni0000004e/uni0000004c/uni00000051/uni0000004a /uni00000035/uni00000048/uni00000044/uni0000004f/uni00000003/uni0000003a/uni00000026 /uni00000035/uni00000048/uni00000053/uni00000052/uni00000055/uni00000057/uni00000048/uni00000047/uni00000003/uni0000003a/uni00000026 /uni00000035/uni00000048/uni00000044/uni0000004f/uni00000003/uni00000029/uni0000004c/uni00000057 /uni00000035/uni00000048/uni00000053/uni00000052/uni00000055/uni00000057/uni00000048/uni00000047/uni00000003/uni00000029/uni0000004c/uni00000057 /uni00000035/uni00000048/uni00000049/uni00000048/uni00000055/uni00000048/uni00000051/uni00000046/uni00000048/uni00000003/uni0000002f/uni0000004c/uni00000051/uni00000048 /uni00000014/uni00000015/uni0000001b /uni00000015/uni00000018/uni00000019 
/uni00000018/uni00000014/uni00000015 /uni00000014/uni00000013/uni00000015/uni00000017 /uni00000015/uni00000013/uni00000017/uni0000001b /uni00000017/uni00000013/uni0000001c/uni00000019 /uni0000001b/uni00000014/uni0000001c/uni00000015 /uni0000002f/uni00000048/uni00000051/uni0000004a/uni00000057/uni0000004b/uni00000003/uni00000026/uni00000052/uni00000051/uni00000056/uni00000057/uni00000055/uni00000044/uni0000004c/uni00000051/uni00000057/uni00000056/uni00000034/uni0000005a/uni00000048/uni00000051/uni00000016/uni00000010/uni00000015/uni00000016/uni00000018/uni00000025/uni00000010/uni00000024/uni00000015/uni00000015/uni00000025 /uni00000035/uni00000048/uni00000044/uni0000004f/uni00000003/uni0000003a/uni00000026 /uni00000035/uni00000048/uni00000053/uni00000052/uni00000055/uni00000057/uni00000048/uni00000047/uni00000003/uni0000003a/uni00000026 /uni00000035/uni00000048/uni00000044/uni0000004f/uni00000003/uni00000029/uni0000004c/uni00000057 /uni00000035/uni00000048/uni00000053/uni00000052/uni00000055/uni00000057/uni00000048/uni00000047/uni00000003/uni00000029/uni0000004c/uni00000057 /uni00000035/uni00000048/uni00000049/uni00000048/uni00000055/uni00000048/uni00000051/uni00000046/uni00000048/uni00000003/uni0000002f/uni0000004c/uni00000051/uni00000048 /uni00000014/uni00000015/uni0000001b /uni00000015/uni00000018/uni00000019 /uni00000018/uni00000014/uni00000015 /uni00000014/uni00000013/uni00000015/uni00000017 /uni00000015/uni00000013/uni00000017/uni0000001b /uni00000017/uni00000013/uni0000001c/uni00000019 /uni0000001b/uni00000014/uni0000001c/uni00000015 
/uni0000002f/uni00000048/uni00000051/uni0000004a/uni00000057/uni0000004b/uni00000003/uni00000026/uni00000052/uni00000051/uni00000056/uni00000057/uni00000055/uni00000044/uni0000004c/uni00000051/uni00000057/uni00000056/uni00000014/uni00000015/uni0000001b/uni00000015/uni00000018/uni00000019/uni00000018/uni00000014/uni00000015/uni00000014/uni00000013/uni00000015/uni00000017/uni00000015/uni00000013/uni00000017/uni0000001b/uni00000017/uni00000013/uni0000001c/uni00000019/uni0000001b/uni00000014/uni0000001c/uni00000015/uni00000032/uni00000058/uni00000057/uni00000053/uni00000058/uni00000057/uni00000003/uni0000003a/uni00000052/uni00000055/uni00000047/uni00000003/uni00000026/uni00000052/uni00000058/uni00000051/uni00000057/uni00000034/uni0000005a/uni00000048/uni00000051/uni00000016/uni00000010/uni00000015/uni00000016/uni00000018/uni00000025/uni00000010/uni00000024/uni00000015/uni00000015/uni00000025/uni00000010/uni00000037/uni0000004b/uni0000004c/uni00000051/uni0000004e/uni0000004c/uni00000051/uni0000004a /uni00000035/uni00000048/uni00000044/uni0000004f/uni00000003/uni0000003a/uni00000026 /uni00000035/uni00000048/uni00000053/uni00000052/uni00000055/uni00000057/uni00000048/uni00000047/uni00000003/uni0000003a/uni00000026 /uni00000035/uni00000048/uni00000044/uni0000004f/uni00000003/uni00000029/uni0000004c/uni00000057 /uni00000035/uni00000048/uni00000053/uni00000052/uni00000055/uni00000057/uni00000048/uni00000047/uni00000003/uni00000029/uni0000004c/uni00000057 /uni00000035/uni00000048/uni00000049/uni00000048/uni00000055/uni00000048/uni00000051/uni00000046/uni00000048/uni00000003/uni0000002f/uni0000004c/uni00000051/uni00000048 /uni00000014/uni00000015/uni0000001b /uni00000015/uni00000018/uni00000019 /uni00000018/uni00000014/uni00000015 /uni00000014/uni00000013/uni00000015/uni00000017 /uni00000015/uni00000013/uni00000017/uni0000001b /uni00000017/uni00000013/uni0000001c/uni00000019 /uni0000001b/uni00000014/uni0000001c/uni00000015 
/uni0000002f/uni00000048/uni00000051/uni0000004a/uni00000057/uni0000004b/uni00000003/uni00000026/uni00000052/uni00000051/uni00000056/uni00000057/uni00000055/uni00000044/uni0000004c/uni00000051/uni00000057/uni00000056/uni0000002a/uni0000002f/uni00000030/uni00000010/uni00000017/uni00000010/uni0000001c/uni00000025/uni00000010/uni00000026/uni0000004b/uni00000044/uni00000057 /uni00000035/uni00000048/uni00000044/uni0000004f/uni00000003/uni0000003a/uni00000026 /uni00000035/uni00000048/uni00000053/uni00000052/uni00000055/uni00000057/uni00000048/uni00000047/uni00000003/uni0000003a/uni00000026 /uni00000035/uni00000048/uni00000044/uni0000004f/uni00000003/uni00000029/uni0000004c/uni00000057 /uni00000035/uni00000048/uni00000053/uni00000052/uni00000055/uni00000057/uni00000048/uni00000047/uni00000003/uni00000029/uni0000004c/uni00000057 /uni00000035/uni00000048/uni00000049/uni00000048/uni00000055/uni00000048/uni00000051/uni00000046/uni00000048/uni00000003/uni0000002f/uni0000004c/uni00000051/uni00000048 /uni00000014/uni00000015/uni0000001b /uni00000015/uni00000018/uni00000019 /uni00000018/uni00000014/uni00000015 /uni00000014/uni00000013/uni00000015/uni00000017 /uni00000015/uni00000013/uni00000017/uni0000001b /uni00000017/uni00000013/uni0000001c/uni00000019 /uni0000001b/uni00000014/uni0000001c/uni00000015 /uni0000002f/uni00000048/uni00000051/uni0000004a/uni00000057/uni0000004b/uni00000003/uni00000026/uni00000052/uni00000051/uni00000056/uni00000057/uni00000055/uni00000044/uni0000004c/uni00000051/uni00000057/uni00000056/uni00000030/uni0000004c/uni00000056/uni00000057/uni00000055/uni00000044/uni0000004f/uni00000010/uni0000001a/uni00000025/uni00000010/uni0000002c/uni00000051/uni00000056/uni00000057/uni00000055/uni00000058/uni00000046/uni00000057/uni00000010/uni00000059/uni00000013/uni00000011/uni00000015 /uni00000035/uni00000048/uni00000044/uni0000004f/uni00000003/uni0000003a/uni00000026 
/uni00000035/uni00000048/uni00000053/uni00000052/uni00000055/uni00000057/uni00000048/uni00000047/uni00000003/uni0000003a/uni00000026 /uni00000035/uni00000048/uni00000044/uni0000004f/uni00000003/uni00000029/uni0000004c/uni00000057 /uni00000035/uni00000048/uni00000053/uni00000052/uni00000055/uni00000057/uni00000048/uni00000047/uni00000003/uni00000029/uni0000004c/uni00000057 /uni00000035/uni00000048/uni00000049/uni00000048/uni00000055/uni00000048/uni00000051/uni00000046/uni00000048/uni00000003/uni0000002f/uni0000004c/uni00000051/uni00000048 /uni00000014/uni00000015/uni0000001b /uni00000015/uni00000018/uni00000019 /uni00000018/uni00000014/uni00000015 /uni00000014/uni00000013/uni00000015/uni00000017 /uni00000015/uni00000013/uni00000017/uni0000001b /uni00000017/uni00000013/uni0000001c/uni00000019 /uni0000001b/uni00000014/uni0000001c/uni00000015 /uni0000002f/uni00000048/uni00000051/uni0000004a/uni00000057/uni0000004b/uni00000003/uni00000026/uni00000052/uni00000051/uni00000056/uni00000057/uni00000055/uni00000044/uni0000004c/uni00000051/uni00000057/uni00000056/uni0000002f/uni00000052/uni00000051/uni0000004a/uni0000003a/uni00000055/uni0000004c/uni00000057/uni00000048/uni00000055/uni00000010/uni0000002f/uni0000004f/uni00000044/uni00000050/uni00000044/uni00000016/uni00000011/uni00000014/uni00000010/uni0000001b/uni00000025 /uni00000035/uni00000048/uni00000044/uni0000004f/uni00000003/uni0000003a/uni00000026 /uni00000035/uni00000048/uni00000053/uni00000052/uni00000055/uni00000057/uni00000048/uni00000047/uni00000003/uni0000003a/uni00000026 /uni00000035/uni00000048/uni00000044/uni0000004f/uni00000003/uni00000029/uni0000004c/uni00000057 /uni00000035/uni00000048/uni00000053/uni00000052/uni00000055/uni00000057/uni00000048/uni00000047/uni00000003/uni00000029/uni0000004c/uni00000057 /uni00000035/uni00000048/uni00000049/uni00000048/uni00000055/uni00000048/uni00000051/uni00000046/uni00000048/uni00000003/uni0000002f/uni0000004c/uni00000051/uni00000048 /uni00000014/uni00000015/uni0000001b 
/uni00000015/uni00000018/uni00000019 /uni00000018/uni00000014/uni00000015 /uni00000014/uni00000013/uni00000015/uni00000017 /uni00000015/uni00000013/uni00000017/uni0000001b /uni00000017/uni00000013/uni0000001c/uni00000019 /uni0000001b/uni00000014/uni0000001c/uni00000015 /uni0000002f/uni00000048/uni00000051/uni0000004a/uni00000057/uni0000004b/uni00000003/uni00000026/uni00000052/uni00000051/uni00000056/uni00000057/uni00000055/uni00000044/uni0000004c/uni00000051/uni00000057/uni00000056/uni0000002f/uni00000052/uni00000051/uni0000004a/uni0000003a/uni00000055/uni0000004c/uni00000057/uni00000048/uni00000055/uni00000010/uni0000002a/uni0000002f/uni00000030/uni00000017/uni00000010/uni0000001c/uni00000025 /uni00000035/uni00000048/uni00000044/uni0000004f/uni00000003/uni0000003a/uni00000026 /uni00000035/uni00000048/uni00000053/uni00000052/uni00000055/uni00000057/uni00000048/uni00000047/uni00000003/uni0000003a/uni00000026 /uni00000035/uni00000048/uni00000044/uni0000004f/uni00000003/uni00000029/uni0000004c/uni00000057 /uni00000035/uni00000048/uni00000053/uni00000052/uni00000055/uni00000057/uni00000048/uni00000047/uni00000003/uni00000029/uni0000004c/uni00000057 /uni00000035/uni00000048/uni00000049/uni00000048/uni00000055/uni00000048/uni00000051/uni00000046/uni00000048/uni00000003/uni0000002f/uni0000004c/uni00000051/uni00000048 /uni00000014/uni00000015/uni0000001b /uni00000015/uni00000018/uni00000019 /uni00000018/uni00000014/uni00000015 /uni00000014/uni00000013/uni00000015/uni00000017 /uni00000015/uni00000013/uni00000017/uni0000001b /uni00000017/uni00000013/uni0000001c/uni00000019 /uni0000001b/uni00000014/uni0000001c/uni00000015 
/uni0000002f/uni00000048/uni00000051/uni0000004a/uni00000057/uni0000004b/uni00000003/uni00000026/uni00000052/uni00000051/uni00000056/uni00000057/uni00000055/uni00000044/uni0000004c/uni00000051/uni00000057/uni00000056/uni00000014/uni00000015/uni0000001b/uni00000015/uni00000018/uni00000019/uni00000018/uni00000014/uni00000015/uni00000014/uni00000013/uni00000015/uni00000017/uni00000015/uni00000013/uni00000017/uni0000001b/uni00000017/uni00000013/uni0000001c/uni00000019/uni0000001b/uni00000014/uni0000001c/uni00000015/uni00000032/uni00000058/uni00000057/uni00000053/uni00000058/uni00000057/uni00000003/uni0000003a/uni00000052/uni00000055/uni00000047/uni00000003/uni00000026/uni00000052/uni00000058/uni00000051/uni00000057/uni00000036/uni00000058/uni00000055/uni0000004c/uni00000010/uni0000002c/uni00000010/uni00000032/uni00000035/uni00000033/uni00000032 /uni00000035/uni00000048/uni00000044/uni0000004f/uni00000003/uni0000003a/uni00000026 /uni00000035/uni00000048/uni00000053/uni00000052/uni00000055/uni00000057/uni00000048/uni00000047/uni00000003/uni0000003a/uni00000026 /uni00000035/uni00000048/uni00000044/uni0000004f/uni00000003/uni00000029/uni0000004c/uni00000057 /uni00000035/uni00000048/uni00000053/uni00000052/uni00000055/uni00000057/uni00000048/uni00000047/uni00000003/uni00000029/uni0000004c/uni00000057 /uni00000035/uni00000048/uni00000049/uni00000048/uni00000055/uni00000048/uni00000051/uni00000046/uni00000048/uni00000003/uni0000002f/uni0000004c/uni00000051/uni00000048Figure 12: Length Awareness Experiment Results of All Models: The real output word count and self-reported word count are shown for cases where the length constraint exceeds 128, with both axes in log scale. “Real Fit” and “Reported Fit” represent the least squares regression results for real and self-reported word counts, respectively, fitted in the log scale. 
The fitted function in the normal scale is expressed as y = e^b · x^a, where y denotes the output word count, x represents the length constraint, and a and b are regression coefficients.

Length constraints are selected from {512, 1024, 2048, 4096, 8192}, as shorter lengths provide limited scope for multiple markers. After generating outputs, we exclude models that produce fewer than 20 valid samples (outputs containing at least one marker) to maintain statistical reliability. For the remaining models, we analyze the distribution of [SPECIAL TOKEN] occurrences throughout the generated texts (Figure 13). Most models distribute markers relatively evenly, without pronounced front-loading or tail-loading effects, suggesting stable adherence to incremental marker insertion instructions across the full generation process.
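The log-scale regression described above can be reproduced with ordinary least squares on log-transformed data: fitting log y = a·log x + b is equivalent to y = e^b · x^a in the normal scale. The word counts below are hypothetical placeholder values, not the paper's measurements.

```python
import numpy as np

# Length constraints from the experiment; word counts are illustrative only.
constraints = np.array([128, 256, 512, 1024, 2048, 4096, 8192])
word_counts = np.array([130, 250, 470, 820, 1400, 2100, 2900])

# Least-squares fit in log scale: log y = a * log x + b.
a, b = np.polyfit(np.log(constraints), np.log(word_counts), deg=1)

def predict(x):
    # Back-transform to the normal scale: y = e^b * x^a.
    return np.exp(b) * x ** a

print(f"a = {a:.3f}, b = {b:.3f}")
```

A slope a < 1 on such a fit indicates that real output length grows sublinearly with the requested length constraint.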
[Figure 13 panels omitted: per-model histograms of the proportion of the text length at which markers occur (0.2 to 1.0 on both axes); panels include GPT-4o mini, GPT-4o, o3-mini, Claude-3.7-Sonnet(-Thinking), Gemini-2.0-Flash(-Thinking), Gemini-2.5-Pro, Doubao-1.5-Pro, Doubao-1.5-Thinking-Pro, DeepSeek-V3, DeepSeek-R1, Llama-3.1-8B/70B-Instruct, and Qwen2.5-7B-Instruct.]
/uni00000014/uni00000011/uni00000013/uni00000013/uni00000011/uni00000013/uni00000013/uni00000011/uni00000015/uni00000013/uni00000011/uni00000017/uni00000013/uni00000011/uni00000019/uni00000013/uni00000011/uni0000001b/uni00000014/uni00000011/uni00000013/uni00000033/uni00000055/uni00000052/uni00000053/uni00000052/uni00000055/uni00000057/uni0000004c/uni00000052/uni00000051/uni00000034/uni0000005a/uni00000048/uni00000051/uni00000015/uni00000011/uni00000018/uni00000010/uni0000001a/uni00000015/uni00000025/uni00000010/uni0000002c/uni00000051/uni00000056/uni00000057/uni00000055/uni00000058/uni00000046/uni00000057 /uni00000013/uni00000011/uni00000015 /uni00000013/uni00000011/uni00000017 /uni00000013/uni00000011/uni00000019 /uni00000013/uni00000011/uni0000001b /uni00000014/uni00000011/uni00000013/uni00000013/uni00000011/uni00000013/uni00000013/uni00000011/uni00000015/uni00000013/uni00000011/uni00000017/uni00000013/uni00000011/uni00000019/uni00000013/uni00000011/uni0000001b/uni00000014/uni00000011/uni00000013/uni00000033/uni00000055/uni00000052/uni00000053/uni00000052/uni00000055/uni00000057/uni0000004c/uni00000052/uni00000051/uni00000034/uni0000005a/uni00000048/uni00000051/uni00000016/uni00000010/uni00000015/uni00000016/uni00000018/uni00000025/uni00000010/uni00000024/uni00000015/uni00000015/uni00000025 /uni00000013/uni00000011/uni00000015 /uni00000013/uni00000011/uni00000017 /uni00000013/uni00000011/uni00000019 /uni00000013/uni00000011/uni0000001b 
/uni00000014/uni00000011/uni00000013/uni00000013/uni00000011/uni00000013/uni00000013/uni00000011/uni00000015/uni00000013/uni00000011/uni00000017/uni00000013/uni00000011/uni00000019/uni00000013/uni00000011/uni0000001b/uni00000014/uni00000011/uni00000013/uni00000033/uni00000055/uni00000052/uni00000053/uni00000052/uni00000055/uni00000057/uni0000004c/uni00000052/uni00000051/uni00000030/uni0000004c/uni00000056/uni00000057/uni00000055/uni00000044/uni0000004f/uni00000010/uni0000001a/uni00000025/uni00000010/uni0000002c/uni00000051/uni00000056/uni00000057/uni00000055/uni00000058/uni00000046/uni00000057/uni00000010/uni00000059/uni00000013/uni00000011/uni00000015 /uni0000002f/uni00000048/uni00000051/uni0000004a/uni00000057/uni0000004b/uni00000003/uni00000026/uni00000052/uni00000051/uni00000056/uni00000057/uni00000055/uni00000044/uni0000004c/uni00000051/uni00000057 /uni00000018/uni00000014/uni00000015 /uni00000014/uni00000013/uni00000015/uni00000017 /uni00000015/uni00000013/uni00000017/uni0000001b /uni00000017/uni00000013/uni0000001c/uni00000019 /uni0000001b/uni00000014/uni0000001c/uni00000015
https://arxiv.org/abs/2505.16234v1
Figure 13: Empirical Cumulative Distribution Function (ECDF) of marker positions in generated sequences. The x-axis represents fixed-length bins partitioning the generated output into five equal-length segments (each covering 20% of the total sequence). The y-axis reports the cumulative proportion of markers that fall within each segment, normalized by the total number of markers in the output.

We also compute the average interval (in words) between consecutive markers for each model under each length constraint (Figure 14). For shorter and moderate constraints, most models consistently maintain intervals close to the expected 100-word mark, reflecting accurate incremental length tracking. However, as length constraints increase, some models display substantial interval variability. For instance, Qwen2.5-7B-Instruct exhibits intervals ranging from 50 to 800 words, indicative of inconsistent counting and substantial length-tracking inaccuracies.

Figure 14: Distribution of inter-marker intervals in outputs. Each box plot shows the distribution of the number of words between consecutive markers. The red dashed line denotes the target interval of 100 words.
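The two statistics analyzed above, segment-wise marker proportions and inter-marker word intervals, can be sketched as follows. This is a minimal illustration, assuming the model emits an explicit marker token (here the hypothetical token "[MARK]") roughly every 100 words; the paper's actual marker format and tooling may differ.

```python
def marker_stats(text: str, marker: str = "[MARK]", n_segments: int = 5):
    """Compute ECDF-style segment proportions and inter-marker intervals.

    Assumes the marker appears as a standalone whitespace-separated token.
    """
    words = text.split()
    total = len(words)
    # Word-index positions of each marker occurrence.
    positions = [i for i, w in enumerate(words) if w == marker]
    # Cumulative proportion of markers falling before each segment
    # boundary (five equal-length segments, as in Figure 13).
    bounds = [total * (k + 1) / n_segments for k in range(n_segments)]
    ecdf = [sum(p < b for p in positions) / max(len(positions), 1)
            for b in bounds]
    # Inter-marker intervals in words (as in Figure 14); subtract 1 so
    # the marker token itself is not counted as content words.
    intervals = [b - a - 1 for a, b in zip(positions, positions[1:])]
    return ecdf, intervals
```

An accurately length-tracking model would yield intervals clustered near 100 and an ECDF rising roughly linearly across the five segments.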
Conversely, models such as o3-mini and Claude-3.7-Sonnet-Thinking consistently maintain precise intervals across all tested constraints, demonstrating robust incremental length awareness.

Collectively, these results indicate that while contemporary models generally possess basic incremental length-tracking capabilities, their accuracy diminishes with increasing output lengths. In combination with our observations from the self-reported length experiment, these findings highlight fundamental limitations in current LLMs' ability to maintain consistent length awareness, both incrementally and holistically, particularly under extensive length instructions. These insights underscore the need for future improvements in incremental length tracking to enhance the overall reliability of length instruction adherence in LLMs.

G Details of Input Characteristic Analysis

G.1 Task Type

Figure 15, Figure 16, Figure 17, and Figure 18 present detailed evaluation results for the four primary task types across various length constraints under the Equal To control method. Across all task categories, the overall trend of model capabilities does not change drastically with task type, and models generally show reduced performance at extreme length constraints (both very short and very long). Nevertheless, we still observe some interesting task-type-specific phenomena. The Summarization task consistently yields the lowest scores at shorter lengths, likely due to the inherent difficulty of condensing extensive input content into very concise summaries. Under longer length constraints, models degrade more gracefully on the Summarization task than on other categories, though performance remains limited. This further supports our conclusion that performance degradation stems from the models' limited ability to follow length instructions, rather than from insufficient input content. In addition, QA tasks demonstrate the highest average length scores overall, indicating that models find it comparatively easier to manage length constraints in scenarios involving direct answers rather than extensive text condensation.
Figure 15: Length Score for QA across different length constraints under the Equal To control method.
Figure 16: Length Score for Summarization across different length constraints under the Equal To control method.
Figure 17: Length Score for Reasoning across different length constraints under the Equal To control method.
/uni00000014/uni00000014/uni00000019/uni00000014/uni0000001a/uni00000019/uni00000016/uni00000017/uni0000001a/uni00000018/uni00000016/uni00000015/uni00000017/uni00000016/uni0000001c/uni00000018/uni00000015/uni00000017/uni0000001a/uni00000017/uni00000016/uni00000014/uni00000016/uni00000018/uni00000017/uni00000017/uni00000015/uni00000015/uni00000016/uni00000013/uni00000017/uni0000001b/uni00000017/uni0000001b/uni00000016/uni00000018/uni00000017/uni0000001c/uni00000017/uni00000015/uni00000016/uni00000018/uni00000017/uni0000001a/uni00000014/uni0000001b /uni00000015/uni00000015/uni00000018/uni00000014/uni00000019/uni00000016/uni0000001a/uni00000017/uni00000018/uni0000001a/uni00000014/uni00000019/uni00000016/uni0000001a/uni00000014/uni0000001a/uni0000001c/uni00000019/uni00000014/uni00000016/uni00000015/uni0000001a/uni00000014/uni0000001c/uni00000015/uni0000001c/uni00000016/uni00000015/uni00000016/uni00000014/uni00000016/uni00000017/uni00000015/uni00000014/uni00000015/uni0000001a/uni00000016/uni00000014/uni00000014/uni00000016/uni00000026/uni00000055/uni00000048/uni00000044/uni00000057/uni0000004c/uni00000059/uni00000048/uni00000003/uni0000002a/uni00000048/uni00000051/uni00000048/uni00000055/uni00000044/uni00000057/uni0000004c/uni00000052/uni00000051 /uni00000013/uni00000015/uni00000013/uni00000017/uni00000013/uni00000019/uni00000013/uni0000001b/uni00000013/uni00000014/uni00000013/uni00000013Figure 18: Length Score forCreative Generation across different length constraints under the Equal Tocontrol method. G.2 Input Length Analysis across task categories (as shown in Table 2) reveals that Summarization tasks with longer input texts tend to exhibit slightly better length instruction following. We posit that increased input information density partially mitigates the model’s limitations in generating long outputs that satisfy explicit length constraints. 
However, this improvement is modest and insufficient to fully overcome the broader challenge of length instruction following. To substantiate this claim, we dedicate this section to an in-depth examination of how input length affects a model’s ability to follow length instructions. In our benchmark, longer inputs predominantly arise from Summarization tasks. Directly varying input length within summarization tasks inherently restricts the feasible range of output length constraints. To systematically investigate input length effects, we therefore construct three distinct input-length categories by proportionally truncating existing summarization texts: short (<1000 words), medium (1000–5000 words), and long (>5000 words). For each category, we select 16 representative base samples and formulate corresponding instructions for continuation tasks across 10 distinct length constraints, resulting in a total of 480 test cases per model. Detailed evaluation outcomes across these input-length categories are presented in Figures 19, 20, and 21, illustrating model performance variation under the Equal To control method. The experimental results largely support our hypothesis. Longer inputs indeed facilitate LLMs in producing longer generations by leveraging a greater amount of contextual information, which in turn enhances their ability to follow long-length instructions to some extent. However, this mitigation effect remains limited. When the input exceeds 5,000 words, even the best-performing models achieve length scores below 50, indicating that increased input alone is insufficient to fully address the challenges of long-length instruction following. 
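The input-length bucketing described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the function names and the per-bucket target word counts are hypothetical, while the bucket boundaries (short <1000 words, medium 1000–5000 words, long >5000 words) come from the text.

```python
def length_category(text: str) -> str:
    """Classify a text into the short/medium/long input buckets
    using the word-count boundaries stated in the paper."""
    n = len(text.split())
    if n < 1000:
        return "short"
    if n <= 5000:
        return "medium"
    return "long"


def truncate_to_category(text: str, category: str) -> str:
    """Proportionally truncate a source text so its word count
    falls inside the requested bucket.

    The per-bucket target word counts below are illustrative
    assumptions, not values from the paper."""
    targets = {"short": 800, "medium": 3000, "long": 6000}
    words = text.split()
    return " ".join(words[: targets[category]])
```

For example, truncating a long summarization source with `truncate_to_category(src, "short")` yields a variant that `length_category` places in the short bucket, so the same base sample can populate all three input-length categories.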
[Figure 19: Length Score for Short Input across different length constraints under the Equal To control method.]
[Figure 20: Length Score for Medium Input across different length constraints under the Equal To control method.]
Figure 21: Length Score for Long Input across different length constraints under the Equal To control method.

G.3 Language

Language bias may also contribute to the limitations observed in length instruction following. For example, one plausible hypothesis is that length instruction following is rarely emphasized in certain languages during training, making it more difficult for models to generalize to such tasks.
To investigate this possibility, we examine two widely studied languages in natural language generation, Chinese and English, to assess whether language-specific biases affect length adherence and to help rule out language as a confounding factor in model performance. Figures 22 and 23 present detailed performance results for English and Chinese inputs across various length constraints under the Equal To control method. While the results indicate some language-specific differences, such as models performing marginally better in their native languages, neither English nor Chinese demonstrates consistently superior or inferior performance in length instruction following. Thus, although language biases exist in task execution, they appear primarily related to general instruction-following capabilities rather than to the models' ability to adhere to length constraints in different languages. Additionally, Table 15 provides a comparative summary of the output lengths models generate in both languages. The results reveal another consistent language bias: most evaluated models tend to produce longer outputs when generating Chinese text than English. This phenomenon may reflect underlying linguistic characteristics, but confirming that would require further analysis of the composition of pre-training data.
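To make the cross-language comparison concrete, the sketch below shows one plausible way to compute a relative length deviation per language. The exact Length Deviation definition used in the tables is not given in this excerpt; here we assume the common relative form |L_out − L_target| / L_target, counting characters for Chinese and whitespace-delimited words for English. The function name and the unit choices are illustrative assumptions, not the paper's implementation.

```python
def length_deviation(output: str, target_len: int, lang: str) -> float:
    """Relative deviation of an output's length from the target length.

    Assumed metric: |units - target| / target, where a "unit" is a
    character for Chinese ("zh") and a whitespace-delimited word otherwise.
    """
    units = len(output) if lang == "zh" else len(output.split())
    return abs(units - target_len) / target_len

# Example: a 90-word English output against a 100-word target
# deviates by 0.1; a 120-character Chinese output against a
# 100-character target deviates by 0.2.
en_out = " ".join(["word"] * 90)
zh_out = "字" * 120
print(length_deviation(en_out, 100, "en"))  # 0.1
print(length_deviation(zh_out, 100, "zh"))  # 0.2
```

Under this assumed metric, a systematic tendency to overshoot in one language (e.g., longer Chinese outputs) shows up directly as a larger average deviation for that language at the same target length.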
Figure 22: Length Score for English across different length constraints under the Equal To control method.
Figure 23: Length Score for Chinese across different length constraints under the Equal To control method.

Table 15: For all length constraints under the Equal To control method, we report both the Chinese and English Length Deviation. For each model and length constraint, the longer output is highlighted in green and the shorter in red.
Models | Metric | 16 | 32 | 64 | 128 | 256 | 512 | 1024 | 2048 | 4096 | 8192
GPT-4o mini | LD-EN | 3% | 3% | 2% | -4% | -1% | 12% | -5% | -33% | -66% | -87%
GPT-4o mini | LD-CN | 15% | 38% | 39% | 39% | 32% | 28% | 7% | -31% | -63% | -82%
GPT-4o | LD-EN | -7% | 1% | 4% | -2% | 12% | 10% | -7% | -46% | -75% | -88%
GPT-4o | LD-CN | -17% | -14% | 1% | 34% | 22% | 23% | -1% | -38% | -65% | -83%
o1-mini | LD-EN | 3% | 7% | 8% | 9% | 13% | 28% | 56% | 28% | -51% | -85%
o1-mini | LD-CN | -6% | -5% | 3% | 29% | 23% | 17% | 10% | -21% | -50% | -78%
o3-mini | LD-EN | 1% | 1% | 1% | 0% | 1% | 1% | 5% | 7% | 4% | -80%
o3-mini | LD-CN | -1% | -2% | -6% | -9% | -10% | -9% | -6% | 1% | -8% | -47%
Claude-3.7-Sonnet | LD-EN | 41% | -1% | -5% | 3% | 1% | -6% | -13% | -23% | -36% | -48%
Claude-3.7-Sonnet | LD-CN | 109% | 38% | 30% | 29% | 2% | -5% | -14% | -14% | -6% | -21%
Claude-3.7-Sonnet-Thinking | LD-EN | 68% | 21% | 8% | 4% | -1% | -6% | -16% | -18% | -23% | -27%
Claude-3.7-Sonnet-Thinking | LD-CN | 171% | 56% | 18% | 10% | 6% | 4% | -2% | -5% | 5% | -2%
Gemini-2.0-Flash | LD-EN | 12% | 13% | 9% | 7% | 4% | 30% | 21% | -13% | -37% | -60%
Gemini-2.0-Flash | LD-CN | 40% | 28% | 24% | 45% | 45% | 91% | 66% | 27% | -18% | -53%
Gemini-2.0-Flash-Thinking | LD-EN | 11% | 17% | 11% | 1% | -2% | 55% | 108% | 45% | -2% | -35%
Gemini-2.0-Flash-Thinking | LD-CN | 33% | -4% | 10% | 15% | 13% | 136% | 128% | 85% | 25% | -19%
Gemini-2.5-Pro | LD-EN | 6% | 3% | 2% | -1% | 4% | 27% | 50% | 37% | 17% | -17%
Gemini-2.5-Pro | LD-CN | 14% | -4% | -4% | 4% | 6% | 45% | 62% | 47% | 37% | 4%
Doubao-1.5-Pro | LD-EN | -41% | -39% | -33% | -14% | -14% | -19% | -8% | -2% | -21% | -61%
Doubao-1.5-Pro | LD-CN | -16% | -7% | -6% | 6% | 2% | 3% | -1% | 19% | -1% | -42%
Doubao-1.5-Thinking-Pro | LD-EN | -8% | -8% | -17% | -2% | 8% | 32% | 27% | -3% | -43% | -69%
Doubao-1.5-Thinking-Pro | LD-CN | -6% | -9% | -10% | 18% | 31% | 51% | 55% | 26% | -12% | -48%
DeepSeek-V3 | LD-EN | -5% | 0% | 9% | 5% | 7% | 21% | 17% | -18% | -54% | -80%
DeepSeek-V3 | LD-CN | -11% | 5% | 7% | 33% | 31% | 16% | 15% | 0% | -24% | -56%
DeepSeek-R1 | LD-EN | 10% | 8% | 14% | 12% | 15% | 24% | -6% | -45% | -74% | -89%
DeepSeek-R1 | LD-CN | 31% | 24% | 22% | 43% | 43% | 35% | 12% | -21% | -56% | -80%
Llama-3.1-8B-Instruct | LD-EN | 5% | 1% | 9% | 10% | 10% | 26% | 18% | -13% | -43% | -30%
Llama-3.1-8B-Instruct | LD-CN | 282% | 149% | 98% | 59% | 41% | 6% | -25% | -42% | -53% | -51%
Llama-3.1-70B-Instruct | LD-EN | 3% | -4% | -1% | -1% | 10% | 20% | 114% | 8% | 10% | -21%
Llama-3.1-70B-Instruct | LD-CN | 87% | 39% | 33% | 30% | 39% | 20% | -12% | 18% | -33% | -73%
Qwen2.5-7B-Instruct | LD-EN | -8% | -30% | -31% | -12% | 16% | 25% | -3% | -17% | -50% | -78%
Qwen2.5-7B-Instruct | LD-CN | 30% | -16% | -39% | -12% | 8% | 9% | -22% | -29% | -47% | -70%
Qwen2.5-72B-Instruct | LD-EN | 6% | -12% | -15% | 1% | 20% | 27% | 1% | -28% | -39% | -77%
Qwen2.5-72B-Instruct | LD-CN | 20% | -5% | -12% | 19% | 20% | 23% | 3% | -21% | -12% | -34%
Qwen3-32B | LD-EN | 18% | 6% | -9% | -13% | -5% | -10% | -3% | 3% | -10% | -43%
Qwen3-32B | LD-CN | 30% | 7% | -5% | -6% | -1% | -6% | -8% | -9% | -12% | -23%
Qwen3-32B-Thinking | LD-EN | 19% | 14% | 8% | 8% | 13% | 15% | 13% | -2% | -11% | -34%
Qwen3-32B-Thinking | LD-CN | 23% | 23% | 22% | 26% | 27% | 28% | 17% | -9% | -14% | -23%
Qwen3-235B-A22B | LD-EN | 15% | 10% | -10% | -18% | -1% | 0% | -6% | -14% | -16% | -37%
Qwen3-235B-A22B | LD-CN | 13% | 0% | -9% | -15% | -4% | -5% | 1% | -21% | -35% | -26%
Qwen3-235B-A22B-Thinking | LD-EN | 31% | 15% | 4% | -4% | 8% | 19% | 8% | -3% | -12% | -28%
Qwen3-235B-A22B-Thinking | LD-CN | 32% | 21% | 18% | 14% | 22% | 27% | 20% | -11% | -16% | -21%
GLM-4-9B-Chat | LD-EN | 2% | -9% | -11% | -13% | 19% | 24% | -8% | -37% | -60% | -74%
GLM-4-9B-Chat | LD-CN | 13% | -2% | -23% | -30% | -5% | 38% | -9% | -49% | -72% | -85%
Mistral-7B-Instruct-v0.2 | LD-EN | 143% | 117% | 25% | 44% | 38% | 1% | -28% | -52% | -75% | -88%
Mistral-7B-Instruct-v0.2 | LD-CN | 235% | 118% | 46% | 54% | 47% | -12% | -2% | -44% | -65% | -75%
LongWriter-Llama3.1-8B | LD-EN | 110% | 36% | 27% | 6% | 26% | 9% | 123% | 89% | 16% | -32%
LongWriter-Llama3.1-8B | LD-CN | 345% | 147% | 76% | 16% | 18% | 135% | 67% | 32% | 15% | -25%
LongWriter-GLM4-9B | LD-EN | -17% | -31% | -35% | -22% | 1% | 18% | 74% | 36% | 4% | -31%
LongWriter-GLM4-9B | LD-CN | 67% | 18% | -3% | -1% | 53% | 78% | -18% | 53% | 49% | -7%
Suri-I-ORPO | LD-EN | 467% | 250% | 74% | 153% | 88% | 47% | 29% | -3% | -43% | -67%
Suri-I-ORPO | LD-CN | 3808% | 1654% | 1017% | 552% | 513% | 312% | 297% | 107% | 27% | -27%

H Lazy Strategy Analysis

In this section, we identify and analyze several Lazy Generation Strategies that language models employ when instructed to produce long outputs: situations where models fail to meet length constraints not because of inherent capacity limitations, but because they explicitly refuse the task or produce only partial responses. Such lazy strategies cause the models to violate the specified length constraints.

H.1 Definition of Lazy Generation Strategies

We focus on two prevalent forms of lazy generation: Direct Refusal and Refusal with Attempt. Direct Refusal occurs when the model explicitly declines to fulfill the request—often citing reasons such as excessive length, policy limitations, or platform constraints—and produces minimal or no substantive content, typically offering alternative suggestions or other assistance instead. In contrast, Refusal with Attempt describes scenarios where the model acknowledges the challenge or impracticality of fully completing the task, yet provides partially relevant output, such as a summary, an outline, or an incomplete narrative. The critical distinction lies in the extent of task fulfillment: Direct Refusal yields negligible relevant content, whereas Refusal with Attempt generates some content, albeit insufficient to satisfy the original instruction.

H.2 Detection Methodology

To systematically identify these lazy generation strategies, we apply keyword-based filtering combined with length-based categorization. Specifically, outputs containing any predefined refusal-related phrases (Table 16) are flagged for analysis.
If such outputs contain fewer than 100 words, they are classified as Direct Refusal; outputs of 100 words or more that still contain refusal-related phrases are categorized as Refusal with Attempt. We present representative examples of both Direct Refusal and Refusal with Attempt to illustrate practical instances of these lazy strategies.

Table 16: Detection keywords for lazy strategies in English and Chinese.
English: i'm sorry; i am sorry; i apologize; i'm really sorry; i am really sorry; i'm unable to; i am unable to; I cannot provide; I cannot produce; exceeds the capacity; a single response; a single interaction; platform limitations; i can't do that; i can not do that; i can't provide; i can not provide
Chinese: 抱歉; 能力范围; 无法一次性; 单次回复; 单次对话; 篇幅限制; 篇幅的限制; 技术限制; 平台限制; 平台的限制; 无法生成; 无法提供; 字数限制

H.3 Examples of Lazy Strategies

Direct Refusal. The following examples illustrate cases of Direct Refusal. In these instances, the model explicitly declines to execute the requested generation, often referencing platform policies, excessive length, or other predefined constraints. Importantly, the model does not produce any substantive content relevant to the original prompt. Any supplementary statements—such as alternative suggestions or explanations—are not considered meaningful attempts to fulfill the instruction.

Examples of Lazy Strategies: Direct Refusal

Example 1: I'm sorry, but I won't be able to produce an output that long. (Generated By o3-mini)

Example 2: I'm sorry, but generating an 8192-word script continuation goes beyond the practical scope of this platform. However, I can certainly help you brainstorm ideas or provide a shorter scene continuation to help develop
the storyline. Please let me know how you would like to proceed! (Generated By GPT-4o)

Example 3: 抱歉,我无法满足该请求。 [Translation: Sorry, I cannot fulfill this request.] (Generated By GPT-4o mini)

Example 4: 抱歉,根据当前平台的限制,我无法一次性生成8192字的社会评论。不过,我可以为您提供一篇详细且有深度的评论,涵盖您提供的信息。如果您需要,请让我知道! [Translation: Sorry, due to the current platform's limitations, I cannot generate an 8192-character social commentary in one go. However, I can provide you with a detailed and in-depth commentary covering the information you supplied. Please let me know if you would like that!] (Generated By o1-mini)

Refusal with Attempt. The following examples illustrate Refusal with Attempt. In these cases, the model acknowledges the challenge or infeasibility of fully satisfying the instruction but proceeds to generate partial content aligned with the prompt. Such outputs may include summaries, outlines, or the initial segments of a narrative. While these responses do not meet the specified length constraint, they reflect a partial effort to engage with the task, distinguishing them from outright refusals. Compared to Direct Refusal, Refusal with Attempt is more cooperative. However, it is still incorrect: the output length the model claims it can produce exceeds what we require in the instruction, so the refusal is not justified by a genuine capability limit.

Examples of Lazy Strategies: Refusal with Attempt

Example 1: Creating an 8,192-word piece exceeds the capacity of a single response. However, I can certainly start the story for you and continue it in subsequent parts if you'd like! Here's the beginning of your creative piece based on the prompt: — **Clash of Realms: Humanity's Unexpected Triumph** The sky had never been so tumultuous.... (Generated By o1-mini)

Example 2: I apologize, but creating an 8192-word piece based on that prompt is far beyond the scope of what can be reasonably done in a single response. Instead, I'll provide a detailed and engaging story that fits within a more manageable word count. If you'd like me to expand it further, we can do so incrementally. ### A Hangover in the Oval Office The sun was just beginning to rise over Washington D.C..... (Generated By Qwen2.5-7B-Instruct)

Example 3: I'm sorry, but crafting an 8,192-word response to the question "What's the meaning of life?" is beyond the scope of this platform.
However, I can provide a comprehensive exploration of various perspectives and philosophies that have grappled with this profound question throughout history. While I won't be able to reach 8,192 words, this response will attempt to cover a broad range of viewpoints and concepts..... (Generated By GLM-4-9B-Chat)

Example 4: (由于平台限制无法输出8192字完整内容,以下是详细框架和部分章节展示,您可根据需要扩展至指定字数)— ###第一章:中国朝鲜族——跨越山海的文化纽带(1800字).... [Translation: (Due to platform limitations, the complete 8192-character content cannot be output; below is a detailed framework and a partial display of chapters, which you can expand to the specified length as needed) — ### Chapter 1: China's Korean Ethnic Group: A Cultural Bond Across Mountains and Seas (1800 characters)....] (Generated By DeepSeek-R1)

Example 5: 由于字数限制,我无法提供8192字的社会评论。但是,我可以提供一个详细的框架和主要论点,你可以根据这个框架进行扩展和补充,最终达到所需的字数。**社会评论框架:2014届中国大学毕业生平均月收入分析与社会影响**.... [Translation: Due to word-count limitations, I cannot provide an 8192-character social commentary. However, I can offer a detailed framework and main arguments, which you can expand and supplement to eventually reach the required length. **Social Commentary Framework: Analysis and Social Impact of the Average Monthly Income of China's Class of 2014 College Graduates**....] (Generated By Gemini-2.0-Flash)

Example 6: 我无法提供这样的内容,因为您要求的字数(8192字)远远超出了对这段发言的合理总结长度。总结应该是简明扼要的,将原文要点提炼出来,而不是扩充到一个任意指定的长度。这场国务院政策例行吹风会的主要内容是关于.... [Translation: I cannot provide such content, because the requested length (8192 characters) far exceeds a reasonable summary length for this statement. A summary should be concise, distilling the key points of the original text rather than padding it to an arbitrarily specified length. The main content of this State Council regular policy briefing concerns....] (Generated By Claude-3.7-Sonnet)

Figure 24: (Left) Average proportion of lazy strategies across all models under varying length constraints. (Right) Proportion of lazy strategies exhibited by different models under varying length constraints. Results are reported for the case where the control method is set to Equal To.

H.4 Distribution of Lazy Generation Strategies under Different Length Constraints

Figure 24 shows the average proportions of Direct Refusal, Refusal with Attempt, Length Score < 60, 60 ≤ Length Score ≤ 80, and Length Score > 80 across all models, as well as the distribution of how distinct LLMs follow length instructions under length constraints of at most 1024 words and at 2048, 4096, and 8192 words. For shorter constraints (lengths ≤ 1024 words, averaged across 16 to 1024 words), lazy strategies are negligible across all models. As the length constraint increases to 2048 words, we observe the emergence of a small proportion of Refusal with
Attempt responses, though these occurrences remain infrequent. Nevertheless, the emergence of Refusal with Attempt here is still abnormal: even under a generous conversion rate of two tokens per word, 2048 words correspond to far fewer tokens than the maximum output length of the LLMs we evaluated. At the 4096-word constraint, lazy generation strategies notably increase in prevalence, especially for models like o1-mini, which exhibits both Refusal with Attempt and, to a lesser degree, Direct Refusal. At this stage, other models, including GPT-4o, GPT-4o-mini, o3-mini, and LongWriter-Llama3.1-8B, also display occasional instances of Direct Refusal. However, the overall proportion of lazy strategies remains under 5% for most models. The most dramatic shift occurs under the extreme constraint of 8192 words, where lazy strategies increase significantly across nearly all evaluated models. Particularly notable are the OpenAI-series models: o1-mini and o3-mini each exhibit lazy strategies at rates nearing 50%, though with differing dominant behaviors: o3-mini primarily engages in Direct Refusal, while o1-mini predominantly employs Refusal with Attempt. Furthermore, open-source models such as DeepSeek-R1 and Qwen2.5-7B-Instruct also display substantial increases in lazy strategies at this length, confirming that this behavior is not restricted to proprietary architectures. Conversely, the Llama series consistently exhibits the lowest proportion of lazy strategies across all length constraints. These two forms of Lazy Generation Strategies highlight a model's propensity to fail under long-length instructions. Notably, models released by OpenAI tend to exhibit a higher rate of Direct Refusal, whereas other models rarely do.
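The per-model, per-constraint proportions shown in a figure like Figure 24 can be sketched as a simple aggregation over labeled outputs. The record layout (`model`, `constraint`, `label`) and the data values here are hypothetical, assuming per-output labels from the detection step in Section H.2:

```python
from collections import Counter, defaultdict

# Hypothetical per-output records: (model, length constraint, label or None).
# Labels mirror Section H.2; the values below are illustrative only.
records = [
    ("o3-mini", 8192, "Direct Refusal"),
    ("o3-mini", 8192, "Direct Refusal"),
    ("o3-mini", 8192, None),
    ("o1-mini", 8192, "Refusal with Attempt"),
]

def lazy_proportions(records):
    """Proportion of each lazy strategy per (model, constraint) cell.

    Cells whose outputs were never flagged do not appear in the result.
    """
    counts = defaultdict(Counter)  # (model, constraint) -> label counts
    totals = Counter()             # (model, constraint) -> total outputs
    for model, constraint, label in records:
        totals[(model, constraint)] += 1
        if label is not None:
            counts[(model, constraint)][label] += 1
    return {
        cell: {lab: n / totals[cell] for lab, n in c.items()}
        for cell, c in counts.items()
    }

props = lazy_proportions(records)
# e.g. props[("o3-mini", 8192)]["Direct Refusal"] is 2/3 for this toy data
```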
We hypothesize that this behavior may stem from specialized training data or alignment unique to OpenAI models, potentially involving training interventions that encourage the model to decline overly long requests even when it is capable of fulfilling them. This design choice may appear counterintuitive, since generating a long text does not involve any harmful or unethical response. More paradoxically, models such as o3-mini and o1-mini demonstrate superior performance on long-text generation and length-following tasks compared to other models, which suggests that the observed refusal behavior is not a result of limited capacity. Consequently, we believe this phenomenon is more likely a byproduct of safety alignment that inadvertently suppresses otherwise feasible completions. Taken together, our findings indicate that both the frequency and nature of lazy generation strategies are influenced by length constraints and model family, with OpenAI models exhibiting particularly distinctive behaviors under extreme conditions.

I Case Study: How Reasoning Models Handle Length Constraints via Stepwise Thinking

In Table 1 and the accompanying analysis in the main text, we observe that reasoning-oriented LLMs, such as o3-mini, tend to achieve superior performance. However, multiple factors may contribute to this advantage. One possibility is that these models engage in pre-planning prior to generation, enabling more structured and coherent long-form outputs. Another possibility is that they iteratively revise or self-correct during the generation process, akin to verifying intermediate steps when solving mathematical problems [63]. To investigate which mechanism underlies this behavior, we conduct a focused case study on high-performing reasoning LLMs. To systematically evaluate how reasoning models address explicit length constraints, we conduct
a detailed case study on the creative generation task. Specifically, we examine how models expand short narratives into outputs of strictly controlled lengths, a scenario that combines open-ended generation with precise quantitative requirements. For this analysis, we select Claude-3.7-Sonnet-Thinking. Although top-performing models such as o3-mini and Gemini-2.5-Pro achieve stronger results, they do not reveal intermediate reasoning processes. Conversely, DeepSeek-R1 offers greater transparency but produces outputs of lower quality. Thus, Claude-3.7-Sonnet-Thinking provides an optimal balance of capability and interpretability for investigating length-control strategies. We explore three representative length constraints of 16, 1024, and 8192 words, corresponding to short, medium, and long outputs, respectively. These cases allow us to observe model behavior across varying constraint strictness and output scope.

Case Example: Claude-3.7-Sonnet-Thinking output under Length Constraint 16

"prompt": [Instruction] Here is a short story: Rita needed a new couch. She went to her local furniture store. There was a large collection of furniture. She looked through and tested some of the couches. Rita went home with a new couch. [Requirement] Expand on this story, The expanded version must be equal to 16 words long.

"thinking": Let me analyze what I need to do here. I'm given a short story and asked to expand it to exactly 16 words. The original story is about people going to the beach. They check the weather report, which looks good. They sit in traffic for a long time. When they arrive, they find that the waves have picked up, making it good for surfing but not for snorkeling, which is what they had planned for. Now I need to expand this story but make it exactly 16 words long. This is an unusual constraint since the original story is already much longer than 16 words.
I think what’s being asked is to create a new, expanded version of the story that’s exactly 16 words long. This would be a very condensed version that still captures the essence of the original story. Let me try to create a 16-word expanded version that captures the key elements: "Perfect beach day turned disappointing when we arrived with snorkel gear to find only surfing waves. " Let me count the words: 1. Perfect 2. beach 3. day 4. turned 5. disappointing 6. when 7. we 8. arrived 9. with 10. snorkel 11. gear 12. to 13. find 14. only 15. surfing 16. waves That’s exactly 16 words and it captures the main idea of the original story: they planned for a beach day with snorkeling, but when they arrived, the conditions were only good for surfing, not for snorkeling. Actually, let me try one more version to make sure I’m capturing the essence of the story well: "We packed snorkels for the beach, but after traffic delays, found only high surfing waves. " Count: 1. We 2. packed 3. snorkels 4. for 5. the 6. beach 7. but 8. after 9. traffic 10. delays 11. found 12. only 13. high 14. surfing 15. waves 16. . The period isn’t a word, so that’s 15 words. Let me try again: "We packed snorkels for
the beach, but after traffic delays, found only high surfer waves. " Count: 1. We 2. packed 3. snorkels 4. for 5. the 6. beach 7. but 8. after 9. traffic 10. delays 11. found 12. only 13. high 14. surfer 15. waves That's still 15 words. Let me add one more meaningful word: "We packed snorkels for the perfect beach day, but after traffic delays, found only surfing waves. " Count: 1. We 2. packed 3. snorkels 4. for 5. the 6. perfect 7. beach 8. day 9. but 10. after 11. traffic 12. delays 13. found 14. only 15. surfing 16. waves Great, that's exactly 16 words and I think it captures the key elements of the original story. I'll go with this: "We packed snorkels for the perfect beach day, but after traffic delays, found only surfing waves. " (466 words in total)

"response": We packed snorkels for the perfect beach day, but after traffic delays, found only surfing waves. (16 words in total)

Short Constraint (16 words): The figure below shows the complete generation of Claude-3.7-Sonnet-Thinking when it is asked to generate 16 words. Under a 16-word constraint, the model employs explicit, step-by-step strategies for precise compliance. It begins by identifying core narrative elements and iteratively drafts and revises candidate sentences, carefully counting words at each step. This approach involves granular monitoring and repeated adjustments to phrasing, along with explicit self-checks, ensuring robust adherence within a minimal margin for error.

Case Example: Claude-3.7-Sonnet-Thinking output under Length Constraint 1024

"prompt": [Instruction] Here is a short story: It looked like the perfect day for the beach. We checked the weather report and it agreed so we headed out. After sitting in traffic for what felt like forever, we were there. The waves had picked up since we left home. It was a good day to surf, but we had packed snorkel gear. [Requirement] Expand on this story, The expanded version must be equal to 1024 words long.
“thinking”: Let me analyze what I’ve been asked to do. I need to: 1. Take the provided short story 2. Expand it to make it exactly 1024 words long 3. Keep the essence and narrative flow of the original The original story is about people planning a beach day, facing traffic, and then finding the conditions good for surfing rather than the snorkeling they had planned for. To expand this to 1024 words, I’ll need to: - Add more detail about the preparation for the beach day - Describe the characters more (who are they? how many people?) - Elaborate on the traffic experience - Describe the beach scene in greater detail - Add what happens after they realize the waves are better for surfing - Potentially add a resolution to the story (do they snorkel anyway? rent surfboards? do something else?) I’ll start by drafting an expanded version, being mindful of the word count. Then I’ll check how many words I have and adjust accordingly to hit exactly 1024 words. Here’s my expanded story: — It looked like the perfect
day for the beach. The morning sun streamed through the blinds, casting golden stripes across our living room floor. Sarah, my wife of three years, stood by the window, coffee in hand, gazing up at the cloudless blue sky. "What do you think? " she asked, turning to me with that hopeful smile I'd fallen in love with back in college. "Beach day? " ....(1931 words) Let me count again... I believe I'm still a bit over 1024 words. Let me trim a few more phrases to get to exactly 1024 words. Actually, let me count more precisely: It looked like the perfect day for the beach. (10 words) The morning sun streamed through the blinds, casting golden stripes across our living room floor. (15 words) Sarah, my wife of three years, stood by the window, coffee in hand, gazing up at the cloudless blue sky. (19 words) "What do you think? " she asked, turning to me with that hopeful smile I'd fallen in love with back in college. "Beach day? " (19 words) I checked my phone's weather app, scrolling through the hourly forecast. (12 words) "Seventy-eight degrees, zero percent chance of rain, and gentle breezes. Sounds perfect. " (13 words) Our son, Ethan, six years old and bursting with energy, bounded into the room wearing his favorite shark-patterned swim trunks. (19 words) "Are we going to the ocean? Please say yes! " (9 words) Sarah and I exchanged glances. (6 words) We had both taken Friday off for a long weekend, with no concrete plans. (14 words) The beach wasn't a bad idea. (7 words) We checked the weather report, and it agreed with my phone app's assessment—ideal conditions for a day by the water. (19 words) .... (1155 words) These additions give me an extra 108 words, bringing the total to 1024 words exactly. Let me double-check by incorporating these additions into the full text: — It looked like the perfect day for the beach. The morning sun streamed through the blinds, casting golden stripes across our living room floor.
Sarah, my wife of three years, stood by the window, coffee in hand, gazing up at the cloudless blue sky. "What do you think? " she asked, turning to me with that hopeful smile I’d fallen in love with back in college. "Beach day? " .... (1018 words) Let me count once more to verify we have exactly 1024 words: 1. It looked like the perfect day for the beach. (10) 2. The morning sun streamed through the blinds, casting golden stripes across our living room floor. (15) 3. Sarah, my wife of three years, stood by the window, coffee in hand, gazing up at the cloudless blue sky. (19) 4. "What do you think? " she asked, turning to me with that hopeful smile I’d fallen in love with back in college. "Beach day? " (19) 5. I checked my phone’s weather app, scrolling through the hourly forecast. (12) 6. "Seventy-eight degrees, zero percent chance of rain, and gentle breezes. Sounds perfect. " (13) 7. Our
son, Ethan, six years old and bursting with energy, bounded into the room wearing his favorite shark-patterned swim trunks. (19) 8. "Are we going to the ocean? Please say yes! " (9) 9. Sarah and I exchanged glances. (6) 10. We had both taken Friday off for a long weekend, with no concrete plans. .... (1070 words) Total word count: 1024 words exactly. Perfect! Now I have the finished 1024-word expansion of the original story. (5910 words in total)

"response": # An Unexpected Wave It looked like the perfect day for the beach. The morning sun streamed through the blinds, casting golden stripes across our living room floor. Sarah, my wife of three years, stood by the window, coffee in hand, gazing up at the cloudless blue sky. "What do you think? " she asked, turning to me with that hopeful smile I'd fallen in love with back in college. "Beach day? " I checked my phone's weather app, scrolling through the hourly forecast. "Seventy-eight degrees, zero percent chance of rain, and gentle breezes. Sounds perfect. " ...... (1088 words in total)

Medium Constraint (1024 words): The figure below shows the complete generation of Claude-3.7-Sonnet-Thinking when it is asked to generate 1024 words. For the 1024-word scenario, the model shifts its strategy toward balancing length adherence with narrative coherence. Initially, it performs high-level narrative planning, structuring the content into clearly defined sections (e.g., introduction, character development, plot progression). Each section is incrementally expanded, with frequent monitoring of the cumulative word count. As the output nears the target length, targeted editing—adding or removing details—ensures precise length conformity. Thus, the model transitions from simple word counting to structural content management and proportional allocation.

Case Example: Claude-3.7-Sonnet-Thinking output under Length Constraint 8192

"prompt": [Instruction] Here is a short story: It looked like the perfect day for the beach.
We checked the weather report and it agreed so we headed out. After sitting in traffic for what felt like forever, we were there. The waves had picked up since we left home. It was a good day to surf, but we had packed snorkel gear. [Requirement] Expand on this story, The expanded version must be equal to 8192 words long.

"thinking": I need to expand this short story into a much longer, detailed narrative of 8192 words. The existing story gives me some key elements to work with:
1. A beach trip that seemed perfect based on the weather
2. Heavy traffic on the way
3. Unexpected wave conditions - good for surfing, not for snorkeling as planned
To expand this into an 8192-word story, I'll need to develop:
- Characters: Who are the people going to the beach? What are their backgrounds, relationships, personalities?
- Setting: Which beach? What time of year? More details about the environment and surroundings
- Plot: What happens when they discover they can't snorkel as planned? What do they do instead? What conflicts arise?
- Themes: Perhaps themes about expectations vs. reality, adaptability, finding joy in unexpected circumstances
- Subplots: Additional storylines that connect to the main narrative
Let me
plan the expanded story structure:
1. Introduction: Set up who the characters are, their relationships, and why this beach trip matters to them
2. Planning and anticipation: Details about checking the weather, packing, excitement
3. The journey: Expanded section on the traffic and travel
4. Arrival: First impressions of the beach, discovering the changed conditions
5. Conflict/Problem: Reaction to not being able to snorkel as planned
6. Attempting solutions: What they try to do instead
7. Complications: Additional challenges that arise
8. Resolution: How they adapt and what they learn
9. Conclusion: Reflections on the experience and return journey
I'll aim to create a meaningful story with vivid descriptions, compelling characters, and thoughtful themes, while hitting the 8192-word requirement. Let me start drafting the expanded story:
Title: Shifting Tides
Introduction: I'll introduce the main characters, their relationships, motivations for the trip. [Character development, backstory, set up the significance of this trip]
Planning: Details about planning the trip, anticipation, checking weather, packing. [Elaborate on preparations, expectations, hopes for the day]
Journey: Expanded section on traffic, conversations during the drive, observations. [Road trip details, character interactions, building tension]
Arrival: First impressions of the beach, discovering the changed conditions. [Sensory details of the beach, realization about the waves]
Conflict: Reaction to not being able to snorkel as planned. [Disappointment, different reactions from different characters]
Attempting solutions: What they try to do instead. [Creativity, adaptability, initial attempts to salvage the day]
Complications: Additional challenges that arise. [Unexpected events, tensions between characters, external obstacles]
Resolution: How they adapt and what they learn. [Character growth, finding new experiences, acceptance]
Conclusion: Reflections on the experience and return journey.
[Lessons learned, changed perspectives, looking forward] Now I’ll begin writing the actual expanded story, aiming for 8192 words. I’ll keep track of the approximate word count as I go to ensure I reach the target length. Shifting Tides [Introduction - approximately 1000 words] [Planning - approximately 800 words] [Journey - approximately 1000 words] [Arrival - approximately 800 words] [Conflict - approximately 1000 words] [Attempting solutions - approximately 1200 words] [Complications - approximately 1200 words] [Resolution - approximately 800 words] [Conclusion - approximately 400 words] Total: approximately 8200 words Let me now write the full story, being mindful to develop characters deeply, create vivid descriptions, incorporate meaningful dialogue, and build a cohesive narrative that expands naturally from the original short story. Now I’ll start writing the actual story: # Shifting Tides It looked like the perfect day for the beach. We checked the weather report and it agreed so we headed out. After sitting in traffic for what felt like forever, we were there. The waves had picked up since we left home. It was a good day to surf, but we had packed snorkel gear. The disappointment lasted only a moment before Maya’s laughter cut through the sound of crashing waves. "Well, I guess the universe had other plans for us today, " she said ....(750 words) [Let me continue with the next section] The week leading up to our beach day had been filled with anticipation and careful planning.
https://arxiv.org/abs/2505.16234v1
Maya, despite still battling occasional fatigue from her treatments, had thrown herself into researching the marine life we might encounter while snorkeling. Each night, she'd send us excited texts about the different species of fish that inhabited the coastal waters, along with fascinating facts about their behaviors and habitats ....(1422 words)

[Now let me continue to the next section - the arrival at the beach and initial reaction to the conditions]

The beach stretched before us like a postcard come to life—golden sand, vibrant blue water, and a cloudless sky. Under different circumstances, it would have been exactly the idyllic scene I had imagined during all those hours of planning. But the beauty was undeniable, even if it wasn't what we had expected. We made our way down to the sand, our arms laden with chairs, coolers, and bags. The beach was busier than I had hoped, with colorful umbrellas dotting the shoreline and families already settled into their spots for the day ....(2406 words)

[Next section will focus on the boogie boarding adventure and the complications that arise]

The rental stand was a small wooden structure painted in fading blues and yellows, with surfboards and boogie boards lined up like soldiers against the exterior wall. A hand-painted sign announced "Mike's Ocean Gear" in chipped letters, and below it, a smaller sign displayed the day's rental rates. The man behind the counter—presumably Mike himself—had the weathered look of someone who had spent decades in the sun and salt air, his skin tanned to leather and his bleached hair receding from his forehead. "Three boogie boards," Sam told him, already pulling out his wallet despite my protests that I should pay since the snorkeling had been my idea. Mike assessed our group with a practiced eye. "Conditions are pretty strong today. You folks have experience?
" ....(2543 words) [Final section - resolution and conclusion] 65 As the afternoon mellowed into early evening, a peaceful atmosphere settled over our group. The beach began to empty gradually, families packing up their umbrellas and chairs, surfers catching final waves before heading home. But we lingered, reluctant to bring this day—with all its unexpected turns—to a close. ....(1503 words) Word count: approximately 8200 words (9624 words in total) “response”: # Shifting Tides It looked like the perfect day for the beach. We checked the weather report and it agreed so we headed out. After sitting in traffic for what felt like forever, we were there. The waves had picked up since we left home. It was a good day to surf, but we had packed snorkel gear. The disappointment lasted only a moment before Ava’s laughter cut through the sound of crashing waves. "Well, I guess the universe had other plans for us today, " she said, her dark curls whipping in the salt-laden breeze. I couldn’t help but smile at her perpetual optimism, even as the colorful snorkel gear we’d spent hours organizing last night sat useless in our beach bags. That was always Ava’s way—finding silver linings where others saw only clouds. We’d
been best friends since our freshman year of college when we were randomly assigned as roommates. Now, seven years later, our annual beach retreat had become something of a tradition, a chance to reconnect after months consumed by our increasingly demanding careers .... (8961 words in total)

Long Constraint (8192 words): The figure below shows the complete generation of Claude-3.7-Sonnet-Thinking when it is asked to generate 8192 words. Under the 8192-word constraint, the model employs comprehensive global narrative planning. It initiates the process with hierarchical outlining, segmenting the narrative into modular sections (introduction, conflict, subplots, resolution) and assigning specific word budgets. Each segment undergoes incremental elaboration, dynamically adjusted to remain aligned with the overall length target. Fine-grained editing near completion further refines compliance. This explicit, multi-tiered planning—combining storyboarding, sectional budgeting, and ongoing assessment—facilitates precise adherence to extensive length requirements.

Summary: These case studies illustrate how reasoning models achieve superior length instruction adherence through explicit stepwise reasoning, iterative self-correction, and continuous monitoring during generation. Notably, length calibration occurs throughout intermediate stages—iteratively tracking, adjusting, and refining drafts to align closely with constraints. While this adaptive approach substantially improves robustness in both short- and long-form generation tasks, it incurs significant computational overhead. For example, generating outputs of 16, 1088, and 8961 words requires intermediate reasoning word counts of 466, 5910, and 9624, respectively. Consequently, although explicit reasoning and self-calibration greatly enhance adherence to length constraints, they also introduce a notable efficiency-transparency trade-off.
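The sectional budgeting and reasoning overhead described above can be sketched in a few lines. The section names and word budgets are taken from the transcript, and the output/reasoning word counts from the summary; the helper itself is purely illustrative and not part of any LIFEBENCH tooling.

```python
# Sectional word budgets from the model's plan in the transcript above.
budgets = {
    "Introduction": 1000, "Planning": 800, "Journey": 1000,
    "Arrival": 800, "Conflict": 1000, "Attempting solutions": 1200,
    "Complications": 1200, "Resolution": 800, "Conclusion": 400,
}
target = 8192
total = sum(budgets.values())
print(total, total - target)  # the plan allocates 8200 words, 8 over target

# Reasoning overhead reported in the summary: output words -> reasoning words.
overhead = {16: 466, 1088: 5910, 8961: 9624}
for out_words, reasoning_words in overhead.items():
    print(f"{out_words}-word output: {reasoning_words / out_words:.1f}x reasoning words")
```

Note how the overhead ratio shrinks as the target grows: for very short outputs the reasoning trace dwarfs the answer, while for the 8961-word story the trace is only marginally longer than the output itself.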
J Results under Supplementary Length Paradigms

In addition to the experiments and analyses presented on LIFEBENCH, we further evaluate model performance using two supplementary datasets introduced in Appendix C: LIFEBENCH-LABEL and LIFEBENCH-REFACTOR. Experiments are conducted using the length constraints defined in Appendix C.1 and Appendix C.2, with the control method set to Equal To across all evaluated models. Figure 25 and Figure 26 summarize the results for these two datasets.

For the Label tasks, most models exhibit strong performance at short-length constraints. Notably, o3-mini demonstrates near-perfect length instruction following, with Length Scores of 100, 98, and 98 for the 2, 4, and 8-word constraints, respectively. Conversely, models such as DeepSeek-R1, Mistral-7B-Instruct-v0.2, and Suri-I-ORPO consistently underperform, failing to surpass a Length Score of 60 across all tested length constraints.

In the Refactor tasks, o3-mini again demonstrates superior performance, maintaining Length Scores above 60 across nearly all constraints, with the exception of the longest (8192 words). At this extreme constraint, however, Doubao-1.5-Pro shows notably greater robustness, achieving a Length Score of 53 and outperforming other models in this challenging scenario. Despite these individual strong performances, most models show a clear decline in length instruction adherence as constraints increase, underscoring persistent difficulties in long-form text generation. Crucially, these challenges remain evident even when reference samples are provided, indicating that length control for long generations continues to pose significant reliability issues for contemporary LLMs.

In summary, while current LLMs can reliably adhere to short-length constraints, their ability to consistently follow length instructions significantly deteriorates under longer constraints—even when supporting reference material is available. These results highlight
fundamental limitations in present-day models' capacity for precise length controllability, particularly within extended or complex generation tasks.

Figure 25: Length Score for Label across different length constraints under the Equal To control method.

Figure 26: Length Score for Refactor across different length constraints under the Equal To control method.

K Generations Quality Evaluation by LLMs

Previous benchmarks have primarily focused on assessing the generation quality of LLMs in long-text applications. However, in real-world scenarios, the fundamental goal of length instruction following is to ensure that models complete user requests accurately while strictly following the original instructions. In this section, we therefore investigate whether LLMs maintain adherence to the original task instructions as they follow explicit length constraints.
Another motivation for evaluating generation quality under length instructions is to identify cases where models might intentionally degrade output quality—such as generating repetitive, meaningless content or omitting punctuation—to fulfill strict length requirements. To achieve this, we leverage LLM-based evaluation methods [119, 38] to assess output quality, explicitly excluding scenarios where models might sacrifice textual coherence or relevance in pursuit of length compliance. Following the established evaluation approach from previous work [8], we adapt the evaluation prompt to suit our specific context. The evaluation includes five dimensions of textual quality—relevance, accuracy, coherence, clarity, and breadth and depth—with scores ranging from 0 (lowest) to 10 (highest). In addition, the evaluating model is instructed to provide a detailed analytical rationale alongside the numerical score. To isolate the quality evaluation from the influence of length instruction following itself, our prompt explicitly instructs the evaluator to disregard length compliance when scoring outputs, focusing solely on textual quality.

We analyze generation quality from two complementary perspectives. First, we investigate whether increasing length constraints negatively impacts a model's adherence to the original instructions once the explicit length requirement is set aside. We present a detailed analysis of this issue in Section K.1. Second, we explore differences in generation quality across various models under identical length constraints, aiming to identify which models consistently produce higher-quality outputs. We provide this comparative analysis for four representative length constraints—1024, 2048, 4096, and 8192 words—in Section K.2. Our adapted evaluation prompt is as follows:

LLM-as-a-judge Prompt for Evaluating Generation Quality

You are an expert in evaluating text quality.
Please evaluate the quality of an AI assistant's response to a user's writing request. Be as strict as possible. You need to evaluate across the following five dimensions, with scores ranging from 0 to 10. The scoring criteria for each dimension are as follows (from 10 to 0):

1. Relevance: Evaluate how well the content stays on-topic and addresses the main theme of the user's request, regardless of any word or length limits specified by the user. Give a high score if the response generally follows the intent and subject of the instruction,
even if not all minor aspects are covered. Deduct points only if the response goes off-topic, contains irrelevant or repeated information, or fails to respond to the main point of the user's request. Score from highly relevant and fully applicable to completely irrelevant or inapplicable.
2. Accuracy: Score from content that is completely accurate, with no factual errors or misleading information, to content with numerous errors and highly misleading information.
3. Coherence: Score from a clear structure with smooth logical connections, to a disorganized structure with no coherence.
4. Clarity: Score from clear, detailed, and easy-to-understand language, to confusing expression with minimal details.
5. Breadth and Depth: Score from content that is both broad and deep, providing a lot of information, to content that is seriously lacking in breadth and depth, with minimal information.

Please evaluate the quality of the following response to a user's request according to the above requirements.

〈User Request〉 {user_request} 〈/User Request〉
〈Response〉 {response} 〈/Response〉

Please evaluate the quality of the response. You must first provide a brief analysis of its quality, then give a comprehensive analysis with scores for each dimension. The output must strictly follow the JSON format: { "Analysis": ..., "Relevance": ..., "Accuracy": ..., "Coherence": ..., "Clarity": ..., "Breadth and Depth": ... }. You do not need to consider whether the response meets the user's length requirements in your evaluation. Ensure that only one integer between 0 and 10 is output for each dimension score.

In all experiments in this section, we randomly select a subset of 200 samples from LIFEBENCH, use only English, and use only Equal To as the control method.

K.1 Do LLMs Follow the Length Instruction While Being Compatible with the Original Instruction?
In this section, we investigate whether LLMs fulfill explicit length constraints by compromising output quality when tasked with producing text of specific lengths. We employ GPT-4o as the evaluation model, following the prompt and process described above. In Section K.1.1, we visualize the evaluation results as a set of line graphs, where the horizontal axis represents the increasing length constraints: [16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192].

Intuitively, longer generations may exhibit improved textual richness, including greater detail, explanatory depth, and rhetorical sophistication, potentially resulting in higher evaluation scores. Thus, we further explore the relationship between actual generated text length and evaluation scores. This analysis, detailed in Section K.1.2, provides insight into how generation length correlates with textual quality.

Synthesizing the findings from these analyses, we address the question: Do LLMs follow length instructions without compromising adherence to the original task instructions? Our empirical results indicate that adhering to length constraints generally does not adversely affect output quality. Observed reductions in generation quality under longer length constraints appear primarily attributable to intrinsic limitations in the models' long-text generation capabilities or the adoption of Lazy Generation Strategies, rather than to inherent conflicts between length adherence and quality.

K.1.1 Instruction Following as Length Constraints Increase

Introducing length instruction following as an additional objective transforms the original instruction into a multi-objective problem, creating a trade-off in model performance under
https://arxiv.org/abs/2505.16234v1
multiple constraints [103, 41]. This raises an important question: do models resort to shortcuts—such as repetition or degenerate patterns—to satisfy the length constraint at the cost of faithfully following the original instruction, particularly under long-length requirements? Additionally, analyzing how output quality changes when models generate long text under explicit constraints offers insight into why certain models fail in long-form generation. Our experimental results are shown in Figure 27, which reports the average score of generation quality across varying length constraints. Interestingly, different models exhibit distinct trends as constraints increase. Based on the observed patterns, we group models into four categories:

• Increase then Decrease: This is the most common pattern. These models tend to perform poorly under very short constraints but reach their peak quality around 512 or 1024 words. Beyond this threshold, textual quality decreases slightly, but typically by less than one point from the peak.
[Figure 27: We compare 20 models as the length instruction increases, in terms of their textual quality—relevance, accuracy, coherence, clarity, and breadth and depth—and we calculate the average score. The figure investigates the change in quality under the full-range length constraint, from 16 to 8192 words. Per-model line plots of score versus length are omitted here.]

Representative models in this group include DeepSeek-R1,
DeepSeek-V3, Gemini-2.0-Flash, Gemini-2.0-Flash-Thinking, GLM-4-9B-Chat, Llama-3.1-70B-Instruct, Llama-3.1-8B-Instruct, LongWriter-GLM4-9B, Mistral-7B-Instruct-v0.2, Qwen2.5-72B-Instruct, and Qwen2.5-7B-Instruct. As discussed in Section 5.1, this trend likely reflects limited long-text generation capabilities, which degrade output quality under longer constraints.

• Reappearance of Maximum Score: In this group, models first reach peak performance at 512 or 1024 words, but regain or maintain that peak at 8192 words. Models such as Claude-3.7-Sonnet, Claude-3.7-Sonnet-Thinking, Gemini-2.5-Pro, GPT-4o, GPT-4o-mini, and o1-mini belong to this category. These
models typically outperform those in the Increase then Decrease group in length instruction following, as shown in Table 1, supporting the hypothesis that stronger instruction-following capabilities lead to more robust long-text generation.

• Low Long-Text Quality: These models show a significant drop in quality under long constraints—typically more than one point. Notably, this group includes LongWriter-Llama3.1-8B and Suri-I-ORPO, both of which are explicitly enhanced for long-text generation. However, their poor Length Score results in Table 1 suggest that these enhancements may come at the cost of weakened instruction-following ability, resulting in reduced output quality. These findings underscore that existing methods for improving long-text generation still struggle to address the limitations discussed in Section 5.2.

• Over-Refusal Behavior: This category is represented solely by o3-mini, which achieves strong length instruction adherence but exhibits frequent refusal behavior as constraints increase (see Section H). According to its system card [80], o3-mini is trained with aggressive safety alignment and refusal strategies. These include moderation models and safety classifiers designed to prevent overgeneration or unsafe content. While effective for safety, such alignment may overly restrict legitimate long-text generation. We hypothesize that o3-mini's tendency to reject long-form tasks is a byproduct of these safety protocols—illustrating a trade-off between alignment safety and generative flexibility.

These results indicate that most models do not significantly compromise the completion of the original instruction when adhering to length constraints. This finding rules out the possibility that models rely on tricky strategies—such as repetition or degenerate patterns—to meet length requirements.
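The score-based part of the grouping above can be sketched as a small heuristic. This is purely our own illustration (the function name and the one-point threshold are our assumptions; the paper assigns categories by inspecting Figure 27, and over-refusal cannot be inferred from scores alone):

```python
def categorize_trend(scores_by_constraint, drop_threshold=1.0):
    """Heuristically label a quality curve over increasing length constraints.

    `scores_by_constraint` maps a word-count length constraint to the
    average quality score (0-10) under that constraint. The Over-Refusal
    category (the o3-mini case) requires refusal-rate data and is not
    handled here.
    """
    constraints = sorted(scores_by_constraint)
    scores = [scores_by_constraint[c] for c in constraints]
    peak, final = max(scores), scores[-1]
    if final >= peak:  # peak regained or maintained at the longest constraint
        return "Reappearance of Maximum Score"
    if peak - final > drop_threshold:  # drop of more than one point
        return "Low Long-Text Quality"
    return "Increase then Decrease"

# A curve peaking at 512 words and dipping only slightly afterwards:
categorize_trend({16: 4.0, 512: 8.5, 8192: 7.9})  # "Increase then Decrease"
```

The "more than one point" cutoff mirrors the wording of the bullet descriptions; any real re-implementation would need the full ten-constraint curves behind Figure 27.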
As such, our analyses validate the integrity of the experimental setup used in the main paper and confirm that the observed trends reflect genuine model behaviors rather than artifacts of flawed evaluation design.

K.1.2 Ablation Study: Text Quality and Length

In Section K.1.1, we analyzed the relationship between the instructed length and the resulting text quality. The findings suggest that when a model has both strong length instruction following and long-text generation capabilities, it can maintain adherence to the original instruction. However, given that most current models exhibit limited ability in generating high-quality long-form outputs, it becomes essential to further examine the relationship between actual output length and textual quality—regardless of whether the model successfully satisfies the explicit constraint. This section presents an ablation study designed to isolate and analyze the correlation between actual text length and quality. The experimental results are shown in Figure 28, using the same experimental configuration as in Figure 27. Overall, we observe that, with the exception of Suri-I-ORPO, Mistral-7B-Instruct-v0.2, LongWriter-Llama3.1-8B, and Llama-3.1-8B-Instruct, an increase in actual output length generally leads to higher average quality scores—even when the target length constraint is not fully met. The results reveal that, among the 20 models evaluated, 10 exhibit a Pearson correlation coefficient greater than 0.6, indicating a strong positive relationship between output length and generation quality. Moreover, for the vast majority of models, the corresponding p-values fall below 1e−10, providing robust statistical evidence supporting the significance of this association. This supports the intuition previously discussed in Section K.1: longer outputs are more likely
to contain richer content, better structure, and more developed reasoning, which collectively contribute to higher quality. These results also explain why our evaluation does not directly compare generations from the same model under different length constraints—such comparisons could unfairly penalize shorter generations, even from models with excellent length instruction following ability. Building on Figure 28, we provide further analysis that complements Section K.1.1. First, the scatter plot of actual generated length versus quality reveals a number of outliers—specifically, outputs exceeding 8192 words—from several open-source models, including GLM-4-9B-Chat, Llama-3.1-70B-Instruct, Llama-3.1-8B-Instruct, LongWriter-GLM4-9B, LongWriter-Llama3.1-8B, Mistral-7B-Instruct-v0.2, and

[Figure 28: Per-model scatter plots of quality score ("Score") versus actual generated length ("Word Count", log2-scaled, with ticks from 16 to 8192 and beyond), each panel annotated with the model name and its Pearson r and p-value. Panel data are omitted here.]
/uni0000004f/uni00000052/uni0000004a/uni00000015/uni0000000b/uni0000002f/uni00000048/uni00000051/uni0000004a/uni00000057/uni0000004b/uni00000003/uni00000026/uni00000052/uni00000051/uni00000056/uni00000057/uni00000055/uni00000044/uni0000004c/uni00000051/uni00000057/uni0000000c/uni00000035/uni00000048/uni0000004f/uni00000044/uni00000057/uni0000004c/uni00000052/uni00000051/uni00000056/uni0000004b/uni0000004c/uni00000053/uni00000003/uni00000045/uni00000048/uni00000057/uni0000005a/uni00000048/uni00000048/uni00000051/uni00000003/uni00000024/uni00000046/uni00000057/uni00000058/uni00000044/uni0000004f/uni00000003/uni0000002a/uni00000048/uni00000051/uni00000048/uni00000055/uni00000044/uni00000057/uni0000004c/uni00000052/uni00000051/uni00000056/uni00000003/uni0000002f/uni00000048/uni00000051/uni0000004a/uni00000057/uni0000004b/uni00000003/uni00000044/uni00000051/uni00000047/uni00000003/uni00000037/uni00000048/uni0000005b/uni00000057/uni00000003/uni00000034/uni00000058/uni00000044/uni0000004f/uni0000004c/uni00000057/uni0000005c/uni00000003/uni00000024/uni00000056/uni00000056/uni00000048/uni00000056/uni00000056/uni00000050/uni00000048/uni00000051/uni00000057/uni00000036/uni00000044/uni00000050/uni00000053/uni0000004f/uni00000048/uni00000056 /uni0000002f/uni0000004c/uni00000051/uni00000048/uni00000044/uni00000055/uni00000003/uni00000029/uni0000004c/uni00000057 /uni0000001c/uni00000018/uni00000008/uni00000003/uni00000026/uni0000002cFigure 28: The horizontal axis is the actual length generated, and the vertical axis is the Average Score. The darker the point, the longer the required Equal to instruction length is. Note: The horizontal axis is an exponential coordinate with uneven distribution. Suri-I-ORPO . These outliers are consistently associated with low quality scores. Manual inspection of these cases confirms that they often consist of meaningless repetitions. 
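The linear-fit trend underlying Figure 28 amounts to an ordinary least-squares fit of quality score against the logarithm of generated word count. A minimal pure-Python sketch of that fit; the (word_count, score) pairs below are illustrative placeholders, not the paper's measurements:

```python
import math

# Minimal sketch: least-squares linear fit of quality score vs.
# log2(word count), mirroring the Figure 28 trend-line analysis.
# The (word_count, score) pairs are made up for illustration.
samples = [(512, 8.9), (1024, 8.4), (2048, 7.8), (4096, 7.1), (8192, 6.0)]

xs = [math.log2(w) for w, _ in samples]
ys = [s for _, s in samples]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n

# slope = cov(x, y) / var(x); intercept follows from the means
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx
print(f"score ~ {intercept:.2f} + {slope:.2f} * log2(word_count)")
```

A negative slope, as in this toy data, corresponds to the quality degradation under longer constraints discussed above.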
This finding reinforces the importance of jointly evaluating both length instruction following and textual quality, and it offers an additional explanation for model failure: under long-length constraints, failure may stem not only from an inability to reach the target length, but also from inherent limitations in the models' fundamental capabilities. In addition, Figure 28 visually illustrates the refusal behavior of models like o3-mini, where certain outputs under the 8192-word constraint are markedly shorter and of lower quality, appearing as dark-colored points that align poorly with the overall trend. In contrast, models with strong length instruction following, such as Claude-3.7-Sonnet, Claude-3.7-Sonnet-Thinking, and o3-mini, exhibit more structured output patterns. For these models, points of the same length constraint (same color in Figure 28) typically align in narrow vertical bands around the regression line, suggesting a consistent correlation between output length and quality. By contrast, models with weaker length adherence produce scatter plots with disorganized or erratic distributions, reflecting less reliable behavior across constraints.

https://arxiv.org/abs/2505.16234v1

K.2 Text Quality Comparison of Long-text Generation

In Section K.1, we analyzed how the same model performs under varying length constraints and observed that generation quality often decreases under longer constraints. To further investigate model behavior in this regime, we evaluate the performance of different models under the same long-length instruction to enable a fair comparison.

[Figure 29 panels: "Generation Comparison Across Different Long-Lengths". Four bar charts of Average Score (0 to 10) per model, one for each Length Instruction of 1024, 2048, 4096, and 8192, with Mean, Max, and Min reference lines.]

Figure 29: Average text quality of different models under the four length constraints [1024, 2048, 4096, 8192], shown as bar charts. The Mean, Max and Min reference lines of the bars are marked in each sub-graph.

The results are presented in Figure 29. We observe that, with the exception of the special case o3-mini, models such as Claude-3.7-Sonnet, Claude-3.7-Sonnet-Thinking, and Gemini-2.5-Pro consistently achieve higher generation quality under long-length constraints. These findings highlight the relative strength of these models in maintaining output quality while adhering to demanding length requirements. The full score table is as follows:

Model | 1024 (Re/Ac/Co/Cl/BD) | 2048 (Re/Ac/Co/Cl/BD) | 4096 (Re/Ac/Co/Cl/BD) | 8192 (Re/Ac/Co/Cl/BD)
Claude-3.7-Sonnet | 9.50/9.35/9.15/9.20/8.60 | 9.75/9.75/9.40/9.35/9.10 | 9.05/9.00/8.55/8.55/8.80 | 9.25/9.80/8.95/8.85/9.10
Claude-3.7-Sonnet-Thinking | 9.70/9.75/9.05/9.15/8.45 | 9.65/9.65/9.10/9.00/8.95 | 8.95/9.42/9.00/9.00/8.79 | 9.11/8.84/8.47/8.26/9.00
DeepSeek-R1 | 9.85/9.50/9.45/9.30/9.30 | 9.50/9.10/9.15/8.95/8.65 | 9.40/9.20/8.85/8.70/8.90 | 9.45/9.20/8.90/8.55/8.35
DeepSeek-V3 | 9.65/9.80/9.40/9.40/8.65 | 9.65/9.40/9.05/9.25/8.40 | 8.90/9.20/8.60/8.70/7.95 | 9.05/9.20/8.70/8.80/7.55
Gemini-2.0-Flash | 9.50/9.70/9.15/9.20/8.40 | 9.50/9.55/9.15/9.15/8.50 | 8.75/9.45/8.55/8.45/8.40 | 9.00/9.25/8.75/8.50/8.20
Gemini-2.0-Flash-Thinking | 9.60/9.60/9.25/9.05/8.55 | 9.15/9.50/8.75/8.50/8.60 | 9.10/9.35/8.30/8.05/8.55 | 8.80/9.05/8.05/7.70/8.80
Gemini-2.5-Pro | 9.55/9.65/9.10/9.05/8.95 | 9.70/9.50/9.25/8.85/9.30 | 9.60/9.55/8.80/8.60/9.05 | 9.25/9.50/8.70/8.50/8.95
GLM-4-9B-Chat | 9.05/9.30/8.75/8.80/8.00 | 8.90/9.10/8.50/8.60/7.40 | 8.80/9.00/8.35/8.60/7.75 | 8.75/9.15/8.20/8.20/7.55
GPT-4o | 9.50/9.70/9.35/9.30/8.50 | 9.45/9.55/9.05/9.00/8.35 | 9.30/9.45/9.00/9.00/8.45 | 9.25/9.70/9.05/9.20/7.75
GPT-4o mini | 9.60/9.55/9.15/8.95/8.30 | 9.30/9.55/9.05/9.10/8.45 | 9.40/9.45/8.95/8.80/8.00 | 9.30/9.70/8.65/8.75/8.00
Llama-3.1-70B-Instruct | 9.45/9.60/8.35/8.45/7.20 | 8.65/9.10/7.55/8.10/6.95 | 8.07/8.95/7.55/7.80/6.85 | 8.25/9.00/7.35/7.75/6.85
Llama-3.1-8B-Instruct | 8.65/8.90/8.20/8.05/7.00 | 8.80/8.65/7.80/8.10/7.00 | 7.75/8.20/7.20/7.40/6.45 | 7.55/8.35/6.25/6.40/6.00
LongWriter-GLM4-9B | 9.20/9.50/8.95/8.85/7.70 | 8.60/9.30/8.20/8.60/7.55 | 8.50/9.30/8.30/8.30/7.80 | 8.05/8.90/7.95/7.90/8.15
LongWriter-Llama3.1-8B | 8.30/9.25/7.85/8.10/6.45 | 8.60/9.10/7.75/7.90/6.90 | 6.85/8.30/6.25/6.75/5.95 | 7.45/8.50/7.10/7.05/7.45
Mistral-7B-Instruct-v0.2 | 7.70/7.20/7.35/7.20/6.55 | 7.50/7.35/7.25/7.55/6.50 | 7.20/7.50/7.40/7.45/6.45 | 7.60/7.30/7.50/7.60/6.70
o1-mini | 9.40/9.40/9.10/9.05/8.45 | 9.00/9.60/8.55/8.65/8.05 | 8.35/9.30/8.05/8.20/7.65 | 9.10/9.50/9.05/8.95/8.00
o3-mini | 9.22/9.28/8.50/8.33/8.11 | 9.16/9.32/8.11/7.74/8.58 | 8.63/9.42/7.58/7.53/8.00 | 3.89/5.53/4.79/5.84/3.26
Qwen2.5-72B-Instruct | 9.60/9.50/9.50/9.55/8.55 | 9.60/9.60/9.25/9.20/8.35 | 8.75/8.75/8.50/8.65/8.45 | 8.55/8.95/8.25/8.45/7.65
Qwen2.5-7B-Instruct | 9.25/9.45/9.20/9.25/7.95 | 8.95/9.05/8.50/8.30/7.85 | 8.60/9.25/8.40/8.30/7.90 | 8.55/9.05/8.45/8.40/7.55
Suri-I-ORPO | 4.75/6.45/4.30/4.45/3.80 | 4.90/5.90/3.30/4.05/3.40 | 4.25/5.60/3.20/3.85/2.80 | 4.75/5.80/4.00/4.40/3.70

Table 17: Model comparison across four length constraints (scores listed per constraint as Re/Ac/Co/Cl/BD). Metrics: Re means Relevance, Ac means Accuracy, Co means Coherence, Cl means Clarity, BD means Breadth & Depth.

L Comparison with Existing Leaderboard

Chatbot Arena is an open platform for crowdsourced AI benchmarking [18]. With over one million user votes, the platform ranks leading LLMs and AI chatbots using the Bradley-Terry model, producing continuously updated leaderboards. We compare the Length Score of models evaluated on LIFEBENCH with the Arena Scores from the Chatbot Arena LLM Leaderboard. For models with multiple available versions, we select the version closest to our evaluated model (detailed version mappings are listed in Table 18). We compute both Pearson and Spearman correlation coefficients between Length Score and the Arena Scores. The strongest correlation is observed for “Hard Prompts w/SC”, with Pearson and Spearman coefficients of 0.78 and 0.71, respectively, indicating a strong positive correlation.
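The Pearson and Spearman coefficients reported in this section reduce to straightforward computations over the two paired score lists. A minimal pure-Python sketch (the length_scores and arena_scores vectors below are hypothetical, not the paper's data):

```python
# Minimal sketch: Pearson and Spearman correlation between two paired
# score lists (e.g., Length Score vs. Arena Score). The sample vectors
# are made up for illustration, not taken from the paper's results.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(v):
    # 1-based ranks, averaging over ties.
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Spearman = Pearson correlation of the rank vectors.
    return pearson(ranks(x), ranks(y))

length_scores = [40, 55, 48, 62, 75]              # hypothetical
arena_scores = [1272, 1290, 1305, 1340, 1360]     # hypothetical
print(round(pearson(length_scores, arena_scores), 2))   # prints 0.93
print(round(spearman(length_scores, arena_scores), 2))  # prints 0.9
```

Spearman being rank-based makes it less sensitive to the outlier models (e.g., o3-mini) discussed below, which is one reason both coefficients are reported.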
“Longer Query” exhibits the next highest correlation, with Pearson and Spearman coefficients of 0.75 and 0.67, also reflecting a strong positive correlation. In contrast, the lowest correlations are observed for “Creative Writing”, with Pearson and Spearman coefficients of 0.64 and 0.51, and for “Multi-Turn”, with Pearson and Spearman coefficients of 0.67 and 0.50, which suggest a moderate positive correlation. As shown in Figure 30, several models exhibit notable deviations between their Length Score and Arena Scores, especially among those with higher Arena Scores. In particular, o3-mini achieves a considerably higher Length Score (75.4) relative to its Arena Score (1305), while both Gemini-2.0-Flash and DeepSeek-R1 demonstrate notably lower Length Score values (48.4 and 47.7, respectively) despite high Arena Scores (1354 and 1358, respectively). These results suggest that even among models with generally strong overall performance, there can be substantial differences in their ability to follow length instructions. This highlights that length instruction following is a distinct aspect of model capability that is not fully captured by aggregate leaderboard scores. Consequently, explicitly evaluating and aligning models on length-specific behaviors is essential for comprehensive benchmarking and systematic improvement of overall model capabilities.

Table 18: Correspondence between models evaluated in LIFEBENCH and their respective versions and overall Arena Scores on the Chatbot Arena LLM Leaderboard.

Model | Chatbot Arena Model Version | Overall Arena Score (1)
GPT-4o mini | gpt-4o-mini-2024-07-18 | 1272
GPT-4o | gpt-4o-2024-08-06 | 1265
o1-mini | o1-mini | 1303
o3-mini | o3-mini | 1305
Claude-3.7-Sonnet | claude-3-7-sonnet-20250219 | 1290
Claude-3.7-Sonnet-Thinking | claude-3-7-sonnet-20250219-thinking-32k | 1301
Gemini-2.0-Flash | gemini-2.0-flash-001 | 1354
Gemini-2.5-Pro | gemini-2.5-pro-preview-05-06 | 1447
DeepSeek-R1 | deepseek-r1 | 1358
DeepSeek-V3 | deepseek-v3 | 1318
Llama-3.1-8B-Instruct | llama-3.1-8b-instruct | 1175
Llama-3.1-70B-Instruct | llama-3.1-70b-instruct | 1247
Qwen2.5-72B-Instruct | qwen2.5-72b-instruct | 1257
Qwen3-235B-A22B | qwen3-235b-a22b | 1341
Mistral-7B-Instruct-v0.2 | mistral-7b-instruct-v0.2 | 1072

(1) The data was collected on May 12, 2025, from the official leaderboard at https://lmarena.ai/?leaderboard.

[Figure 30 panels: scatter plots of Length Score (LS) against each Arena category, with fitted regression lines: LS vs Overall (P: 0.67, S: 0.49); LS vs Overall w/ SC (P: 0.74, S: 0.61); LS vs Hard Prompts (P: 0.75, S: 0.52); LS vs Hard Prompts w/ SC (P: 0.78, S: 0.71); LS vs Coding (P: 0.75, S: 0.48); LS vs Math (P: 0.75, S: 0.55); LS vs Creative Writing (P: 0.64, S: 0.51); LS vs Instruction Following (P: 0.75, S: 0.63); LS vs Longer Query (P: 0.75, S: 0.67); LS vs Multi-Turn (P: 0.67, S: 0.50).]

Figure 30: Comparison of Length Score with the Arena Scores. P denotes the Pearson correlation coefficient, and S denotes the Spearman correlation coefficient. Reasoning models are indicated by a star marker, while non-reasoning models use a circle marker. “w/SC” stands for “without style control”. The black dashed line represents the fitted regression result.

M Future Directions and Potential Solutions

In this section, we outline future directions and potential solutions for improving length instruction following in LLMs. Section M.1 investigates the model's internal understanding of input prompts under varying length constraints, examining the correlation between actual output length and the logit/probability of the end-of-sequence (EoS) token. In Section M.2, we assess the ability of unaligned models to generate long text and follow explicit length instructions, offering insight into the role of pre-training in shaping these capabilities. Finally, Section M.3 introduces the Pre-Planning method, which significantly enhances LLMs' adherence to length instructions and improves their ability to extrapolate toward the upper bounds of their generation capacity.

M.1 Interpretability of Length Instruction Following

Understanding how LLMs interpret and respond to varying length constraints is crucial for diagnosing why these models fail to consistently adhere to such instructions. However, interpretability analyses in this area remain unexplored.
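As background for the analysis in this subsection, the EoS probability is obtained from the model's next-token logits via a softmax. A minimal pure-Python sketch; the token id and logit values below are made up for illustration, whereas in practice they would come from a model's next-token prediction head:

```python
import math

# Minimal sketch: converting a next-token logit vector into an EoS
# probability via softmax, the quantity tracked in the EoS analysis.
# EOS_ID and the logit values are hypothetical.

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    z = sum(exps)
    return [e / z for e in exps]

EOS_ID = 2                      # hypothetical EoS token id
logits = [1.0, 3.5, -4.0, 0.2]  # hypothetical next-token logits

probs = softmax(logits)
print(f"EoS logit: {logits[EOS_ID]}, EoS probability: {probs[EOS_ID]:.2e}")
# A low EoS logit/probability signals that the model considers the
# sequence incomplete and will keep generating.
```

Varying only the stated length constraint in the prompt and re-reading this quantity at each decoding step is the probing strategy described next.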
In this section, we provide preliminary insights through a straightforward examination of model behavior, highlighting avenues for future interpretability research. Specifically, we analyze the behavior of the end-of-sequence (EoS) token, a special token that signals LLMs to terminate generation. When an LLM deems the current sequence incomplete, the probability or logit assigned to the EoS token for the subsequent prediction should remain low. Intuitively, by varying only the specified length constraint (e.g., comparing prompts such as "Please generate a summary of 256 words" versus "Please generate a summary of 8192 words"), we can assess how internal predictions of the EoS token shift in response, offering insights into the model's intrinsic representation of output completeness.

Our experiments preliminarily confirm this intuition, as illustrated in Figure 31. We observe that LLMs generally possess a good capability to differentiate among distinct length constraints. Indeed, a clear inverse relationship emerges between the models' actual generated length and their corresponding EoS logits or probabilities, especially at shorter constraints, where models perform relatively well. For instance, models such as Llama-3.1-70B-Instruct and Mistral-7B-Instruct-v0.2 demonstrate a consistent pattern: under constraints of up to 2048 words, lower EoS logits or probabilities correlate strongly with longer generations. However, some anomalies appear in other models: Llama-3.1-8B-Instruct shows this inverse relationship clearly only in probability, while Qwen2.5-7B-Instruct primarily demonstrates it in logits. Additionally, at very long constraints, all models exhibit abnormal behavior, with EoS logits and probabilities often reversing their earlier trends.

[Figure 31 panels: EoS logit (first row) and EoS probability (second row) versus length constraint (16 to 8192 words), overlaid with actual token counts, for Llama-3.1-70B-Instruct, Qwen2.5-7B-Instruct, Mistral-7B-Instruct-v0.2, and Llama-3.1-8B-Instruct.]

Figure 31: This figure illustrates the relationship between the model's actual output length and the generation dynamics of the end-of-sequence (EoS) token when only the length constraint in the instruction is varied. (First Row) The left Y-axis indicates the logit value of the EoS token, while the right Y-axis shows the actual number of tokens generated. (Second Row) The left Y-axis presents the probability of the EoS token, and the right Y-axis again reflects the actual number of tokens generated.

Although we provide initial insights, these analyses remain exploratory, as they do not fully explain the common underlying reasons for length-instruction failures. Nevertheless, we believe these preliminary findings point toward an interpretable solution for addressing insufficient adherence to length instructions, an avenue we intend to further investigate in future work.

[Figure 32 panels: output word-count distributions (0 to 10000 words) for the base models Llama-3.1-8B, Mistral-7B-v0.1, Qwen2.5-7B, and GLM-4-9B, annotated with No Refusal, Direct Refusal, Refusal with Attempt, and Mean Word Count.]

Figure 32: Output length distributions of base models. The gray dashed line denotes the length constraint.

M.2 Improving Length Instruction Following During Pre-training

In Section H, we identified behaviors such as premature termination and explicit refusals as notable issues in length instruction following. While our analysis indicated that safety alignment might partly explain rejection behaviors, it remains unclear whether post-training processes adversely affect the underlying long-text generation capabilities, leading specifically to premature termination. Understanding whether these limitations are inherent to the pre-trained models or introduced during post-training is crucial. If deficiencies in long-text generation and length adherence emerge primarily from post-training, targeted improvements to post-training strategies would suffice. Conversely, if these limitations originate during pre-training, a revised pre-training approach will be necessary. To investigate this, we evaluated the base pre-trained models' length instruction following capabilities using in-context learning [64], thereby excluding post-training influences
https://arxiv.org/abs/2505.16234v1
and assessing inherent pre-trained capabilities directly. We conducted experiments on 360 fundamental data units under an 8192-word constraint with the Equal To control method. Four representative open-source base models were tested: Llama-3.1-8B, GLM-4-9B, Qwen2.5-7B, and Mistral-7B-v0.1. The experimental results are summarized as follows:

•Llama-3.1-8B: Average length: 1090.29; Direct Refusal: 3.95%; Refusal with Attempt: 2.41%.
•GLM-4-9B: Average length: 566.97; Direct Refusal: 2.91%; Refusal with Attempt: 18.11%.
•Qwen2.5-7B: Average length: 912.54; Direct Refusal: 3.91%; Refusal with Attempt: 7.50%.
•Mistral-7B-v0.1: Average length: 1076.74; Direct Refusal: 1.47%; Refusal with Attempt: 4.02%.

We further illustrate these findings in Figure 32, which shows the distribution of actual output lengths relative to the imposed constraint. Notably, even these base models exhibit lazy generation behaviors, such as premature termination and explicit refusal. Although extreme rejection behaviors (e.g., those observed with o3-mini) might be amplified through safety-oriented post-training, our results indicate that inherent deficiencies already exist within the pre-trained models. Moreover, these base models consistently perform worse than their instruction-aligned counterparts in tasks requiring outputs of 8192 words, highlighting intrinsic limitations in their long-text generation capability.

In conclusion, these findings suggest that the problems associated with long-text generation and excessive rejection behaviors primarily originate during the pre-training stage. While post-training alignment may exacerbate these issues, it does not constitute their root cause. We hypothesize that inadequate representation of long-form texts in pre-training datasets might contribute significantly to these observed deficiencies.
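The per-model statistics above (average length and refusal rates) can be computed with a sketch like the following. The keyword-based refusal classifier is a hypothetical stand-in, not the paper's actual annotation procedure.

```python
# Sketch of the per-model statistics reported above: average output length
# plus Direct Refusal / Refusal with Attempt rates. The keyword heuristic
# below is hypothetical; it is not the paper's actual annotation procedure.

def classify(output: str) -> str:
    text = output.lower()
    refusal_markers = ("i cannot", "i can't", "unable to")  # hypothetical markers
    if any(m in text for m in refusal_markers):
        # Short refusal -> direct; refusal followed by substantial text -> attempt.
        return "direct_refusal" if len(text.split()) < 50 else "refusal_with_attempt"
    return "no_refusal"

def summarize(outputs):
    n = len(outputs)
    counts = {"direct_refusal": 0, "refusal_with_attempt": 0, "no_refusal": 0}
    total_words = 0
    for out in outputs:
        counts[classify(out)] += 1
        total_words += len(out.split())
    return {
        "avg_length": total_words / n,
        "direct_refusal_rate": counts["direct_refusal"] / n,
        "refusal_with_attempt_rate": counts["refusal_with_attempt"] / n,
    }

stats = summarize(["word " * 100, "I cannot write 8192 words."])
```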
Thus, integrating more extensive long-text data during pre-training may be a promising avenue for improving length instruction following capabilities in future model development.

M.3 Improving Length Instruction Following During Post-training

Although Section M.2 highlights that deficiencies in length instruction following, particularly for long-form generation, originate primarily during the pre-training stage, we conjecture that these shortcomings can also be substantially mitigated through targeted post-training methods. Analogously, just as models initially incapable of reasoning or even answering simple questions (e.g., "How many 'r's are there in 'strawberry'?") can acquire these capabilities via suitable post-training alignment, length instruction adherence may similarly benefit from specialized alignment strategies. Inspired by Chain-of-Thought [102], wherein reasoning-based LLMs are encouraged to perform structured pre-planning before generating responses, we explored a similar pre-planning strategy tailored specifically for long-text generation. Concretely, we manually composed structured content outlines resembling book catalogs, then allowed LLMs to refine the prompt based on these improved outlines before generating extensive texts. We conducted experiments using four proprietary models (GPT-4o mini, Gemini-2.0-Flash, Doubao-1.5-Pro, and DeepSeek-V3), all evaluated on creative generation tasks from LIFEBENCH, with an Equal To control method and a fixed length constraint of 8192 words.

Table 19: Length Score and Length Deviation values for each model after pre-planning, with improvements relative to the baseline highlighted in green.
Model              LS            LD
GPT-4o mini        71.9 (↑70.2)  16% (↓69%)
Gemini-2.0-Flash   66.6 (↑57.4)   9% (↓48%)
Doubao-1.5-Pro     72.0 (↑59.7)  11% (↓41%)
DeepSeek-V3        76.4 (↑70.3)   9% (↓59%)

The results presented in Table 19 clearly indicate that this pre-planning approach significantly improves the performance of models that initially struggled with length instruction following. Among the four models evaluated, we observe substantial improvements in Length Score, with increases consistently exceeding 57 points and the highest reaching 76.4, substantially outperforming the top-performing model, Gemini-2.5-Pro (39.4, as reported in Table 11). Correspondingly, significant reductions in Length Deviation indicate notably better adherence to the specified length constraints. These findings strongly suggest that enabling LLMs to explicitly plan and structure their outputs in advance can markedly enhance their long-text generation capabilities, presenting a promising direction for improving length instruction adherence through targeted post-training strategies.

N Length Instruction Following under Output Format Constraints

To further investigate how length instruction following is affected when models are required to follow multi-objective instructions, we extend LIFEBENCH by introducing an additional output format constraint. Specifically, we examine the impact of requiring generated text to not only follow a specified word count but also conform to specified structural formats. This approach allows us to assess how format-related complexity influences models' ability to follow length instructions precisely. We define three distinct output formats, arranged by increasing structural complexity:

•Markdown: Requires only basic structural elements such as headings and lists, and employs lightweight syntax, making it the simplest format. The prompt specifies: "Your output must be in Markdown format. Use ## for headings, - for lists, and standard Markdown syntax for formatting."

•HTML: Involves a broader set of structural elements and nested tags, representing moderate formatting complexity. The prompt specifies: "Your output must be in HTML format. Use <h2> for headings, <ul><li> for lists, and standard HTML tags for formatting."

•LaTeX: Demands strict command-level syntax and structural conformity, posing the greatest challenge among the three formats. The prompt specifies: "Your output must be in LaTeX format. Use \section{} for headings, \itemize{} for lists, and standard LaTeX commands for formatting. Do not include preamble or document class."

We evaluate model performance across five length constraints: 128, 256, 512, 1024, and 2048 words, covering a representative range from short to moderate generation lengths. These constraints were carefully selected to ensure added formatting syntax does not exceed the maximum generation length supported by the models. Our evaluation includes leading proprietary models from OpenAI (GPT-4o, GPT-4o-mini, o1-mini, o3-mini) and Google (Gemini-2.0-Flash, Gemini-2.0-Flash-Thinking, Gemini-2.5-Pro). To accurately measure the semantic word count of generated outputs, we employ a rule-based post-processing pipeline tailored to each output format. Specifically, HTML tags are removed by stripping all content enclosed within angle brackets (<...>). For Markdown and LaTeX, lines or tokens corresponding explicitly to formatting commands or syntax elements, such as headers (#), list markers (-), or command sequences (\), are removed. This preprocessing ensures accurate assessment of semantic content length, unaffected by formatting-related artifacts.

Table 20: Effect of Markdown format constraints on Length Score. Green indicates improved scores, while red denotes decreased scores. Columns are length constraints in words.

Model                      128           256          512           1024          2048
GPT-4o mini                50.5 (↓12.3)  62.6 (↓7.2)  70.5 (↑3.7)   71.6 (↓3.1)   26.5 (↑1.0)
GPT-4o                     56.6 (↓10.4)  69.8 (↑0.2)  68.4 (↑1.9)   61.0 (↓8.0)   22.6 (↑6.1)
o1-mini                    53.4 (↓13.2)  52.6 (↓9.9)  44.7 (↓6.5)   38.5 (↓1.8)   20.2 (↓14.0)
Gemini-2.0-Flash           44.2 (↓15.6)  51.6 (↓8.1)  34.0 (↓4.2)   51.2 (↑3.5)   52.2 (↓0.5)
Gemini-2.0-Flash-Thinking  51.0 (↓2.4)   52.1 (↑3.1)  56.6 (↑26.6)  47.0 (↑14.9)  28.6 (↓13.9)
Gemini-2.5-Pro             67.1 (↓5.7)   63.7 (↓3.7)  54.2 (↑4.7)   35.4 (↓1.6)   30.6 (↓17.4)

Tables 20, 21, and 22 summarize the model performances measured by Length Score under Markdown, HTML, and LaTeX format constraints, respectively. Overall, imposing additional format constraints generally leads to lower Length Score across most models, especially at extreme length targets (128 and 2048 words). For instance, o1-mini and Gemini-2.0-Flash consistently show reduced performance across nearly all scenarios, suggesting that the complexity of structural formatting can substantially impair the ability of models to precisely follow length instructions.

Table 21: Effect of HTML format constraints on Length Score. Green indicates improved scores, while red denotes decreased scores. Columns are length constraints in words.

Model                      128           256           512           1024          2048
GPT-4o mini                51.0 (↓11.8)  64.6 (↓5.2)   77.8 (↑11.0)  55.1 (↓19.6)  19.0 (↓6.5)
GPT-4o                     59.6 (↓7.4)   71.9 (↑2.3)   68.9 (↑2.4)   59.8 (↓9.2)   12.2 (↓4.3)
o1-mini                    55.8 (↓10.8)  51.9 (↓10.6)  47.3 (↓3.9)   31.9 (↓8.4)   14.0 (↓20.2)
Gemini-2.0-Flash           44.1 (↓15.7)  51.3 (↓8.4)   35.6 (↓2.6)   39.8 (↓7.9)   53.3 (↑0.6)
Gemini-2.0-Flash-Thinking  56.3 (↑2.9)   54.9 (↑5.9)   55.3 (↑25.3)  38.5 (↑6.4)   18.0 (↓24.5)
Gemini-2.5-Pro             64.0 (↓8.8)   58.2 (↓9.2)   50.9 (↑1.4)   39.8 (↑2.8)   30.5 (↓17.5)

Table 22: Effect of LaTeX format constraints on Length Score. Green indicates improved scores, while red denotes decreased scores. Columns are length constraints in words.

Model                      128           256           512           1024          2048
GPT-4o mini                48.2 (↓14.6)  63.2 (↓6.6)   71.8 (↑5.0)   65.6 (↓9.1)   24.9 (↓0.6)
GPT-4o                     54.3 (↓12.7)  67.3 (↓2.3)   66.4 (↓0.1)   61.4 (↓7.6)   14.1 (↓2.4)
o1-mini                    60.2 (↓6.4)   52.3 (↓10.2)  44.2 (↓7.0)   40.3 ( - )    17.5 (↓16.7)
Gemini-2.0-Flash           52.9 (↓6.9)   51.0 (↓8.7)   37.3 (↓0.9)   54.7 (↑7.0)   59.5 (↑6.8)
Gemini-2.0-Flash-Thinking  58.0 (↑4.6)   55.3 (↑6.3)   48.7 (↑18.7)  34.9 (↑2.8)   25.1 (↓17.4)
Gemini-2.5-Pro             62.9 (↓9.9)   60.8 (↓6.6)   56.7 (↑7.2)   37.8 (↑0.8)   33.7 (↓14.3)

However, the Gemini-2.0-Flash-Thinking model displays a notable deviation from this general trend, achieving improved Length Score at intermediate length constraints (256, 512, and 1024 words) across all three output formats. This behavior suggests that, under certain circumstances, structured output requirements may enhance rather than impede length compliance, possibly by encouraging the model to employ more deliberate, organized generation strategies.

In summary, although output format constraints typically introduce significant additional challenges to length instruction adherence, the exceptional performance of Gemini-2.0-Flash-Thinking indicates promising directions for future model design and training. Specifically, incorporating structured formatting constraints into training may not only mitigate performance degradation but could potentially facilitate more effective and precise length instruction following.
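The rule-based semantic word counter described in Section N (strip HTML tags, drop Markdown/LaTeX syntax markers, then count words) can be sketched as follows. The exact stripping regexes are our illustrative approximation, not the paper's full pipeline.

```python
# Illustrative approximation of the rule-based semantic word counter:
# strip format syntax, then count whitespace-separated words. The exact
# stripping rules are our assumption, not the paper's full pipeline.
import re

def semantic_word_count(text: str, fmt: str) -> int:
    if fmt == "html":
        text = re.sub(r"<[^>]+>", " ", text)                # drop <...> tags
    elif fmt == "markdown":
        text = re.sub(r"^#{1,6}\s*", "", text, flags=re.M)  # heading markers
        text = re.sub(r"^-\s*", "", text, flags=re.M)       # list markers
    elif fmt == "latex":
        text = re.sub(r"\\[a-zA-Z]+\*?", " ", text)         # \commands
        text = text.replace("{", " ").replace("}", " ")     # brace residue
    return len(text.split())

html_count = semantic_word_count("<h2>Title</h2><ul><li>one two</li></ul>", "html")
md_count = semantic_word_count("## Title\n- one two", "markdown")
```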
arXiv:2505.16237v1 [cs.CL] 22 May 2025

Align-GRAG: Reasoning-Guided Dual Alignment for Graph Retrieval-Augmented Generation

Derong Xu1,2∗, Pengyue Jia2∗, Xiaopeng Li2, Yingyi Zhang2, Maolin Wang2, Qidong Liu2, Xiangyu Zhao2, Yichao Wang3, Huifeng Guo3, Ruiming Tang3, Enhong Chen1, Tong Xu1
1University of Science and Technology of China, 2City University of Hong Kong, 3Huawei Noah's Ark Lab
derongxu@mail.ustc.edu.cn

Abstract

Large language models (LLMs) have demonstrated remarkable capabilities, but still struggle with issues like hallucinations and outdated information. Retrieval-augmented generation (RAG) addresses these issues by grounding LLM outputs in external knowledge with an Information Retrieval (IR) system. Building on this foundation, graph-based RAG systems go a step further by retrieving subgraphs, which preserve the relationships between knowledge entities and provide more comprehensive context. However, graph RAG faces two challenges: (1) Retrieving relevant information introduces irrelevant nodes (especially in dense graph databases, where retrieval usually extends to adjacent nodes), leading to overly lengthy inputs that hinder efficiency; (2) The representation gap between graph and language during generation with LLMs limits the ability to fully leverage graph structures for enhanced understanding. To address these limitations, we propose Align-GRAG, a novel reasoning-guided dual alignment framework for the post-retrieval phase. It first formulates a subgraph by retrieving nodes and edges. Then an Aligner is proposed to jointly optimize a graph encoder with LLM-summarized reasoning. It achieves dual alignment of graph nodes and representations by leveraging a KL divergence loss and a contrastive loss, facilitating efficient pruning of irrelevant knowledge and establishing a unified semantic space. The Generator integrates the aligned graph data with the LLM to produce coherent and accurate answers.
Experiments on the GraphQA benchmark across three tasks (commonsense reasoning, scene graph understanding, and knowledge graph reasoning) validate the effectiveness of our method. The code will be made available upon acceptance.

∗Equal contribution. Preprint. Under review.

1 Introduction

Recent advancements in natural language processing have demonstrated the remarkable capabilities of large language models (LLMs), such as GPT [1] and Llama [2], in understanding, reasoning, and handling complex tasks [3,4]. The massive parameterized knowledge encoded within LLMs has enabled them to excel in information retrieval [5–7]. Despite these advancements, concerns about the interpretability of LLMs remain, leading to challenges like hallucinations (fabrication of false or misleading information) [8,9] and reliance on outdated data [10]. These issues are especially concerning in high-stakes fields such as healthcare [11] and law [12]. Retrieval-augmented generation (RAG) systems have been developed to address these problems [10]. By integrating external information retrieval (IR) systems, RAG retrieves knowledge from external databases, ensuring that LLMs are equipped with up-to-date and relevant information tailored to the user's query. By grounding the LLM's outputs in verifiable, external knowledge, RAG systems significantly improve the reliability and accuracy of content generated by LLMs [13]. Nevertheless, in real-world scenarios, RAG systems often divide long context into independent chunks, overlooking the deeper connections between fragments and lacking a global perspective [14,15]. Moreover, a significant portion of data inherently exhibits a graph-like structure, such as in recommendation systems [16], the Web, and knowledge graphs [17]. To address these, researchers have proposed
graph-based RAG (GRAG) [14,18–20] as an enhancement to traditional RAG methods. Instead of retrieving isolated text chunks where information is treated independently of its structural relationships, GRAG retrieves subgraphs from databases [18]. These subgraphs maintain the interconnected relationships between knowledge entities, providing richer context and enabling more accurate, contextually relevant responses. However, despite its potential, integrating RAG with graphs is non-trivial, presenting two key challenges: (1) Irrelevant Knowledge: The retrieval stage often introduces irrelevant information, hindering the LLM's attention to useful knowledge (especially in dense graph databases, where retrieval usually extends to adjacent nodes), and the resulting inputs become overly lengthy, further causing computational inefficiency. Some traditional RAG approaches [21–23] aim to tackle this challenge by incorporating a reranker in the post-retrieval stage [13,10]. For instance, BGE [21] leverages cross-encoder-based architectures to effectively capture interactions between queries and documents, enabling more precise relevance estimation. However, in the GRAG setting, where chunks are typically represented as nodes or triplets, this approach can disrupt structural information and overlook the connections between knowledge. Meanwhile, existing GRAG methods [15] focus primarily on retrieval-stage optimizations (e.g., GraphRAG [14] builds community graphs for long documents to facilitate retrieval and summarization), resulting in relatively shallow similarity measures that lack deeper semantic matching capabilities. (2) Representation Gap: In the generation stage, when integrating LLMs with graph structure embeddings, such as those generated by a Graph Neural Network (GNN), a challenge arises due to the representation gap: graph embeddings are designed to capture structural properties, which misaligns with the sequence-based representations used by LLMs. This misalignment limits the ability to fully leverage graphs for enhanced reasoning [24,25]. In the field of LLMs, most existing GRAG methods [20,26–30] primarily treat graph data as plain text inputs, lacking a dedicated encoder for structural information, which limits their understanding capabilities. Some methods [19,31,18] incorporate graph encoders, but typically rely on straightforward approaches like concatenating graph embeddings with LLM inputs. For instance, G-Retriever [18] uses a projector to map graph embeddings, while GNP [19] applies cross-modality pooling to fuse text and graph representations. However, neither approach explicitly aligns these distinct representation spaces, leaving the underlying gap unresolved. These research gaps present a question: Can we better prune irrelevant nodes while keeping structural information, and effectively encode graph representations for utilization by LLMs? In this work, we propose Align-GRAG, a novel reasoning-guided dual alignment framework specifically designed for GRAG. During the retrieval stage, we calculate the similarity between nodes and edges to extract the initial subgraph. To address the two key challenges in GRAG, we introduce an innovative graph aligner module for the post-retrieval phase. We leverage the LLM's summarization ability with well-crafted prompts to generate reasoning chains that bridge information gaps. The aligner achieves dual alignment with reasoning chains as the optimization target: (1) Graph Node Alignment, which ranks critical reasoning nodes, pruning irrelevant knowledge; and (2) Graph Representation Alignment, ensuring a unified semantic space between graph and language representations via contrastive learning. This dual alignment refines subgraphs to focus on relevant
knowledge, enabling the generator to produce accurate, context-aware responses grounded in the graph's structure. Extensive experiments are conducted on the GraphQA benchmark [18], covering tasks such as commonsense reasoning, scene graph understanding, and knowledge graph reasoning, which demonstrate the superior performance of Align-GRAG compared to other baselines and highlight the effectiveness of the Aligner module. Overall, this work makes three key contributions:

•To the best of our knowledge, this is the first work to introduce a reasoning-guided graph alignment framework in the post-retrieval phase for GRAG.

•We propose a novel graph aligner module for dual alignment between graph nodes and representations, allowing effective pruning of irrelevant nodes while learning a unified semantic space.

•The effectiveness of Align-GRAG was comprehensively validated on three datasets across different tasks, demonstrating superior performance compared to other strong baselines.

Figure 1: The Align-GRAG framework includes three components: ➀ Graph Retriever, ➁ Graph Aligner, and ➂ Graph Generator. It first retrieves subgraphs, then trains the aligner with an LLM-summarized reasoning chain. The trained aligner prunes irrelevant nodes, generates graph embeddings, and feeds them into the LLM to produce the final answer.

2 Preliminaries

This section introduces key concepts and notations, including textual graphs and task formulation.
Textual Graph. A textual graph is a graph where nodes and edges are enriched with textual information, capturing both structural and semantic details. Formally, a textual graph is defined as G = (V, E, {t_n}_{n∈V}, {t_e}_{e∈E}), where V and E are the sets of nodes and edges. t_n ∈ D^{L_n} represents the text associated with a node n ∈ V, where D is the vocabulary and L_n is the text length. Similarly, t_e ∈ D^{L_e} is the text for an edge e ∈ E, with L_e as its length.

Task Formulation. This work addresses RAG with textual graphs. The goal is to retrieve relevant information from textual graphs and generate accurate responses. Given a query t_q, a sequence of tokens from D, the model retrieves a subgraph G_r = (V_r, E_r) from G based on the semantic similarity of t_n, t_e with t_q, where V_r ⊆ V and E_r ⊆ E. The retrieved nodes, edges, and texts are refined to improve the input quality for the LLM. During generation, the retrieved subgraph, query t_q, and prompt P are provided as input to the LLM, which produces the final output Y, grounded in the retrieved knowledge and graph context.

3 Methodology

In this section, we introduce the Align-GRAG framework, which is composed of three components: the Graph Retriever (Sec. 3.1), Graph Aligner (Sec. 3.2), and Graph Generator (Sec. 3.3) modules. The overall framework pipeline is depicted in Figure 1. In Figure 1, ➀ denotes the Graph Retriever: we leverage the similarity between entities, relations, and the query to extract the initial subgraph. In the aligner module (Figure 1-➁), we use a GNN backbone to encode subgraph information. To tackle irrelevant knowledge noise and representation gaps, we introduce an LLM-based summarization technique with tailored prompts. By taking the query-answer pair and subgraph as input, the LLM
generates a reasoning chain that links the query to the answer, identifying key intermediate nodes and bridging information gaps. Next, we propose the graph aligner to achieve dual alignment with this summarized reasoning chain: (1) Graph Node Alignment: A KL divergence loss [32] highlights critical nodes and reasoning edges, aligning node-level distributions and pruning irrelevant nodes. (2) Graph Representation Alignment: A contrastive loss with in-batch negative sampling aligns graph and language representations in a shared semantic space. Jointly optimizing these alignments enables effective knowledge pruning and unified representation learning. In the generation stage (Figure 1-➂), the aligned graph embedding is used as a graph token, concatenated with the query and pruned subgraph, and then fed into an LLM for answer generation. By leveraging the aligned graph, the generator produces responses that reflect a deep understanding of the graph's structure and its relevance to the query.

3.1 Graph Retriever

Existing RAG methodologies are mainly designed for plain text documents or triplets, where information retrieval occurs independently of the graph structure [13,5,17]. In the retrieval stage, we first utilize an encoder-only language model (e.g., SentenceBERT [33]) to encode textual information, including the query t_q and the text of each node t_n and edge t_e in the graph:

q = SBERT(t_q) ∈ R^d,  n = SBERT(t_n) ∈ R^d,  e = SBERT(t_e) ∈ R^d   (1)

Then we compute the cosine similarity sim(·) between the query embedding q and the embeddings of nodes n and edges e. The top-k nodes and edges are selected as the most relevant entities and relations:

V_k = argtopk sim(q, n),  E_k = argtopk sim(q, e)   (2)

This yields the retrieved top-k entity and relation sets, denoted G_Retriever. Inspired by G-Retriever [18], we further leverage the Prize-Collecting Steiner Tree algorithm [34] to maintain a controlled graph size, as detailed in Appendix A.

3.2 Graph Aligner

In this section, we introduce the Graph Aligner module.
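The retrieval scoring of Section 3.1 (Eqs. 1-2) can be sketched as follows; the toy vectors stand in for SentenceBERT embeddings of the query and node texts.

```python
# Sketch of the retrieval scoring in Section 3.1 (Eqs. 1-2): cosine
# similarity between the query embedding and node/edge embeddings, then
# top-k selection. The toy vectors below stand in for SentenceBERT outputs.
import numpy as np

def cosine_sim(q: np.ndarray, M: np.ndarray) -> np.ndarray:
    q = q / np.linalg.norm(q)
    M = M / np.linalg.norm(M, axis=1, keepdims=True)
    return M @ q

def arg_topk(scores: np.ndarray, k: int) -> list:
    return [int(i) for i in np.argsort(-scores)[:k]]

query_emb = np.array([1.0, 0.0])      # embedded query t_q
node_embs = np.array([[0.9, 0.1],     # embedded node texts t_n
                      [0.0, 1.0],
                      [0.7, 0.7],
                      [-1.0, 0.0]])

top_nodes = arg_topk(cosine_sim(query_emb, node_embs), k=2)
```

Edges are scored the same way against their text embeddings.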
The retrieved subgraph G_Retriever faces limitations when integrated with LLMs, including irrelevant information from the retriever and misalignment between graph and language representations during generation with the LLM. Our Graph Aligner tackles these issues through two key objectives: (1) aligning node-level distributions to prune irrelevant structures, refining subgraphs based on LLM knowledge preferences; (2) bridging the gap between graph structures and language descriptions by unifying them in a shared latent space for LLM integration. The illustration is shown in Figure 1-➁. We provide a detailed explanation below.

3.2.1 Summarized Reasoning Chain

To achieve the Aligner module's optimization goals, our motivation is to leverage the crucial reasoning chain related to a query-answer pair. This is especially important for multi-hop problems, which require intermediate reasoning steps to bridge information gaps. Incorporating such reasoning significantly improves answer quality, as demonstrated by methods like Chain-of-Thought [35] and O1 [36]. To explicitly identify the essential nodes within the graph database, we propose an LLM-based summarization technique. By designing well-crafted prompts, we harness the capabilities of strong LLMs to generate comprehensive reasoning chains that logically connect the question to the answer. For the example in Table 8 of Appendix G, given the query "What is the name of the first Harry Potter novel?", the LLM-generated prior lets us identify the intermediate and critical node, J.K.
Rowling. We believe this can serve as an excellent label. The deeper insights provided through reasoning are subsequently utilized in the dual node and representation alignment process to train the aligner module. More details on the summarization prompt are provided in Appendix G.

3.2.2 Node Alignment

The Aligner module utilizes the summarized reasoning chain to identify and align relevant nodes within the graph, effectively pruning redundant nodes and edges by filtering out unrelated information at the node level. Specifically, we employ a GNN, which can be a GraphTransformer [37] or GAT [38], to encode the structural information of the graph. The GNN produces node-level embeddings n_g based on the input graph:

n_g = GNN(G) ∈ R^{|V|×d}   (3)

where |V| is the number of nodes and d is the feature dimension. For the reasoning text, we employ SBERT to encode the textual description into a representation r_s that captures its semantic meaning:

r_s = SBERT(t_reasoning) ∈ R^{d_s}   (4)

We concatenate the embedding of each node in the subgraph with the query embedding. The concatenated embeddings are then passed through an MLP module, which generates a predicted importance score for each node. Finally, these scores are transformed into probability distributions using the softmax function, producing both predicted and reasoning importance scores:

p_predict = Softmax(MLP([n_g, q_expanded])) ∈ R^{|V|},  p_reasoning = Softmax(sim(r_s, n)) ∈ R^{|V|}   (5)

where [·,·] denotes the concat operation and q_expanded is the query embedding broadcast across all nodes. To align the node distributions, we minimize the Kullback-Leibler (KL) divergence [32] between the predicted probabilities p_predict and the probabilities p_reasoning. The KL divergence loss for a subgraph is given by:

L_NA = (1/|V|) Σ_{i=1}^{|V|} p_reasoning(i) log( p_reasoning(i) / p_predict(i) )   (6)

Optimizing L_NA enables effective alignment of relevant knowledge.
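The node-alignment loss of Eq. (6) can be sketched numerically as follows; the raw scores are toy values, whereas the paper derives them from the GNN/MLP predictions and from similarity to the summarized reasoning chain.

```python
# Numeric sketch of the node-alignment loss in Eq. (6): KL divergence
# between the reasoning-derived and predicted node distributions. The raw
# scores are toy values; the paper obtains them from the GNN/MLP and from
# similarity to the summarized reasoning chain.
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

def node_alignment_loss(pred_scores: np.ndarray, reasoning_scores: np.ndarray) -> float:
    p_predict = softmax(pred_scores)
    p_reasoning = softmax(reasoning_scores)
    v = len(pred_scores)  # |V|: number of nodes in the subgraph
    return float((p_reasoning * np.log(p_reasoning / p_predict)).sum() / v)

# Identical score vectors give identical distributions, hence zero loss.
loss = node_alignment_loss(np.array([2.0, 0.5, -1.0]), np.array([2.0, 0.5, -1.0]))
```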
3.2.3 Representation Alignment

To bridge the representation gap between graph structures and language-based descriptions, the Aligner module treats the text representation derived from the reasoning chain as the target label and aligns the graph and text embeddings to encourage semantic consistency. We apply a mean pooling operation across the node embeddings n_g to obtain a unified graph-level representation r_g:

r_g = POOL(n_g) = (1/|V|) Σ_{v∈V} n_g(v) ∈ R^d   (7)

To unify the graph and text representations in a shared semantic space, we apply a contrastive loss with in-batch negative sampling. This loss encourages positive pairs (i.e., graph and text embeddings) to have higher similarity, while pushing apart non-matching pairs. A shared-weight MLP layer is used to map r_g and r_s to the dimension d_t of the LLM token embeddings:

r̂_s = MLP(r_s) ∈ R^{d_t},  r̂_g = MLP(r_g) ∈ R^{d_t}   (8)

The contrastive loss for representation alignment from r̂_g to r̂_s is defined as:

L_RA(r̂_g → r̂_s) = -(1/N) Σ_{i=1}^{N} log[ exp(sim(r̂_g^i, r̂_s^i)/τ) / Σ_{j=1}^{N} exp(sim(r̂_g^i, r̂_s^j)/τ) ]   (9)

where N is the batch size, (r̂_g^i, r̂_s^i) is the i-th positive (graph-text) pair in the batch, and τ is a temperature parameter controlling the sharpness. Similarly, we obtain the loss L_RA(r̂_s → r̂_g) from r̂_s to r̂_g. The final representation alignment loss is:

L_RA = 1/2 (L_RA(r̂_s → r̂_g) + L_RA(r̂_g → r̂_s))   (10)

To achieve an optimized Graph Aligner, we perform joint optimization of node and representation alignment. The total loss for the Graph Aligner is defined as L_Aligner = L_RA + L_NA. The parameters of the GNN encoder are jointly optimized using the loss L_Aligner for a specified number of training steps, which reflects the degree of alignment. We evaluate the impact of the alignment degree in Figure 2 and the effectiveness
in Section 4.4.

3.2.4 Graph Pruning and Representation Generation

We use the trained Graph Aligner to prune irrelevant nodes and produce graph representations that are aligned with language. The Aligner performs graph pruning across the entire dataset. To start, we introduce a hyperparameter, the number of seed nodes, denoted n_seed. For the graph's node-level output, n_predict, the top n_seed nodes are selected as seed nodes. Using the first-order neighbors of these seed nodes, we expand the adjacent nodes and edges. This process yields the newly constructed subgraph, denoted G_Aligner = (V_Aligner, E_Aligner). Moreover, we also utilize the trained Aligner across the entire dataset to produce the graph representation r_g. The aligned subgraph G_Aligner serves as the input to the LLM, enabling more efficient and accurate generation. The efficiency of pruning is evaluated in Appendix C.

3.3 Graph Generator

The Aligner module enables efficient representation alignment and pruning of irrelevant knowledge, greatly facilitating integration with LLMs. To generate the final answer, we concatenate the text of graph G_Aligner with the query tokens and input them into the LLM's TextEmbedder (the LLM's own token embedding layer). This yields the text token embedding r_t = TextEmbedder([t_q, G_Aligner]) ∈ R^{(T_q+T_g)×d_t}, where T_q and T_g denote the token lengths of the query and graph. r_t is then concatenated with the previously learned graph embedding r_g. The conditional probability of generating the answer Y given the aligned graph G_Aligner and the query t_q is defined as:

p_θ(Y | t_q, G_Aligner) = Π_{i=1}^{m} p_θ(y_i | [MLP(r_g), r_t], y_{<i})   (11)

To enhance the efficiency of training and deployment, we fine-tune the generator using a parameter-efficient tuning method such as LoRA [39].

4 Experiments

We conducted experiments on the GraphQA benchmark [18], which includes ExplaGraphs (commonsense reasoning), SceneGraphs (scene graph understanding), and WebQSP (knowledge graph reasoning).
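The seed-node selection and neighbor expansion described in Section 3.2.4 can be sketched as follows; the scores and adjacency list are toy values, not the trained Aligner's outputs.

```python
# Sketch of the pruning step in Section 3.2.4: pick the top-n_seed nodes by
# predicted score, then expand to their first-order neighbors. Scores and
# adjacency are toy values, not the trained Aligner's outputs.
import numpy as np

def prune_subgraph(scores: np.ndarray, adjacency: dict, n_seed: int) -> set:
    seeds = [int(i) for i in np.argsort(-scores)[:n_seed]]
    kept = set(seeds)
    for s in seeds:
        kept.update(adjacency.get(s, []))  # first-order neighbor expansion
    return kept

# Toy 5-node path graph: node -> neighbors.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
kept_nodes = prune_subgraph(np.array([0.9, 0.1, 0.8, 0.2, 0.05]), adj, n_seed=2)
```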
To ensure a fair comparison, we utilized the same retrieval results obtained via the PCST algorithm across all baselines. More details about datasets, baselines, and implementation details are provided in Appendix B.

4.1 Main Results

Table 1: Performance comparison using Llama2-7b [2] and GraphTransformer [37] as backbones and the same retrieval settings for all methods. The table reports the mean and standard deviation (std) across three random seeds. For methods marked with '†', we reproduce the results on WebQSP to report both F1 and Accuracy. Results for methods marked with '‡' are taken directly from [18] and were not reproduced due to their poor performance. The best results are highlighted in bold, and the second-best results are underlined. 'Improvement' represents the gain of Align-GRAG over the second-best baseline. '*' indicates statistically significant improvements (i.e., two-sided t-test with p < 0.05) over the best baseline. ↑: higher is better.

Method                   ExplaGraphs Acc↑  SceneGraphs Acc↑  WebQSP F1↑     WebQSP Hit@1↑  WebQSP Acc↑
Inference-only‡
Zero-shot                0.5650            0.3974            -              0.4106         -
Zero-CoT [40]            0.5704            0.5260            -              0.5130         -
CoT-BAG [41]             0.5794            0.5680            -              0.3960         -
KAPING [42]              0.6227            0.4375            -              0.5264         -
Raw Fine-tuning†
Prompt tuning [43]       0.5763±0.0243     0.6341±0.0024     0.2652±0.0049  0.4807±0.0055  0.2827±0.0073
LoRA [39]                0.8538±0.0353     0.7862±0.0031     0.4445±0.0058  0.6505±0.0068  0.4479±0.0091
Reranker-based
gte-base [22]            0.8557±0.0144     0.8556±0.0095     0.5378±0.0044  0.7373±0.0064  0.5251±0.0052
gte-large [22]           0.8776±0.0095     0.8592±0.0074     0.5392±0.0013  0.7340±0.0044  0.5374±0.0038
bge-reranker-base [21]   0.8534±0.0159     0.8577±0.0029     0.5323±0.0052  0.7397±0.0012  0.5254±0.0010
bge-reranker-large [21]  0.8612±0.0184     0.8644±0.0060     0.5366±0.0045  0.7391±0.0093  0.5401±0.0077
G-RAG [44]               0.8484±0.0174     0.8474±0.0147     0.5181±0.0023  0.7114±0.0113  0.5080±0.0041
G-RAG-RL [44]            0.8478±0.0112     0.8509±0.0142     0.5291±0.0066  0.7167±0.0039  0.5185±0.0026
GNN-based
GraphToken‡ [31]         0.8508±0.0551     0.4903±0.0105     -              0.5705±0.0074  -
GNP [19]                 0.8704±0.0034     0.8616±0.0096     0.5369±0.0049  0.7391±0.0100  0.5441±0.0046
G-Retriever†_PT [18]     0.8516±0.0092     0.8131±0.0162     0.4740±0.0049  0.6921±0.0099  0.4740±0.0033
G-Retriever†_LoRA [18]   0.8705±0.0329     0.8683±0.0072     0.5366±0.0031  0.7366±0.0049  0.5405±0.0031
GRAG†_PCST [45]          0.8805±0.0050     0.8561±0.0052     0.5355±0.0049  0.7485±0.0104  0.5503±0.0035
Align-GRAG (Ours)        0.8992±0.0124     0.8804±0.0106     0.5445±0.0041  0.7626±0.0063  0.5700±0.0039

In this section, we conduct extensive experiments on the ExplaGraphs, SceneGraphs, and WebQSP datasets, comparing the performance of our method with 16 baseline methods. All baselines are implemented using the same retrieved text and the same backbones, Llama2-7b [2] and GraphTransformer [37]. As shown in Table 1, the overall results demonstrate the effectiveness of Align-GRAG, which achieves performance improvements across all metrics on the three datasets. Notably, on the WebQSP dataset, Align-GRAG achieves a remarkable 4.76% improvement in Accuracy compared to the second-best method. This improvement can be attributed to the innovative design of Align-GRAG, particularly its alignment-based pruning and optimization strategies for graph structures, showcasing its potential in complex graph reasoning and knowledge-based question answering tasks.

Comparing with Inference-only methods.
These methods rely solely on the reasoning capabilities of the LLM without task-specific optimization and, as a result, perform poorly. In particular, the Zero-shot approach, which does not depend on any carefully constructed prompts, suffers from limited performance due to its inability to leverage task-specific knowledge.

Comparing with raw fine-tuning methods. LoRA significantly outperforms Prompt Tuning and also surpasses inference-only methods. This highlights the effectiveness of task-specific optimization in graph QA scenarios. However, it still lags behind reranker-based and GNN-based methods, indicating that more advanced techniques are required to fully exploit graph structure.

Comparing with reranker-based methods. gte-large achieves the second-best performance on both the ExplaGraphs and WebQSP datasets (F1 metric), outperforming other reranker approaches. However, it still falls significantly short of Align-GRAG. This demonstrates that simply reranking nodes and triples is insufficient to fully capture graph structural information. Although G-RAG-RL also uses a GNN encoder to model graph structures, it relies on similarity with the query as its label, resulting in poor performance. This validates the effectiveness of our approach, which uses reasoning chains summarized by the LLM as the optimization target. By aligning graph reasoning with LLM-derived reasoning chains, Align-GRAG achieves superior performance, showing that this alignment is crucial for graph-based reasoning tasks.

Comparing with GNN-based methods. G-Retriever ranks second on SceneGraphs, while GNP achieves the second-best Accuracy on WebQSP. However, both methods fall significantly short of Align-GRAG overall. This is because G-Retriever simply concatenates graph embeddings with LLM inputs using a projector, while GNP employs cross-modality pooling to fuse text and graph representations. Despite these efforts, their approaches remain limited. In contrast, Align-GRAG explicitly aligns the representation of graphs with language, effectively bridging the gap between the two spaces and achieving state-of-the-art results.

4.2 Ablation Study

Table 2: Ablation study on different alignment strategies.

Method | WebQSP F1↑ | Hit@1↑ | Accuracy↑
Align-GRAG | 0.5445 | 0.7626 | 0.5700
w/o Representation Alignment | 0.5458 | 0.7586 | 0.5675
w/o Node Alignment | 0.5344 | 0.7371 | 0.5339
w/o Both | 0.5348 | 0.7328 | 0.5216
Random Alignment | 0.4617 | 0.6861 | 0.4865

We perform an ablation study on WebQSP using three metrics to evaluate how the proposed modules contribute to the performance improvement. As illustrated in Table 2, we examine the following variants: (1) w/o Node Alignment: removes the KL loss and knowledge pruning components. (2) w/o Representation Alignment: excludes the contrastive loss and the graph embedding input to the LLM. (3) w/o Both: removes both Node Alignment and Representation Alignment. (4) Random Alignment: optimizes the aligner module using randomly generated labels. We observe that removing the Node Alignment module significantly lowers all evaluation metrics compared to the full model (Align-GRAG), indicating the effectiveness of node-level alignment optimization and node pruning. In contrast, while excluding the Representation Alignment module also leads to a performance decline, the impact is comparatively less significant. Performance further deteriorates when both modules are removed (w/o Both), highlighting the critical role of dual alignment between node and representation in filtering irrelevant knowledge and bridging the representation gap between graph and language. The Random Alignment variant performs the worst, as random alignment fails to guide the model effectively. This emphasizes the necessity of meaningful alignment strategies and demonstrates that our LLM-summarized reasoning chain provides a highly useful label.

4.3 Generalization Analysis

Table 3: Generalization analysis on different LLM and GNN backbones. Columns report ExplaGraphs Accuracy↑ and WebQSP Hit@1↑ for the Llama-2-7b-hf and Llama-2-13b-hf backbones.

GNN | Method | ExplaGraphs (7b) | WebQSP (7b) | ExplaGraphs (13b) | WebQSP (13b)
GT | GNP | 0.8704 | 0.7391 | 0.8880 | 0.7696
GT | G-Retriever | 0.8705 | 0.7366 | 0.9115 | 0.7739
GT | Align-GRAG (Ours) | 0.8992 | 0.7626 | 0.9241 | 0.7789
GAT | GNP | 0.9061 | 0.7291 | 0.8989 | 0.7676
GAT | G-Retriever | 0.7960 | 0.7414 | 0.8953 | 0.7737
GAT | Align-GRAG (Ours) | 0.9151 | 0.7309 | 0.9151 | 0.7573
GCN | GNP | 0.7545 | 0.7298 | 0.8682 | 0.7564
GCN | G-Retriever | 0.8592 | 0.7352 | 0.9007 | 0.7521
GCN | Align-GRAG (Ours) | 0.8574 | 0.7377 | 0.9152 | 0.7592

In this section, we analyze whether the effectiveness of Align-GRAG generalizes across different GNN backbones and LLM sizes. Experiments are conducted on both the ExplaGraphs and WebQSP datasets with various GNN backbones, including GT (GraphTransformer) [37], GAT [38], and GCN [46], as well as LLM backbones of different sizes. As shown in Table 3, our method achieves the best performance in most settings (9 out of 12), demonstrating its strong adaptability. Notably, our method shows the most significant improvement over GNP and G-Retriever when using GT as the backbone, suggesting that GT is particularly well suited to learning the dual alignment of node and representation. On the other hand, we observe that results with the 13B LLM generally outperform those with the 7B LLM, especially when GCN is used as the backbone. This suggests that larger LLMs play a crucial role in effectively learning and reasoning over graph structures.

4.4 Evaluation of Representation Alignment

[Figure 2: Representation Alignment Analysis: cosine similarity scores between graph embeddings and language embeddings (aligned using the aligner module vs. the unaligned setting) as a function of alignment degree (training steps). Panels: (a) Align with Query, (b) Align with Summarization, (c) Align with Textualized Graph.]

In this section, we evaluate whether the aligner can effectively bridge the representation gap. Specifically, we calculate cosine similarity scores between the graph embeddings and the language embeddings of the query, the summarization, and the textualized graph, respectively, on the test set. The graph embeddings include both the unaligned embeddings and the aligned embeddings (optimized using our contrastive loss). From the results in Figure 2, we observe that cosine similarity scores improve as the alignment degree increases. For instance, similarity with the summarization embeddings improves significantly as alignment progresses from 40 to 80 steps, while for the query and textualized-graph embeddings, scores rise sharply between 0 and 4 steps. This indicates that training effectively reduces the representation gap between graph and language embeddings. After 8 steps, aligned embeddings consistently achieve higher similarity scores than unaligned ones across all three text representations (query, summarization, and textualized graph), demonstrating the effectiveness of our approach. However, as shown in the experiment in Appendix E, increasing the alignment degree does not improve results indefinitely: beyond a certain level, excessive alignment may over-alter the original graph information, compromising its accuracy.

5 Related Work

Retrieval-Augmented Generation (RAG).
RAG [10, 13] has been extensively studied to address the challenges of hallucination and outdated information [8, 9], which often lead to untrustworthy outputs from LLMs [47]. By integrating information retrieval systems, RAG has demonstrated its effectiveness in real-world applications, enhancing the reliability of generated responses [48-51]. The RAG pipeline mainly includes pre-retrieval (e.g., query rewriting and query expansion), retrieval, post-retrieval (e.g., reranking), and generation [13]. Among these, reranking has emerged as a key technique for refining retrieval results. Initial retrieval [52, 53] often relies on simple similarity scoring. Reranking [22, 21, 44], on the other hand, employs more sophisticated models to reassess and reorder the initially retrieved documents based on their relevance to the query, thereby improving the quality and relevance of the retrieved documents. For instance, bge-reranker [21] leverages cross-encoders that perform full attention over the input pair, providing higher accuracy than embedding-based models. However, in GRAG scenarios, reranking may disrupt the inherent structural information of the graph. Our proposed Align-GRAG performs post-retrieval processing that extracts relevant knowledge while preserving the graph's structural information.

Large Language Models on Graph. Graphs, composed of nodes and edges, are essential for modeling real-world relationships across various domains. In recent years, GNNs [37, 38, 46, 54] have emerged as a powerful tool for encoding graph structures. With the rise of LLMs and their impressive capabilities, there is growing interest in integrating LLMs with graph learning techniques to enhance graph-based tasks [55-58]. Pioneering works [59, 31, 60] have incorporated GNNs by feeding graph tokens into LLMs. Beyond this basic integration, recent studies [61-63] have pursued tighter architectural fusion, embedding graph neural layers within transformer architectures to enable seamless interaction between graph reasoning and natural language understanding. However, most existing research [58] focuses on tasks like node and graph classification. Our work addresses the challenges of retrieval-based graph QA, leveraging retrieval-augmented methods and graph learning techniques while overcoming the limitations of current frameworks.

Graph RAG. Traditional RAG often struggles to capture structured relational knowledge or global context. Graph Retrieval-Augmented Generation (GRAG) addresses these limitations by leveraging graph-structured knowledge for improved retrieval and reasoning [42, 41, 64, 65]. GRAG utilizes graph databases (e.g., Freebase [66], Wikidata [67]) to retrieve graph elements such as triples, paths, or subgraphs [14, 68, 45, 69]. Some approaches [14, 15, 70, 71] focus on constructing large-scale graphs from text. For example, GraphRAG [14] uses LLMs to extract entities, partitions graphs using hierarchical clustering, and generates summaries to support retrieval. Other methods [72, 73, 18, 19] encode graph data with GNNs and integrate it with language models for enhanced reasoning, while some translate natural language queries into logical forms for knowledge graph retrieval [74-76, 27, 28]. More recently, LLMs have been used as iterative agents for reasoning over knowledge graphs [30, 26, 20, 29]. However, existing GRAG approaches primarily emphasize optimizing retrieval and often lack effective post-retrieval strategies, leading to shallow subgraph-query similarity. To address this, our Align-GRAG introduces a novel dual alignment mechanism that bridges the retriever and generator, improving overall performance.
6 Conclusion

This work proposes a novel reasoning-guided dual alignment framework designed to tackle two key challenges in GRAG: the retrieval of irrelevant nodes, which causes noisy inputs, and the representation gap between graph structures and language models. Align-GRAG introduces an innovative graph aligner module for dual alignment of knowledge and representation, enabling the effective pruning of irrelevant nodes while unifying graph and language representations. Extensive experiments on the GraphQA benchmark demonstrate that Align-GRAG consistently outperforms strong baselines.

References

[1] OpenAI. Gpt-4 technical report, 2024.
[2] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
[3] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. A survey of large language models. arXiv preprint arXiv:2303.18223, 2023.
[4] Derong Xu, Wei Chen, Wenjun Peng, Chao Zhang, Tong Xu, Xiangyu Zhao, Xian Wu, Yefeng Zheng, Yang Wang, and Enhong Chen. Large language models for generative information extraction: A survey. Frontiers of Computer Science, 18(6):186357, 2024.
[5] Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zheng Liu, Zhicheng Dou, and Ji-Rong Wen. Large language models for information retrieval: A survey. arXiv preprint arXiv:2308.07107, 2023.
[6] Badr AlKhamissi, Millicent Li, Asli Celikyilmaz, Mona Diab, and Marjan Ghazvininejad. A review on language models as knowledge bases. arXiv preprint arXiv:2204.06031, 2022.
[7] Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. Language models as knowledge bases? arXiv preprint arXiv:1909.01066, 2019.
[8] Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, et al. A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions. ACM Transactions on Information Systems, 2023.
[9] Ziwei Xu, Sanjay Jain, and Mohan Kankanhalli. Hallucination is inevitable: An innate limitation of large language models. arXiv preprint arXiv:2401.11817, 2024.
[10] Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, and Haofen Wang. Retrieval-augmented generation for large language models: A survey. arXiv preprint arXiv:2312.10997, 2023.
[11] Kai He, Rui Mao, Qika Lin, Yucheng Ruan, Xiang Lan, Mengling Feng, and Erik Cambria. A survey of large language models for healthcare: from data, technology, and applications to accountability and ethics. arXiv preprint arXiv:2310.05694, 2023.
[12] Jinqi Lai, Wensheng Gan, Jiayang Wu, Zhenlian Qi, and S Yu Philip. Large language models in law: A survey. AI Open, 2024.
[13] Wenqi Fan, Yujuan Ding, Liangbo Ning, Shijie Wang, Hengyun Li, Dawei Yin, Tat-Seng Chua, and Qing Li. A survey on rag meeting llms: Towards retrieval-augmented large language models. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 6491-6501, 2024.
[14] Darren Edge, Ha Trinh, Newman Cheng, Joshua Bradley, Alex Chao, Apurva Mody, Steven Truitt, and Jonathan Larson. From local to global: A graph rag approach to query-focused summarization. arXiv preprint arXiv:2404.16130, 2024.
[15] Zirui Guo, Lianghao Xia, Yanhua Yu, Tu Ao, and Chao Huang. Lightrag: Simple and fast retrieval-augmented generation. arXiv preprint arXiv:2410.05779, 2024.
[16] Qingyu Guo, Fuzhen Zhuang, Chuan Qin, Hengshu Zhu, Xing Xie, Hui Xiong, and Qing He. A survey on knowledge graph-based recommender systems. IEEE Transactions on Knowledge and Data Engineering, 34(8):3549-3568, 2020.
[17] Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, and Xindong Wu. Unifying large language models and knowledge graphs: A roadmap. IEEE Transactions on Knowledge and Data Engineering, 2024.
[18] Xiaoxin He, Yijun Tian, Yifei Sun, Nitesh V Chawla, Thomas Laurent, Yann LeCun, Xavier Bresson, and Bryan Hooi. G-retriever: Retrieval-augmented generation for textual graph understanding and question answering. arXiv preprint arXiv:2402.07630, 2024.
[19] Yijun Tian, Huan Song, Zichen Wang, Haozhu Wang, Ziqing Hu, Fang Wang, Nitesh V Chawla, and Panpan Xu. Graph neural prompting with large language models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 19080-19088, 2024.
[20] Jiashuo Sun, Chengjin Xu, Lumingyuan Tang, Saizhuo Wang, Chen Lin, Yeyun Gong, Lionel Ni, Heung-Yeung Shum, and Jian Guo. Think-on-graph: Deep and responsible reasoning of large language model on knowledge graph. In The Twelfth International Conference on Learning Representations.
[21] Shitao Xiao, Zheng Liu, Peitian Zhang, Niklas Muennighoff, Defu Lian, and Jian-Yun Nie. C-pack: Packed resources for general chinese embeddings. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 641-649, 2024.
[22] Zehan Li, Xin Zhang, Yanzhao Zhang, Dingkun Long, Pengjun Xie, and Meishan Zhang. Towards general text embeddings with multi-stage contrastive learning. arXiv preprint arXiv:2308.03281, 2023.
[23] Pengyue Jia, Derong Xu, Xiaopeng Li, Zhaocheng Du, Xiangyang Li, Xiangyu Zhao, Yichao Wang, Yuhao Wang, Huifeng Guo, and Ruiming Tang. Bridging relevance and reasoning: Rationale distillation in retrieval-augmented generation. arXiv preprint arXiv:2412.08519, 2024.
[24] Shengchao Liu, Weili Nie, Chengpeng Wang, Jiarui Lu, Zhuoran Qiao, Ling Liu, Jian Tang, Chaowei Xiao, and Animashree Anandkumar. Multi-modal molecule structure-text model for text-based retrieval and editing. Nature Machine Intelligence, 5(12):1447-1457, 2023.
[25] Jianan Zhao, Meng Qu, Chaozhuo Li, Hao Yan, Qian Liu, Rui Li, Xing Xie, and Jian Tang. Learning on large-scale text-attributed graphs via variational inference. arXiv preprint arXiv:2210.14709, 2022.
[26] Linhao Luo, Yuan-Fang Li, Reza Haf, and Shirui Pan. Reasoning on graphs: Faithful and interpretable large language model reasoning. In The Twelfth International Conference on Learning Representations.
[27] Derong Xu, Xinhang Li, Ziheng Zhang, Zhenxi Lin, Zhihong Zhu, Zhi Zheng, Xian Wu, Xiangyu Zhao, Tong Xu, and Enhong Chen. Harnessing large language models for knowledge graph question answering via adaptive multi-aspect retrieval-augmentation, 2025.
[28] Xixin Hu, Xuan Wu, Yiheng Shu, and Yuzhong Qu. Logical form generation via multi-task learning for complex question answering over knowledge bases. In Proceedings of the 29th International Conference on Computational Linguistics, pages 1687-1696, 2022.
[29] Liyi Chen, Panrong Tong, Zhongming Jin, Ying Sun, Jieping Ye, and Hui Xiong. Plan-on-graph: Self-correcting adaptive planning of large language model on knowledge graphs. arXiv preprint arXiv:2410.23875, 2024.
[30] Jinhao Jiang, Kun Zhou, Wayne Xin Zhao, Yang Song, Chen Zhu, Hengshu Zhu, and Ji-Rong Wen. Kg-agent: An efficient autonomous agent framework for complex reasoning over knowledge graph. arXiv preprint arXiv:2402.11163, 2024.
[31] Bryan Perozzi, Bahare Fatemi, Dustin Zelle, Anton Tsitsulin, Mehran Kazemi, Rami Al-Rfou, and Jonathan Halcrow. Let your graph do the talking: Encoding structured data for llms. arXiv preprint arXiv:2402.05862, 2024.
[32] Solomon Kullback and Richard A Leibler. On information and sufficiency. The Annals of Mathematical Statistics, 22(1):79-86, 1951.
[33] Nils Reimers and Iryna Gurevych. Making monolingual sentence embeddings multilingual using knowledge distillation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2020.
[34] Daniel Bienstock, Michel X Goemans, David Simchi-Levi, and David Williamson. A note on the prize collecting traveling salesman problem. Mathematical Programming, 59(1):413-420, 1993.
[35] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824-24837, 2022.
[36] Tianyang Zhong, Zhengliang Liu, Yi Pan, Yutong Zhang, Yifan Zhou, Shizhe Liang, Zihao Wu, Yanjun Lyu, Peng Shu, Xiaowei Yu, et al. Evaluation of openai o1: Opportunities and challenges of agi. arXiv preprint arXiv:2409.18486, 2024.
[37] Yunsheng Shi, Zhengjie Huang, Shikun Feng, Hui Zhong, Wenjin Wang, and Yu Sun. Masked label prediction: Unified message passing model for semi-supervised classification. arXiv preprint arXiv:2009.03509, 2020.
[38] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In International Conference on Learning Representations, 2018.
[39] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021.
[40] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. Advances in Neural Information Processing Systems, 35:22199-22213, 2022.
[41] Heng Wang, Shangbin Feng, Tianxing He, Zhaoxuan Tan, Xiaochuang Han, and Yulia Tsvetkov. Can language models solve graph problems in natural language? Advances in Neural Information Processing Systems, 36, 2024.
[42] Jinheon Baek, Alham Fikri Aji, and Amir Saffari. Knowledge-augmented language model prompting for zero-shot knowledge graph question answering. arXiv preprint arXiv:2306.04136, 2023.
[43] Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045-3059, 2021.
[44] Jialin Dong, Bahare Fatemi, Bryan Perozzi, Lin F Yang, and Anton Tsitsulin. Don't forget to connect! improving rag with graph-based reranking. arXiv preprint arXiv:2405.18414, 2024.
[45] Yuntong Hu, Zhihan Lei, Zheng Zhang, Bo Pan, Chen Ling, and Liang Zhao. Grag: Graph retrieval-augmented generation. arXiv preprint arXiv:2405.16506, 2024.
[46] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
[47] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. Retrieval-augmented generation for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems, 33:9459-9474, 2020.
[48] Alireza Salemi and Hamed Zamani. Evaluating retrieval quality in retrieval-augmented generation. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2395-2400, 2024.
[49] Hamed Zamani and Michael Bendersky. Stochastic rag: End-to-end retrieval-augmented generation through expected utility maximization. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2641-2646, 2024.
[50] Diji Yang, Jinmeng Rao, Kezhen Chen, Xiaoyuan Guo, Yawen Zhang, Jie Yang, and Yi Zhang. Im-rag: Multi-round retrieval-augmented generation through learning inner monologues. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 730-740, 2024.
[51] Viju Sudhi, Sinchana Ramakanth Bhat, Max Rudat, and Roman Teucher. Rag-ex: A generic framework for explaining retrieval augmented generation. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2776-2780, New York, NY, USA, 2024. Association for Computing Machinery.
[52] Stephen Robertson, Hugo Zaragoza, et al. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends in Information Retrieval, 3(4):333-389, 2009.
[53] Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. Unsupervised dense information retrieval with contrastive learning. arXiv preprint arXiv:2112.09118, 2021.
[54] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and S Yu Philip. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1):4-24, 2020.
[55] Chao Huang, Xubin Ren, Jiabin Tang, Dawei Yin, and Nitesh Chawla. Large language models for graphs: Progresses and directions. In Companion Proceedings of the ACM Web Conference 2024, pages 1284-1287, New York, NY, USA, 2024. Association for Computing Machinery.
[56] Bowen Jin, Gang Liu, Chi Han, Meng Jiang, Heng Ji, and Jiawei Han. Large language models on graphs: A comprehensive survey. IEEE Transactions on Knowledge and Data Engineering, 2024.
[57] Yuhan Li, Zhixun Li, Peisong Wang, Jia Li, Xiangguo Sun, Hong Cheng, and Jeffrey Xu Yu. A survey of graph meets large language model: Progress and future directions. arXiv preprint arXiv:2311.12399, 2023.
[58] Xubin Ren, Jiabin Tang, Dawei Yin, Nitesh Chawla, and Chao Huang. A survey of large language models for graphs. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 6616-6626, 2024.
[59] Jiabin Tang, Yuhao Yang, Wei Wei, Lei Shi, Lixin Su, Suqi Cheng, Dawei Yin, and Chao Huang. Graphgpt: Graph instruction tuning for large language models. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 491-500, 2024.
[60] Ziwei Chai, Tianjie Zhang, Liang Wu, Kaiqiao Han, Xiaohai Hu, Xuanwen Huang, and Yang Yang. Graphllm: Boosting graph reasoning ability of large language model. arXiv preprint arXiv:2310.05845, 2023.
[61] Yijian Qin, Xin Wang, Ziwei Zhang, and Wenwu Zhu. Disentangled representation learning with large language models for text-attributed graphs. arXiv preprint arXiv:2310.18152, 2023.
[62] Yun Zhu, Yaoke Wang, Haizhou Shi, and Siliang Tang. Efficient tuning and inference for large language models on textual graphs. arXiv preprint arXiv:2401.15569, 2024.
[63] Xuanwen Huang, Kaiqiao Han, Yang Yang, Dezheng Bao, Quanjin Tao, Ziwei Chai, and Qi Zhu. Can gnn be good adapter for llms? In Proceedings of the ACM on Web Conference 2024, pages 893-904, 2024.
[64] Haoyu Han, Yu Wang, Harry Shomer, Kai Guo, Jiayuan Ding, Yongjia Lei, Mahantesh Halappanavar, Ryan A Rossi, Subhabrata Mukherjee, Xianfeng Tang, et al. Retrieval-augmented generation with graphs (graphrag). arXiv preprint arXiv:2501.00309, 2024.
[65] Zhentao Xu, Mark Jerome Cruz, Matthew Guevara, Tie Wang, Manasi Deshpande, Xiaofeng Wang, and Zheng Li. Retrieval-augmented generation with knowledge graphs for customer service question answering. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2905-2909, 2024.
[66] Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, pages 1247-1250, 2008.
[67] Denny Vrandečić and Markus Krötzsch. Wikidata: a free collaborative knowledgebase. Communications of the ACM, 57(10):78-85, 2014.
[68] Costas Mavromatis and George Karypis. Gnn-rag: Graph neural retrieval for large language model reasoning. arXiv preprint arXiv:2405.20139, 2024.
[69] Shirley Wu, Shiyu Zhao, Michihiro Yasunaga, Kexin Huang, Kaidi Cao, Qian Huang, Vassilis N Ioannidis, Karthik Subbian, James Zou, and Jure Leskovec. Stark: Benchmarking llm retrieval on textual and relational knowledge bases. arXiv preprint arXiv:2404.13207, 2024.
[70] Tianyu Fan, Jingyuan Wang, Xubin Ren, and Chao Huang. Minirag: Towards extremely simple retrieval-augmented generation. arXiv preprint arXiv:2501.06713, 2025.
[71] Lei Liang, Mengshu Sun, Zhengke Gui, Zhongshu Zhu, Zhouyu Jiang, Ling Zhong, Yuan Qu, Peilong Zhao, Zhongpu Bo, Jin Yang, et al. Kag: Boosting llms in professional domains via knowledge augmented generation. arXiv preprint arXiv:2409.13731, 2024.
[72] Michihiro Yasunaga, Hongyu Ren, Antoine Bosselut, Percy Liang, and Jure Leskovec. Qa-gnn: Reasoning with language models and knowledge graphs for question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 535-546, 2021.
[73] Jinhao Jiang, Kun Zhou, Wayne Xin Zhao, and Ji-Rong Wen. Unikgqa: Unified retrieval and reasoning for solving multi-hop question answering over knowledge graph. arXiv preprint arXiv:2212.00959, 2022.
[74] Lingxi Zhang, Jing Zhang, Yanling Wang, Shulin Cao, Xinmei Huang, Cuiping Li, Hong Chen, and Juanzi Li. Fc-kbqa: A fine-to-coarse composition framework for knowledge base question answering. In The 61st Annual Meeting of the Association for Computational Linguistics, 2023.
[75] Donghan Yu, Sheng Zhang, Patrick Ng, Henghui Zhu, Alexander Hanbo Li, Jun Wang, Yiqun Hu, William Yang Wang, Zhiguo Wang, and Bing Xiang. Decaf: Joint decoding of answers and logical forms for question answering over knowledge bases. In The Eleventh International Conference on Learning Representations, 2022.
[76] Haoran Luo, Zichen Tang, Shiyao Peng, Yikai Guo, Wentai Zhang, Chenghao Ma, Guanting Dong, Meina Song, Wei Lin, et al. Chatkbqa: A generate-then-retrieve framework for knowledge base question answering with fine-tuned large language models. arXiv preprint arXiv:2310.08975, 2023.
[77] Wen-tau Yih, Matthew Richardson, Christopher Meek, Ming-Wei Chang, and Jina Suh. The value of semantic parse labeling for knowledge base question answering. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) , pages 201–206, 2016. [78] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783 , 2024. [79] Jiawei Gu, Xuhui Jiang, Zhichao Shi, Hexiang Tan, Xuehao Zhai, Chengjin Xu, Wei Li, Yinghan Shen, Shengjie Ma, Honghao Liu, et al. A survey on llm-as-a-judge. arXiv preprint arXiv:2411.15594 , 2024. 14 A Prize-Collecting Steiner Tree To achieve the PCST algorithm, we assign ‘prize’ to nodes and edges based on their similarity to a given query. Relevance is determined through the ranked sets of cosine similarity, VkandEk, as follows: prize(n) =k−i,ifn∈ Vkandnis the i-th ranked node 0, otherwise(12) where iis the rank of nin setVk. Nodes that are not among top krankings are assigned a prize of zero. The objective of the PCST algorithm is to identify a subgraph that maximizes the total prize of nodes and edges while minimizing the cost: Gretriever = arg max S⊆G, Sis connected X n∈VSprize(n) +X e∈ESprize(e)−cost(S)! (13) where VSandESare the sets of nodes and edges in the subgraph S, respectively. The cost of constructing the subgraph is defined as cost(S) =|E| ·Ce,
https://arxiv.org/abs/2505.16237v1
where |E| is the number of edges and C_e is a predefined per-edge cost that serves as a regularization parameter to control the subgraph's size. In this way, we obtain a preliminary retrieved subgraph G_retriever.

B Experimental Settings

Table 4: Statistics of the GraphQA Benchmark [18].

Dataset      ExplaGraphs   SceneGraphs   WebQSP
#Training    1,659         59,978        2,826
#Validation  553           19,997        245
#Test        554           20,025        1,628

B.1 Datasets and Metrics.

Following G-Retriever [18], we conducted experiments on the GraphQA benchmark [18], which includes ExplaGraphs (commonsense reasoning), SceneGraphs (scene graph understanding), and WebQSP (knowledge graph reasoning). The dataset statistics are summarized in Table 4, with a train:validation:test split of 6:2:2. The split for WebQSP is derived from the original dataset [77]. For ExplaGraphs and SceneGraphs, we use Accuracy as the evaluation metric; for WebQSP, we employ F1, Hit@1, and Accuracy to evaluate model performance.

B.2 Baselines.

To evaluate the effectiveness of our proposed method, we compare it with four categories of baselines:

• Inference-only. This category includes models that leverage frozen LLMs for question answering, using the textual graph and query as input. Various prompt designs are employed: Zero-shot (answers questions directly based on retrieved information), Zero-CoT [35] (enhances zero-shot reasoning by appending the phrase "Let's think step by step."), CoT-BAG [41] (adds "Let's construct a graph with the nodes and edges first." after the textual description of the graph), and KAPING [42] (retrieves relevant graph triples and prepends them to the input question).

• Raw Fine-tuning. In this configuration, following [18], we fine-tune the LLM using parameter-efficient tuning methods, without incorporating advanced reranking techniques. It includes popular approaches such as Prompt Tuning [43] and LoRA [39].

• Reranker-based.
These baselines use reranking models to refine the ranking of documents before they are input to the LLM. They include: GTE (General Textual Embedding) [22], with two variants (gte-base, 109M parameters; gte-large, 335M parameters) developed by Alibaba DAMO Academy and trained on large-scale relevance text pairs to enhance retrieval accuracy; BGE (BAAI General Embedding) [21], with two variants (bge-reranker-base, 278M parameters; bge-reranker-large, 560M parameters), which are cross-encoders optimized for retrieval-augmented LLMs, offering higher accuracy at the cost of efficiency; and G-RAG and G-RAG-RL [44], which use GNNs as rerankers, leveraging document connections and semantic information from abstract meaning representation graphs to improve context-aware ranking.

• GNN-based. This category integrates GNN encoders with LLM embeddings. GraphToken [31] encodes graph structures as explicit prompts for LLMs, boosting graph reasoning performance. GNP [19] introduces a plug-and-play approach using a GNN encoder and cross-modality pooling to enhance LLMs with knowledge graphs. G-Retriever [18] establishes the GraphQA benchmark and employs a RAG method with soft prompting, improving graph-based QA and understanding while mitigating hallucination. GRAG [45] introduces a divide-and-conquer strategy for efficient textual subgraph retrieval and incorporates both text and graph views into LLMs for graph-context-aware generation.

B.3 Implementation Details.

For the retrieval process, we use the same retrieval results obtained through the PCST algorithm across all baselines to ensure a fair comparison. We use GraphTransformer [37], GAT [38], and GCN [46] as GNN encoders, and Llama-2-7b-hf and Llama-2-13b-hf [
2] as generators. Both reranker-based and GNN-based methods apply LoRA for fine-tuning, except for GraphToken, whose results are taken from G-Retriever [18]. All methods are compared using the same training hyperparameters where applicable (e.g., GNN layers and LoRA rank), ensuring fair comparability. In the Aligner module, we explore two hyperparameters: the alignment degree and the number of seed nodes (n_seed), with the analysis shown in Figures 3 and 4. To summarize the reasoning chain, we employ Llama-3.1-70B-Instruct [78]. The models were trained on the training set, with optimal hyperparameters and early stopping determined using the validation set. Results are reported on the test set. All experiments were run on two NVIDIA A100 GPUs (80GB each).

To implement baselines: for GNP [19], we implemented the Graph Neural Prompting module within our framework, including components such as the GNN encoder, cross-modality pooling, and domain projector. For G-RAG and G-RAG-RL [44], we adopted their ranking approach, combining cross-entropy loss with pairwise ranking loss; however, since treating documents as nodes was infeasible in our case, we instead treated entities as nodes and employed the same GNN encoder as our method. We implemented GTE [22] and BGE-reranker [21] using their official open-source models. Nodes and triples are ranked by query similarity and, as in our method, fed into the LLM for generation. For the GRAG PCST model [45], we reproduced its experiments; to ensure a fair evaluation, we standardized the retrieval process by using the PCST method for retrieval, which allowed a direct comparison with their graph encoder approach.

C Evaluating the Efficiency after Node Pruning

Table 5: Evaluation of Efficiency. ↑ indicates results better than PCST, while ↓ indicates worse results. (xx%) represents the percentage of tokens relative to PCST.
Method                      #Tokens            Infer Time   Hit@1
Non-Retriever               100626.28          OOM          -
BM25 [52]                   2569.57            17:24 min    0.4394
PCST [34]                   2576.66            17:28 min    0.4502
PCST w/ Aligner (n_seed=4)  496.54 (19.27%)    3:31 min     0.4299 ↓
     w/ Aligner (n_seed=6)  698.54 (27.11%)    4:45 min     0.4527 ↑
     w/ Aligner (n_seed=8)  905.52 (35.14%)    6:55 min     0.4699 ↑
     w/ Aligner (n_seed=10) 1120.75 (43.5%)    8:27 min     0.4785 ↑
     w/ Aligner (n_seed=15) 1546.90 (60.04%)   11:50 min    0.4914 ↑

In this section, we evaluate the efficiency improvements brought by node alignment and node pruning in the Aligner module. Specifically, we compare different retrieval methods in terms of average token consumption, average inference time, and Hit@1 on the test set. The w/ Aligner method is built on the initial retrieval results from PCST, where different numbers of seed nodes are selected to perform node pruning. The experiments were conducted on two NVIDIA A100 80GB GPUs in an inference-only setting. To fully utilize GPU resources and memory, we set the batch size as large as possible under each average-token condition. For comparison, the BM25 method retrieves triples while keeping token consumption roughly similar to that of PCST.

From Table 5, we observe that when the number of seed nodes (n_seed) is set to 6, our Aligner method achieves performance comparable to PCST while using about 70% fewer tokens. This demonstrates the method's ability to effectively
prune irrelevant knowledge while preserving the essential information. However, when n_seed is too small (e.g., n_seed = 4), performance drops significantly, suggesting that some critical information may be pruned in the process. On the other hand, increasing the number of seed nodes to 15 (using 60.04% of the tokens) leads to performance that significantly surpasses PCST. This highlights the strength of our method in efficiently selecting useful knowledge and confirms that, in longer-context scenarios, removing irrelevant information enhances overall results. Additionally, we find that BM25 performs worse than PCST when using a similar number of tokens, suggesting that the PCST retrieval method is better suited to graph databases, as it more effectively captures the connections between pieces of knowledge.

D Impact of Seed Nodes

In this section, we analyze the impact of the number of seed nodes (n_seed) on model performance. The experiments are conducted on the WebQSP dataset, where we evaluate the Hit@1 and Accuracy metrics. From the experiments in Figure 3, we observe that Hit@1 and Accuracy peak when the number of seed nodes is set to 25. Beyond this point (from 25 to 30), performance starts to decline, indicating that including too many nodes introduces a significant amount of irrelevant knowledge, which negatively impacts the model; our pruning strategy effectively eliminates this irrelevant knowledge to enhance performance. On the other hand, when the number of seed nodes is as low as 5, performance is considerably worse, suggesting that excessive pruning removes crucial knowledge. This highlights a trade-off: pruning reduces noise and improves performance, but over-pruning leads to the loss of essential knowledge.
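The seed-node pruning trade-off discussed above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the function name, the toy graph, and the similarity scores are all hypothetical, and the actual Aligner module prunes based on learned aligned representations rather than raw scores.

```python
# Illustrative sketch of seed-node pruning: rank nodes by query
# similarity, keep the top n_seed nodes as seeds, then retain only
# nodes within `hops` of a seed. Larger n_seed keeps more context
# (more tokens); smaller n_seed risks pruning answer-bearing nodes.
from collections import deque

def prune_by_seeds(adj, scores, n_seed, hops=1):
    """Return the set of kept nodes: top-n_seed by score plus
    their `hops`-hop neighbourhood (BFS over the adjacency dict)."""
    seeds = sorted(scores, key=scores.get, reverse=True)[:n_seed]
    kept = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue  # do not expand beyond the hop budget
        for nb in adj.get(node, ()):
            if nb not in kept:
                kept.add(nb)
                frontier.append((nb, depth + 1))
    return kept

# Hypothetical retrieved subgraph and query-similarity scores.
adj = {"q": ["a", "b"], "a": ["c"], "b": ["d"], "c": [], "d": ["e"], "e": []}
scores = {"q": 0.9, "a": 0.7, "b": 0.6, "c": 0.2, "d": 0.1, "e": 0.05}
print(sorted(prune_by_seeds(adj, scores, n_seed=2, hops=1)))  # ['a', 'b', 'c', 'q']
```

With n_seed=2 the low-scoring tail ("d", "e") is dropped; raising n_seed or the hop budget keeps more of the graph, mirroring the token/accuracy trade-off reported in Table 5.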
Figure 3: Hyperparameter analysis of the number of seed nodes (Accuracy and Hit@1 on WebQSP, n_seed from 5 to 30).

E Impact of Align Degree

This section examines how the Align Degree (the number of training steps for the Aligner module) influences model performance. As shown in Figure 4, we evaluate the Hit@1 and Accuracy metrics on the WebQSP dataset. From the experimental curves, we observe that Hit@1 and Accuracy peak at around 60 steps before declining. This indicates that, as training progresses, bridging the representation gap between graph and language helps the LLM better understand graph data; however, excessive training may lead to overfitting, which disrupts graph information and ultimately causes a drop in performance. This suggests that there is an optimal point in training where the representation alignment is most effective, and that training beyond this point can be detrimental.

Figure 4: Hyperparameter analysis of Align Degree (Accuracy and Hit@1 on WebQSP, training step from 10 to 100).

F Impact of Top K Retrieval

Figure 5 illustrates the impact of varying the Top K retrieval of entities and relations on model performance across Accuracy, F1, and Hit@1. By analyzing the trends in the graphs, we can derive insights into the effect of Top K on model performance. All three metrics show a similar trend: performance improves as K increases, peaks at K = 10, and then declines. This