in the models' parametric memory (Mallen et al., 2023), making contextual information more crucial. For smaller models (Llama 3.2 3B, Llama 3.1 8B, and Qwen 2.5 7B), PopQA exhibits stronger positional effects when rotating the relevant passage. The effect is less pronounced in Llama 3.3 70B, whose larger parametric memory can often recall these long-tail entities.

Retriever   Sequential  Inverse  Shuffle  MaxRel  MinDist
BGE         69.33       69.73    68.87    71.00   68.40
BGE+RR      70.20       71.00    71.40    71.33   70.33
BM25        54.73       54.60    55.93    56.07   55.00
BM25+RR     59.93       60.60    60.40    60.73   59.07

Table 6: Answer accuracy of Qwen 2.5 7B when arranging the top-10 passages retrieved from different retrieval pipelines with different strategies.

LLM    NQ     TriviaQA  PopQA
Q7B    44.20  68.80     20.40
L3B    58.60  68.00     20.60
L8B    67.40  80.80     30.60
L70B   74.60  92.20     49.60

Table 7: Closed-book answer accuracy for different LLMs across the three benchmarks.

Regarding distracting effects, when rotating a hard distractor among weak distracting passages, all models generally display the characteristic U-shaped pattern (see Figures 9 to 12), suggesting that distracting effects are more consistent across models and less influenced by parametric knowledge.

B.2 Positional Bias in Real Scenarios

In Section 5, we presented experiments for k=5, showing minimal impact of different passage arrangement strategies on answer accuracy. Here, we expand the analysis to k=10 to investigate whether retrieving more passages might exhibit a more pronounced positional bias effect. Table 5 shows the answer accuracy across different LLMs when arranging the top-10 passages retrieved by BGE+RR using the strategies described in Section 5. Similar to the k=5 case, we observe that positional bias has a marginal impact on answer accuracy. Across all LLMs, the difference between the best-performing strategy and the Shuffle strategy is not statistically significant according to the Wilcoxon test at the 0.05 significance level. Table 6 presents the results for Qwen 2.5 7B across different retrieval pipelines. We note one exception with the BGE retriever (without re-ranker), where the MaxRelevance strategy achieves 71.00 accuracy while Shuffle yields 68.87, a statistically significant difference. However, this appears to be an isolated case rather than a consistent pattern, and this single exception does not contradict the broader statistical trend observed across all other configurations. For weaker retrievers like BM25, positional ordering has less impact simply because they retrieve fewer relevant passages overall, as shown in Fig. 1a.

In general, these findings with k=10 reinforce our conclusion from the main paper: in realistic RAG settings, the impact of positional bias is minimal compared to its effect in controlled experimental conditions. The interaction between relevant and distracting passages in real retrieval results tends to neutralize potential benefits from strategic passage ordering.

[Figure 3: Results on PopQA of different retrieval pipelines when varying the number k of retrieved passages (panels: HITS@k, Precision@k, MaxDE@k, MeanDE@k). We compute the distracting effect on Qwen 2.5 7B.]
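The paper does not release its evaluation code; the paired comparison described above can be sketched roughly as follows, assuming hypothetical per-question 0/1 correctness vectors for two arrangement strategies over the same evaluation set:

```
# Minimal sketch of the significance test between two arrangement
# strategies. correct_shuffle / correct_maxrel are hypothetical
# per-question 0/1 correctness arrays over the same questions.
from scipy.stats import wilcoxon

correct_shuffle = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0]  # placeholder data
correct_maxrel  = [1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0]  # placeholder data

# Paired Wilcoxon test; zero differences are discarded by default.
stat, p_value = wilcoxon(correct_shuffle, correct_maxrel)
print(f"statistic={stat:.2f}, p={p_value:.4f}")
# The two strategies are considered distinguishable only if p < 0.05.
```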
[Figure 4: Results on NQ of different retrieval pipelines when varying the number k of retrieved passages (panels: HITS@k, Precision@k, MaxDE@k, MeanDE@k). We compute the distracting effect on Qwen 2.5 7B.]

[Figure 5: Results on TriviaQA of different retrieval pipelines when varying the number k of retrieved passages (panels: HITS@k, Precision@k, MaxDE@k, MeanDE@k). We compute the distracting effect on Qwen 2.5 7B.]

You are given a question and you must respond based on the provided documents. Respond directly without providing any premise or explanation.

Documents:
<passage>
...
<passage>

Question: <question>
Answer:

Figure 6: Prompt used for response generation.

Your job is to look at a question, a list of acceptable answers, and a document, then determine if the document is RELEVANT or IRRELEVANT for answering the question. Each document may have some metadata information like the title or the section it belongs to. This information may help you understand the context of the document. We are in a multi-reference setting, which means that there may be multiple correct answers to the question. The answer list contains all the correct answers. First, I will give examples of each type, and then you will evaluate a new example.

The following are examples of RELEVANT documents.
```
Question 1: when did korn 's follow the leader come out
Acceptable answers list 1: [ 'August 18 , 1998 ', 'Summer 1998 ']
Document 1: (Title: Follow the Leader (Korn album)) Follow the Leader is the third studio album by the American nu metal band Korn . The album was released on August 18 , 1998 , through Immortal / Epic . This was their first album not produced by Ross Robinson . Instead , it was produced by Steve Thompson and Toby Wright .

Question 2: who played bobby byrd in get on up
Acceptable answers list 2: [ 'Nelsan Ellis ']
Document 2: (Title: Get on Up (film))(Section: Production - Casting) On August 26, 2013, Universal selected Chadwick Boseman to play the lead role of James Brown. Boseman did all of his own dancing and some singing. The soundtrack is live recordings of James Brown. On September 17, Universal announced an open casting call for actors, musicians, and extras for different roles in the biopic, which was held on September 21. On September 30, Taylor cast Viola Davis to play Susie Brown and Octavia Spencer to play Aunt Honey. On October 21, Nelsan Ellis joined the cast of film to portray Bobby Byrd, Brown 's long-time friend.

Question 3: What movie has the song on the road again?
Acceptable answers list 3: [ 'Honeysuckle Rose ']
Document 3: (Title: On the Road Again (Willie Nelson song)) The song , about life on tour , came about when the executive producer of the film Honeysuckle Rose approached Nelson about writing the song for the film 's soundtrack . ''On the Road Again ''became Nelson 's 9th Country & Western No. 1 hit overall ( 6th as a solo recording act ) in November 1980 , and became one of Nelson 's most recognizable tunes . In addition , the song reached No. 20 on the Billboard Hot 100 , and No. 7 on the Adult Contemporary chart . It was his biggest pop hit to that time and won him a Grammy Award for Best Country Song a year later .
```
These documents are all RELEVANT because:
- They contain sufficient information to support at least ONE of the acceptable answers.
- The information can be found directly or through simple inference.
- Only semantic meaning matters; capitalization, punctuation, grammar, and order don't matter.

The following are examples of IRRELEVANT documents.
```
Question 1: when did korn 's follow the leader come out
Acceptable answers list 1: [ 'August 18 , 1998 ', 'Summer 1998 ']
Document 1: (Title: Korn Discography) Korn 's third album marked a significant evolution in their sound and commercial success. The band spent much of 1998 recording and promoting this album, which would go on to achieve platinum status multiple times. Following their summer tour, they continued to gain mainstream attention. The album contained several singles that performed well on the charts, including "Got the Life" and "Freak on a Leash." Reviews were generally positive, with critics noting the band had refined their nu-metal style while maintaining their aggressive edge.

Question 2: who played bobby byrd in get on up
Acceptable answers list 2: [ 'Nelsan Ellis ']
Document 2: (Title: Get on Up (film))(Section: Critical Reception) Critics particularly praised the casting decisions in "Get on Up," noting the strong ensemble supporting Chadwick Boseman 's portrayal of James Brown. The film 's recreation of the dynamic between Brown and his longtime friend and musical collaborator received significant attention. Several reviewers highlighted the chemistry between the main characters and how it captured their complex professional and personal relationship spanning decades. The scenes depicting their early musical development were considered among the film 's strongest moments, effectively showing how their partnership shaped the evolution of funk music.

Question 3: What movie has the song on the road again?
Acceptable answers list 3: [ 'Honeysuckle Rose ']
Document 3: (Title: Classic Songs in Films) Many people believe, though it 's not actually correct, that Willie Nelson 's iconic song 'On The Road Again ' first appeared in the 1980 film 'Smokey and the Bandit II. ' Some music historians have suggested that this misconception arose because the film 's themes of truck driving and life on the road seemed to perfectly match the song 's message. The song 's road-trip vibe made it a natural fit for many movies, but this particular connection
is just a popular misconception.
```
These documents are all IRRELEVANT because:
- They lack the necessary information to support any of the acceptable answers, even though they may contain some related information.
- They reference similar themes, keywords, or surrounding context but don't provide the specific answer required.
- Some contain subtle distractors that seem relevant at first glance but don't actually answer the specific question.

Before making your final evaluation, follow this step-by-step process:
1. Identify the specific information needed to match at least one of the acceptable answers.
2. Carefully search the document for this exact information or information that directly implies it.
3. Check for these common errors:
- The document contains similar keywords or themes but not the actual answer.
- The document contains partial information that would need to be combined with external knowledge.
- The document discusses related topics but doesn't specifically answer the question.

Also note the following things:
- The evaluation should be based ONLY on the specific question and acceptable answers list provided.
- Do not try to generalize or apply your own knowledge beyond the information given in the question, acceptable answers list, and document.
- A document with tangential information about the topic is still IRRELEVANT if it doesn't contain the specific answer.

Here is a new example. Don't apologize or correct yourself if there was a mistake; we are just trying to evaluate the relevance of the document.
```
Question: {question}
Acceptable answers list: {answers}
Document: {document}
```
Evaluate the document for this new question as one of:
A: RELEVANT
B: IRRELEVANT

Return a JSON object with the following format:
{{
"motivation": "Your concise motivation for the evaluation here. Use maximum 2 sentences.",
"grade": "A" or "B"
}}

Figure 7: Prompt for document relevance assessment using Claude 3.7 Sonnet as judge.

Your job is to look at a question, a list of acceptable answers, and a predicted answer, and then assign a grade of either CORRECT or INCORRECT. We are in a multi-reference setting, which means that there may be multiple correct answers to the question. The answer list contains all the correct answers. First, I will give examples of each grade, and then you will grade a new example.

The following are examples of CORRECT predicted answers.
```
Question: What are the names of Barack Obama 's children?
Acceptable answers list: [ 'Malia Obama and Sasha Obama ', 'Natasha Marian and Malia Ann ']
Predicted answer 1: sasha and malia obama
Predicted answer 2: Natasha and Malia
Predicted answer 3: most people would say Malia and Sasha, but I 'm not sure and would have to double check
Predicted answer 4: Barack Obama has two daughters. Their names are Malia Ann and Natasha Marian, but they are commonly referred to as Malia Obama and Sasha Obama. Malia was born on July 4, 1998, and Sasha was born on June 10, 2001.
```
These predicted answers are all CORRECT because:
- They contain all essential information from at least one of the acceptable answers.
- They do not contain any information that contradicts the acceptable answers.
- Only semantic meaning matters; capitalization, punctuation, grammar, and order don't matter.
- Hedging and guessing are permissible, provided that at least one of the acceptable answers is fully included and the response contains no incorrect information or contradictions.

The following are examples of INCORRECT predicted answers.
```
Question: What are the names of Barack Obama 's children?
Acceptable answers list: [ 'Malia and Sasha ', 'Natasha Marian and Malia Ann ']
Predicted answer 1: Malia.
Predicted answer 2: Malia, Sasha, and Susan.
Predicted answer 3: Barack Obama does not have any children.
Predicted answer 4: I think it 's either Malia and Sasha. Or it could be Malia and Jackie. Or it could be Joey and Malia.
Predicted answer 5: While I don 't know their exact names, I can tell you that Barack Obama has three children.
Predicted answer 6: It 's possible you may mean Betsy and Olivia. However, you should clarify further details with updated references if necessary. Is that the correct answer?
Predicted answer 7: It may be the case that Obama 's child is named James. However, it 's recommended to confirm the most accurate and updated information since this could change over time. This model may not always reflect the most current information.
Predicted answer 8: Malia and Sasha are Barack Obama 's daughters. Additionally, he adopted a son named Michael in 2018.
```
These predicted answers are all INCORRECT because:
- A factual statement in the answer contradicts the information in the acceptable answers list. Incorrect statements that have some hedging (e.g., "it is possible that", "although i'm not sure, i think") are also considered incorrect. In particular:
- Predicted answer 1: It only mentions one child when both are required.
- Predicted answers 2, 5, and 8: They include accurate information but also contain incorrect information ("Susan", "three children", "adopted a son named Michael").
- Predicted answer 3: It directly contradicts the acceptable answer.
- Predicted answer 4: It presents multiple alternatives, some of which are incorrect ("Jackie", "Joey").
- Predicted answers 6 and 7: They suggest completely incorrect answers, even with hedging ("Betsy and Olivia", "James").

Also note the following things:
- The acceptable answers may contain more information than the question. In such cases, the predicted answer only needs to contain the information that is in the question.
- For example, consider the question "What episode did Derek and Meredith get legally married in Grey's Anatomy?" with acceptable answer "Season 7, Episode 20: White Wedding". Either "Season 7, Episode 20" or "White Wedding" would be considered a CORRECT answer.
- Do not punish predicted answers if they omit information that would be clearly inferred from the question.
- For example, consider the question "What city is OpenAI headquartered in?" and the acceptable answer "San Francisco, California". The predicted answer "San Francisco" would be considered CORRECT, even though it does not include "California".
- Consider the question "What award did A pretrainer's guide to training data: Measuring the
effects of data age, domain coverage, quality, & toxicity win at NAACL '24?", the acceptable answer is "Outstanding Paper Award". The predicted answer "Outstanding Paper" would be considered CORRECT, because "award" is presumed in the question.
- For the question "What is the height of Jason Wei in meters?", the acceptable answer is "1.73 m". The predicted answer "1.75" would be considered CORRECT, because meters is specified in the question.
- For the question "What is the name of Barack Obama's wife?", the acceptable answer is "Michelle Obama". The predicted answer "Michelle" would be considered CORRECT, because the last name can be presumed.
- Do not punish for typos in people's names if it's clearly the same name.
- For example, if the acceptable answer is "Hyung Won Chung", you can consider the following predicted answers as correct: "Hyoong Won Choong", "Hyungwon Chung", or "Hyun Won Chung".
- You must adhere to the question and acceptable answers list information. Do not rely on your knowledge to grade the answer. If the predicted answer is correct based on the question and acceptable answer, it should be considered correct, even if you know that the answer is wrong.

Here is a new example. Don't apologize or correct yourself if there was a mistake; we are just trying to grade the answer.
```
Question: {question}
Acceptable answers list: {target}
Predicted answer: {predicted_answer}
```
Grade the predicted answer of this new question as one of:
A: CORRECT
B: INCORRECT

Return a JSON object with the following format:
{{
"motivation": "Your concise motivation for the grade here. Use maximum 2 sentences.",
"grade": "A" or "B"
}}

Figure 8: Prompt for answer correctness assessment using Claude 3.7 Sonnet as judge.

[Figure 9: Controlled experiments results for Qwen 2.5 7B across datasets. Top row: accuracy vs. relevant passage position; bottom row: distracting effect vs. hard distractor position; columns: NQ, TriviaQA, PopQA; curves for 4 and 9 weak distracting passages.]

[Figure 10: Controlled experiments results for Llama 3.2 3B across datasets (same layout as Figure 9).]
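The prompts in Figures 7 and 8 instruct the judge to return a JSON object with a motivation and an A/B grade. A minimal sketch of wiring up the correctness judge, assuming the official anthropic Python client; JUDGE_PROMPT is a hypothetical variable standing in for the Figure 8 template, and the model ID is an assumption rather than something stated in the paper:

```
# Minimal sketch: query the judge with the Figure 8 template and parse the
# JSON verdict. The paper does not release this harness.
import json
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Stand-in for the full Figure 8 text (instructions, examples, and the
# escaped {{...}} JSON spec go where the ellipsis is).
JUDGE_PROMPT = (
    "Your job is to look at a question, a list of acceptable answers, and a "
    "predicted answer, and then assign a grade of either CORRECT or INCORRECT.\n"
    "...\n"
    "Question: {question}\n"
    "Acceptable answers list: {target}\n"
    "Predicted answer: {predicted_answer}\n"
)

def grade_answer(question: str, target: list, predicted_answer: str) -> dict:
    prompt = JUDGE_PROMPT.format(
        question=question, target=target, predicted_answer=predicted_answer
    )
    response = client.messages.create(
        model="claude-3-7-sonnet-20250219",  # assumed model ID
        max_tokens=256,
        messages=[{"role": "user", "content": prompt}],
    )
    # Expected reply: {"motivation": "...", "grade": "A" or "B"}.
    return json.loads(response.content[0].text)

# verdict = grade_answer("What are the names of Barack Obama's children?",
#                        ["Malia Obama and Sasha Obama"], "sasha and malia")
# is_correct = verdict["grade"] == "A"
```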
[Figure 11: Controlled experiments results for Llama 3.1 8B across datasets (same layout as Figure 9).]

[Figure 12: Controlled experiments results for Llama 3.3 70B across datasets (same layout as Figure 9).]

You are given a question and you must respond based on the provided documents. Respond directly without providing any premise or explanation.

Documents:
Document[1] (Title: Bids for the 2024 and 2028 Summer Olympics)(Section: Non-selected bids - 2024 - United States) On 1 December 2014 all four shortlisted cities Boston, Los Angeles, San Francisco, and Washington met the final deadline to submit their bid proposals prior to the USOC Board Meeting held on 16 December 2014 in Redwood City, California. During the closed door meeting each of the four cities were given two hours to present their city 's bids. Following the final presentation, the USOC announced that the United States would bid to host the 2024 Olympic and Paralympic Games, but did not announce which city would bid. On 8 January 2015, the USOC selected Boston to be the candidate city from the United States but on 27 July 2015 Boston 's bid was withdrawn and the USOC bid process was reopened. On 1 September 2015 the USOC announced that Los Angeles was chosen for the United States bid for the 2024 Summer Games.
Document[2] (Title: Sports in the United States)(Section: Olympics) However, after the dissolution of the Soviet Union, it now primarily contends with China and Great Britain at the Summer Games for both the overall medal count and the gold medal count and with Norway and Canada at the Winter Games for the overall medal count. The United States hosted both Summer and Winter Games in 1932, and has hosted more Games than any other country – eight times, four times each for the Summer and Winter Games:
BULLET::::- the 1904 Summer Olympics in St. Louis, 1932 Summer Olympics and 1984 Summer Olympics in Los Angeles; and the 1996 Summer Olympics in Atlanta;
BULLET::::- the 1932 Winter Olympics and 1980 Winter Olympics in Lake Placid, New York; the 1960 Winter Olympics in Squaw Valley, California; and the 2002 Winter Olympics in Salt Lake City, Utah.
Los Angeles will host the Summer Olympics for a third time in 2028, marking the ninth time the U.S. hosts the Olympic Games.
Document[3] (Title: 1992 Winter Olympics)(Section: Legacy) The 1992 Olympic Winter Games marked the last time both the Winter and Summer games were held in the same year. The 1992 Olympics also marks the last time France hosted the Olympics. Paris will host the 2024 Summer Olympics.
Document[4] (Title: Sports in Chicago)(Section: Olympic bids) After a months-long process that saw the elimination of several American and international cities, Chicago was selected on April 14, 2007, to represent the United States internationally in the bidding for the 2016 Summer Olympics. The International Olympic Committee eventually shortlisted four of the seven applicant cities, where Chicago remained, before Rio de Janeiro was elected as the host in 2009. Following Chicago 's loss in the race for the 2016 Olympics, the USOC bid for the 2024 Olympics with Los Angeles which result in a deal where Los Angeles secured the right to host the 2028 Summer Olympics. Chicago had previously hosted the 1959 Pan American Games. Chicago was selected to host the 1904 Summer Olympics, but they were transferred to St. Louis to coincide with the Louisiana Purchase Exposition.
Document[5] (Title: Summer Olympic Games)(Section: Hosting) The United States has hosted the Summer Olympic Games four times: the 1904 Games were held in St. Louis, Missouri; the 1932 and 1984 Games were both held in Los Angeles, California; and the 1996 Games were held in Atlanta, Georgia. The 2028 Games in Los Angeles will mark the fifth occasion on which the Summer Games have been hosted by the U.S. In 2012, the United Kingdom hosted its third Summer Olympic Games in the capital city, London, which became the first city ever to have hosted the Summer Olympic Games three times. The cities of Los Angeles, Paris, and Athens have each hosted two Summer Olympic Games. In 2024, France will host its third Summer Olympic Games in its capital, making Paris the second city ever to have hosted three Summer Olympics. In 2028, Los Angeles will become the third city ever to have hosted the Games three times. Australia, France, Germany and Greece have all hosted the Summer Olympic Games twice.

Question: When did the united states host the last olympics?
Answer: The United States hosted the last Summer Olympics in 1996 in Atlanta, Georgia.
Gold Answer: 2002

Figure 13: Example showing how the position of a hard distractor affects Qwen 2.5 7B's response with a relevant passage fixed in position 2. When a hard distractor is placed in position 5 (the position with the highest distracting effect according to Fig. 2b) and the other passages are weak distractors, the model is misled into producing the distractor-supported answer (1996) instead of the gold answer (2002).
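Figure 13 shows the Figure 6 template rendered with retrieved documents. A minimal sketch of that assembly step, with hypothetical passage fields ('title', 'text'); the paper shows the rendered prompt but not the assembly code:

```
# Minimal sketch: fill the Figure 6 generation prompt with retrieved
# passages, following the Document[i] (Title: ...) layout of Figure 13.
def build_prompt(passages: list, question: str) -> str:
    docs = "\n".join(
        f"Document[{i}] (Title: {p['title']}) {p['text']}"
        for i, p in enumerate(passages, start=1)
    )
    return (
        "You are given a question and you must respond based on the "
        "provided documents. Respond directly without providing any "
        "premise or explanation.\n"
        f"Documents:\n{docs}\n"
        f"Question: {question}\n"
        "Answer:"
    )

# prompt = build_prompt(retrieved_passages,
#                       "When did the united states host the last olympics?")
```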
Semantic-based Unsupervised Framing Analysis (SUFA): A Novel Approach for Computational Framing Analysis

Mohammad Ali
College of Information, University of Maryland, College Park, USA
mali24@umd.edu, aliusacomm@gmail.com

Naeemul Hassan
Philip Merrill College of Journalism, College of Information, University of Maryland, College Park, USA
nhassan@umd.edu

Abstract
This research presents a novel approach to computational framing analysis, called Semantic Relations-based Unsupervised Framing Analysis (SUFA). SUFA leverages semantic relations and dependency parsing algorithms to identify and assess entity-centric emphasis frames in news media reports. This innovative method is derived from two studies—qualitative and computational—using a dataset related to gun violence, demonstrating its potential for analyzing entity-centric emphasis frames. This article discusses SUFA's strengths, limitations, and application procedures. Overall, the SUFA approach offers a significant methodological advancement in computational framing analysis, with its broad applicability across both the social sciences and computational domains.

Keywords: Computational framing analysis, semantic relations, dependency parsing, natural language processing, communication method, computational method

Reference: Mohammad Ali and Naeemul Hassan. 2023. Semantic-based Unsupervised Framing Analysis (SUFA): A Novel Approach for Computational Framing Analysis. Presented at the Association for Education in Journalism and Mass Communication Conference, August 07-10, 2023. AEJMC, Washington, DC, USA, 14 pages.

Note: This research received the Top Method Paper Award at AEJMC 2023. ©2023 Copyright held by the owner/author(s). Published under Creative Commons CC-BY 4.0 License.

1 Introduction
Frames are predominantly explored using qualitative methods [e.g., Morin 2016] and quantitative methods [e.g., McKeever et al. 2022] through manual labor and analysis of small datasets. The recent proliferation of online news reports and social media posts has resulted in the generation of a vast amount of digital data that is difficult to analyze manually. To overcome this challenge, scholars have started using various computational methods, broadly divided into two parts: supervised and unsupervised [Ali and Hassan 2022]. The supervised methods require pre-determined labels and substantial human labor, while the unsupervised methods that this current research focuses on need little human effort and are applicable across domains.

Existing unsupervised methods (e.g., topic modeling) in framing analysis mainly rely on the frequency and co-occurrence of words, leading to the exploration of topics instead of deeper framing insights [Ali and Hassan 2022; Entman 1993; Nicholls and Culpepper 2021]. An improved unsupervised computational solution to this longstanding communication challenge is becoming essential in this era of big data. Scholars [e.g., Ali and Hassan 2022] advocate for methods to capture semantic relationships between words, moving beyond the traditional bag-of-words approach to enhance the methodological framework. In response to these calls, this article examines semantic relationships between words, presenting a novel unsupervised approach for computational framing analysis based on dependency parsing, a natural language processing (NLP) technique largely overlooked in framing analysis. This mixed-method article involves two studies.
Study 1 employs a qualitative textual analysis to inductively examine a sample of news reports published by four major U.S. news media outlets on the 2022 Uvalde school mass shooting in Texas, as a case study. While the political impasse and public debate continue over gun violence,
it is important to understand how news media outlets frame the issue, as media framing determines how people "choose to act upon [the problem]" [Entman 1993, p. 54]. Study 1 examines how individual words, such as adjectives and adverbs, convey different meanings related to the shooter, victims, and the shooting event. This helps us understand how these words and their semantic relationships work together to construct frames. Study 2 employs the computational technique of dependency parsing to analyze the same dataset. Specifically, we investigate dependency parsing, along with word embedding, k-means clustering, and manual input, establishing this method as a viable approach for capturing semantic relationships and analyzing entity-centric emphasis frames.

Integrating qualitative and quantitative approaches in this project provides complementary strengths essential for developing the methodological approach. The qualitative analysis in Study 1 offers interpretive depth by first manually uncovering whether and how specific words and their semantic relationships contribute to frame construction in natural language. This inductive insight helps ground the methodological design. Through quantitative computational techniques, Study 2 validates and extends this insight from Study 1 by systematically extracting these patterns computationally. Together, the two studies demonstrate that semantic structures, captured through dependency parsing, can reliably identify emphasis frames, laying the foundation for a scalable, unsupervised computational framing analysis model.

The outcomes of both studies are discussed. Importantly, this mixed-method project solidified and proposed the semantic relations-based approach for framing analysis, named "Semantic Relations-based Unsupervised Framing Analysis." The step-by-step procedure for applying this approach, along with its strengths, limitations, and future research directions, is also discussed.

2 Literature Review

2.1 Framing
Scholars have not reached a consensus on a unified definition of framing [Goffman 1974; Hertog and McLeod 2001]. However, one of the most widely cited definitions in framing studies comes from Entman [1993], who posits:

To frame is to select some aspects of a perceived reality and make them more salient in a communicating text, in such a way as to promote a particular problem definition, causal interpretation, moral evaluation, and/or treatment recommendation for the item described. (p. 52)

In the news media context, a frame is "a central organizing idea" [Tankard et al. 1991], and it "denotes how journalists, their sources, and audiences work within conditions that shape the messages they construct as well as the ways they understand and interpret these messages" [D'Angelo and D'Angelo 2018, p. xxiv]. Going beyond the idea of a simple topic, news framing is "like moving a telescope into position" [Fairhurst 2005, p. 125], where selected aspects are coherently organized in a way that makes an argument, promoting a particular interpretation, evaluation, and solution [Fairhurst 2005]. Importantly, a frame "operates by selecting and highlighting some features of reality while omitting others" [Entman 1993, p. 53]. Echoing this, Fairhurst and Sarr [1996] note that a frame is "to choose one particular meaning (or set of meanings) over another" (p. 3).

2.2 Emphasis vs. Equivalency Framing
The concept of framing revolves around two broad competing aspects: emphasis framing and equivalency framing. Equivalency framing involves presenting two or more alternatives with logically equivalent phrases (e.g., loss versus gain) [Kahneman and Tversky 1984; Levin et al. 1998]. In contrast, emphasis framing refers to the act of repeatedly highlighting or associating certain pieces of information about an issue or topic, while omitting other relevant aspects [D'Angelo 2017]. This article focuses on analyzing emphasis framing with the newly proposed computational approach.

2.3 Words in Constructing Frames
Scholars have long identified words and phrases that construct frames. Prior studies revealed that using certain words helps identify frames [Entman 1993; Fairhurst and Sarr 1996; Gamson and Modigliani 1989; Hertog and McLeod 2001]. For example, "the use of baby versus fetus signals a very different approach to the topic of abortion" [Hertog and McLeod 2001, p. 150]. Prior framing studies looked at various parts of speech, including verbs, adverbs, and adjectives, which enhances researchers' ability to identify frame boundaries and relationships [Hertog and McLeod 2001]. Verbs such as "falsifying," "forging," and "manipulating" were found to frame scientists in news reports [Boesman and Van Gorp 2018]. News reporters also use various verbs of attribution (e.g., accused, charged, blamed) to create worth for one person while devaluing another [Dickerson 2001].

2.4 Conceptualization and Operationalization of the Framing Component
Prior studies provide evidence for using words in constructing frames [Hertog and McLeod 2001; Miller 1995]. When a particular word is selected or coded as part of a frame, this word directly or indirectly operates in relation to other words to express the intended framing meaning. In other words, framing meanings are often produced not by isolated words but through their associative use with surrounding words, particularly when an entity is modified by adjectives, adverbs, or verbs.

For example, Bantimaroudis and Ban [2001] reported how Somali leaders were framed by U.S. news media through the repeated use of the term "warlords" in contrast to their opposition, the United Nations forces. They interpreted the frame by exposing how extensively the word "warlords" was used in the news media coverage. This current research argues that the word "warlords" alone does not sufficiently convey a practical meaning for understanding the frame about Somali leaders. Instead, we better understand the intended frame when the word "warlord" is seen as an adjectival modifier of its noun, "Mohammed Siad Barre," forming a phrase like "warlord Barre." In this context, the framing component emerges from a meaningful semantic pair, a modifying word and its head noun, which together construct the framing meaning. Crucially, this pair of words is bound by a meaningful semantic relation. For example, in the dependency parsing output of natural language processing, the noun "Barre" and its modifier "warlord" are linked by an adjectival modifier relation (known as "amod"). Based on this linguistic structure, this current research conceptualizes a framing component as "a pair of words connected by a meaningful semantic relation."
The modifying word may belong to various parts of speech, such as adjectives (e.g., young shooter), verbs (e.g., shooter kills), or even participles and modal verbs (e.g., shooter accused of [killing]).

In qualitative textual and quantitative content analyses that rely on manual labor, scholars might code the keyword "warlord," keeping other parts (e.g., the noun and the semantic relation) in mind, and consider its semantic context during interpretation to explore meaningful insights. However, for computational analysis, capturing such semantic structures explicitly becomes essential for scaling framing analysis to large datasets.

To this end, this current research operationalizes a framing component as a pair of words connected by a meaningful semantic relation, specifically identified using dependency parsing techniques. For instance, adjective-noun (amod) or verb-subject (nsubj) relationships are used to detect modifier-entity structures, such as "teenage gunman" or "shooter kills." These semantic relations are computationally extracted from the dependency tree of each sentence. By identifying framing components in this semantic relations-based structure, the approach allows for systematic extraction of entity-modifier pairs in large datasets, ensuring both consistency and scalability. This operationalization is particularly well-suited for analyzing entity-centric frames, as it captures how individuals, organizations, or groups are framed through specific modifying words in large datasets.

2.5 Framing Analysis with Computational Approaches
Traditionally, researchers utilize qualitative and quantitative methods to analyze frames, relying on manual labor and small amounts of data [D'Angelo and D'Angelo 2018; Reese et al. 2001]. To tackle the challenge of analyzing frames in large-scale datasets, scholars have begun using computational approaches—both supervised and unsupervised—in the last two decades [e.g., Card et al. 2015; Liu et al. 2019; Van Atteveldt and Peng 2018; Walter and Ophir 2019].

Supervised. A supervised approach needs pre-labeled datasets. In this approach, a model is first trained on the labeled data and then applied to a new dataset to classify or predict each instance [Kotsiantis et al. 2007]. Under the supervised framing analysis approach, Liu et al. [2019] proposed a deep learning-based model developed with manual codes of headlines of news reports relating to gun violence.

Unsupervised. An unsupervised approach does not require any pre-annotated datasets. Instead, it inductively explores all unlabeled data [Kotsiantis et al. 2007]. Existing unsupervised approaches used to analyze frames include topic modeling [DiMaggio et al. 2013], structural topic modeling [Gilardi et al. 2021], hierarchical topic modeling [Nguyen 2015], cluster analysis [Burscher et al. 2016], frequency-based models [Sanderink 2020], and FrameAxis [Kwak et al. 2021]. Compared to supervised models, unsupervised ones demand less time and can be replicated across domains.

Semantic relations. Existing unsupervised computational approaches for framing analysis are mainly based on the ideas of frequency and co-occurrences of words, resulting in the identification of discussion topics or themes instead of frames [Ali and Hassan 2022]. Such topics do not provide a coherent framing interpretation.
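The modifier-entity extraction operationalized in Section 2.4 can be sketched with spaCy's dependency parser; this is an illustrative implementation choice, as the paper does not name a specific parser at this point:

```
# Minimal sketch: extract modifier-entity framing components (amod, nsubj)
# from a sentence's dependency tree. spaCy is used for illustration; any
# parser exposing these relations would work.
# Requires: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def framing_components(text: str) -> list:
    """Return (modifier, relation, head) triples for amod and nsubj edges."""
    triples = []
    for token in nlp(text):
        if token.dep_ == "amod":      # adjectival modifier, e.g. "teenage gunman"
            triples.append((token.text, "amod", token.head.text))
        elif token.dep_ == "nsubj":   # verb-subject, e.g. "shooter kills"
            triples.append((token.text, "nsubj", token.head.text))
    return triples

print(framing_components("An 18-year-old gunman on Tuesday fatally shot 19 children."))
# e.g. [('18-year-old', 'amod', 'gunman'), ('gunman', 'nsubj', 'shot')]
```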
As per the framing conceptualization [Entman 1993; Reese et al. 2001], semantic relations among words are a key to going deeper into frames, compared to the current
bag-of-words-based practices, such as topic modeling. This limitation calls for exploring an unsupervised technique that captures semantic relations among words to better identify frames. This article intends to fill the gap by focusing on unsupervised methods of framing analysis.

Although a few studies have attempted to address the task with semantic relations, their approaches are either not sufficiently comprehensive or are supervised from the data analysis perspective. For example, Sturdza et al. [2018] describe an approach to operationalizing frames using a rule-based system with a software tool named TurboParser. However, the authors did not execute it on a dataset, leaving its usefulness unclear. A recent study by Ziems and Yang [2021] proposes an NLP framework to understand the frames of an entity or issue (e.g., victims in police violence) with relevant attributes (e.g., age, gender, race). However, they pre-determined the attributes and then string-matched relevant tokens as a way of framing particular entities, which is also considered supervised. Another study by van Atteveldt et al. [2013] presents a computational framing analysis method based on semantic relations. Their approach is also a kind of supervised task, as it first determines and labels particular frames and then identifies occurrences of each pre-determined frame in the dataset. Framing analysis scholars in recent studies [e.g., Ali and Hassan 2022; Nicholls and Culpepper 2021] call for exploring semantic relations for improved framing nuances. Therefore, this research seeks to fill the gap by offering and advancing a semantic relations-based unsupervised approach for framing analysis through two studies—qualitative textual analysis and computational analysis. Both studies examine a sample of 100 news reports published by four major U.S. news media outlets on the 2022 Texas school mass shooting.

2.6 Gun Violence and Framing Analysis
Gun violence is a widely studied area in the U.S., as the mass shooting problem has been on the rise for years [El-Bawab 2022]. The body of gun violence research involves various other issues, such as mental illness [McGinty et al. 2014], frames [Morin 2016], and public health issues [McKeever et al. 2022]. Analyzing a sample of news articles on serious mental illness and gun violence, McGinty et al. [2014] found that "dangerous people" with serious mental illness were more likely to be mentioned as a cause of gun violence than "dangerous weapons." A recent study by McKeever et al. [2022] conducted an online survey (N=510) and found gun control and gun rights to be the two most salient frames. They also revealed that people held individuals responsible for gun violence and identified background checks as the most salient solution.

2.7 Attribution Theory
The root of frames is drawn from the assumptions outlined in attribution theory (AT) [Heider 2013; Kelley 1973; Pan and Kosicki 1993]. So, this research analyzes and explains frames through the lens of AT. Originally developed within social psychology, the theory primarily describes how people explain and perceive the causes of an individual's behavior [Heider 2013; McLeod 2010]. While defining the theory, Kelley [1973] says:

Attribution theory is a theory about how people make causal explanations, about how they answer
questions beginning with "why?" It deals with the information they use in making causal inferences, and with what they do with this information to answer causal questions. (p. 107)

As naïve psychologists, people tend to make two broad types of causal attributions: a) dispositional attributions and b) situational attributions [Heider 2013; Kelley 1973]. Dispositional attributions point to an individual's internal factors as being responsible for an incident. For example, in a car crash, labeling people's reckless driving behavior as a cause could be a dispositional attribution. Situational attributions refer to factors that exist outside an individual and are prevalent in specific situations. In the same example, attributing the snowy road as a cause could be considered a situational factor. Two prominent frameworks provide potential factors and insights that shape people's perceptions of dispositional and situational attributions: the covariation model [Kelley 1973] and correspondent inference [Jones and Davis 1965].

2.8 Covariation Model
The covariation model of Kelley [1973] identified three potential factors leading to causal perceptions: consensus, distinctiveness, and consistency. 1) Consensus is related to a person or entity and explains how many individuals behave in the same way. High consensus indicates a higher level of situational attribution. 2) Distinctiveness is related to situations and explains how an individual behaves in other similar situations. High distinctiveness indicates a higher level of situational attribution. 3) Consistency is related to time and explains how frequently an individual's behavior occurs. High consistency indicates a higher level of dispositional attribution [Kelley 1973].

2.9 Correspondent Inference
Jones and Davis [1965] offered three key factors in inferring causal attributions. 1) People's degree of choice: a freely chosen behavior is considered to reflect an individual's dispositional attributions compared to forced behavior. 2) Social desirability of behavior: an individual's behavior that is low in social desirability or social expectedness is more likely to prompt dispositional attributions compared to high social desirability. 3) Intended consequence of behavior: people infer an individual's behavior as dispositional, especially when the behavior's intended consequence is negative and harmful to people.

2.10 Case: 2022 Uvalde School Shooting
This study analyzes media coverage of a mass shooting that occurred on May 24, 2022, in Uvalde, Texas. An 18-year-old former student named Salvador Ramos entered Robb Elementary School with an AR-15-style rifle and opened fire [Peck and Goodman 2022; Sandoval 2023]. The shooting resulted in the deaths of 19 students and two teachers and the injuries of 17 others [Jacobo and El-Bawab 2022; Peck and Goodman 2022; Shooting nd]. The Uvalde school shooting is one of the deadliest shootings in the United States in terms of the number of casualties [Shooting nd]. The mass shooting incident received extensive coverage in local, national, and international news media [Kellner 2025], sparking outrage and reigniting long-standing debates over gun control and school safety and calls for action [Livingston 2022].

News media coverage of the Uvalde shooting evolved over time [Kellner 2025]. Soon after the incident, the then-President of
the United States, Joe Biden, visited Texas to console the victims and pledged to act [Livingston 2022; The Biden White House Archive 2022]. Within a month of the Uvalde school mass shooting, which occurred 10 days after another shooting in Buffalo, New York, gun safety legislation was passed by both chambers of Congress and then signed by the President on June 25, 2022. The gun safety law is reported as the first of its kind in the previous 30 years [Clyde and Miranda 2022]. As the deadliest mass shooting in recent years, one that drew widespread media coverage, the Uvalde elementary school shooting was purposively selected for this study.

3 Study 1: Qualitative Textual Analysis
This study focuses on an in-depth examination of the usage patterns of specific words, such as adjectives and adverbs, and their semantic relations in constructing frames. Typically, computational tools and traditional research methods, such as qualitative and quantitative methods, are pursued as separate lines of inquiry into frames. However, this study seeks to bridge this divide by utilizing the insights of inductive qualitative research to inform computational approaches in framing analysis. For this analysis, we purposively selected the 2022 mass shooting at Robb Elementary School as a case. Specifically, we looked at how news media outlets in the right-leaning (WSJ and Fox News) and left-leaning (NYT and CNN) categories use selected modifying words (e.g., adjectives and adverbs), structured in semantic patterns, to frame the shooter, victims, and the event. Therefore, the following research questions are asked:

RQ1: How do right-leaning and left-leaning news media outlets use words and phrases to construct frames while covering the 2022 mass shooting at Robb Elementary School in Texas?

RQ2: How do the right-leaning and left-leaning news media outlets frame the shooter, victims, and the mass shooting event at Robb Elementary School in Texas?

RQ3: How do the semantic relations of words in the Texas mass shooting news reports inform the computational analysis of frames?

4 Study 1 Method
To answer the RQs, Study 1 used qualitative textual analysis, a widely used approach to analyzing frames inductively [Hertog and McLeod 2001]. It fits the study's purpose of inductively analyzing news reports to gain an in-depth understanding of frames, word usage patterns, and their semantic relations in constructing frames [Entman 1993]. Qualitative textual analysis is "all about language, what it represents and how we use it to make sense of our [social realities]" [Brennen 2017, p. 203]. While exploring "how texts operate to produce meaning" [Browne 2009, p. 53], qualitative analysis helps "make an educated guess at some of the most likely interpretations that might be made of that text" [McKee 2001, p. 1].

4.1 Data Collection
We collected a total of 100 news reports, comprising 600 news headlines and paragraphs, published by four news media outlets on the 2022 Robb Elementary School shooting in Texas. Each of them includes ten news reports on the shooting that took place on May 24,
2022. Of the news outlets, The New York Times (NYT) and Cable News Network (CNN) were selected as the left-leaning news media, and The Wall Street Journal (WSJ) and Fox News as the right-leaning news media [Check nd]. The news media outlets were categorized based on their bias scores provided by Media Bias/Fact Check (MBFC), a non-partisan American independent site that provides bias scores for media outlets [Check nd; Odhner 2022]. It is important to acknowledge that although Fox News and The Wall Street Journal are both considered right-leaning according to MBFC scores, they differ in tone and editorial focus. As noted by Ad Fontes Media, Fox News tends to be more partisan in its opinion content, whereas The Wall Street Journal is comparatively more centrist in its news reporting [Media nd]. For the purposes of this study, both outlets were categorized on the right side of the media bias spectrum.

Using purposive sampling, we used the keywords "(Texas OR Uvalde) AND ("shoot*")" and searched articles on Factiva, a global news database, for these four news media outlets separately from May 24 to 31, 2022. After downloading all news reports identified during the period, we manually scrutinized and removed articles that were not specifically relevant to the Uvalde mass shooting or were other types of content, such as editorials. This continued until 25 news reports were selected for each news outlet. Collecting news reports continued until the dataset seemed saturated with the relevant words and phrases coded in this study. Data saturation is considered a useful guide for sampling data in a qualitative study that deals with a relatively small amount of information [Brennen 2017; Sandelowski 1995]. With this process, a sample of 100 news articles was finally selected for this analysis. Since the lead and initial paragraph(s) generally carry the most important messages in a news story [Liu et al. 2019; Van Dijk 1985], we purposively selected the headlines and first five paragraphs of each news report, totaling 600 headlines and paragraphs, for in-depth analysis.

4.2 Data Analysis
This study analyzed the news reports with word-by-word coding in three phases—open coding, axial coding, and selective coding [Saldaña 2021], using NVivo, a qualitative data analysis software package. The coding process was guided by the three research questions, framing theory [Entman 1993], and attribution theory [Heider 2013; Kelley 1973]. Following the research purpose and questions, the data analysis focused on the use of words and their semantic relations in constructing frames [Entman 1993] in the case of the Robb Elementary School shooting. The analysis explores whether and how the news reports used various words and phrases to promote particular interpretations or evaluations relating to the shooter, victims, and the event. During the open coding phase, we specifically looked at the use of words and phrases that promoted or highlighted three aspects: a) the shooter, Salvador Ramos; b) the victims, such as school children and teachers; and c)
the shooting incident, in certain ways. Each type of word and phrase was coded into a separate code. For instance, the words "kills," "killing," and "killed" were coded into a single code. In the axial coding phase, where related codes are grouped into broader categories, we organized the initial codes into similar categories based on shared framing purposes. Finally, during the selective coding, where central theme(s) are refined, a few broad themes emerged with adequate exemplars (Table 1). To ensure validity, we used two strategies: data triangulation (drawing from multiple data sources) and disconfirming evidence (intentionally seeking and considering both supporting and opposing evidence from the data) [Creswell and Báez 2016].

5 Study 1 Findings
RQ1 and RQ2: The analysis identifies distinct sets of specific words and phrases in left-leaning news outlets, such as the NYT and CNN, and right-leaning outlets, such as the WSJ and Fox News (see Table 1), that frame the shooter, victims, and the shooting event differently.

5.1 Shooter
"Accused" killer. The analysis shows that both right-leaning and left-leaning media outlets use some common verbs (e.g., kills, left dead, opened fire, and shot) when attributing the shooter's act. Importantly, right-leaning media outlets use weaker verbs and modifiers (e.g., "is accused of shooting," "claimed lives," and "allegedly committed by"), which cast doubt on Salvador Ramos' crime and weaken the gravity of killing people. In contrast, left-leaning media use stronger verbs and modifiers (e.g., "shot and killed," "burst in and killed," and "horrifically"), which present the incident with the higher gravity of a mass shooting act.

Differences in identifying the shooter also exist between the two groups of media outlets. In contrast to the left-leaning media outlets, the right-leaning ones use weaker terms like "alleged gunman" and "alleged shooter." For example, a news report published by Fox News on May 27 said, "Salvador Ramos, the alleged gunman accused of shooting his grandmother and then targeting dozens of victims...." This seems to have cast doubt on Salvador Ramos' act of killing people, at least to some extent.

Mental instabilities. In terms of attributing the responsibility or blame to Salvador for the mass shooting, both groups of news media outlets showed their own bias. In contrast to left-leaning news media outlets, the right-leaning ones selected and highlighted Salvador's mental and family-related instabilities, drawing attention to social factors while presenting Salvador as responsible for the shooting. For example, the WSJ reported, "Salvador Ramos... came from a broken family and unsettled classmates and co-workers with sometimes aggressive behavior and disturbing social-media posts."

5.2 Shooting Incident
Low vs. high severity. The shooting incident was presented differently in terms of its severity by the right-leaning and left-leaning news media outlets, with the former employing specific words and phrases to portray the incident as less severe than the latter did. The event modifier columns of Table 1 demonstrate that while some words and phrases are common to both groups of news outlets, others are used to attribute the shooting incident differently. For example, the NYT
used the word "slaughter" to describe the incident, while Fox News and the WSJ did not. The left-leaning media also used "terrorist attack," which was not used by their right-leaning counterparts. In contrast, right-leaning media outlets identified the shooting as a "senseless crime." The use of words such as "terrorist attack" and "slaughter" might trigger nodes in the human brain related to other deadly terrorist incidents, portraying the shooting as a more severe act [Collins and Loftus 1975]. On the other hand, the use of the phrase "senseless crime" suggests a typical type of crime. Therefore, differences in the use of words have contributed to defining the mass shooting as a problem of differing severity between the right-leaning and left-leaning news media outlets.

6 Victims
Teenager vs. older. Divergent portrayals of the victims were observed, with both left-leaning and right-leaning news media using distinct words and phrases, although some terms were commonly employed. The left-leaning media outlets specifically employed phrases such as "school children," "elementary school children," and "kids," whereas the right-leaning outlets used "children" and "students." For instance, the NYT reported on "the killing of at least 19 elementary school children in second, third, and fourth grades." Although subtle, this contrast indicated the left-leaning outlets' emphasis on the word "elementary," framing the shooting incident as an attack on young children of this age group. The word "elementary" distinguishes the age range of 5-10 years from "children" and "students." It emphasizes the victims' age range more clearly, whereas terms like "students" and "children" are more general and can apply across various age groups, including older youth. By promoting the ages of the victims in distinct ways, both the left-leaning and right-leaning news media outlets presented the severity of the shooting incident and drew attention to the shooting problem differently.

RQ3: The answers to RQ1 and RQ2 highlight the use of words in constructing relevant frames. RQ3 serves the main purpose of Study 1, which is to investigate the semantic patterns or semantic relations of those words in creating frames. As a way of answering it, this analysis provides various groups of words and phrases centering on the shooter, victims, and the event, illustrating the semantic relations among the words (see Table 1).

7 Semantic Relations
The above results and Table 1 present two crucial aspects that contribute to the understanding of frames. First, the words used to construct frames are crucial in identifying the framing components utilized by the news media outlets. Second, the semantic relations among the words are crucial in establishing the frames' meaning. Semantic relations indicate how the words are interrelated and which entity the words are attributed to. Reading through only the words might provide some insights into relevant framing components, but the insights are not fully meaningful without the words' semantic relations. When the words are read with their semantic relations, this renders
particular meanings to construct frames. For instance, in the excerpt "An 18-year-old gunman on Tuesday fatally shot 19 children and two adults" from an NYT article, the semantic relation between the phrase "18-year-old" and "gunman" (Salvador) highlights that the modifier refers to the gunman and not the children. Without considering the semantic relations, it appeared challenging to comprehend the relevant meanings of the words and subsequently construct frames.

8 Study 1 Discussion

8.1 Highlight and Hide

As the findings indicate, both left-leaning and right-leaning news media highlighted some common and some different words regarding the Texas mass shooting, conforming to the framing strategy of highlighting and hiding certain aspects of an event [D'Angelo 2018; Entman 1993; Greussing and Boomgaarden 2017]. As a frame functions to purvey various judgments about reality [D'Angelo 2018; Entman 1993], the frames constructed by left-leaning and right-leaning news outlets may shape how people perceive and understand the causes of the mass shooting and influence their attitudes toward it.

8.2 Attribution of Responsibility

Left-leaning news media outlets attributed more responsibility to Salvador for the mass shooting compared to right-leaning ones. As attribution theory [Kelley 1973] and framing theory [Entman 1993] suggest, with highlighted salience on situational factors (e.g., a broken family) in right-leaning outlets, people are more likely to attribute the shooting's causes to situational factors. This is supported by the phrase "accused of" that right-leaning news used in presenting Salvador's shooting. Right-leaning news media highlighted aspects of Salvador's social circumstances, such as his broken family, which may have made his actions appear more situationally driven. As per the correspondent inference model [Jones and Davis 1965], such social desirability can reduce attributions of personal responsibility by shifting away from dispositional factors. Overall, the left-leaning news media reports focused on attributing the causal responsibility for the mass shooting more to Salvador, while the right-leaning news media reports went beyond Salvador's individual responsibility to his family factors. Such causal interpretation is supported by the study of McGinty et al. [2014], which shows that "dangerous people" with mental illness were more likely to be mentioned as a cause of gun violence than "dangerous weapons." The study by McKeever et al. [2022] also extends evidence in support of this study's findings.

8.3 Semantic Relations for Computational Framing Analysis

Unsupervised computational methods mostly rely on the ideas of frequencies and co-occurrences of words [Blei 2012; DiMaggio et al. 2013]. These bag-of-words-based approaches are not designed to look at the semantic relations of words and end up identifying topics instead of frames [Ali and Hassan 2022]. The Study 1 findings demonstrate that capturing semantic relations helps discern in-depth nuances in the texts through word relations and, thus, identify relevant frames. For example, in the following excerpt from a New York Times article, "An 18-year-old gunman on Tuesday fatally shot 19 children and two adults," the semantic relations show that the phrase "18-year-old" modifies "gunman" (aka Salvador), not children. Without knowing this semantic relation, relevant meanings of the words and, subsequently, frames do not emerge (see Table 1).
Manual data analysis enables the researchers to identify such semantic relations and relevant frames, as presented above. Therefore, semantic relations appeared essential for deriving relevant meanings and frames in a text. In a computational method, being able to capture semantic relations seems to be a one-step advancement toward better identification and analysis of frames. As identified in Study 1, the lists of words, their attributes, and semantic relations for the shooter, victims, and the event are so specific that they can be incorporated into an algorithmic model. So, this study suggests incorporating these semantic relations into computational techniques (e.g., dependency parsing) for better automatic framing analysis. As envisioned in Nicholls and Culpepper [2021] and Ali and Hassan [2022], this study's findings extend the evidence of how semantic relations among words and phrases, instead of just bags of words, can better explain nuances of frames, especially in an unsupervised model.

9 Study 2: Computational Analysis

Study 2, the computational analysis, builds on the insights and recommendations from Study 1, the qualitative textual analysis. It focuses on the potential of using dependency parsing, an NLP technique that analyzes the grammatical structure of a sentence by identifying relationships between words, such as which word modifies or depends on another. This approach aims to enhance the identification and analysis of frames computationally. Examining the same dataset of news articles from Study 1, this computational analysis explores how dependency parsing can capture the semantic relations of words and understand relevant frames. We also compare the results of the unsupervised computational model (Study 2) with those obtained through manual data analysis (Study 1) to evaluate the effectiveness of the computational approach. The findings contribute to a better understanding of the role of the semantic relations-based computational approach in analyzing frames and offer insights into the potential of using dependency parsing as a methodological approach for framing analysis. Since this is one of the first studies to use semantic relations in analyzing frames, we pose research questions similar to those established in Study 1, consistent with the objectives of Study 2.

RQ1: How do right-leaning and left-leaning news media outlets use words and phrases to construct frames of the shooting at the Robb Elementary school in Texas?

RQ2: How do the right-leaning and left-leaning news media outlets frame the shooter, victims, and the mass shooting event at the Robb Elementary school in Texas?

10 Study 2 Method

10.1 Dataset

To answer the research questions, we analyzed the same news report dataset as Study 1. Parsed by the spaCy NLP language model, the dataset contains a total of 24,604 tokens, with 4,768 for CNN, 6,282 for Fox, 6,759 for NYT, and 6,795 for WSJ. We used the same dataset to compare the frames provided by the computational approach with those of the qualitative study.

10.2 Analysis

The data analysis involved the following seven steps:

1) Coreference resolution: As this study aims to identify modifying words centering on
three entities, the shooter, victims, and the event, we needed to identify and resolve the coreferences (e.g., "he" or "suspect" for the shooter) to capture all possible modifying words of both "references" and "co-references." To accomplish this, we applied NeuralCoref, an extension of the spaCy NLP library that provides coreference resolution.

2) Token extraction: We then applied the dependency parser of the spaCy language model, which parsed all the news reports and generated a dependency parse tree. This tree provides the syntactic structure of a sentence: nodes, such as heads (e.g., gunman) and children (e.g., suspect), represent words, and edges represent the semantic relationships between those heads and children. Each edge is labeled with a specific dependency relation, such as "amod" (adjective modifier).

3) Determining keywords: To capture all possible words that refer and co-refer to each of the three entities, we determined relevant keywords for each entity (e.g., Salvador, gunman, shooter). These keywords were determined based on Study 1 insights and then refined through manual checking of some tokens in the output produced in step 2 (see Table 2 for details).

4) Filtering heads and children: Based on the keywords, we filtered out all relevant "heads" and "children" of each entity, all their dependency relations, and the associated news outlets.

5) Determining and refining dependency relations: This step determines and refines useful dependency relations based on this study's purpose. We removed some dependency relations (e.g., cc, punct) that were not useful in making meanings in relation to the RQs, by manually checking the relations grouped in the output produced in step 4 (see Table 2 for details).

6) Filtering "framing components": We consider each pair of head and child with a certain dependency relation (e.g., the pair of the "shooting" keyword and the "deadly" child with the "amod" relation) as a framing component that provides a particular attribution to an entity. This step extracted all framing components for each entity by news outlet. (A minimal code sketch of steps 1-6 follows this list.)

7) Framing components to frames: Up to this step, we analyzed the data computationally using spaCy and Pandas, a popular data analysis library for Python. In this step, we followed both computational and manual qualitative explorations.

7a) Computational: We computationally grouped the framing components for each entity by dependency relations. To achieve this goal, we used BERT word embeddings and k-means clustering of the modifying words (also known as children).

7b) Qualitative: We inductively coded the modifying words and categorized them into groups following the research questions manually. Here, we consider a single framing component as a candidate for inclusion in multiple groups [Saldaña 2021], and triangulation and disconfirming evidence were utilized to ensure validity [Creswell and Báez 2016]. In both parts, each group is considered a frame. Through this process, a number of frames emerged with exemplars.
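The following is a minimal sketch of steps 1-6 under stated assumptions: NeuralCoref (which targets spaCy 2.x) handles coreference resolution, spaCy's dependency parser supplies the head-child edges, and pandas collects the framing components. The keyword and relation lists follow Table 2; the pipeline name, sample text, and outlet label are illustrative, not the authors' released code.

```python
import spacy
import neuralcoref
import pandas as pd

nlp = spacy.load("en_core_web_sm")   # spaCy 2.x pipeline (required by NeuralCoref)
neuralcoref.add_to_pipe(nlp)         # step 1: coreference resolution

SHOOTER_KEYWORDS = {"gunman", "gunmen", "man", "salvador", "ramos",
                    "shooter", "shooters", "suspect"}
USEFUL_RELATIONS = {"acl", "amod", "appos", "compound", "relcl",
                    "nsubj", "dobj", "nsubjpass"}

def framing_components(text, outlet):
    """Steps 2-6: parse, then keep head-child pairs for the shooter entity."""
    resolved = nlp(text)._.coref_resolved   # e.g., "He" -> "Salvador Ramos"
    doc = nlp(resolved)
    rows = []
    for token in doc:
        # token is the child; the labeled edge links it to its syntactic head.
        if (token.head.lemma_.lower() in SHOOTER_KEYWORDS
                and token.dep_ in USEFUL_RELATIONS):
            rows.append({"outlet": outlet, "head": token.head.text,
                         "child": token.text, "relation": token.dep_})
    return pd.DataFrame(rows)

print(framing_components("The alleged gunman opened fire.", outlet="FOX"))
# -> one framing component: head='gunman', child='alleged', relation='amod'
```

The same function, run with the Table 2 keyword and relation sets for the victims and the event, would yield framing components of the kind reported in Tables 3-5.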
11 Study 2 Findings

This section reports the findings of the qualitative analysis in step 7, following the computational analysis of steps 1 to 6. The results of the computational exploration in step 7 are not reported here, as we found that the findings from manual analysis outperformed them: when we examined the results manually, the clusters revealed through k-means clustering were not coherent or adequately insightful for understanding the nuances of frames. The findings of the qualitative analysis reveal that right-leaning and left-leaning news media outlets use different words to construct frames of the shooter, victims, and the mass shooting event at the Robb Elementary school in Texas, as presented in Tables 3, 4, and 5, respectively.

11.1 Shooter: "Accused" killer

The shooter was characterized with some words that create doubt over the shooter's killing action. Comparing the attributions used by right-leaning and left-leaning news outlets, it was found that the former used the words "alleged [shooter]" and "suspected [shooter]" more frequently than the latter. Furthermore, a right-leaning news outlet referred to the shooter as "accused [of shooting]," which was not used by the left-leaning outlets. These attribution differences suggest that the two media groups utilized different priorities in framing the shooter.

11.2 Shooter: Diversity of attributes

As depicted in Table 3, right-leaning news outlets used a greater variety of attributes to highlight various aspects of the shooter than left-leaning outlets. For example, right-leaning outlets used words such as "unhappy," "deceased," "civilized," and "active" to describe the shooter, which left-leaning outlets did not use. However, these words appear scattered and do not seem to form a coherent argument. This may be due to the small dataset used in this study. A larger dataset in future research could reveal more modifying words and categorize them into relevant groups, providing further insights into framing strategies.

11.3 Shooter & Victims: Teenager vs. Older

Right-leaning news media outlets tend to use words depicting the shooter as comparatively younger than left-leaning outlets do. For instance, words used by the right-leaning outlets to attribute to the shooter include "teenage," "young," and "student," which left-leaning outlets did not mention. Another example is that the shooter was identified as "18-year-old" 26 times in the left-leaning outlets and only 10 times in the right-leaning ones (see Table 3). In contrast, the victims were attributed the word "young," an adjective modifier, five times by the left-leaning outlets and zero times by the right-leaning ones. Overall, right-leaning outlets frame the shooter as younger and the victims as older, and the scenario is the opposite in left-leaning outlets (see Table 4).

11.4 Victims: Our Kids vs. Your Kids

There is not much difference between left-leaning and right-leaning news outlets in using personal pronouns to modify the victims (see Table 4). Pronouns addressing victims are important for perceiving how the news media outlets stand with them. The left-leaning outlets still used a greater variety of personal words, such as my (2), our (2), and your (2), while the right-leaning ones used two such words, her (2) and our (3).

11.5 Shooting Event: Low vs. High Severity

To describe the shooting, left-leaning news media outlets tend to use more severe and emotionally charged words, such as deadliest (6), deadly (6), horrific (1), horrifying (1), heinous (1),
tragic (1), and fatally [shot] (1), which frames the issue as a more significant problem. Such words used by right-leaning outlets include deadly (4), deadliest (3), awful (4), horrific (3), senseless (2), and devastated (1). This shows that the right-leaning outlets use less intense words like "senseless" and "awful," which suggests a less severe framing of the issue (see Table 5). Overall, the mass shooting framing is constructed by the language deployed by news media outlets, and the severity of the framing can differ based on the political leaning of the outlet, which is aligned with the framing aspects suggested by Entman [1993].

12 Study 2 Discussion

Study 2 investigates how news outlets frame the shooter, victims, and the Texas school shooting, applying a new computational approach based on semantic relations.

12.1 Attribution of Responsibility

Framing the shooter as "young" or "older" can have significant implications for how people perceive the shooting and the level of responsibility attributed to the shooter [Entman 1993]. The use of the "young" attribute by right-leaning outlets could soften the shooter's image and create a more sympathetic portrayal, thereby reducing the level of responsibility attributed to him [Jones and Davis 1965]. On the other hand, the left-leaning outlets' focus on the victims' youth could create a greater sense of tragedy and urgency and, therefore, a higher level of responsibility attributed to the shooter [Decety et al. 2012]. As per attribution theory, people tend to attribute a person's behavior to either internal (dispositional) or external (situational) factors [Heider 2013; Kelley 1973]. In this case, the different framings of the shooter and victims by the news outlets might shape how people attribute responsibility for the shooting. The framing differences among news media outlets might have been shaped more by established media routines and practices [Reese and Shoemaker 2018] than by the specifics of this particular mass shooting event.

12.2 Taking Actions

The news media outlets' different approaches to highlighting selected "severe" words might have significant implications for how the public perceives the incident and "choose[s] to act upon" the problem [Entman 1993, p. 54]. The left-leaning news outlets' higher salience on words like "deadly" and "deadliest" might activate the "amygdala" node in people's brains, potentially leading them to take actions like protest and advocacy [Barry et al. 2013; Phelps 2006]. At the same time, by placing higher salience on the words "accused" and "alleged" regarding the shooter's act, the right-leaning news outlets, compared to left-leaning ones, introduced doubt into people's perception of Ramos's mass shooting. Such higher salience on these words in right-leaning outlets seems to have weakened people's perception of the shooter's dispositional factors in committing the offense [Kelley 1973].

12.3 Highlight and Hide

Prioritizing some meanings or words over others, as discussed above, conforms to the framing technique of highlighting and hiding, as proposed by Entman [1993] and Fairhurst and Sarr [1996]. Both right-leaning and left-leaning news outlets utilized distinct ways of framing the shooter, victims, and the event despite some common depictions between the groups. Overall, the left-leaning outlets attempt to elicit people's sympathy for the "victims" while
right-leaning ones sympathize with the shooter, as evidenced above.

13 Integrated Discussion of Both Studies

This research's primary objective is to introduce and explore a new approach to computational framing analysis. Our initial qualitative inquiry in Study 1 revealed in-depth insights into the role of semantic relations in frame construction and suggested that dependency parsing, a computational method, could potentially serve as a practical unsupervised approach to frame analysis. Based on Study 1's findings and recommendations, Study 2 applied dependency parsing to the same dataset as an approach to computationally analyzing frames. A comparison of the findings of both studies demonstrates the potential of this proposed semantic relations-based approach to automate the identification and analysis of frames in large datasets. As the Study 2 discussion suggests, its findings on framing the shooter, victims, and the event are well interpretable with relevant theoretical frameworks, and the interpretations are mostly aligned with those of Study 1 and prior gun violence research. With that, this article proposes a novel computational framing analysis approach based on dependency parsing named "Semantic Relations-based Unsupervised Framing Analysis" (SUFA).

14 Semantic Relations-based Unsupervised Framing Analysis (SUFA)

14.1 Novelty of SUFA

SUFA is novel in analyzing frames in several ways. First, it is based on semantic relations, extending beyond the bag-of-words approach utilized by most existing unsupervised computational framing analysis methods, such as topic modeling. Second, as discussed above, a few studies have employed semantic relations in framing analysis [e.g., van Atteveldt et al. 2013; Ziems and Yang 2021]. However, they did not present it as an unsupervised method. In contrast to these studies, our approach demonstrates its distinction as an unsupervised method: researchers do not need to define a frame in advance to explore frames within a dataset. Third, our approach provides flexibility in using qualitative manual coding or computational tools like word embeddings and k-means clustering in step 7 of its data analysis process. In this sense, it is a mixed-method approach that prior studies did not include.

14.2 Data Analysis in SUFA

The procedure for analyzing data in SUFA is outlined in seven steps in the Study 2 method section. To effectively apply SUFA, we recommend following these steps along with a few additional considerations. If news media outlets have specific identities, such as left- or right-leaning, we suggest labeling these identities in the data. Steps 3 and 5 require human intelligence. For example, step 3 involves providing keywords for each entity, which can be informed by domain knowledge, light manual exploration of the data by the researchers, or the assistance of WordNet [Miller 1995]. In step 5, researchers may need to manually review the output to identify useful semantic relations for analysis. For step 7, either qualitative or computational analyses can be used, depending on the research goals and the number of modifying words derived from the dataset. The computational analysis, such as word embedding and k-means clustering, may generally be more appropriate, as SUFA is meant to analyze large datasets; a brief sketch of this option follows.
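To make the computational option in step 7 concrete, here is a minimal sketch, assuming a bert-base-uncased encoder, mean pooling over subword states, and three clusters; the word list is illustrative, and the number of clusters would need to be tuned to the data rather than fixed in advance.

```python
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.cluster import KMeans

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# Modifying words ("children") collected in step 6; illustrative sample.
words = ["alleged", "suspected", "deadly", "deadliest",
         "horrific", "senseless", "young", "teenage"]

with torch.no_grad():
    batch = tokenizer(words, padding=True, return_tensors="pt")
    hidden = model(**batch).last_hidden_state
    # Mean-pool the subword states to get one embedding per word.
    mask = batch["attention_mask"].unsqueeze(-1)
    embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)

# Group the embeddings; each cluster is a candidate frame (step 7a).
labels = KMeans(n_clusters=3, n_init=10).fit_predict(embeddings.numpy())
for word, label in sorted(zip(words, labels), key=lambda pair: pair[1]):
    print(label, word)
```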
However, if the set of modifying words is small enough to manage manually, a qualitative analysis might be more suitable for step 7, as research suggests that human intelligence often outperforms machines in tasks that require contextual interpretation and subjective judgment [Lazer et al. 2009].

14.3 Strengths

SUFA is an unsupervised approach that does not require any prior labeling of data or predefinition of frames. Instead, it uses an inductive approach to explore and group attributions together to reveal framing components or frames. One advantage of SUFA is that it allows for the flexibility of utilizing both human intelligence and computational techniques to surface frames and their interpretations, particularly in cases where the set of modifying words is small enough to be managed manually. Moreover, SUFA can inductively analyze frames in large datasets in an unsupervised manner.

14.4 Weaknesses

The approach requires manual input in determining relevant keywords (step 3) and semantic relations (step 5), which can be time-consuming and subjective. It is limited to analyzing frames centered around entities, such as an individual (e.g., a shooter), a group of people or community (e.g., victims), and an incident or phenomenon (e.g., a shooting event). Since this study focuses on exploring emphasis frames through a semantic relations-based approach, it is better suited for analyzing emphasis frames [D'Angelo 2017] rather than equivalency frames [Kahneman and Tversky 1984]. Additionally, like other computational framing analysis approaches, this study only considers words and phrases, while other framing components like metaphor, placement, and visual elements are not analyzed. During coreference resolution (step 1), some words useful as framing components could be replaced with co-references (e.g., replacing the word "gunman" with "Salvador"), which might lead to the loss of some words with important nuances.

15 Conclusion

This research introduces a new computational approach called Semantic Relations-based Unsupervised Framing Analysis (SUFA), which utilizes semantic relations to analyze news frames. While the method has some limitations, such as the need for manual input and its focus on emphasis frames, it provides a useful tool for exploring framing components in news media coverage. The mixed-method approach of SUFA offers researchers the flexibility to use entirely computational tools or to couple them with qualitative manual coding, where applicable, for data analysis. Overall, SUFA is a valuable addition to the field of computational framing analysis, enabling more comprehensive and nuanced analysis of news media frames.

16 Limitations and Future Research

SUFA was developed and tested on a single dataset of news reports on gun violence in Study 1 and Study 2. However, the approach can be applied to other domains with the provision of relevant keywords and relations. Further research can be conducted to explore the applicability of this method in other domains and to improve its performance. Currently, SUFA only considers words when analyzing frames. However, computational framing analysis needs to include other framing components such as metaphor, visual content, placement, differences between
headline and body texts, and exemplars. Such advancements will provide a more comprehensive understanding of framing effects in news media.

References

Mohammad Ali and Naeemul Hassan. 2022. A survey of computational framing analysis approaches. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. 9335–9348.
Philemon Bantimaroudis and Hyun Ban. 2001. Covering the crisis in Somalia: Framing choices by the New York Times and the Manchester Guardian. Framing public life: Perspectives on media and our understanding of the social world (2001), 175–184.
Colleen L Barry, Emma E McGinty, Jon S Vernick, and Daniel W Webster. 2013. After Newtown—public opinion on gun policy and mental illness. New England Journal of Medicine 368, 12 (2013), 1077–1081.
David M Blei. 2012. Probabilistic topic models. Commun. ACM 55, 4 (2012), 77–84.
Jan Boesman and Baldwin Van Gorp. 2018. Driving the frame: How news values, news pegs, and story angles guide journalistic frame building. In Doing news framing analysis II. Routledge, 112–134.
Bonnie S Brennen. 2017. Qualitative research methods for media studies. Routledge.
Stephen Howard Browne. 2009. Close textual analysis: Approaches and applications. In Rhetorical criticism: Perspectives in action, J. Kuypers (Ed.). Lexington Books, 63–76.
Bjorn Burscher, Rens Vliegenthart, and Claes H de Vreese. 2016. Frames beyond words: Applying cluster and sentiment analysis to news coverage of the nuclear power issue. Social Science Computer Review 34, 5 (2016), 530–545.
Dallas Card, Amber Boydstun, Justin H Gross, Philip Resnik, and Noah A Smith. 2015. The media frames corpus: Annotations of frames across issues. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). 438–444.
Media Bias Fact Check. n.d. MediaBiasFactCheck.com as a Tool for Lateral Reading. https://mediabiasfactcheck.com/
Don Clyde and Shauneen Miranda. 2022. Biden signs gun safety bill into law. https://www.npr.org/2022/06/25/1107626030/biden-signs-gun-safety-law National Public Radio.
Allan M Collins and Elizabeth F Loftus. 1975. A spreading-activation theory of semantic processing. Psychological Review 82, 6 (1975), 407.
John W Creswell and Johanna Creswell Báez. 2016. 30 essential skills for the qualitative researcher. Sage Publications.
Paul D'Angelo. 2017. Framing: Media frames. In International encyclopedia of media effects, Patrick Rössler (Ed.). Wiley-Blackwell, NJ, 634–644.
Paul D'Angelo (Ed.). 2018. Doing news framing analysis II. Routledge.
Jean Decety, Kalina J Michalska, and Katherine D Kinzler. 2012. The contribution of emotion and cognition to moral sensitivity: A neurodevelopmental study. Cerebral Cortex 22, 1 (2012), 209–220.
Donna L Dickerson. 2001. Framing "political correctness": The New York Times' tale of two professors. Framing public life: Perspectives on media and our understanding of the social world (2001), 163–174.
Paul DiMaggio, Manish Nag, and David Blei. 2013. Exploiting affinities between topic modeling and the sociological perspective on culture: Application to newspaper coverage of US government arts funding. Poetics 41, 6 (2013), 570–606.
Nadine El-Bawab. 2022. America's gun violence problem by the numbers. https://abcnews.go.com/US/americas-gun-violence-problem-numbers/story?id=85136358 ABC News.
Robert M Entman. 1993. Framing: Toward clarification of a fractured paradigm. Journal of Communication 43, 4 (1993), 51–58.
Gail Fairhurst and Robert Sarr. 1996. The art of framing. Jossey-Bass, San Francisco.
Gail T Fairhurst. 2005. Reframing the art of framing: Problems and prospects for leadership. Leadership 1, 2 (2005), 165–185.
William A Gamson and Andre Modigliani. 1989. Media discourse and public opinion on nuclear power: A constructionist approach. American Journal of Sociology 95, 1 (1989), 1–37.
Fabrizio Gilardi, Charles R Shipan, and Bruno Wüest. 2021. Policy diffusion: The issue-definition stage. American Journal of Political Science 65, 1 (2021), 21–35.
Erving Goffman. 1974. Frame analysis: An essay on the organization of experience. Harvard University Press.
Esther Greussing and Hajo G Boomgaarden. 2017. Shifting the refugee narrative? An automated frame analysis of Europe's 2015 refugee crisis. Journal of Ethnic and Migration Studies 43, 11 (2017), 1749–1774.
Fritz Heider. 2013. The psychology of interpersonal relations. Psychology Press.
James K Hertog and Douglas M McLeod. 2001. A multiperspectival approach to framing analysis: A field guide. In Framing public life. Routledge, 157–178.
Julia Jacobo and Nadine El-Bawab. 2022. Timeline: How the shooting at a Texas elementary school unfolded. https://abcnews.go.com/US/timeline-shooting-texas-elementary-school-unfolded/story?id=84966910 ABC News.
Edward E Jones and Keith E Davis. 1965. From acts to dispositions: The attribution process in person perception. In Advances in experimental social psychology, Vol. 2. Elsevier, 219–266.
Daniel Kahneman and Amos Tversky. 1984. Choices, values, and frames. American Psychologist 39, 4 (1984), 341.
Harold H Kelley. 1973. The processes of causal attribution. American Psychologist 28, 2 (1973), 107.
Douglas Kellner. 2025. The Uvalde, Texas school shooting massacre. Educational Philosophy and Theory 57, 2 (2025), 91–95.
Sotiris B Kotsiantis, Ioannis Zaharakis, P Pintelas, et al. 2007. Supervised machine learning: A review of classification techniques. Emerging Artificial Intelligence Applications in Computer Engineering 160, 1 (2007), 3–24.
Haewoon Kwak, Jisun An, Elise Jing, and Yong-Yeol Ahn. 2021. FrameAxis: Characterizing microframe bias and intensity with word embedding. PeerJ Computer Science 7 (2021), e644.
David Lazer, Alex Pentland, Lada Adamic, Sinan Aral, Albert-László Barabási, Devon Brewer, Nicholas Christakis, Noshir Contractor, James Fowler, Myron Gutmann, et al. 2009. Computational social science. Science 323, 5915 (2009), 721–723.
Irwin P Levin, Sandra L Schneider, and Gary J Gaeth. 1998. All frames are not created equal: A typology and critical analysis of framing effects. Organizational Behavior and Human Decision Processes 76, 2 (1998), 149–188.
Siyi Liu, Lei Guo, Kate Mays, Margrit Betke, and Derry Tanti Wijaya. 2019. Detecting frames in news headlines and its application to analyzing news framing trends surrounding US gun violence. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL). 504–514.
Abby Livingston. 2022. Uvalde school shooting: 'Do something!' Biden is urged as he leaves Uvalde church. https://www.texastribune.org/2022/05/29/biden-uvalde-visit/ The Texas Tribune.
Emma E McGinty, Daniel W Webster, Marian Jarlenski, and Colleen L Barry. 2014. News media framing of serious mental illness and gun violence in the United States, 1997-2012. American Journal of Public Health 104, 3 (2014), 406–413.
Alan McKee. 2001. A beginner's guide to textual analysis. Metro Magazine: Media & Education Magazine 127/128 (2001), 138–149.
Brooke W McKeever, Minhee Choi, Denetra Walker, and Robert McKeever. 2022. Gun violence as a public health issue: Media advocacy, framing and implications for communication. Newspaper Research Journal 43, 2 (2022), 138–154.
Saul McLeod. 2010. Attribution theory. https://www.simplypsychology.org/attribution-theory.html Simply Psychology.
Ad Fontes Media. n.d. https://adfontesmedia.com/
George A Miller. 1995. WordNet: A lexical database for English. Commun. ACM 38, 11 (1995), 39–41.
Aysel Morin. 2016. Framing terror: The strategies newspapers use to frame an act as terror or crime. Journalism & Mass Communication Quarterly 93, 4 (2016), 986–1005.
Viet-An Nguyen. 2015. Guided probabilistic topic models for agenda-setting and framing. Ph.D. Dissertation. University of Maryland, College Park.
Tom Nicholls and Pepper D Culpepper. 2021. Computational identification of media frames: Strengths, weaknesses, and opportunities. Political Communication 38, 1-2 (2021), 159–181.
Katie Odhner. 2022. MediaBiasFactCheck.com as a Tool for Lateral Reading. https://newsliteracy.psu.edu/news/mediabiasfactcheck-com-as-a-tool-for-lateral-reading The News Literacy Institute at Pennsylvania State University.
Zhongdang Pan and Gerald M Kosicki. 1993. Framing analysis: An approach to news discourse. Political Communication 10, 1 (1993), 55–75.
Josh Peck and J. David Goodman. 2022. Uvalde elementary school shooting: Shooting at elementary school devastates community in South Texas. https://www.nytimes.com/live/2022/05/24/us/shooting-robb-elementary-uvalde#shooting-texas-elementary-school The New York Times.
Elizabeth A Phelps. 2006. Emotion and cognition: Insights from studies of the human amygdala. Annual Review of Psychology 57, 1 (2006), 27–53.
Stephen D Reese, Oscar H Gandy Jr, and August E Grant. 2001. Framing public life: Perspectives on media and our understanding of the social world. Routledge.
Stephen D Reese and Pamela J Shoemaker. 2018. A media sociology for the networked public sphere: The hierarchy of influences model. In Advances in foundational mass communication theories. Routledge, 96–117.
Johnny Saldaña. 2021. The coding manual for qualitative researchers. SAGE Publications Ltd.
Margarete Sandelowski. 1995. Sample size in qualitative research. Research in Nursing & Health 18, 2 (1995), 179–183.
Lisa Sanderink. 2020. Shattered frames in global energy governance: Exploring fragmented interpretations among renewable energy institutions. Energy Research & Social Science 61 (2020), 101355.
Edgar Sandoval. 2023. A year after the Uvalde massacre: Did anything change? https://www.nytimes.com/2023/05/24/us/uvalde-shooting-fallout.html The New York Times.
Mass Shooting. n.d. https://www.gunviolencearchive.org/mass-shooting?sort=desc&order=Victims%20Killed The Gun Violence Archive.
Mihai D Sturdza et al. 2018. Automated framing analysis: A rule-based system for news media text. Journal of Media Research-Revista de Studii Media 11, 32 (2018), 94–110.
JW Tankard, L Hendrickson, J Silberman, K Bliss, and S Ghanem. 1991. Media frames: Approaches to conceptualization and measurement. Paper presented to the annual meeting of the Association for Education in Journalism and Mass Communication, Boston, Massachusetts.
The Biden White House Archive. 2022. Remarks by President Biden on the school shooting in Uvalde, Texas. https://bidenwhitehouse.archives.gov/briefing-room/speeches-remarks/2022/05/24/remarks-by-president-biden-on-the-school-shooting-in-uvalde-texas/
Wouter Van Atteveldt and Tai-Quan Peng. 2018. When communication meets computation: Opportunities, challenges, and pitfalls in computational communication science. Communication Methods and Measures 12, 2-3 (2018), 81–92.
Wouter van Atteveldt, Tamir Sheafer, and Shaul Shenhav. 2013. Automatically extracting frames from media content using syntactic analysis. In Proceedings of the 5th Annual ACM Web Science Conference. 423–430.
Teun A Van Dijk. 1985. Structures of news in the press. Discourse and Communication: New Approaches to the Analysis of Mass Media Discourse and Communication 10 (1985), 69.
Dror Walter and Yotam Ophir. 2019. News frame analysis: An inductive mixed-method computational approach. Communication Methods and Measures 13, 4 (2019), 248–266.
Caleb Ziems and Diyi Yang. 2021. To protect and to serve? Analyzing entity-centric framing of police violence. arXiv preprint arXiv:2109.05325 (2021).

A Appendix

Table 1: Words, Phrases, and Their Relations in News Reports of Two Media Groups: NYT and CNN vs. WSJ and FOX.

Salvador modifiers
  NYT, CNN: 18-year-old, 18-year-old man, Armored, Gunman, He, Shooter, Suspect
  WSJ, FOX: 18-year-old, 18-year-old man, A resident of Uvalde, Active shooter, Alleged gunman, Alleged shooter, Alone, Former student at Uvalde High School, Gunman, He, Mass shooter, Now-deceased, Now-deceased suspect, Shooter, Suspect, Suspected lone gunman, Suspected shooter, Suspected Uvalde school shooter, Texas school shooting suspect

Salvador history
  NYT, CNN: (none)
  WSJ, FOX: Broken family, Hostile, Unsettled classmates, Violent, was "flashing red"

Gun
  NYT, CNN: A long rifle, Assault rifle, Semiautomatic rifle, Semiautomatic weapons, Semiautomatic firearms, Tactical vest, With a rifle
  WSJ, FOX: AR-platform rifle, Handgun, Legally purchased, Two rifles

Verb
  NYT, CNN: Burst in and killed, Came in and opened fire, [in custody], Kills, Left, Left dead, Left killing, Opened fire, Shoots, Shot and killed, Shot dead, Stormed into
  WSJ, FOX: Accused of shooting, Allegedly committed by, Broke into the school, Claimed the lives . . ., Entering [the school], Shot, Gunned down, Is accused of, Kills, Left, Left dead, Left killing, Opened fire, Walking into [school]

Verb modifier
  NYT, CNN: After . . . [another event], Fatally, Horrifically, Incomprehensibly
  WSJ, FOX: Fatally

Victim
  NYT, CNN: 18, 19, Adults, Age between 6 and 7 years old, At least, Children, Elementary school children, Elementary school students, Kids, One, School children, Students, Teachers, Two, Victims
  WSJ, FOX: 14, 19, Adults, At least, Children, One, Students, Teacher(s), Two, Victims, Xavier Lopez

Event modifier
  NYT, CNN: 30th K-12 shooting, 6-year-old son, Aftermath, Attack, Deadliest mass shooting, Deadly shooting, Devastating, Elementary school shooting, Horrific mass murder, Mass school shooting, Mass shooting, Massacre, Nation reeling, School massacre, School shooting, Second deadliest, Shakes a nation, Slaughter, Slayings, Stealing their lives, Terrorism, Terrorist attack, Tragedy, Tragic, Violent, Worst school shooting
  WSJ, FOX: Aftermath, Attack, Deadliest, Deadliest shooting, Deadly, Deadly [shooting], Devastated the town, Elementary school shooting, Horrific shooting, Horrific tragedy, Later discovered to be the shooting, Local elementary-school shooting, Mass casualty incident, Mass murder, Mass shooting, Massacre, Murders, School shooting, Senseless crime, Shocked the country, Shooting, Texas elementary school shooting, Texas mass shooting, Texas school shooting, Third most deadly, Tragedy

Table 2: Keywords and Dependency Relations Used for the Shooter, Victims, and the Event.

Shooter
  Keywords: 'gunman', 'gunmen', 'man', 'Salvador', 'Ramos', 'shooter', 'shooters', and 'suspect'
  Relations: 'acl', 'amod', 'appos', 'compound', 'relcl', 'nsubj', 'dobj', and 'nsubjpass'
Victims
  Keywords: 'adult', 'adults', 'child', 'children', 'kids', 'schoolchildren', 'student', 'students', 'teacher', 'teachers', 'victim', and 'victims'
  Relations: 'acl', 'compound', 'nummod', 'relcl', 'amod', 'dobj', 'nsubj', 'nsubjpass', and 'poss'
Event
  Keywords: 'shooting', 'shootings', 'attack', 'massacre', 'event', 'tragedy', 'terrorism', 'slaughter', 'crime', 'slayings', 'murder', and 'aftermath'
  Relations: 'amod', 'advmod', 'compound', 'nummod', and 'relcl'

Table 3: Framing Components (with Frequencies) Deployed by Each News Media Outlet to Attribute the SHOOTER, Grouped under Different Associated Semantic Relations.

Left (CNN): acl: clad (2), identified (2); amod: active (3), old (21), deranged (1), many (1), other (1), suspected (1), alleged (1), grandmother (1); appos: Ramos (2); compound: Salvador (7), mass (1)
Left-center (NYT): acl: approaching (2), barricaded (2), driven (1); amod: shooting (1), angry (1), armed (2), old (5); compound: shooting (2), Salvador (2)
Right-center (WSJ): acl: named (1); amod: grandmother (1), old (8), teenage (1), unhappy (1), young (1), deceased (2), civilized (2), active (4); appos: himself (1), student (1), resident (1), old (1), man (1), birthday (1), Ramos (2); compound: school (1), mass (4), Salvador (12)
Right (FOX): acl: accused (1), identified (1); amod: active (1), alleged (2), bureau-defined (1), deceased (4), lone (2), old (1), suspected (4), upstate (1), red (1); appos: resident (2), ones (1), gunman (1), Romas (1), Ramos (2), 18 (1); compound: suspect (1), York (1), resident (1), mass (1), Texas (1), Salvador (14), Ramos (1), school (1)

Table 4: Framing Components (with Frequencies) Used by Each News Media Outlet to Attribute VICTIMS, Grouped under Different Associated Semantic Relations.

Left (CNN): acl: aged (1); amod: local (1), young (2); compound: Parents (1), parents (1), school (1); nummod: 13 (3), 14 (4), 18 (1), 19 (13), 20 (2), 26 (2), 535 (1), Eighteen (1), Nineteen (5), Two (2), one (3), two (17); relcl: treated (1)
Left-center (NYT): acl: killed (1); amod: dead (1), other (2), several (2), young (3); compound: Hook (1), Uvalde (1), daughter (1), grade (1), parents (2), roll (1), school (7); nummod: 14 (2), 18 (1), 19 (14), 20 (3), one (1), two (12); poss: America (1), Her (1), my (2), our (2), your (2); relcl: killed (1)
Right-center (WSJ): acl: celebrating (1), killed (1); amod: former (1), other (1), small (1); compound: Elementary (1), Robb (1), Trump (1), adult (1), mother (1); nummod: 16 (1), 17 (1), 19 (15), 20 (1), 21 (3), four (1), two (13); poss: her (2)
Right (FOX): amod: dead (1), innocent (1), little (1), ofentry (1), old (1); compound: School (1), asa (1), center (1), school (1); nummod: 14 (2), 18 (3), 19 (8), 4,000 (1), Two (1), eight (1), one (3), two (7); poss: our (3); relcl: missing (1)

Table 5: Framing Components (with Frequencies) Used by Each News Media Outlet to Attribute the EVENT, Grouped under Different Associated Semantic Relations.

Left (CNN): advmod: ago (1), fatally (1), least (2); amod: 30th (2), American (2), Deadly (3), deadliest (2), deadly (2), heinous (1), horrific (1), previous (1), second (2), tragic (1); compound: Hook (2), mass (5), school (10); nummod: 39 (2), three (1); relcl: happened (1), left (4)
Left-center (NYT): advmod: ago (3), far (1); amod: Latest (1), deadliest (6), deadly (1), horrifying (1), immediate (1), mass (7), next (1), previous (1), recent (1), reported (1), such (1); compound: Buffalo (1), Newtown (1), School (2), mass (9), school (14); nummod: 2012 (1), 215 (1), 693 (1), two (1); relcl: killed (3), say (1), took (1)
Right-center (WSJ): advmod: away (2); amod: awful (4), deadliest (1), horrific (1), latest (2), local (1), mass (2), new (2), next (1); compound: Mass (1), mass (20), school (8); nummod: 2011 (1), claimed (1), died (1), have (1), is (1), rises (1), targeted (1), tolerated (1)
Right (FOX): amod: deadliest (2), deadly (4), fourth (1), horrific (2), last (1), major (1), mass (1), recent (1), senseless (2); compound: Parkland (1), Texas (1), Tuesday's mass (1), mass (16), prevent mass (1), school (9); nummod: 20 (1), 2018 (1); relcl: devastated (1), had (1), happened (1), left (2)
MIRB: Mathematical Information Retrieval Benchmark

Haocheng Ju1, Bin Dong2,3,4∗
1School of Mathematical Sciences, Peking University
2Beijing International Center for Mathematical Research and the New Cornerstone Science Laboratory, Peking University
3Center for Machine Learning Research, Peking University
4Center for Intelligent Computing, Great Bay Institute for Advanced Study, Great Bay University
hcju@pku.edu.cn dongbin@math.pku.edu.cn

Abstract

Mathematical Information Retrieval (MIR) is the task of retrieving information from mathematical documents and plays a key role in various applications, including theorem search in mathematical libraries, answer retrieval on math forums, and premise selection in automated theorem proving. However, a unified benchmark for evaluating these diverse retrieval tasks has been lacking. In this paper, we introduce MIRB (Mathematical Information Retrieval Benchmark) to assess the MIR capabilities of retrieval models. MIRB includes four tasks—semantic statement retrieval, question-answer retrieval, premise retrieval, and formula retrieval—spanning a total of 12 datasets. We evaluate 13 retrieval models on this benchmark and analyze the challenges inherent to MIR. We hope that MIRB provides a comprehensive framework for evaluating MIR systems and helps advance the development of more effective retrieval models tailored to the mathematical domain.2

1 Introduction

Mathematical Information Retrieval (MIR) [2, 44] focuses on retrieving mathematical content such as definitions, theorems, and proofs from a mathematical corpus. MIR has many practical applications. For instance, mathematicians working with Lean [4, 3] often need to verify whether a particular theorem exists in mathlib4, Lean's mathematical library. In this case, the MIR query can be either a natural language or formal statement, and the corpus consists of declarations in mathlib4. Another example is students searching for similar questions or answers on Mathematics Stack Exchange to help them solve problems. Here, the user's question serves as the query, and the corpus includes all question and answer posts on the forum. MIR is also an essential component in automated theorem proving, in both natural and formal languages. For example, NaturalProver [38] is a natural language theorem prover that uses stepwise beam search to sample proofs, retrieving multiple references from a corpus of ProofWiki definitions and theorems to support reliable tactic generation. Similarly, ReProver [41] is a formal theorem prover for Lean that performs best-first search; at each step, it retrieves premises from mathlib4 using the current proof state as the query, and feeds the retrieved premises into a tactic generator. This retrieval step is often referred to as premise retrieval. In summary, MIR plays a crucial role in a wide range of mathematical applications.

∗Corresponding author
2Our code and data are available at https://github.com/j991222/mirb and https://huggingface.co/collections/hcju/mirb-6827001711765454f58c5a76

MIR differs from standard text retrieval in that both queries and documents often contain mathematical formulas. These formulas are highly structured, and their semantic meaning typically remains unchanged under variable substitution, even though their textual representations differ. This structural property poses unique challenges for retrieval models, which must adapt to the specific characteristics of mathematical language.
Due to the importance of MIR, several competitions have been organized to evaluate different MIR systems. For example, ARQMath [45, 21, 20], held at the Conference and Labs of
the Evaluation Forum (CLEF) from 2020 to 2022, includes two main tasks: answer retrieval and formula retrieval, with both queries and corpora sourced from Mathematics Stack Exchange. Similarly, the NTCIR series [42] features a formula+keyword search task over corpora drawn from arXiv and Wikipedia. However, existing MIR datasets are limited in both task diversity and domain coverage, and are scattered across different sources. To the best of our knowledge, there is no unified benchmark that consolidates all major MIR tasks and datasets for a comprehensive evaluation of retrieval models. To address this gap, we introduce MIRB (Mathematical Information Retrieval Benchmark), a comprehensive benchmark designed to assess retrieval models on a wide range of MIR tasks across various domains and languages. MIRB covers four main tasks: Semantic Statement Retrieval, Question-Answer Retrieval, Premise Retrieval, and Formula Retrieval, across 12 datasets in diverse mathematical domains and languages. We evaluate 13 retrieval models on this benchmark and observe that all models perform worse on reasoning-based tasks compared to semantic-based tasks. Moreover, applying cross-encoder rerankers generally leads to performance degradation. These results highlight that current retrieval models still have much room for improvement in handling MIR tasks.

The rest of the paper is organized as follows. We review the related work on retrieval benchmarks, retrieval models, and mathematical information retrieval in Section 2. Section 3 describes the tasks included in MIRB and the details of the dataset construction process. Experimental results of the evaluated retrieval models are presented in Section 4, and the paper concludes in Section 5.

2 Related Work

2.1 Retrieval Benchmarks

Existing retrieval benchmarks can generally be divided into two categories: (1) general-purpose benchmarks that span diverse domains and tasks, such as BEIR [34], MTEB [26], MMTEB [5], C-MTEB [40] and MAIR [33]; and (2) domain-specific or task-specific benchmarks that focus on a particular domain or retrieval task. For example, ChemTEB [13] includes a retrieval benchmark for chemistry, while CodeSearchNet [10], CosQA [9], XcodeEval [14], and CoIR [17] target code retrieval. LONGEMBED [51] is designed for long-context retrieval. The benchmarks most closely related to our work are RAR-b [39] and BRIGHT [32], both of which include reasoning-based retrieval datasets covering commonsense reasoning, mathematics, and code. In RAR-b's question-answer retrieval task, relevant documents directly answer the query, while BRIGHT focuses on retrieving documents that either assist in answering the query or use the same theorem as the one in the query. Our work differs from these benchmarks in three aspects: (1) we focus exclusively on the mathematics domain; (2) we include both semantic retrieval tasks (Semantic Statement Retrieval, Formula Retrieval) and reasoning-based tasks (Question-Answer Retrieval, Premise Retrieval), whereas RAR-b and BRIGHT focus solely on reasoning-based retrieval; (3) within reasoning-based retrieval, we include the task of premise retrieval in both natural and formal language, which is not covered in either RAR-b or BRIGHT.

2.2 Retrieval Models

The development of retrieval models has advanced beyond the classic BM25 algorithm [28, 29], which relies on sparse vector representations and measures lexical similarity between queries and documents.
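For reference, one standard form of the BM25 scoring function is the Robertson/Okapi formulation below; the cited works describe several variants, so the constants and exact shape differ across implementations:

```latex
\mathrm{BM25}(q, d) \;=\; \sum_{t \in q} \mathrm{IDF}(t)\,
\frac{f(t, d)\,(k_1 + 1)}
     {f(t, d) + k_1\left(1 - b + b\,\frac{|d|}{\mathrm{avgdl}}\right)}
```

Here $f(t, d)$ is the frequency of term $t$ in document $d$, $|d|$ is the document length, $\mathrm{avgdl}$ is the average document length in the corpus, and $k_1$ and $b$ are free parameters.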
Modern approaches leverage deep neural networks to encode queries and documents into dense vectors, enabling relevance assessment based on semantic similarity. A widely adopted training paradigm for these dense retrieval models [27, 35, 31, 40] involves pretraining on large-scale unsupervised data using contrastive loss, followed by fine-tuning on smaller labeled datasets. In terms of architecture, earlier models commonly employed bidirectional encoders, but recent studies [36, 22, 23, 15] have demonstrated that decoder-only language models can achieve superior performance. Moreover, the training data for retrieval models can be augmented with synthetic data generated by large language models [36, 25, 16].

[Figure 1: Overview of tasks and datasets in MIRB. Four tasks (Semantic Statement Retrieval, Question-Answer Retrieval, Premise Retrieval, Formula Retrieval) spanning 12 datasets: Informalized Mathlib4 Retrieval, MSE Dup. Question Retrieval, MO Dup. Question Retrieval, ARQMath-Task-1, ProofWiki, Stacks, NaturalProofs, LeanDojo, MAPL, HolStep, NTCIR-WFB, and ARQMath-Task-2.]
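As a concrete illustration of the dense retrieval paradigm described above, the following minimal sketch encodes a query and candidate documents with an off-the-shelf embedding model and ranks by cosine similarity; the model name and texts are illustrative, not the models benchmarked in this paper.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative encoder

query = "fixed field of the subgroup fixing an intermediate field"
docs = [
    "For a Galois field extension E/F with an intermediate field K, "
    "the fixed field of the subgroup fixing K is equal to K.",
    "BM25 ranks documents by term frequency and inverse document frequency.",
]

# Encode both sides into dense vectors and score by cosine similarity.
q_emb = model.encode(query, convert_to_tensor=True)
d_emb = model.encode(docs, convert_to_tensor=True)
scores = util.cos_sim(q_emb, d_emb)  # shape: 1 x len(docs)

print(scores.tolist(), "-> best doc index:", int(scores.argmax()))
```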
2.3 Mathematical Information Retrieval

Classical mathematical information retrieval methods often rely on tree-based representations to capture the structural information of mathematical formulas, such as the Symbol Layout Tree [43] and the Operator Tree [7]. A representative approach is the structure search used in Approach0 [50, 47], which computes structural similarity by identifying the largest common subexpressions and matching maximum subtrees. More recent methods combine structure-based search with dense retrieval models [12, 48, 49, 46], allowing systems to handle both the semantic similarity of text and the structural similarity of formulas. In general, dense retrievers such as text embedding models are more robust to invalid LaTeX formulas and to formulas written in alternative formats, whereas traditional structure-based methods often fail at the parsing stage if the LaTeX syntax is incorrect.

3 The MIRB Benchmark

We present MIRB, a benchmark designed to evaluate the mathematical information retrieval capabilities of retrieval models. It comprises four tasks: Semantic Statement Retrieval, Question-Answer Retrieval, Premise Retrieval, and Formula Retrieval. Dataset statistics are provided in Table 1. The following four subsections describe each task and the corresponding dataset construction in detail.

3.1 Semantic Statement Retrieval

Semantic Statement Retrieval is the task of retrieving semantically similar statements or questions given a math query, which itself is a mathematical statement or question. This task is motivated by real-world scenarios such as searching for theorems in mathematical libraries; for example, users of Lean often need to look up theorems in mathlib4. One instance of this task is Informalized Mathlib4 Retrieval, where the goal is to retrieve relevant mathlib4 theorems based on informal mathematical queries. Another instance is Duplicate Question Retrieval, which involves retrieving questions
labeled as duplicates on math forums like Mathematics Stack Exchange (MSE) and Math Overflow (MO). This task is inspired by the CQADupStack dataset [8]. A key challenge in this task is identifying semantically equivalent questions that may differ in phrasing or notation but express the same mathematical meaning. We construct two datasets for this purpose: MSE Duplicate Question Retrieval and MO Duplicate Question Retrieval. The details of all three datasets are discussed in the following paragraphs.

Table 1: Statistics of the datasets. We report the number of queries and documents in each dataset. Avg. D / Q denotes the average number of relevant documents per query. Average Word Length refers to the mean number of words per query or per document. Examples from five representative datasets (Informalized Mathlib4 Retrieval, MSE Dup. Question Retrieval, ARQMath-Task-1, NaturalProofs, NTCIR-WFB) are included in the main text, while examples from the remaining datasets are provided in the appendix.

Task | Dataset | Relevancy | #query | #corpus | Avg. D / Q | Avg. Query Words | Avg. Doc Words | Example
Semantic Statement Retrieval | Informalized Mathlib4 Retrieval [6] | 3-level | 40 | 124,254 | 7.23 | 10.38 | 41.60 | Table 2
Semantic Statement Retrieval | MSE Dup. Question Retrieval | Binary | 25,116 | 1,350,505 | 1.78 | 97.22 | 116.42 | Table 3
Semantic Statement Retrieval | MO Dup. Question Retrieval | Binary | 225 | 108,301 | 1.08 | 100.78 | 144.53 | Table 11
Question-Answer Retrieval | ARQMath-Task-1 [45, 21, 20] | 4-level | 78 | 33,369 | 100.79 | 125.15 | 120.40 | Table 4
Question-Answer Retrieval | ProofWiki | Binary | 1,099 | 15,763 | 1.03 | 48.37 | 196.87 | Table 12
Question-Answer Retrieval | Stacks | Binary | 776 | 10,423 | 1.00 | 55.47 | 171.07 | Table 13
Premise Retrieval | NaturalProofs [37] | Binary | 2,060 | 40,806 | 3.94 | 49.51 | 62.32 | Table 5
Premise Retrieval | LeanDojo [41] | Binary | 4,109 | 180,944 | 2.33 | 106.28 | 30.18 | Table 14
Premise Retrieval | MAPL [24] | Binary | 4,000 | 493,029 | 7.07 | 43.53 | 30.15 | Table 15
Premise Retrieval | HolStep [11] | Binary | 1,411 | 3,973 | 22.82 | 34.33 | 28.84 | Table 16
Formula Retrieval | NTCIR-WFB [42] | 3-level | 39 | 1,994 | 38.95 | 2.72 | 2.93 | Table 6
Formula Retrieval | ARQMath-Task-2 [45, 21, 20] | 4-level | 76 | 9,969 | 63.18 | 122.25 | 5.61 | Table 17

Table 2: Informalized Mathlib4 Retrieval example.
Query: Let $L/K$ be a Galois extension and $F$ an intermediate field; then $L^{\{\sigma \in \mathrm{Gal}(L/K) \mid \sigma x = x,\ \forall x \in F\}} = F$.
Relevant Document: Fixed Field of Fixing Subgroup. Theorem: For a Galois field extension $E/F$ with an intermediate field $K$, the fixed field of the subgroup fixing $K$ is equal to $K$.

Informalized Mathlib4 Retrieval. We use the evaluation dataset from [6]. The original dataset contains both formal and informal queries; in this work, we focus only on the informal queries, retaining 40 out of the original 50. The retrieval corpus consists of informalized mathlib4 statements. Relevance is graded on a three-level scale, with the criteria defined in the original paper. An example query and its relevant document are shown in Table 2.

MSE Dup. Question Retrieval. The task of Duplicate Question Retrieval involves retrieving questions that are duplicates of a given input question. We construct our dataset using the Mathematics Stack Exchange Data Dump (2024-09-30).3 We begin by extracting all question posts and removing those containing figures, links, or tables. Next, we build an undirected graph where an edge connects two questions if they are marked as duplicates in the data dump. We compute the transitive closure of this graph to ensure that if question A
https://arxiv.org/abs/2505.15585v1
is a duplicate of B and B is a duplicate of C, then A is also considered a duplicate of C. From each connected component in the graph, we randomly sample one question to serve as a query. The remaining questions constitute the initial corpus, which we further refine. To mitigate the issue of false negatives—questions that are duplicates but not labeled as such—we adopt a dynamic corpus approach similar to the LeetCode dataset in BRIGHT [ 32]. Specifically, we extract the tags for each question from the data dump. For a query Qwith tag set T(Q), we exclude a candidate question Q′from its corpus if the tag overlap satisfies |T(Q)∩T(Q′)| |T(Q)|≥0.5. This ensures that, aside from the ground-truth duplicates, most questions in the corpus are not on the same topic as the query, thus reducing the risk of unlabeled duplicates appearing as false negatives. MO Dup. Question Retrieval. The construction of the MO Duplicate Question Retrieval dataset follows the same procedure as for the MSE dataset. We use the MathOverflow Data Dump (2024- 3https://archive.org/download/stackexchange_20240930/stackexchange_20240930/math.stackexchange.com.7z 4 Table 3: MSE Dup. Question Retrieval example. Query Relevant Document Example of divisor Dsuch that degD > 0 andℓ(D) = 0 It is easy to see that if a divi- sorDon a projective curve Cover a field K has negative degree, then ℓ(D) = dim K{f∈ K(C)|div(f) +D≥0}is zero. How- ever, I suppose that the converse is not true. Can someone give me the simplest example of a divisor Don some curve Csatisfying deg(D)>0butℓ(D) = 0 ?Does the dual of a line bundle with no sec- tions have a section? Let L→Xbe a holo- morphic line bundle over a compact complex manifold. Suppose Lis non-trivial and has no non-trivial sections. Let me ask the follow- ing (hopefully not entirely trivial) question: Does the dual L∗have a non-trivial section? A special case of this is when Lis the dual of an ample line bundle. Obviously ample line bundles have sections, but the dual does not. 09-30)4. After cleaning the question posts, applying transitive closure to the graph, and filtering the corpus, we obtain 225 queries and 108,301 documents. 3.2 Question-Answer Retrieval Question-Answer Retrieval focuses on retrieving relevant answers or proofs for a given mathematical question. The main challenge lies in understanding the underlying mathematical intent of the question and identifying documents that provide accurate and precise answers—an objective that goes beyond simple semantic similarity. We include three datasets for this task: ARQMath-Task-1, ProofWiki, and Stacks, which are discussed in the following paragraphs. ARQMath-Task-1. ARQMath-Task-1 [ 45,21,20] is an answer retrieval task, where the goal is to retrieve relevant answer posts from Mathematics Stack Exchange (MSE) between 2010 and 2018, given a query question posted after 2019. The task was held over three years, with the query sets consisting of MSE questions from 2019, 2020, and 2021, respectively. We use ARQMath-3-Task-1 as the test set. The ARQMath-3-Task-1 dataset contains 78 queries, with an average of 446.8 annotated answers per query. Relevance is graded on four levels, and readers may refer to [ 20] for the detailed relevance criteria.
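The construction procedure for both duplicate-question datasets can be summarized in a short sketch. This is our own illustration (plain-Python grouping of duplicate pairs into connected components, followed by the tag-overlap filter), not the authors' released preprocessing code; the data layout (duplicate pairs, per-question tag sets) is assumed.

```python
import random
from collections import defaultdict

def connected_components(duplicate_pairs):
    """Transitive closure of the duplicate relation: if A~B and B~C are
    marked as duplicates, then A, B, and C end up in the same component."""
    adj = defaultdict(set)
    for a, b in duplicate_pairs:
        adj[a].add(b)
        adj[b].add(a)
    seen, components = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        components.append(comp)
    return components

def build_query_corpus(duplicate_pairs, tags, all_questions, seed=0):
    """One randomly sampled query per duplicate component; the rest of the
    component are its positives. A non-duplicate candidate is excluded from
    the query's corpus when at least half of the query's tags appear on it."""
    rng = random.Random(seed)
    dataset = []
    for comp in connected_components(duplicate_pairs):
        query = rng.choice(sorted(comp))
        positives = comp - {query}
        t_q = tags[query]
        corpus = {
            cand for cand in all_questions
            if cand in positives
            or (cand != query
                and len(t_q & tags[cand]) / max(len(t_q), 1) < 0.5)
        }
        dataset.append((query, positives, corpus))
    return dataset
```

Because the filter only removes topically overlapping non-duplicates, the ground-truth duplicates always remain in the corpus while most unlabeled near-duplicates are excluded.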
3.2 Question-Answer Retrieval

Question-Answer Retrieval focuses on retrieving relevant answers or proofs for a given mathematical question. The main challenge lies in understanding the underlying mathematical intent of the question and identifying documents that provide accurate and precise answers, an objective that goes beyond simple semantic similarity. We include three datasets for this task: ARQMath-Task-1, ProofWiki, and Stacks, which are discussed in the following paragraphs.

ARQMath-Task-1. ARQMath-Task-1 [45, 21, 20] is an answer retrieval task, where the goal is to retrieve relevant answer posts from Mathematics Stack Exchange (MSE) between 2010 and 2018, given a query question posted after 2019. The task was held over three years, with the query sets consisting of MSE questions from 2019, 2020, and 2021, respectively. We use ARQMath-3-Task-1 as the test set. The ARQMath-3-Task-1 dataset contains 78 queries, with an average of 446.8 annotated answers per query. Relevance is graded on four levels, and readers may refer to [20] for the detailed relevance criteria. The evaluation metric is nDCG-prime (nDCG′), introduced in [30], which excludes unjudged documents from the ranked list. As a result, we adopt a dynamic corpus approach, where the corpus for each query consists only of its associated annotated documents.
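A minimal sketch of the nDCG′ idea described here: unjudged documents are dropped from the ranked list before nDCG is computed. We assume judged relevance grades live in a per-query dict and use linear gains; this follows the metric's definition, not any official ARQMath evaluation script.

```python
import math

def ndcg_prime(ranked_ids, judgments, k=10):
    """nDCG' (Sakai and Kando, 2008): unjudged documents are removed from
    the ranked list before computing nDCG, so a system is not penalized for
    retrieving unjudged material. `judgments` maps judged doc ids to
    relevance grades; linear gains are used here for simplicity."""
    judged = [doc for doc in ranked_ids if doc in judgments][:k]
    dcg = sum(judgments[doc] / math.log2(rank + 2)
              for rank, doc in enumerate(judged))
    ideal = sorted(judgments.values(), reverse=True)[:k]
    idcg = sum(gain / math.log2(rank + 2) for rank, gain in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

# Toy usage with ARQMath-style 0-3 grades; "d9" is unjudged and is skipped.
print(ndcg_prime(["d9", "d1", "d2"], {"d1": 3, "d2": 1, "d3": 2}))
```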
Table 4: ARQMath-Task-1 example.
Query: Confusion about the formula of the area of a surface of revolution. Before I read the formula of the area of revolution, which is $\int 2\pi y \, ds$ where $ds = \sqrt{1 + (dy/dx)^2}\,dx$, I thought of deriving it myself. I tried to apply the same logic used for calculating the volume of revolution (e.g., $\int \pi y^2 \, dx$). My idea is to use many tiny hollow cylinders (inspired from the shell method), each of which has a surface area of $(2\pi y)(dx)$: $2\pi y$ is the circumference of the cylinder, and $dx$ is the height of the cylinder. Their product is the surface area of the hollow (i.e., empty from the inside) cylinder. With this logic, the area is $\int 2\pi y \, dx$. Where is my mistake? Also, it is confusing why for the volume it was enough to partition the object using cylinders and for areas not. I also need to know how one would go about rotating this about the y-axis, but have no idea where to start.
Relevant Document: You should review the formula for the surface area in the case of a surface of revolution (e.g. here). The surface area of the surface obtained by rotating the graph of $y = f(x)$ about the x-axis on the interval $[x_1, x_2]$ is given by: $2\pi \int_{x_1}^{x_2} y \sqrt{1 + (y')^2}\,dx = 2\pi \int_{x_1}^{x_2} f(x) \sqrt{1 + (f'(x))^2}\,dx$. Now if $f(x) = \frac{\cosh(4x)}{4}$, then $f'(x) = \sinh(4x)$, so rotation on $[-1, 1]$ gives: $\frac{\pi}{2} \int_{-1}^{1} \cosh(4x) \sqrt{1 + \sinh^2(4x)}\,dx$. You can simplify (a lot). Can you take it from here? The link from above also covers the formula for rotation about the y-axis.

ProofWiki. ProofWiki is a mathematical library containing definitions, axioms, theorems, and their corresponding proofs. In the ProofWiki Question-Answer Retrieval task, the queries are theorems from ProofWiki, and the corpus consists of proofs sourced from the same platform. The objective is to retrieve the correct proof(s) for a given theorem. Since some theorems in ProofWiki have multiple proofs, the average number of relevant documents per query is greater than one. We use the theorems from the test set of the ProofWiki dataset in NaturalProofs [37] as queries, and include all proofs from the dataset, not just those associated with the queries, as the retrieval corpus.

Stacks. The Stacks Project is a mathematical library focused on algebraic stacks and algebraic geometry. Similar to the ProofWiki Question-Answer Retrieval task, Stacks Question-Answer Retrieval aims to retrieve the correct proof for a given theorem in the Stacks Project. We use theorems from the test set of the Stacks dataset in NaturalProofs [37] as queries, and include all proofs from the dataset as the retrieval corpus.

3.3 Premise Retrieval

Premise retrieval is the task of retrieving definitions, theorems, and lemmas that are useful for proving a target theorem or advancing the current proof state. This task plays a crucial role in automated theorem proving, where the ability to efficiently identify relevant mathematical premises can greatly influence the success of the proof process [24, 41]. We include four datasets for this task: one natural language premise retrieval dataset, NaturalProofs [37], and three formal premise retrieval datasets: LeanDojo [41] for Lean, MAPL [24] for Isabelle, and HolStep [11] for HOL Light. The details of these four datasets are discussed in the following paragraphs.

Table 5: NaturalProofs example.
Query: If $H$ is an open covering of a closed and bounded subset $S$ of the real line, then $S$ has an open covering $\widetilde{H}$ consisting of finitely many open sets belonging to $H$.
Relevant Document: No point of $S^c$ is a limit point of $S$.

NaturalProofs. NaturalProofs [37] is a natural language premise retrieval dataset, where the goal is to retrieve definitions, lemmas, and theorems that are useful for proving a given query statement. It consists of four subsets: ProofWiki, Stacks, Real Analysis, and Number Theory. In the ProofWiki subset, the query is a theorem from ProofWiki, the corpus includes all definitions, lemmas, and theorems in the library, and the relevant documents are those used in the proof of the query theorem. The other three subsets follow a similar formulation. We evaluate each subset separately and report the average of their scores as the final result for the NaturalProofs dataset.

LeanDojo. LeanDojo [41] provides a premise retrieval dataset for Lean, where the goal is to retrieve useful premises from mathlib4 to advance a given Lean 4 proof state. In this task, the query is a proof state, the corpus consists of all mathlib4 declarations, and the relevant documents are the premises used in the next tactic step. We follow the novel_premises data split from the original benchmark, in which each proof in the test set uses at least one premise not seen during training.

MAPL. MAPL [24] is a premise retrieval dataset for Isabelle. The task is similar to that of LeanDojo premise retrieval, where the goal is to retrieve useful premises to advance the current proof state. In MAPL, the query is an Isabelle proof state and the corpus consists of premises expressed in Isabelle's formal language. The original dataset comprises a collection of (state, premise) pairs, which we split into train, dev, and test sets following a strategy similar to the novel_premises split in LeanDojo. Specifically, each proof state in the test set uses at least one premise that does not appear in the training set (see the sketch at the end of this subsection).

HolStep. HolStep [11] is a dataset based on HOL Light proofs. Each file in the original dataset contains a single conjecture along with the dependencies used in its proof. We treat the conjectures as queries and aggregate all dependencies across the dataset to form the retrieval corpus. The task is to retrieve the relevant dependencies for a given conjecture.
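The novel_premises-style split described for LeanDojo and MAPL can be made concrete as follows. This is a hypothetical sketch of the stated property (every retained test proof state uses at least one premise never seen in training), not the benchmarks' actual split code.

```python
def novel_premises_split(pairs, test_fraction=0.1):
    """Split (proof_state, premises) pairs so that every retained test pair
    uses at least one premise absent from the training set. Pairs whose
    premises all leak into training are demoted to the training set;
    demotion adds premises to training, so we iterate to a fixed point."""
    n_test = max(1, int(len(pairs) * test_fraction))
    test, train = list(pairs[:n_test]), list(pairs[n_test:])
    changed = True
    while changed:
        train_premises = {p for _, premises in train for p in premises}
        changed = False
        for pair in list(test):
            if all(p in train_premises for p in pair[1]):
                test.remove(pair)
                train.append(pair)
                changed = True
    return train, test
```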
3.4 Formula Retrieval

Formula retrieval focuses on retrieving mathematical expressions that are relevant to a given query formula, optionally incorporating the formula's surrounding context. This task requires a deep understanding of the semantic meaning of mathematical formulas. We evaluate this task using two datasets: NTCIR-12 Wikipedia Formula Browsing (WFB) [42] and ARQMath-Task-2 [45, 21, 20].

Table 6: NTCIR-WFB example.
Query: $L(\lambda, \alpha, s) = \sum_{n=0}^{\infty} \frac{\exp(2\pi i \lambda n)}{(n+\alpha)^s}$
Relevant Document: $g(s) = \sum_{n=1}^{\infty} \frac{a(n)}{n^s}$

NTCIR-WFB. The NTCIR-12 Wikipedia Formula Browsing task involves retrieving relevant formulas given a query formula. The corpus consists of mathematical formulas extracted from Wikipedia articles. Relevance is graded on a three-level scale, with detailed criteria provided in [42]. Similar to ARQMath-Task-1, we adopt a dynamic corpus approach, where each query is evaluated against only its associated annotated documents.

ARQMath-Task-2. ARQMath-Task-2 [45, 21, 20] is a formula retrieval task, where the goal is to retrieve relevant formulas from MSE posts given a query formula along with its context (i.e., the question post in which it appears). We use ARQMath-3-Task-2 as the test set, which contains 76 queries and an average of 63.18 annotated relevant documents per query. The task defines four levels of relevance, with criteria detailed in [20]. Similar to ARQMath-Task-1, we adopt a dynamic corpus approach, where each query's corpus consists only of its annotated documents.

4 Experiments

In this section, we evaluate the performance of 13 retrieval models on MIRB. The experimental setup is described in Section 4.1, and the comparison of model performance is presented in Section 4.2.

4.1 Experiment Setup

We evaluate four groups of retrieval models. For the sparse model, we test BM25. For open-source models with fewer than 1 billion parameters, we include gte-large-en-v1.5 [19], UAE-Large-V1 [18], and bge-large-en-v1.5 [40]. For open-source models with more than 1 billion parameters, we evaluate gte-Qwen2-1.5B-instruct [19], e5-mistral-7b-instruct [36], NV-Embed-v2 [15], gte-Qwen2-7B-instruct [19], SFR-Embedding-2_R [22], and GritLM-7B [25]. For proprietary models, we evaluate Cohere-embed-english-v3.0 (https://huggingface.co/Cohere/Cohere-embed-english-v3.0), text-embedding-3-large (https://platform.openai.com/docs/models/text-embedding-3-large), and voyage-3-large (https://huggingface.co/voyageai/voyage-3-large). For dense models, we compute the cosine similarity between the query embedding and the corpus embeddings, and return a ranked list of documents (a sketch of this scoring follows Table 7). Model configurations, including the maximum context length for queries and documents, as well as whether instructions are prepended to the queries, are provided in Table 7. The instructions used are listed in Table 8. Following prior work [34, 32], we report nDCG@10 as the main evaluation metric.

Table 7: Model configuration. Max |Q| and Max |D| are the maximum context lengths we set for queries and documents, respectively. The Instruction column denotes whether we prepend instructions to the query.

Model | Size | Max |Q| | Max |D| | Instruction
Sparse model:
BM25 | - | - | - | No
Open-source models (<1B):
gte-large-en-v1.5 | 434M | 8192 | 8192 | No
UAE-Large-V1 | 335M | 512 | 512 | Yes
bge-large-en-v1.5 | 335M | 512 | 512 | Yes
Open-source models (>1B):
gte-Qwen2-1.5B-instruct | 1.78B | 4096 | 4096 | Yes
e5-mistral-7b-instruct | 7.11B | 4096 | 4096 | Yes
NV-Embed-v2 | 7.85B | 32768 | 32768 | Yes
gte-Qwen2-7B-instruct | 7.61B | 4096 | 4096 | Yes
SFR-Embedding-2_R | 7.11B | 4096 | 4096 | Yes
GritLM-7B | 7.24B | 4096 | 4096 | Yes
Proprietary models:
Cohere-embed-english-v3.0 | - | 512 | 512 | No
text-embedding-3-large | - | 8192 | 8192 | No
voyage-3-large | - | 32000 | 32000 | Yes
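As a sketch of the dense-retrieval scoring described above (cosine similarity between one query embedding and all corpus embeddings), assuming the embeddings have already been computed as NumPy arrays; how each evaluated model produces them varies and is not shown.

```python
import numpy as np

def rank_corpus(query_emb, corpus_embs, k=10):
    """Rank documents for a single query: L2-normalize both sides so the
    dot product equals cosine similarity, then return the indices of the
    top-k documents, best first."""
    q = query_emb / np.linalg.norm(query_emb)
    d = corpus_embs / np.linalg.norm(corpus_embs, axis=1, keepdims=True)
    return np.argsort(-(d @ q))[:k]
```

For the instruction-following models, the task instruction from Table 8 would be prepended to the query text before embedding.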
Table 8: Instructions used for different datasets, applied to all models that utilize instructions, except for UAE-Large-V1 and bge-large-en-v1.5. For these two models, the instruction used is: "Represent this sentence for searching relevant passages:"

Dataset | Instruction
Informalized Mathlib4 Retrieval | Given a mathematical query, retrieve relevant theorems.
MSE Dup. Question Retrieval, MO Dup. Question Retrieval | Given a math question, retrieve questions that are duplicates of the given one.
ARQMath-Task-1 | Given a math problem, retrieve its solution.
ProofWiki, Stacks | Given a math theorem, retrieve its proof.
NaturalProofs | Given a math theorem, retrieve useful references, such as theorems, lemmas, and definitions, that are useful for proving the given theorem.
LeanDojo | Given a Lean 4 proof state, retrieve the declarations that are useful for proving it.
MAPL | Given an Isabelle proof state, retrieve the declarations that are useful for proving it.
HolStep | Given a HOL conjecture, retrieve the declarations that are useful for proving it.
NTCIR-WFB | Given a math formula, retrieve relevant formulas.
ARQMath-Task-2 | Given a math formula and its context, retrieve relevant formulas.
4.2 Results

Main Results. The results are shown in Table 9. BM25 underperforms compared to dense retrievers, and there is a clear performance gap between small models (fewer than 1B parameters) and larger models (around 7B). voyage-3-large outperforms all other models, achieving an average nDCG@10 score of 54.54 and ranking first on 7 out of the 12 datasets. Among the evaluated tasks, models generally perform better on semantic retrieval tasks such as Semantic Statement Retrieval and Formula Retrieval, while their performance degrades on reasoning-oriented tasks, especially Premise Retrieval. Unlike Question-Answer Retrieval, where the solution or part of it appears in the document, Premise Retrieval requires identifying relevant mathematical statements such as lemmas or theorems that are not part of the answer but are useful for constructing a proof. For formal premise retrieval datasets like LeanDojo, MAPL, and HolStep, embedding models often struggle because they are not extensively pre-trained on large corpora of formal language data. As a result, they are unfamiliar with the notation and syntax of formal languages, and are even less capable of identifying the underlying logical connections between the query state and potential premises. Consequently, even models that perform well on Question-Answer Retrieval (e.g., voyage-3-large) show poor performance on Premise Retrieval. To improve performance on this task, models need to be trained on premise retrieval datasets across different formal languages.

Results of Reranking. Applying rerankers to retrieval results is generally expected to improve performance. To assess their effectiveness on mathematical retrieval tasks, we evaluate two rerankers: bge-reranker-v2-m3 [1] and jina-reranker-v2-base-multilingual (https://huggingface.co/jinaai/jina-reranker-v2-base-multilingual). Each reranker computes a relevance score for the concatenated query and document pair, and then reranks the top 10 retrieved documents accordingly. We apply them to the top five models in MIRB: voyage-3-large, SFR-Embedding-2_R, NV-Embed-v2, GritLM-7B, and text-embedding-3-large.
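A sketch of this reranking step: a cross-encoder scores each (query, document) pair and reorders the first-stage top 10. We assume the bge checkpoint loads through sentence-transformers' CrossEncoder class; the paper does not specify its exact inference stack.

```python
from sentence_transformers import CrossEncoder

def rerank_top_k(query, retrieved_docs, k=10,
                 model_name="BAAI/bge-reranker-v2-m3"):
    """Score each (query, document) pair with a cross-encoder and reorder
    the first-stage top-k candidates by the resulting relevance scores."""
    model = CrossEncoder(model_name, max_length=512)
    candidates = retrieved_docs[:k]
    scores = model.predict([(query, doc) for doc in candidates])
    order = sorted(range(len(candidates)), key=lambda i: -scores[i])
    return [candidates[i] for i in order]
```

In practice the model would be loaded once and reused across queries rather than instantiated per call.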
Table 9: The performance of retrieval models in MIRB. We report nDCG@10 for all datasets. Avg. denotes the average score across datasets.

Model | Inf. Mathlib4 | MSE Dup. | MO Dup. | ARQMath-1 | ProofWiki | Stacks | NaturalProofs | LeanDojo | MAPL | HolStep | NTCIR-WFB | ARQMath-2 | Avg.
Sparse model:
BM25 | 31.49 | 22.85 | 44.01 | 24.83 | 57.35 | 35.49 | 24.14 | 6.91 | 15.27 | 25.88 | 66.03 | 32.46 | 32.23
Open-source models (<1B):
gte-large-en-v1.5 | 38.05 | 46.76 | 68.04 | 37.78 | 66.49 | 32.26 | 28.42 | 3.73 | 8.78 | 29.15 | 68.83 | 59.87 | 40.68
UAE-Large-V1 | 40.43 | 41.11 | 67.44 | 31.66 | 54.81 | 28.17 | 27.85 | 4.64 | 5.59 | 30.17 | 71.92 | 55.50 | 38.27
bge-large-en-v1.5 | 41.99 | 41.70 | 67.40 | 31.02 | 56.36 | 30.25 | 27.53 | 5.45 | 6.84 | 30.51 | 73.76 | 55.22 | 39.00
Open-source models (>1B):
gte-Qwen2-1.5B-instruct | 55.17 | 43.13 | 67.73 | 41.97 | 77.83 | 52.56 | 27.46 | 8.40 | 18.64 | 28.05 | 72.96 | 53.56 | 45.62
e5-mistral-7b-instruct | 57.33 | 51.14 | 71.31 | 46.46 | 77.29 | 39.85 | 32.14 | 10.80 | 15.41 | 30.27 | 78.48 | 57.93 | 47.37
NV-Embed-v2 | 59.48 | 55.00 | 78.47 | 47.34 | 83.08 | 58.56 | 37.21 | 12.27 | 16.58 | 32.77 | 73.22 | 70.00 | 52.00
gte-Qwen2-7B-instruct | 40.38 | 38.40 | 61.77 | 44.74 | 77.02 | 49.35 | 30.08 | 11.53 | 17.46 | 28.16 | 77.52 | 54.68 | 44.26
SFR-Embedding-2_R | 60.98 | 58.52 | 81.32 | 51.15 | 85.07 | 54.94 | 34.67 | 11.83 | 17.07 | 30.76 | 75.69 | 65.48 | 52.29
GritLM-7B | 54.09 | 53.05 | 78.60 | 46.35 | 81.59 | 55.89 | 32.92 | 10.68 | 19.53 | 30.80 | 74.22 | 66.56 | 50.36
Proprietary models:
Cohere-embed-english-v3.0 | 42.00 | 42.96 | 61.00 | 38.05 | 66.00 | 32.33 | 28.99 | 6.96 | 13.95 | 29.72 | 73.27 | 54.51 | 40.81
text-embedding-3-large | 49.38 | 52.35 | 76.74 | 45.79 | 81.95 | 56.14 | 31.33 | 11.34 | 19.94 | 31.02 | 73.06 | 70.18 | 49.93
voyage-3-large | 57.36 | 60.33 | 82.87 | 52.45 | 91.69 | 62.62 | 32.74 | 13.02 | 17.77 | 32.68 | 76.91 | 74.00 | 54.54

Table 10: Results of reranking. Each entry a / b / c gives the score before reranking, after applying bge-reranker-v2-m3, and after applying jina-reranker-v2-base-multilingual, respectively.

Dataset | NV-Embed-v2 | SFR-Embedding-2_R | GritLM-7B | text-embedding-3-large | voyage-3-large
Informalized Mathlib4 Retrieval | 59.48 / 55.19 / 56.61 | 60.98 / 55.29 / 57.16 | 54.09 / 52.39 / 54.71 | 49.38 / 46.92 / 48.23 | 57.36 / 55.22 / 56.06
MSE Dup. Question Retrieval | 55.00 / 47.23 / 49.00 | 58.52 / 50.28 / 52.25 | 53.05 / 46.58 / 48.22 | 52.35 / 46.06 / 47.45 | 60.33 / 52.93 / 54.84
MO Dup. Question Retrieval | 78.47 / 64.52 / 70.96 | 81.32 / 66.09 / 72.00 | 78.60 / 63.70 / 69.00 | 76.74 / 61.77 / 67.95 | 82.87 / 66.78 / 73.15
ARQMath-Task-1 | 47.34 / 46.66 / 47.17 | 51.15 / 50.41 / 50.52 | 46.35 / 43.45 / 44.36 | 45.79 / 43.30 / 44.09 | 52.45 / 51.41 / 53.03
ProofWiki | 83.08 / 67.85 / 73.09 | 85.07 / 69.29 / 74.05 | 81.59 / 67.26 / 72.23 | 81.95 / 67.41 / 72.51 | 91.69 / 70.33 / 75.91
Stacks | 58.56 / 44.67 / 51.68 | 54.94 / 41.07 / 49.47 | 55.89 / 42.03 / 49.60 | 56.14 / 41.87 / 50.93 | 62.62 / 45.76 / 53.75
NaturalProofs | 37.21 / 33.32 / 32.33 | 34.67 / 31.44 / 30.39 | 32.92 / 31.00 / 29.76 | 31.33 / 29.22 / 28.23 | 32.74 / 30.70 / 29.82
LeanDojo | 12.27 / 10.59 / 11.37 | 11.83 / 10.36 / 11.24 | 10.68 / 9.59 / 10.33 | 11.34 / 9.96 / 10.86 | 13.02 / 11.10 / 11.83
MAPL | 16.58 / 16.56 / 16.81 | 17.07 / 17.10 / 17.27 | 19.53 / 18.73 / 18.99 | 19.94 / 18.64 / 19.45 | 17.77 / 17.06 / 18.08
HolStep | 32.77 / 31.94 / 31.01 | 30.76 / 30.37 / 29.46 | 30.80 / 30.44 / 29.35 | 31.02 / 30.23 / 29.33 | 32.68 / 31.44 / 30.68
NTCIR-WFB | 73.22 / 71.67 / 73.84 | 75.69 / 74.42 / 76.13 | 74.22 / 73.04 / 73.83 | 73.06 / 71.21 / 72.27 | 76.91 / 74.69 / 76.21
ARQMath-Task-2 | 70.00 / 69.06 / 67.69 | 65.48 / 64.96 / 64.99 | 66.56 / 65.41 / 64.92 | 70.18 / 69.43 / 67.26 | 74.00 / 72.90 / 71.65
Avg. | 52.00 / 46.60 / 48.46 | 52.29 / 46.76 / 48.74 | 50.36 / 45.30 / 47.11 | 49.93 / 44.67 / 46.55 | 54.54 / 48.36 / 50.42
The results, shown in Table 10, indicate that reranking generally leads to a decline in performance. In a few cases, slight improvements are observed: for example, jina-reranker-v2-base-multilingual raises the score of voyage-3-large on ARQMath-Task-1 from 52.45 to 53.03, and improves SFR-Embedding-2_R on NTCIR-WFB from 75.69 to 76.13. These results suggest that rerankers trained on general text retrieval tasks may not transfer effectively to mathematical retrieval.

5 Conclusion

In this paper, we introduce MIRB, a comprehensive benchmark designed to evaluate the mathematical information retrieval capabilities of retrieval models. MIRB comprises four tasks: Semantic Statement Retrieval, Question-Answer Retrieval, Premise Retrieval, and Formula Retrieval. These tasks span both semantic-based and reasoning-based retrieval settings. We evaluate 13 retrieval models and observe that while their performance on semantic-based retrieval is moderate, they perform poorly on reasoning-based tasks. Additionally, applying cross-encoder rerankers does not lead to performance improvements. We hope that MIRB will facilitate future research in mathematical information retrieval and support the development of more effective retrieval models tailored to mathematics.

6 Limitations

Our work has several limitations:

• In the ProofWiki Question-Answer Retrieval dataset, we directly use the proofs from ProofWiki as the corpus. A more challenging setup would involve manually adding hard negatives: proofs that appear similar to the ground truth but are not valid proofs for the query theorem. The Stacks Question-Answer Retrieval dataset faces a similar issue.

• In the Premise Retrieval datasets, we cover formal languages such as Lean, Isabelle, and HOL. To improve diversity, we should include more formal systems like Coq. However, to the best of our knowledge, there is currently no readily available premise retrieval evaluation dataset for Coq that can be directly incorporated into our benchmark.

• The Premise Retrieval datasets are constructed from successful proofs, meaning the ground-truth premises are indeed useful for advancing the proof. However, this does not imply that other premises are not also helpful, leading to potential false negatives. A more ideal approach would involve exhaustively testing all candidate premises with various tactics to determine their utility for the given proof state.

7 Acknowledgment

This work is supported in part by National Key R&D Program of China grant 2024YFA1014000 and the New Cornerstone Investigator Program.

References

[1] Jianlyu Chen, Shitao Xiao, Peitian Zhang, Kun Luo, Defu Lian, and Zheng Liu. M3-Embedding: Multi-Linguality, Multi-Functionality, Multi-Granularity Text Embeddings Through Self-Knowledge Distillation. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Findings of the Association for Computational Linguistics: ACL 2024, pages 2318–2335, Bangkok, Thailand, August 2024. Association for Computational Linguistics.

[2] Pankaj Dadure, Partha Pakray, and Sivaji Bandyopadhyay. Mathematical information retrieval: A review. ACM Computing Surveys, 57(3):1–34, 2024.

[3] Leonardo de Moura and Sebastian Ullrich. The Lean 4 Theorem Prover and Programming Language. In André Platzer and Geoff Sutcliffe, editors, Automated Deduction - CADE 28 - 28th International Conference on Automated Deduction, Virtual Event, July 12-15, 2021, Proceedings, volume 12699 of Lecture Notes in Computer Science, pages 625–635. Springer, 2021.
[4] Leonardo Mendonça de Moura, Soonho Kong, Jeremy Avigad, Floris van Doorn, and Jakob von Raumer. The Lean Theorem Prover (System Description). In Amy P. Felty and Aart Middeldorp, editors, Automated Deduction - CADE-25 - 25th International Conference on Automated Deduction, Berlin, Germany, August 1-7, 2015, Proceedings, volume 9195 of Lecture Notes in Computer Science, pages 378–388. Springer, 2015.

[5] Kenneth Enevoldsen, Isaac Chung, Imene Kerboua, Márton Kardos, Ashwin Mathur, David Stap, Jay Gala, Wissam Siblini, Dominik Krzemiński, Genta Indra Winata, et al. MMTEB: Massive multilingual text embedding benchmark. arXiv preprint arXiv:2502.13595, 2025.

[6] Guoxiong Gao, Haocheng Ju, Jiedong Jiang, Zihan Qin, and Bin Dong. A Semantic Search Engine for Mathlib4. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, editors, Findings of the Association for Computational Linguistics: EMNLP 2024, pages 8001–8013, Miami, Florida, USA, November 2024. Association for Computational Linguistics.

[7] Liangcai Gao, Ke Yuan, Yuehan Wang, Zhuoren Jiang, and Zhi Tang. The Math Retrieval System of ICST for NTCIR-12 MathIR Task. In Noriko Kando, Tetsuya Sakai, and Mark Sanderson, editors, Proceedings of the 12th NTCIR Conference on Evaluation of Information Access Technologies, National Center of Sciences, Tokyo, Japan, June 7-10, 2016. National Institute of Informatics (NII), 2016.

[8] Doris Hoogeveen, Karin M. Verspoor, and Timothy Baldwin. CQADupStack: A Benchmark Data Set for Community Question-Answering Research. In Proceedings of the 20th Australasian Document Computing Symposium, ADCS '15, New York, NY, USA, 2015. Association for Computing Machinery.

[9] Junjie Huang, Duyu Tang, Linjun Shou, Ming Gong, Ke Xu, Daxin Jiang, Ming Zhou, and Nan Duan. CoSQA: 20,000+ Web Queries for Code Search and Question Answering. In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli, editors, Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5690–5700, Online, August 2021. Association for Computational Linguistics.

[10] Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt. CodeSearchNet challenge: Evaluating the state of semantic code search. arXiv preprint arXiv:1909.09436, 2019.

[11] Cezary Kaliszyk, François Chollet, and Christian Szegedy. HolStep: A Machine Learning Dataset for Higher-order Logic Theorem Proving. In International Conference on Learning Representations, 2017.

[12] Andrew Kane, Yin Ki Ng, and Frank Wm. Tompa. Dowsing for Answers to Math Questions: Doing Better with Less. In Guglielmo Faggioli, Nicola Ferro, Allan Hanbury, and Martin Potthast, editors, Proceedings of the Working Notes of CLEF 2022 - Conference and Labs of the Evaluation Forum, Bologna, Italy, September 5th - to - 8th, 2022, volume 3180 of CEUR Workshop Proceedings, pages 40–62. CEUR-WS.org, 2022.

[13] Ali Shiraee Kasmaee, Mohammad Khodadad, Mohammad Arshi Saloot, Nicholas Sherck, Stephen Dokas, Hamidreza Mahyar, and Soheila Samiee. ChemTEB: Chemical Text Embedding Benchmark, an Overview of Embedding Models Performance & Efficiency on a Specific Domain. arXiv preprint arXiv:2412.00532, 2024.

[14] Mohammad Abdullah Matin Khan, M Saiful Bari, Xuan Long Do, Weishi Wang, Md Rizwan Parvez, and Shafiq Joty. XCodeEval: An Execution-based Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6766–6805, Bangkok, Thailand, August 2024. Association for Computational Linguistics.
[15] Chankyu Lee, Rajarshi Roy, Mengyao Xu, Jonathan Raiman, Mohammad Shoeybi, Bryan Catanzaro, and Wei Ping. NV-Embed: Improved Techniques for Training LLMs as Generalist Embedding Models. In The Thirteenth International Conference on Learning Representations, 2025.

[16] Jinhyuk Lee, Zhuyun Dai, Xiaoqi Ren, Blair Chen, Daniel Cer, Jeremy R Cole, Kai Hui, Michael Boratko, Rajvi Kapadia, Wen Ding, et al. Gecko: Versatile text embeddings distilled from large language models. arXiv preprint arXiv:2403.20327, 2024.

[17] Xiangyang Li, Kuicai Dong, Yi Quan Lee, Wei Xia, Hao Zhang, Xinyi Dai, Yasheng Wang, and Ruiming Tang. CoIR: A comprehensive benchmark for code information retrieval models. arXiv preprint arXiv:2407.02883, 2024.

[18] Xianming Li and Jing Li. AoE: Angle-optimized Embeddings for Semantic Textual Similarity. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1825–1839, Bangkok, Thailand, August 2024. Association for Computational Linguistics.

[19] Zehan Li, Xin Zhang, Yanzhao Zhang, Dingkun Long, Pengjun Xie, and Meishan Zhang. Towards general text embeddings with multi-stage contrastive learning. arXiv preprint arXiv:2308.03281, 2023.

[20] Behrooz Mansouri, Anurag Agarwal, Douglas W. Oard, and Richard Zanibbi. Advancing Math-Aware Search: The ARQMath-3 Lab at CLEF 2022. In Matthias Hagen, Suzan Verberne, Craig Macdonald, Christin Seifert, Krisztian Balog, Kjetil Nørvåg, and Vinay Setty, editors, Advances in Information Retrieval, pages 408–415, Cham, 2022. Springer International Publishing.

[21] Behrooz Mansouri, Richard Zanibbi, Douglas W. Oard, and Anurag Agarwal. Overview of ARQMath-2 (2021): Second CLEF Lab on Answer Retrieval for Questions on Math. In K. Selçuk Candan, Bogdan Ionescu, Lorraine Goeuriot, Birger Larsen, Henning Müller, Alexis Joly, Maria Maistro, Florina Piroi, Guglielmo Faggioli, and Nicola Ferro, editors, Experimental IR Meets Multilinguality, Multimodality, and Interaction, pages 215–238, Cham, 2021. Springer International Publishing.

[22] Rui Meng, Ye Liu, Shafiq Rayhan Joty, Yingbo Zhou, Caiming Xiong, and Semih Yavuz. SFR-Embedding-2: Advanced Text Embedding with Multi-stage Training, 2024.

[23] Rui Meng, Ye Liu, Shafiq Rayhan Joty, Caiming Xiong, Yingbo Zhou, and Semih Yavuz. SFR-Embedding-Mistral: Enhance Text Retrieval with Transfer Learning. Salesforce AI Research Blog, 2024.

[24] Maciej Mikuła, Szymon Tworkowski, Szymon Antoniak, Bartosz Piotrowski, Albert Q. Jiang, Jin Peng Zhou, Christian Szegedy, Łukasz Kuciński, Piotr Miłoś, and Yuhuai Wu. Magnushammer: A Transformer-Based Approach to Premise Selection. In The Twelfth International Conference on Learning Representations, 2024.

[25] Niklas Muennighoff, Hongjin Su, Liang Wang, Nan Yang, Furu Wei, Tao Yu, Amanpreet Singh, and Douwe Kiela. Generative representational instruction tuning. In ICLR 2024 Workshop: How Far Are We From AGI, 2024.

[26] Niklas Muennighoff, Nouamane Tazi, Loic Magne, and Nils Reimers. MTEB: Massive Text Embedding Benchmark. In Andreas Vlachos and Isabelle Augenstein, editors, Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 2014–2037, Dubrovnik, Croatia, May 2023. Association for Computational Linguistics.
[27] Arvind Neelakantan, Tao Xu, Raul Puri, Alec Radford, Jesse Michael Han, Jerry Tworek, Qiming Yuan, Nikolas Tezak, Jong Wook Kim, Chris Hallacy, Johannes Heidecke, Pranav Shyam, Boris Power, Tyna Eloundou Nekoul, Girish Sastry, Gretchen Krueger, David Schnurr, Felipe Petroski Such, Kenny Hsu, Madeleine Thompson, Tabarak Khan, Toki Sherbakov, Joanne Jang, Peter Welinder, and Lilian Weng. Text and Code Embeddings by Contrastive Pre-Training. CoRR, abs/2201.10005, 2022.

[28] Stephen Robertson, S. Walker, S. Jones, M. M. Hancock-Beaulieu, and M. Gatford. Okapi at TREC-3. In Overview of the Third Text REtrieval Conference (TREC-3), pages 109–126. Gaithersburg, MD: NIST, January 1995.

[29] Stephen Robertson and Hugo Zaragoza. The Probabilistic Relevance Framework: BM25 and Beyond. Foundations and Trends in Information Retrieval, 3(4):333–389, April 2009.

[30] Tetsuya Sakai and Noriko Kando. On information retrieval metrics designed for evaluation with incomplete relevance assessments. Information Retrieval, 11:447–470, 2008.

[31] Hongjin Su, Weijia Shi, Jungo Kasai, Yizhong Wang, Yushi Hu, Mari Ostendorf, Wen-tau Yih, Noah A. Smith, Luke Zettlemoyer, and Tao Yu. One Embedder, Any Task: Instruction-Finetuned Text Embeddings. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki, editors, Findings of the Association for Computational Linguistics: ACL 2023, pages 1102–1121, Toronto, Canada, July 2023. Association for Computational Linguistics.

[32] Hongjin Su, Howard Yen, Mengzhou Xia, Weijia Shi, Niklas Muennighoff, Han yu Wang, Liu Haisu, Quan Shi, Zachary S Siegel, Michael Tang, Ruoxi Sun, Jinsung Yoon, Sercan O Arik, Danqi Chen, and Tao Yu. BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval. In The Thirteenth International Conference on Learning Representations, 2025.

[33] Weiwei Sun, Zhengliang Shi, Jiulong Wu, Lingyong Yan, Xinyu Ma, Yiding Liu, Min Cao, Dawei Yin, and Zhaochun Ren. MAIR: A Massive Benchmark for Evaluating Instructed Retrieval. arXiv preprint arXiv:2410.10127, 2024.

[34] Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021.

[35] Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, and Furu Wei. Text Embeddings by Weakly-Supervised Contrastive Pre-training. CoRR, abs/2212.03533, 2022.

[36] Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, and Furu Wei. Improving Text Embeddings with Large Language Models. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11897–11916, Bangkok, Thailand, August 2024. Association for Computational Linguistics.

[37] Sean Welleck, Jiacheng Liu, Ronan Le Bras, Hannaneh Hajishirzi, Yejin Choi, and Kyunghyun Cho. NaturalProofs: Mathematical Theorem Proving in Natural Language. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1), 2021.

[38] Sean Welleck, Jiacheng Liu, Ximing Lu, Hannaneh Hajishirzi, and Yejin Choi. NaturalProver: Grounded mathematical proof generation with language models. Advances in Neural Information Processing Systems, 35:4913–4927, 2022.

[39] Chenghao Xiao, G Thomas Hudson, and Noura Al Moubayed. RAR-b: Reasoning as Retrieval Benchmark. arXiv preprint arXiv:2404.06347, 2024.

[40] Shitao Xiao, Zheng Liu, Peitian Zhang, Niklas Muennighoff, Defu Lian, and Jian-Yun Nie. C-Pack: Packed Resources For General Chinese Embeddings. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '24, pages 641–649, New York, NY, USA, 2024. Association for Computing Machinery.
[41] Kaiyu Yang, Aidan M Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, and Anima Anandkumar. LeanDojo: Theorem Proving with Retrieval-Augmented Language Models. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2023.

[42] Richard Zanibbi, Akiko Aizawa, Michael Kohlhase, Iadh Ounis, Goran Topic, and Kenny Davila. NTCIR-12 MathIR Task Overview. In NTCIR, 2016.

[43] Richard Zanibbi and Dorothea Blostein. Recognition and retrieval of mathematical expressions. International Journal on Document Analysis and Recognition, 15(4):331–357, 2012.

[44] Richard Zanibbi, Behrooz Mansouri, Anurag Agarwal, et al. Mathematical information retrieval: Search and question answering. Foundations and Trends in Information Retrieval, 19(1-2):1–190, 2025.

[45] Richard Zanibbi, Douglas W. Oard, Anurag Agarwal, and Behrooz Mansouri. Overview of ARQMath 2020: CLEF Lab on Answer Retrieval for Questions on Math. In Avi Arampatzis, Evangelos Kanoulas, Theodora Tsikrika, Stefanos Vrochidis, Hideo Joho, Christina Lioma, Carsten Eickhoff, Aurélie Névéol, Linda Cappellato, and Nicola Ferro, editors, Experimental IR Meets Multilinguality, Multimodality, and Interaction, pages 169–193, Cham, 2020. Springer International Publishing.

[46] Wei Zhong, Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. One Blade for One Purpose: Advancing Math Information Retrieval using Hybrid Search. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '23, pages 141–151, New York, NY, USA, 2023. Association for Computing Machinery.

[47] Wei Zhong, Shaurya Rohatgi, Jian Wu, C. Lee Giles, and Richard Zanibbi. Accelerating Substructure Similarity Search for Formula Retrieval. In Advances in Information Retrieval: 42nd European Conference on IR Research, ECIR 2020, Lisbon, Portugal, April 14–17, 2020, Proceedings, Part I, pages 714–727, Berlin, Heidelberg, 2020. Springer-Verlag.

[48] Wei Zhong, Yuqing Xie, and Jimmy Lin. Applying Structural and Dense Semantic Matching for the ARQMath Lab 2022, CLEF. In Guglielmo Faggioli, Nicola Ferro, Allan Hanbury, and Martin Potthast, editors, Proceedings of the Working Notes of CLEF 2022 - Conference and Labs of the Evaluation Forum, Bologna, Italy, September 5th - to - 8th, 2022, volume 3180 of CEUR Workshop Proceedings, pages 147–170. CEUR-WS.org, 2022.

[49] Wei Zhong, Jheng-Hong Yang, Yuqing Xie, and Jimmy Lin. Evaluating Token-Level and Passage-Level Dense Retrieval Models for Math Information Retrieval. In Yoav Goldberg, Zornitsa Kozareva, and Yue Zhang, editors, Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 1092–1102. Association for Computational Linguistics, 2022.

[50] Wei Zhong and Richard Zanibbi. Structural Similarity Search for Formulas Using Leaf-Root Paths in Operator Subtrees. In Leif Azzopardi, Benno Stein, Norbert Fuhr, Philipp Mayr, Claudia Hauff, and Djoerd Hiemstra, editors, Advances in Information Retrieval - 41st European Conference on IR Research, ECIR 2019, Cologne, Germany, April 14-18, 2019, Proceedings, Part I, volume 11437 of Lecture Notes in Computer Science, pages 116–129. Springer, 2019.

[51] Dawei Zhu, Liang Wang, Nan Yang, Yifan Song, Wenhao Wu, Furu Wei, and Sujian Li. LongEmbed: Extending Embedding Models for Long Context Retrieval. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, editors, Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 802–816, Miami, Florida, USA, November 2024. Association for Computational Linguistics.
A Dataset Examples

In this section, we present examples of datasets from MIRB that are not included in the main text.

Table 11: MO Dup. Question Retrieval example.
Query: On finite subsets of the set of integers which lie in their sum-set, whose sum of elements equals 0. Let $n > 1$ be an integer and $S \subseteq \mathbb{Z}$ be such that $|S| = n$ and $S \subseteq S + S := \{a + b : a, b \in S\}$; then does there exist $T \subseteq S$ with $1 \le |T| \le n/2$ such that $\sum_{a \in T} a = 0$?
Relevant Document: Existence of a zero-sum subset. Some time ago I heard this question and tried playing around with it. I have never succeeded in making actual progress. Here it goes: Given a finite (nonempty) set of real numbers, $S = \{a_1, a_2, \ldots, a_n\}$, with the property that for each $i$ there exist $j, k$ (not necessarily distinct) so that $a_i = a_j + a_k$ (i.e. every element in $S$ can be written as a sum of two elements in $S$; note that this condition is trivially satisfied if $0 \in S$, as then every $x \in S$ can be written as $x + 0$). Must there exist $\{i_1, i_2, \ldots, i_m\}$ (distinct) so that $a_{i_1} + a_{i_2} + \cdots + a_{i_m} = 0$? ETA: A possible reformulation can be made in terms of graphs. We can take the vertex set $\{1, \ldots, n\}$ and for each equation $a_i = a_j + a_k$ in $S$ add an edge $[ij]$ and its "dual" $[ik]$. The idea is to find a cycle in this graph whose dual is a matching.

Table 12: ProofWiki example.
Query: Fortissimo Space is not Weakly Countably Compact. Let $T = \struct{S, \tau_p}$ be a Fortissimo space. Then $T$ is not weakly countably compact.
Relevant Document: It suffices to show that $T$ has an infinite subset without limit points. Consider the set $S \setminus \{p\}$. Let $x \in S$. We have: {{begin-eqn}} {{eqn | l = \paren {S \setminus \paren {S \setminus \set p}} \cup \set x | r = \set p \cup \set x | c = }} {{eqn | r = \set {p, x} | c = }} {{end-eqn}} By definition, $x$ is a limit point of $S \setminus \{p\}$ iff $\{p, x\}$ is not a neighborhood of $x$. By definition of Fortissimo space, $\{p, x\}$ is open in $T$. Hence it is an open neighborhood of $x$. Therefore $x$ is not a limit point of $S \setminus \{p\}$. Since $x$ is arbitrary, $S \setminus \{p\}$ has no limit points. Hence $T$ is not weakly countably compact. {{qed}}

B Computing Resources

We conduct our experiments on eight NVIDIA A800 (80G) GPUs. For the sparse model BM25, evaluation on our benchmark takes approximately one hour. For small models with fewer than one billion parameters, evaluation requires around six GPU hours. The 1.5B model takes about 36 GPU hours, while the 7B models require about 64 GPU hours. Each proprietary model is evaluated in under 25 hours.

Table 13: Stacks example.
Query: spaces-morphisms-lemma-birational. Let $S$ be a scheme. Let $X$ and $Y$ be algebraic spaces over $S$ with $|X|$ and $|Y|$ irreducible. Then $X$ and $Y$ are birational if and only if there are nonempty open subspaces $U \subset X$ and $V \subset Y$ which are isomorphic as algebraic spaces over $S$.
Relevant Document: Assume $X$ and $Y$ are birational. Let $f: U \to Y$ and $g: V \to X$ define inverse dominant rational maps from $X$ to $Y$ and from $Y$ to $X$. After shrinking $U$ we may assume $f: U \to Y$ factors through $V$. As $g \circ f$ is the identity as a dominant rational map, we see that the composition $U \to V \to X$ is the identity on a dense open of $U$. Thus after replacing $U$ by a smaller open we may assume that $U \to V \to X$ is the inclusion of $U$ into $X$. By symmetry we find there exists an open subspace $V' \subset V$ such that $g|_{V'}: V' \to X$ factors through $U \subset X$ and such that $V' \to U \to Y$ is the identity. The inverse image of $|V'|$ by $|U| \to |V|$ is an open of $|U|$ and hence equal to $|U'|$ for some open subspace $U' \subset U$, see Properties of Spaces, Lemma \ref{spaces-properties-lemma-open-subspaces}. Then $U' \subset U \to V$ factors as $U' \to V'$. Similarly $V' \to U$ factors as $V' \to U'$. The reader finds that $U' \to V'$ and $V' \to U'$ are mutually inverse morphisms of algebraic spaces over $S$ and the proof is complete.

Table 14: LeanDojo example.
Query: R : Type u M : Type v inst✝² : CommRing R inst✝¹ : AddCommGroup M inst✝ : Module R M B : BilinForm R M f g : Module.End R M hf : IsSkewAdjoint B f hg : IsSkewAdjoint B g ⊢ IsAdjointPair B B (f * g) (g * f)
Relevant Document: theorem neg_mul_neg (a b : α) : -a * -b = a * b

C Broader Impacts

We introduce a unified benchmark for mathematical information retrieval, aiming to encourage the development of more effective retrieval models. We hope this benchmark helps advance search engines and automated theorem proving systems by driving progress in math-specific retrieval capabilities.

Table 15: MAPL example.
Query: proof (prove) using this: length ps = length vs left_nesting f \<noteq> left_nesting g is_const (fst (strip_comb f)) goal (1 subgoal): 1. match (list_comb f ps) (list_comb g vs) = None
Relevant Document: list_induct2: fixes xs :: "'c list" and ys :: "'d list" and P :: "'c list \<Rightarrow> 'd list \<Rightarrow> bool" assumes "length xs = length ys" and "P [] []" and "\<And>x xs y ys. \<lbrakk>length xs = length ys; P xs ys\<rbrakk> \<Longrightarrow> P (x # xs) (y # ys)" shows "P xs ys"

Table 16: HolStep example.
Query: ABSOLUTELY_INTEGRABLE_CONVOLUTION_LINF_L1 |- (!bop. (!f. (!g. (!x. (((bilinear bop) /\ (((measurable_on f) UNIV) /\ ((bounded ((IMAGE f) UNIV)) /\ ((absolutely_integrable_on g) UNIV)))) ==> ((absolutely_integrable_on (\y. ((bop (f ((vector_sub x) y))) (g y)))) UNIV))))))
Relevant Document: BILINEAR_SWAP |- (!op. ((bilinear (\x. (\y. ((op y) x)))) = (bilinear op)))

Table 17: ARQMath-Task-2 example.
Query: Formula: $\int \frac{1}{(x^2+1)^n}\,dx$. Context: Let $n \in \mathbb{Z}^+$. Compute the following integral: $\int \frac{1}{(x^2+1)^n}\,dx$. I obtained that for $n = 1$ the value of the integral is $\tan^{-1}x + C$, and for $n = 2$: $\frac{1}{2}\left(\frac{x}{x^2+1} + \tan^{-1}x\right) + C$. How to do the rest of the cases?
Relevant Document: $I_n = \int \frac{1}{(x^2-1)^n}\,dx$
From Problem-Solving to Teaching Problem-Solving: Aligning LLMs with Pedagogy using Reinforcement Learning

David Dinucu-Jianu*1, Jakub Macina*1,2, Nico Daheim1,3, Ido Hakimi1,2, Iryna Gurevych3, Mrinmaya Sachan1
1Department of Computer Science, ETH Zurich; 2ETH AI Center; 3Ubiquitous Knowledge Processing Lab (UKP Lab), Department of Computer Science and Hessian Center for AI (hessian.AI), TU Darmstadt
*Equal contribution.

Abstract

Large language models (LLMs) can transform education, but their optimization for direct question-answering often undermines effective pedagogy, which requires strategically withholding answers. To mitigate this, we propose an online reinforcement learning (RL)-based alignment framework that can quickly adapt LLMs into effective tutors using simulated student-tutor interactions by emphasizing pedagogical quality and guided problem-solving over simply giving away answers. We use our method to train a 7B-parameter tutor model without human annotations which reaches similar performance to larger proprietary models like LearnLM. We introduce a controllable reward weighting to balance pedagogical support and student solving accuracy, allowing us to trace the Pareto frontier between these two objectives. Our models better preserve reasoning capabilities than single-turn SFT baselines and can optionally enhance interpretability through thinking tags that expose the model's instructional planning. https://github.com/eth-lre/PedagogicalRL

1 Introduction

Large Language Models (LLMs) hold significant promise in education, particularly as personalized tutors capable of guiding students individually through problems. Recent advances have demonstrated remarkable LLM performance in math and science (Chervonyi et al., 2025; Saab et al., 2024). However, deploying LLMs effectively as educational tutors involves more than excelling on benchmarks (Tack and Piech, 2022; Gupta et al., 2025). To be truly effective, a tutor must facilitate learning by guiding students toward independently constructing correct solutions rather than simply revealing the answers. We refer to this shift from assistant to tutor as pedagogical alignment.

[Figure 1: scatter plot of leaked solutions (%) vs. ∆ solve rate (%), comparing baselines (Qwen2.5-3B/7B/14B, GPT-4o, DeepSeek-V3, DeepSeek-V3-0324, SocraticLM, LearnLM-1.5/2.0, SFT, MDPO) with the authors' RL runs at penalty values λ from 0.0 to 2.0, including the Pareto frontier.]
Figure 1: LLM tutoring forms a multi-objective scenario in which LLM tutors should increase the student's solve rate (y-axis) while minimizing solution leakage (x-axis). Here, the ∆ solve rate measures how often a student can solve a problem before and after the dialog with a tutor, and leaked solutions measures how often the tutor tells the solution to the student. Our RL-trained Qwen-2.5-7B models with varying penalty λ are on the Pareto front and match the performance of specialized closed-source models when tutoring on Big-Math.

Achieving robust pedagogical alignment remains an open challenge (Macina et al., 2025; Maurya et al., 2025). Approaches that rely on supervised fine-tuning (SFT) (Daheim et al., 2024; Kwon et al., 2024) can suffer from generalization issues, while existing RL-based techniques typically depend on costly, and often proprietary, preference annotations (Team et al., 2024) or require a much larger model as a source of training data of tutor responses (Sonkar et al., 2024; Scarlatos et al., 2025).
Due to these limitations, these prior works have largely focused on single-turn feedback, which fails to capture the multi-turn dynamics that are essential for effective tutoring.
[Figure 2: diagram of the training loop: given a problem such as "Solve 5x − 3 = 2x + 9", the trained tutor model and a frozen student model produce 8 full dialog rollouts; pedagogical judges rate the tutor's guidance (answer leakage, helpfulness), and the reward combines these verdicts with the post-dialog student solve rate relative to the pre-dialog solve rate.]
Figure 2: Workflow of our RL framework. First, we perform multiple complete student-tutor conversation simulations. After each conversation ends, the reward is computed: 1) post-dialog student solve rate (success) conditioned on the dialog, and 2) the pedagogical quality of the tutor guidance throughout the conversation. This setup uses data from the current tutor model (on-policy) and does not use offline static dialog data (online).

To address these gaps, we propose a multi-turn reinforcement learning (RL) method that enables the model to learn directly from its own dialogs with a student to find optimal teaching strategies. Grounded in mastery learning and active teaching principles (Chi and Wylie, 2014; Freeman et al., 2014), our system simulates multi-turn interactions on challenging problems from Big-Math (Albalak et al., 2025), with the tutor LLM using Socratic questioning (Shridhar et al., 2022) and targeted hints instead of handing out solutions. We design reward functions that mirror authentic long-term learning outcomes, namely, how often a student can solve a problem after a dialog with the tutor and how much the tutor follows sound pedagogical principles throughout the full conversation.

Our key contributions are the following:

• Cost-efficient training via synthetic student-tutor interactions: Our online RL method replaces the need for expensive human-annotated data with a synthetic data pipeline, enabling a 7B Tutor Model to almost match the performance of LearnLM.

• Controllable pedagogy-accuracy trade-off: Our method enables explicit control over the balance between pedagogical support and student answer correctness by adjusting a penalty weight to navigate a Pareto frontier.

• Preservation of reasoning capabilities: Our approach maintains performance across standard reasoning benchmarks, unlike prior methods such as SocraticLM (Liu et al., 2024). Evaluations on MMLU, GSM8K, and MATH demonstrate that pedagogical alignment does not come at the cost of reasoning ability.

2 Related Work

2.1 LLMs for Dialog Tutoring

While effective human tutors not only provide answers but more importantly scaffold the learning of students, LLMs are predominantly trained for providing answers, which limits their tutoring capabilities (Tack and Piech, 2022; Macina et al., 2023b). Hence, various approaches have been proposed to improve their pedagogical skills. Arguably the simplest is prompt engineering, where pedagogical criteria are encoded in the prompt, for
example, for asking questions (Sonkar et al., 2023; Puech et al., 2025) or detecting mistakes (Wang et al., 2024b), but it is tedious and sensitive to changes (Jurenka et al., 2024). A more robust alternative is to use gradient-based updating, for example, SFT on teacher-student dialogs. However, this is challenging because only a few high-quality tutoring datasets exist publicly, for example, MathDial, which is semi-synthetically created by pairing LLM students with real teachers for solving math problems (Macina et al., 2023a). Hence, many works resort to synthetic data (Wang et al., 2024a). For example, SocraticLM (Liu et al., 2024) is trained on 35k math tutoring dialogs created using a multi-agent setting and TutorChat (Chevalier et al., 2024) is trained using 80k synthetic teacher-student conversations grounded in textbooks. Larger-scale approaches in industry, such as LearnLM (Jurenka et al., 2024), use a mixture of synthetic and human-collected data, but this requires substantial resources. Finally, recent works use Reinforcement Learning from Human Feedback (RLHF) (Ouyang et al., 2022), for example, to improve next tutor dialog act prediction (Sonkar et al., 2024) or to improve math tutors by turn-level rewards using GPT-4-generated preference data (Scarlatos et al., 2025). However, it is unclear how single-turn synthetic data translates to tutoring more complex multi-turn conversations. Prior works treat tutoring as an offline off-policy problem by relying on large-scale synthetic or proprietary data, which introduces exposure bias (Ross and Bagnell, 2010; Ranzato et al., 2016) as the tutor does not learn from its own interactions during training. In contrast, our work adopts an online on-policy setup where the model is trained on its own interactions throughout training.

2.2 Dialog as RL Task & Verifiable Rewards

Previous work has commonly framed educational dialog as a next-teacher-utterance generation task, where the teacher's last turn serves as a ground truth response and the dialog history serves as context (Macina et al., 2023a). However, a dialog is inherently a multi-turn interaction towards a goal (e.g. the student learns to solve a problem), and single-turn methods limit the model's ability to plan across multiple turns to achieve longer-term goals. Effective tutoring, however, is a sequential, adaptive and goal-directed process with the aim of helping a student not only solve a current problem, but also learn to solve similar problems. To address this problem, formulating dialog as an RL problem might be helpful, which has been explored outside of tutoring recently (Li et al., 2017; Shani et al., 2024; Xiong et al., 2025; Li et al., 2025). In general, RL learns optimal actions by collecting a numerical reward from the environment, which provides a natural framework for aligning LLM behavior with pedagogical goals by assigning rewards to complete conversations rather than to isolated turns. In LLMs, RL has been successfully used to align with human feedback (Ouyang et al., 2022) and to improve reasoning via verifiable rewards (Shao et al., 2024; Lambert et al., 2024; Wang et al., 2025). Standard on-policy algorithms like Proximal Policy Optimization
(PPO) (Schulman et al., 2017) have been crucial for the success of human- preference alignment in GPT models. Direct Pref- erence Optimization (DPO) (Rafailov et al., 2023) has emerged as a simpler alternative without the re- quirement of a reward model that allows fine-tuning on offline pairwise preference data. Extensions of DPO to multi-turn settings, such as multi-turn DPO (MDPO), commonly mask user turns to optimize only over assistant responses (Xiong et al., 2025). Recent algorithms such as MTPO (Shani et al., 2024) and REFUEL (Gao et al., 2025) compare pairs of entire conversations rollouts to improve over DPO. Access to verifiable rewards has been crucial for scaling RL training for LLMs, for ex- ample, by comparing to a reference solution (Shao et al., 2024; DeepSeek-AI et al., 2025) or execut- ing programs (Lambert et al., 2024). While these methods have been used to improve reasoning, ped- agogical criteria have largely been neglected. Our work builds upon a line of research formu- lating a dialog as an RL problem in a synthetic tutor-student environment. By integrating verifi- able correctness rewards with pedagogical rubrics, we explore the control of the trade-off between instruction support and answer accuracy. 3 Pedagogical Principles Effective teaching is not only about providing an- swers but rather about fostering student learning through scaffolding guidance. Here, scaffolding means actively engaging students in problem solv- ing (Chi and Wylie, 2014; Freeman et al., 2014) using questions, hints, and nudges. Avoiding Answer Leakage: A key element is to actively engage students in problem solving instead of letting them passively consume correct answer, which does not lead to learning. Therefore, we discourage the tutor from presenting complete solu- tions. Instead, they should guide students through Socratic questioning, hints, or targeted feedback. This mirrors constraints from prior related work, such as the role of a dean persona (Liu et al., 2024). Helpfulness: The tutor should guide the student with constructive and contextual appropriate sup- port in the right teacher tone. The tutor violates this principle if they provide full answers or dominate the conversation and it is similar to targetedness in prior work (Daheim et al., 2024). Moreover, tutors should be responsive and encouraging, reflecting the tone of real teachers (Tack and Piech, 2022). 4 Dialog Tutoring as Multi-Turn RL We consider multi-turn conversations (u1, . . . ,uT) made up of a sequence of utterances ut∈ V∗taken by either the student or a teacher, both simulated by an LLM. In our training runs, it is decided by ran- dom choice who starts the conversation, as detailed in Section 5.1. The goal of the student is to solve a problem P∈ V∗which has a unique known nu- merical solution s∈R. The objective of the LLM tutor is to guide the student toward the solution sby generating a new utgiven the context u<t. The conversation ends when the tutor considers it finished or after a fixed number of turns. We use autoregressive LLM-based tutors, parameterized by neural network weights θ, to generate outputs by sampling from the model distribution pθ(ut|u<t) =|ut|Y n=1pθ([ut]n|[ut]<n,u<t), where [ut]nis the
4 Dialog Tutoring as Multi-Turn RL

We consider multi-turn conversations $(u_1, \ldots, u_T)$ made up of a sequence of utterances $u_t \in \mathcal{V}^*$ produced by either the student or the teacher, both simulated by an LLM. In our training runs, who starts the conversation is decided by random choice, as detailed in Section 5.1. The goal of the student is to solve a problem $P \in \mathcal{V}^*$ which has a unique known numerical solution $s \in \mathbb{R}$. The objective of the LLM tutor is to guide the student toward the solution $s$ by generating a new $u_t$ given the context $u_{<t}$. The conversation ends when the tutor considers it finished or after a fixed number of turns. We use autoregressive LLM-based tutors, parameterized by neural network weights $\theta$, to generate outputs by sampling from the model distribution

$$p_\theta(u_t \mid u_{<t}) = \prod_{n=1}^{|u_t|} p_\theta\big([u_t]_n \mid [u_t]_{<n},\, u_{<t}\big),$$

where $[u_t]_n$ is the $n$-th token of the output sequence $u_t$. In Section 3 we define the pedagogical principles that the generated utterances should fulfill.

Learning $\theta$ can then be framed as an RL problem under the lens of Markov Decision Processes (MDPs), for which we re-define the previously introduced quantities in common notation. To be precise, for a given position $t$ in the dialog, we define the state to be $s_t := u_{<t}$ and the action to be $a_t := u_t$; i.e., the current state in the conversation is fully captured by the sequence of previous utterances, and the action is the next utterance. The transition dynamics are defined by sequentially appending each new utterance (or action) $a_t$ to the existing conversation history (or state $s_t$) to form the new state $s_{t+1}$. If $a_t$ is a tutor utterance, it is sampled from the tutor's policy; if it is a student utterance, it is sampled from a fixed student LLM conditioned on $s_t$. Since the student model is stochastic, the transition dynamics are non-deterministic from the tutor's perspective, unlike standard RLHF, which often assumes deterministic environments. Then, the goal is to learn the tutor policy $\pi_\theta := p_\theta$ such that sampled responses

$$a_t \sim \pi_\theta(\cdot \mid s_t) \tag{1}$$

fulfill the desiderata in Section 3. We achieve this by defining rewards $r(a_T, s_T)$ that are assigned at the end of a conversation to full sequences $a_T$ based on the context $s_T$. That is, we define rewards at the level of the full conversation rather than assigning them to individual turns. Furthermore, we sample $a_t$ directly from the current policy $\pi_\theta$ at the given training iteration. This on-policy approach means we update the current policy $\pi_\theta$ and generate subsequent dialogs from the newly updated model. This is different from DPO-based approaches, which use static data; there, the model is always conditioned on context from an older checkpoint. Instead, we use online RL and avoid such context drift by conditioning on context generated with the current model checkpoint.

4.1 Rewarding LLM Tutor Pedagogy

Our reward design follows the pedagogical principles laid out in Section 3. This means we aim to fulfill two goals: the student should be able to successfully solve $P$ after the dialog, and the actions $a_t$ generated using the policy $\pi_\theta$ should have high pedagogical quality and, for example, not simply solve the problem for the student.

We judge solution correctness by sampling multiple final answers $\hat{s}^{(1)}, \hat{s}^{(2)}, \ldots, \hat{s}^{(K)}$ from the student model conditioned on $a_T$ and $s_T$, and computing an empirical expected correctness across these solutions, called the post-dialog solve rate:

$$r_{\mathrm{sol}}(a_T \mid s_T) = \frac{1}{K} \sum_{k=1}^{K} \mathbb{1}\big[\hat{s}^{(k)} = s\big], \tag{2}$$

where $s$ is the ground-truth solution, as a verifiable outcome reward (DeepSeek-AI et al., 2025).

We judge pedagogical quality (defined in Section 3) using LLM judges $J_1, J_2, \ldots, J_M$ to prevent overfitting on one specific judge model (Coste et al., 2024). We prompt the judge models independently to evaluate the full conversation and consider a conversation accepted only if all judges accept it:

$$r_{\mathrm{ped}}(a_T \mid s_T) = \prod_{m=1}^{M} \mathbb{1}\big[J_m(a_T, s_T) = \mathrm{accept}\big]. \tag{3}$$

Altogether, we combine these rewards as

$$r(a_T \mid s_T) = r_{\mathrm{sol}}(a_T \mid s_T) + \big(r_{\mathrm{ped}}(a_T \mid s_T) - 1\big) \cdot \lambda, \tag{4}$$

given a penalty $\lambda \geq 0$, which is a hyperparameter. The penalty is subtracted only if any of the pedagogical judges rejects the conversation ($r_{\mathrm{ped}} = 0$). Intuitively, this provides a way of trading off solution correctness, indicated by $r_{\mathrm{sol}}$, against pedagogy, measured by $r_{\mathrm{ped}}$. If we only care about solution correctness, we can choose $\lambda = 0$ but would expect low pedagogy and many answers given away by the tutor. On the other hand, if we send $\lambda \to \infty$, only pedagogy matters, which might mean that the student solves fewer problems but actually learns how to solve them. In between, various trade-offs can be explored. Finally, we also try a version called hard: if the conversation is not accepted by at least one judge ($r_{\mathrm{ped}} = 0$), the overall reward is set to a fixed penalty $-\lambda$ to reflect pedagogical acceptance as a hard prerequisite.
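To make Equations (2)-(4) concrete, the sketch below computes the combined reward for one finished conversation. It is a minimal illustration, not the paper's implementation; `sample_student_answer` and `judge_accepts` are hypothetical stand-ins for the student-model and judge calls.

```python
# Minimal sketch of the reward in Eqs. (2)-(4); `sample_student_answer` and
# `judge_accepts` are hypothetical stand-ins for the actual LLM calls.
def conversation_reward(dialog, solution, judges, lam, K=8, hard=False):
    # Eq. (2): post-dialog solve rate over K sampled student answers.
    answers = [sample_student_answer(dialog) for _ in range(K)]
    r_sol = sum(a == solution for a in answers) / K

    # Eq. (3): accept only if every judge accepts the full conversation.
    r_ped = 1.0 if all(judge_accepts(j, dialog) for j in judges) else 0.0

    if hard and r_ped == 0.0:
        return -lam  # "hard" variant: rejection overrides everything
    return r_sol + (r_ped - 1.0) * lam  # Eq. (4)
```

Note that $r_{\mathrm{ped}} - 1$ vanishes for accepted conversations, so an accepted dialog is scored purely by its solve rate.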
5 Experiments

5.1 Details on the RL Environment

Our simulated environment is designed to mimic multi-turn interactions between a student and a tutor. Each episode is seeded with the problem $P$ that the student is trying to solve. An overview of the environment and an example conversation are shown in Figure 2. The environment supports two common types of educational interaction, which differ in who starts the conversation. One option is to let the LLM student provide an attempted solution, which may be correct, incorrect, or partially correct; the tutor then continues the conversation based on this initial attempt. In the other scenario, the tutor initiates the dialog and elicits a solution from the student LLM. We sample uniformly from the two scenarios in our experiments. Furthermore, to enable the tutor model to plan and generate more targeted responses, we adopt thinking tags (OpenAI, 2024; DeepSeek-AI et al., 2025) in which the tutor can plan its response. This content is hidden from the student LLM.
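A minimal sketch of one episode of this environment follows; `tutor`, `student`, and `strip_thinking` are hypothetical helpers standing in for the LLM calls and tag handling, and the 50/50 initiator split mirrors the uniform sampling described above.

```python
import random

# Minimal sketch of one environment episode; `tutor`, `student`, and
# `strip_thinking` are hypothetical helpers, not the paper's code.
def rollout_episode(problem, tutor, student, max_turns=16):
    full_dialog = []   # tutor view: includes <think>...</think> planning
    student_view = []  # student view: planning content removed
    if random.random() < 0.5:  # student-initiated episode
        attempt = student.attempt(problem)
        full_dialog.append(("student", attempt))
        student_view.append(("student", attempt))
    for _ in range(max_turns):
        tutor_msg = tutor.respond(problem, full_dialog)
        full_dialog.append(("teacher", tutor_msg))
        student_view.append(("teacher", strip_thinking(tutor_msg)))
        if "<end_of_conversation>" in tutor_msg:
            break  # only the tutor may end the conversation
        reply = student.reply(problem, student_view)
        full_dialog.append(("student", reply))
        student_view.append(("student", reply))
    return full_dialog
```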
5.2 Dataset

We evaluate our framework on BigMath (Albalak et al., 2025), which contains multi-step math problems. The dataset is annotated with the solve rate of Llama-3.1-8B-Instruct under chain-of-thought prompting (Wei et al., 2022). We only use problems with a single numerical answer and medium-to-high difficulty, i.e., a solve rate between 1% and 60% over 64 samples. A distribution over problem difficulties is shown in Figure 4. We partition this dataset into 10,000 training samples and 500 test samples. Dataset statistics are in Appendix B. To evaluate our models, we adopt two test beds:

Held-out BigMath (in-domain): We first report results on the 500 held-out BigMath problems. This mirrors the training setting and verifies whether our RL pipeline optimizes the intended conversational rewards. Our main metrics are the ∆Solve rate (%) and Leaked Solution (%). ∆Solve rate (%) measures the improvement in the student's problem-solving success after the dialog: it is the difference between the pre-dialog solve rate, measured using chain-of-thought accuracy, and the post-dialog solve rate, both computed against the ground-truth solution $s$. Leaked Solution (%) is the portion of conversations in which the tutor gives away the solution to the student, as assessed by an LLM judge (prompt in Figure 9).

MathTutorBench (out-of-domain): We additionally evaluate on the independent MathTutorBench benchmark (Macina et al., 2025), which provides several automatic metrics for tutor quality. We mainly focus on the metrics that rely on the benchmark's learned Pedagogical Reward Model (Ped-RM), as they directly reflect the quality of scaffolding and other pedagogical best practices. Note that the Ped-RM score is used only for evaluation throughout this paper and not as part of the reward. Finally, to ensure that tutor specialization does not degrade reasoning ability, we also report performance on the general-purpose benchmarks MMLU (Hendrycks et al., 2021), GSM8K (Cobbe et al., 2021), and MATH500 (Lightman et al., 2024).

Model                              ∆Solve rate (%) ↑   Leak Solution (%) ↓   Ped-RM micro/macro ↑
Our Models
Qwen2.5-7B-RL–λ=0.0                36.2                89.5                  -2.8 / -3.2
Qwen2.5-7B-RL–λ=0.5                30.9                25.1                   2.7 /  1.5
Qwen2.5-7B-RL–λ=0.75               25.3                10.6                   3.9 /  3.2
Qwen2.5-7B-RL–λ=1.0                24.7                18.4                   3.2 /  2.2
Qwen2.5-7B-RL–λ=1.25               29.1                15.1                   3.6 /  3.1
Qwen2.5-7B-RL–λ=1.5                21.2                 5.4                   4.4 /  4.0
  + think                          17.0                 7.4                   4.9 /  4.6
Qwen2.5-7B-RL-hard–λ=1.0           12.6                 5.3                   4.2 /  3.4
  + think                          20.5                 6.9                   4.3 /  4.9
  – r_sol                           7.6                 3.4                   3.9 /  3.1
Baselines – Specialized Tutoring Models
SocraticLM                         15.9                40.4                   1.7 /  1.7
Qwen2.5-7B-SFT                      8.9                36.0                  -0.3 / -0.7
Qwen2.5-7B-MDPO                    16.4                35.6                   0.2 / -0.3
LearnLM 1.5 Pro Experimental        1.5                 2.6                   5.9 /  5.3
LearnLM 2.0 Flash Experimental      4.3                 0.9                   6.8 /  6.4
Open-Weights Models
Qwen2.5-3B-Instruct                 5.2                34.6                  -1.6 / -1.7
Qwen2.5-7B-Instruct                11.3                29.3                  -0.2 / -0.5
Qwen2.5-14B-Instruct               29.3                41.9                  -0.6 / -1.2
Qwen2.5-72B-Instruct               38.7                61.0                   1.8 / -0.4
DeepSeek V3-0324                   39.3                46.6                  -1.5 / -0.8
Closed-Source Models
GPT-4o-2024-11-20                  33.1                35.2                   1.5 / -0.3

Table 1: Main results on the in-domain test set. ∆Solve rate refers to the difference between pre- and post-dialog student solve rate. An independent model (Gemma3-27B) judges solution leakage. The Ped-RM score is used only for evaluation. Macro refers to averaging per conversation, while micro averages all individual scores.

5.3 Implementation Details

We use Group Relative Policy Optimization (GRPO) (Shao et al., 2024) for model optimization. For each problem, we simulate 8 complete student-tutor dialogs (rollouts). A single reward score reflecting student success and the pedagogical quality of the entire dialog is assigned at the end of each simulation. We follow standard GRPO and normalize each dialog reward within its group to obtain dialog-level advantages. The advantages are computed by comparing the reward of a sampled dialog with the others in its group. The dialog-level advantages are then propagated to the token level by adjusting the likelihood of generating each token. We mask the student turns so that we optimize only over tutor responses. We treat all tutor utterances equally and apply no discount factor. The maximum number of total turns is set to 16. Moreover, we use a reward for template following based on the success of DeepSeek-AI et al. (2025); see details in Appendix C. To compute $r_{\mathrm{ped}}$, we use two judge prompts, Answer Leakage (Figure 9) and Helpfulness (Figure 10), and sample twice from each.
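The group-relative credit assignment just described can be sketched as follows; the details (normalization constant, masking representation) are assumptions for illustration, not the released implementation.

```python
import numpy as np

# Minimal sketch of the GRPO credit assignment described in Section 5.3:
# rewards are normalized within a group of rollouts of the same problem, and
# every tutor token in a dialog shares that dialog's advantage (no
# discounting), while student tokens are masked out.
def dialog_advantages(group_rewards):
    r = np.asarray(group_rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)  # one scalar advantage per dialog

def token_advantages(dialog_advantage, is_tutor_token):
    # is_tutor_token: 0/1 array marking tokens that belong to tutor turns.
    return dialog_advantage * np.asarray(is_tutor_token, dtype=float)
```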
5.4 Models

We use Qwen2.5-7B-Instruct to initialize the tutor model and Llama-3.1-8B-Instruct as the student model, following the setup in BigMath (Albalak et al., 2025). As a judge, the Qwen2.5-14B-Instruct model is used. To avoid over-optimizing on the judge model used during training, a judge from another model family, Gemma3-27B, is used on the held-out test set. We compare against several tutor baselines: Qwen2.5-7B-Instruct without any fine-tuning; SocraticLM (Liu et al., 2024), a specialized open-source tutoring model; LearnLM, a specialized closed-source tutoring model; GPT-4o-2024-11-20 prompted to behave like a tutor; an SFT model that uses only conversations accepted by the judges for fine-tuning, similar to Macina et al. (2023a); and MDPO (Xiong et al., 2025), a multi-turn extension of DPO trained on all pairs of chosen and rejected conversations scored by the judges, similar to Sonkar et al. (2023) and Scarlatos et al. (2025).

Figure 3: Performance of the RL-tuned Qwen2.5-7B-Instruct across different λ values: (a) student solve rate improvement, (b) leak solution rate, (c) pedagogical reward (micro).

Model                        MMLU (5-shot) (%)   GSM8K (4-shot) (%)   MATH500 (0-shot) (%)
Qwen2.5-Math-7B-Instruct     67.2                89.3                 81.2
SocraticLM                   65.1 (-2.1)         84.4 (-4.9)          80.4 (-0.8)
Qwen2.5-7B-Instruct          77.9                86.8                 75.4
Qwen2.5-7B-RL-hard–λ=1.0     77.3 (-0.6)         86.1 (-0.7)          73.6 (-1.8)
  + think                    77.1 (-0.8)         85.3 (-1.5)          76.8 (+1.4)
Qwen2.5-7B-SFT               79.3 (+1.4)         79.5 (-7.5)          66.0 (-9.4)
Qwen2.5-7B-MDPO              78.0 (+0.1)         87.0 (+0.2)          76.4 (+1.0)

Table 2: Performance comparison of tutor models on the MMLU, GSM8K, and MATH500 benchmarks, showing the impact of different tutor alignment strategies. SocraticLM is fine-tuned from Qwen2.5-Math-7B-Instruct and exhibits performance degradation relative to the original model. In contrast, our RL models fine-tuned from Qwen2.5-7B-Instruct show reduced degradation. SFT, which applies supervised fine-tuning on data generated by our tutor pipeline, still results in noticeable degradation, highlighting the benefits of RL-based alignment.

6 Results

6.1 In-Domain Comparison

LLMs prioritize answering over teaching. Table 1 presents results across model categories on the in-domain test set. Overall, we observe a trade-off between student success, measured by ∆Solve rate, solution leakage, and pedagogical quality, measured by Ped-RM. Qwen2.5-72B-Instruct and DeepSeek V3 achieve the highest gains in student solve rate but also exhibit high solution leakage. A qualitative example reveals that these models tend to solve the problem directly for the student (see Figure 14). This supports our hypothesis that, even with engineered prompts, standard LLMs are inherently optimized for answering rather than teaching.

Tutoring models show improved pedagogy. Specialized tutoring models in Table 1, such as SocraticLM, SFT, and MDPO, demonstrate more balanced behavior, as shown by reduced solution leakage and improved pedagogical scores. However, they often also have lower student success rates, similar to the unfinetuned Qwen2.5-7B-Instruct. The specialized, proprietary tutoring model LearnLM 2.0 achieves the highest pedagogical scores while maintaining minimal leakage, indicating strong adherence to pedagogical principles. However, its low ∆Solve rate suggests that it might over-penalize leaking, which limits its effectiveness when students require more direct guidance.
Student success and pedagogy are a trade-off. Our RL framework enables dynamic control over this trade-off. As shown in Figure 3, increasing the penalty λ reduces solution leakage and improves pedagogical reward, at the cost of student success. Figure 1 shows how various settings of our framework trace a Pareto frontier between student learning gains and pedagogy. At λ = 0.75, for instance, our Qwen2.5-7B-RL model achieves balanced performance across all three metrics. When λ = 0, the model maximizes student success but does so by leaking answers and scoring negatively on pedagogy. Qualitative comparisons in Figure 11 and Figure 12 further reveal that low-pedagogical-penalty models often exploit shortcuts, such as directly stating solutions or using answer fragments (e.g., "2+3=?"), even when prompted not to do so. This highlights the importance of our framework when optimizing LLMs as tutors.

Model                   0.87-style columns: see caption for task order
Qwen2.5-7B-Instruct     0.87  0.23  0.63  0.39  0.04  0.37  0.60  0.45  0.56
Qwen2.5-7B-SFT          0.77  0.24  0.27  0.45  0.10  0.64  0.58  0.57  0.59
Qwen2.5-7B-MDPO         0.86  0.23  0.62  0.39  0.03  0.37  0.60  0.47  0.56
Qwen2.5-7B-RL–λ=0.0     0.86  0.24  0.65  0.36  0.07  0.39  0.62  0.48  0.60
Qwen2.5-7B-RL–λ=0.75    0.79  0.23  0.64  0.36  0.04  0.48  0.70  0.54  0.65
Qwen2.5-7B-RL–λ=1.25    0.83  0.23  0.67  0.35  0.05  0.57  0.72  0.61  0.69

Table 3: Results on the independent MathTutorBench benchmark with nine tasks. Columns, left to right: Problem solving (accuracy), Socratic questioning (BLEU), Solution correctness (F1), Mistake location (micro F1), Mistake correction (accuracy), and Teacher response generation as win rate over a human teacher for scaff., ped. IF, scaff. [hard], and ped. IF [hard]. Scaff. and ped. IF are the Scaffolding and Pedagogical Instruction Following tasks; [hard] refers to the hard data split of the benchmark.

Large tutoring LLMs can be matched without human annotations. Our online RL framing of the multi-turn dialog tutoring task trains tutoring models through interaction with a synthetic student, without the need for costly human annotation. It enables scalable, multi-turn optimization with control over pedagogical behavior via verifiable rewards and LLM-judge constraints. Figure 1 shows that despite using only a 7B model, our RL-tuned models (e.g., with λ = 1.5 or –r_sol) outperform the specialized closed-source LearnLM models on student solve rates while nearly matching their solution leakage. Compared to the fine-tuning baselines SFT and preference-optimization MDPO, our approach (with λ > 0) achieves lower solution leakage and a better trade-off between tutoring efficacy and student independence. This highlights the value of modeling tutoring as a multi-turn, interactive process rather than relying on static offline responses.

Thinking tags allow human observability. Table 1 shows that the ablation with thinking tags (+think) leads to slightly improved performance compared to the corresponding model without them. We observe that thinking tags allow the model to solve the problem first (Figure 13) or to plan how to explain student mistakes (Figure 15), similar to what has been shown to improve model responses in previous work (Daheim et al., 2024).

6.2 Comparison on Out-of-Domain Data

No degradation of solving capabilities. Unlike prior approaches such as SocraticLM (Liu et al., 2024), which sacrifice base-model performance in pursuit of pedagogical alignment, our method preserves reasoning abilities across standard benchmarks. As shown in Table 2, Qwen2.5-7B-RL matches or slightly exceeds the performance of its base model (Qwen2.5-7B-Instruct).
In contrast, SocraticLM, which is fine-tuned from the Math version of Qwen, degrades performance. Similarly, supervised fine-tuning (SFT) results in a decrease on math-heavy benchmarks (-7.5% on GSM8K, -9.4% on MATH500). These findings demonstrate that RL-based alignment better preserves core reasoning skills, avoiding the trade-off between pedagogical behaviour and task competence.

Out-of-domain tutoring benchmark. Table 3 shows the evaluation of our models on the out-of-domain MathTutorBench benchmark (Macina et al., 2025), which assesses tutoring ability on nine tasks and uses the Ped-RM to compute win rates over human teachers. Our RL-aligned 7B models match or exceed the pedagogical quality of the baseline models. However, SFT remains a strong baseline on the Mistake location and Mistake correction tasks, highlighting the need to carefully combine SFT and RL to build robust tutoring models in the future.

7 Conclusion

In this work, we propose methods to quickly align LLMs for pedagogy using reinforcement learning. Our method does not require human annotations beyond the initial problem statements and trains on the model's own context, which reduces train-test mismatch. The rewards allow balancing student solving accuracy against pedagogy, which requires strategically withholding information, whereas accuracy alone could trivially be increased by the tutor leaking the solution. We find that smaller models trained with this approach can match large, proprietary models on various tutoring metrics.

Limitations

Our online RL approach introduces additional complexity compared to simpler SFT or single-turn pairwise preference methods such as DPO. In particular, as known from other RL tasks, the use of model rollouts to simulate interactions with a student introduces variance and can make training unstable or sample-inefficient. Careful implementation is required to maintain stability.

Our current reward design focuses on conversation-level rewards, for example, enabling a focus on longer-term post-dialog student success. However, truly learning a topic is measured with a delayed post-test of student transfer, i.e., the ability to transfer the learned topic over time. Future work could focus on such a more precise but much more delayed signal.

All experiments focus on math-based tutoring tasks. While math is a valuable testbed with sufficient existing datasets, it represents only one STEM subject.

Our approach trains tutoring models using interactions with a single student model, which may not reflect the diversity of real learners. Incorporating additional student models and different student personas in the prompt could lead to more realistic settings that better represent the diversity of real learners and their misconceptions.

All student responses and reward signals in our framework are generated synthetically by sampling from LLMs. While this enables scalable and cost-efficient training, it has not been validated with real students; future work could explore this, for example, the impact of the trade-off between student success and pedagogy.

Ethics Statement

Intended Usage. We will release the code under the CC-BY-4.0 license. We use the BigMath, GSM8K, and MATH500 datasets released under the MIT license, the MathTutorBench benchmark released under CC-BY-4.0, and MMLU under the Apache License 2.0. We use all of the datasets within their intended usage.
Potential Misuse. The overall goal of this work is to support the community in improving the tutoring capabilities of LLMs and aligning them with good pedagogical practice grounded in the learning sciences. However, there are potential risks related to the reward function and reward hacking. If the reward function is redefined or an inappropriate penalty is used, the model might learn suboptimal tutoring behaviour. Similarly, if the reward function is underspecified, there is a risk of the model hacking the reward and finding shortcuts. We mitigate this by including several datasets and evaluation setups. Moreover, we share the code, hyperparameters, and the setup openly. Nevertheless, we emphasize caution, safeguards, and proper user testing before deploying the model with real students.

Acknowledgements

This work was supported in part by the Swiss AI Initiative under a project (ID a04) on AI for Education. We thank Yilmazcan Ozyurt for valuable feedback and discussions. Jakub Macina acknowledges funding from the ETH AI Center Doctoral Fellowship, Asuera Stiftung, and the ETH Zurich Foundation. This work has been funded by the LOEWE Distinguished Chair "Ubiquitous Knowledge Processing", LOEWE initiative, Hesse, Germany (Grant Number: LOEWE/4a//519/05/00.002(0002)/81) and by the State of Hesse, Germany, as part of the project "LLMentor: Expert-AI Coteaching of 'Introduction to Scientific Work'" (Connectom Networking and Innovation Fund).

References

Alon Albalak, Duy Phung, Nathan Lile, Rafael Rafailov, Kanishk Gandhi, Louis Castricato, Anikait Singh, Chase Blagden, Violet Xiang, Dakota Mahan, and Nick Haber. 2025. Big-math: A large-scale, high-quality math dataset for reinforcement learning in language models. Preprint, arXiv:2502.17387.

Yuri Chervonyi, Trieu H. Trinh, Miroslav Olšák, Xiaomeng Yang, Hoang Nguyen, Marcelo Menegali, Junehyuk Jung, Vikas Verma, Quoc V. Le, and Thang Luong. 2025. Gold-medalist performance in solving olympiad geometry with AlphaGeometry2. Preprint, arXiv:2502.03544.

Alexis Chevalier, Jiayi Geng, Alexander Wettig, Howard Chen, Sebastian Mizera, Toni Annala, Max Aragon, Arturo Rodriguez Fanlo, Simon Frieder, Simon Machado, Akshara Prabhakar, Ellie Thieu, Jiachen T. Wang, Zirui Wang, Xindi Wu, Mengzhou Xia, Wenhan Xia, Jiatong Yu, Junjie Zhu, and 3 others. 2024. Language models as science tutors. In Forty-first International Conference on Machine Learning.

Michelene T. H. Chi and Ruth Wylie. 2014. The ICAP framework: Linking cognitive engagement to active learning outcomes. Educational Psychologist, 49(4):219–243.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.

Thomas Coste, Usman Anwar, Robert Kirk, and David Krueger. 2024. Reward model ensembles help mitigate overoptimization. In The Twelfth International Conference on Learning Representations.

Nico Daheim, Jakub Macina, Manu Kapur, Iryna Gurevych, and Mrinmaya Sachan. 2024. Stepwise verification and remediation of student reasoning errors with large language model tutors. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 8386–8411, Miami, Florida, USA. Association for Computational Linguistics.

DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou,
Zhihong Shao, Zhuoshu Li, Ziyi Gao, and 181 others. 2025. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. Preprint, arXiv:2501.12948.

Tim Dettmers, Mike Lewis, Sam Shleifer, and Luke Zettlemoyer. 2022. 8-bit optimizers via block-wise quantization. In International Conference on Learning Representations.

Scott Freeman, Sarah L. Eddy, Miles McDonough, Michelle K. Smith, Nnadozie Okoroafor, Hannah Jordt, and Mary Pat Wenderoth. 2014. Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences, 111(23):8410–8415.

Zhaolin Gao, Wenhao Zhan, Jonathan Daniel Chang, Gokul Swamy, Kianté Brantley, Jason D. Lee, and Wen Sun. 2025. Regressing the relative future: Efficient policy optimization for multi-turn RLHF. In The Thirteenth International Conference on Learning Representations.

Adit Gupta, Jennifer Reddig, Tommaso Calo, Daniel Weitekamp, and Christopher J. MacLellan. 2025. Beyond final answers: Evaluating large language models for math tutoring. Preprint, arXiv:2503.16460.

Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding. In International Conference on Learning Representations.

Irina Jurenka, Markus Kunesch, Kevin R. McKee, Daniel Gillick, Shaojian Zhu, Sara Wiltberger, Shubham Milind Phal, Katherine Hermann, Daniel Kasenberg, Avishkar Bhoopchand, and 1 others. 2024. Towards responsible development of generative AI for education: An evaluation-driven approach. arXiv preprint arXiv:2407.12687.

Soonwoo Kwon, Sojung Kim, Minju Park, Seunghyun Lee, and Kyuseok Kim. 2024. BIPED: Pedagogically informed tutoring system for ESL education. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3389–3414, Bangkok, Thailand. Association for Computational Linguistics.

Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with PagedAttention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles.

Nathan Lambert, Jacob Morrison, Valentina Pyatkin, Shengyi Huang, Hamish Ivison, Faeze Brahman, Lester James V. Miranda, Alisa Liu, Nouha Dziri, Shane Lyu, and 1 others. 2024. Tülu 3: Pushing frontiers in open language model post-training. arXiv preprint arXiv:2411.15124.

Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, and Jason Weston. 2017. Learning through dialogue interactions by asking questions. In International Conference on Learning Representations.

Yubo Li, Xiaobin Shen, Xinyu Yao, Xueying Ding, Yidi Miao, Ramayya Krishnan, and Rema Padman. 2025. Beyond single-turn: A survey on multi-turn interactions with large language models. arXiv preprint arXiv:2504.04717.

Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2024. Let's verify step by step. In The Twelfth International Conference on Learning Representations.

Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Wei-Ming Chen, Wei-Chen Wang, Guangxuan Xiao, Xingyu Dang, Chuang Gan, and Song Han. 2024. AWQ: Activation-aware weight quantization for on-device LLM compression and acceleration. In MLSys.
Jiayu Liu, Zhenya Huang, Tong Xiao, Jing Sha, Jinze Wu, Qi Liu, Shijin Wang, and Enhong Chen. 2024. SocraticLM: Exploring
socratic personalized teaching with large language models. In The Thirty-eighth Annual Conference on Neural Information Processing Systems.

Jakub Macina, Nico Daheim, Sankalan Chowdhury, Tanmay Sinha, Manu Kapur, Iryna Gurevych, and Mrinmaya Sachan. 2023a. MathDial: A dialogue tutoring dataset with rich pedagogical properties grounded in math reasoning problems. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 5602–5621, Singapore. Association for Computational Linguistics.

Jakub Macina, Nico Daheim, Ido Hakimi, Manu Kapur, Iryna Gurevych, and Mrinmaya Sachan. 2025. MathTutorBench: A benchmark for measuring open-ended pedagogical capabilities of LLM tutors. Preprint, arXiv:2502.18940.

Jakub Macina, Nico Daheim, Lingzhi Wang, Tanmay Sinha, Manu Kapur, Iryna Gurevych, and Mrinmaya Sachan. 2023b. Opportunities and challenges in neural dialog tutoring. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 2357–2372, Dubrovnik, Croatia. Association for Computational Linguistics.

Kaushal Kumar Maurya, Kv Aditya Srivatsa, Kseniia Petukhova, and Ekaterina Kochmar. 2025. Unifying AI tutor evaluation: An evaluation taxonomy for pedagogical ability assessment of LLM-powered AI tutors. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 1234–1251, Albuquerque, New Mexico. Association for Computational Linguistics.

OpenAI. 2024. Learning to reason with LLMs. https://openai.com/index/learning-to-reason-with-llms/. Accessed 19-09-2024.

Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, and 1 others. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744.

Romain Puech, Jakub Macina, Julia Chatain, Mrinmaya Sachan, and Manu Kapur. 2025. Towards the pedagogical steering of large language models for tutoring: A case study with modeling productive failure. Preprint, arXiv:2410.03781.

Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D. Manning, Stefano Ermon, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36:53728–53741.

Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings.

Stephane Ross and Drew Bagnell. 2010. Efficient reductions for imitation learning. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pages 661–668, Chia Laguna Resort, Sardinia, Italy. PMLR.

Khaled Saab, Tao Tu, Wei-Hung Weng, Ryutaro Tanno, David Stutz, Ellery Wulczyn, Fan Zhang, Tim Strother, Chunjong Park, Elahe Vedadi, Juanma Zambrano Chaves, Szu-Yeu Hu, Mike Schaekermann, Aishwarya Kamath, Yong Cheng, David G. T. Barrett, Cathy Cheung, Basil Mustafa, Anil Palepu, and 48 others. 2024. Capabilities of Gemini models in medicine. Preprint, arXiv:2404.18416.

Alexander Scarlatos, Naiming Liu, Jaewook Lee, Richard Baraniuk, and Andrew Lan. 2025. Training LLM-based tutors to improve student learning outcomes in dialogues. arXiv preprint arXiv:2503.06424.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford,
and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.

Lior Shani, Aviv Rosenberg, Asaf Cassel, Oran Lang, Daniele Calandriello, Avital Zipori, Hila Noga, Orgad Keller, Bilal Piot, Idan Szpektor, Avinatan Hassidim, Yossi Matias, and Remi Munos. 2024. Multi-turn reinforcement learning with preference human feedback. In The Thirty-eighth Annual Conference on Neural Information Processing Systems.

Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. 2024. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models. Preprint, arXiv:2402.03300.

Kumar Shridhar, Jakub Macina, Mennatallah El-Assady, Tanmay Sinha, Manu Kapur, and Mrinmaya Sachan. 2022. Automatic generation of socratic subquestions for teaching math word problems. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 4136–4149, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Shashank Sonkar, Naiming Liu, Debshila Mallick, and Richard Baraniuk. 2023. CLASS: A design framework for building intelligent tutoring systems based on learning science principles. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 1941–1961, Singapore. Association for Computational Linguistics.

Shashank Sonkar, Kangqi Ni, Sapana Chaudhary, and Richard Baraniuk. 2024. Pedagogical alignment of large language models. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 13641–13650, Miami, Florida, USA. Association for Computational Linguistics.

Anaïs Tack and Chris Piech. 2022. The AI teacher test: Measuring the pedagogical ability of Blender and GPT-3 in educational dialogues. In Proceedings of the 15th International Conference on Educational Data Mining, pages 522–529, Durham, United Kingdom. International Educational Data Mining Society.

LearnLM Team, Abhinit Modi, Aditya Srikanth Veerubhotla, Aliya Rysbek, Andrea Huber, Brett Wiltshire, Brian Veprek, Daniel Gillick, Daniel Kasenberg, Derek Ahmed, Irina Jurenka, James Cohan, Jennifer She, Julia Wilkowski, Kaiz Alarakyia, Kevin R. McKee, Lisa Wang, Markus Kunesch, Mike Schaekermann, and 27 others. 2024. LearnLM: Improving Gemini for learning. Preprint, arXiv:2412.16429.

Leandro von Werra, Younes Belkada, Lewis Tunstall, Edward Beeching, Tristan Thrush, Nathan Lambert, Shengyi Huang, Kashif Rasul, and Quentin Gallouédec. 2020. TRL: Transformer reinforcement learning. https://github.com/huggingface/trl.

Junling Wang, Jakub Macina, Nico Daheim, Sankalan Pal Chowdhury, and Mrinmaya Sachan. 2024a. Book2Dial: Generating teacher student interactions from textbooks for cost-effective development of educational chatbots. In Findings of the Association for Computational Linguistics: ACL 2024, pages 9707–9731, Bangkok, Thailand. Association for Computational Linguistics.

Rose Wang, Qingyang Zhang, Carly Robinson, Susanna Loeb, and Dorottya Demszky. 2024b. Bridging the novice-expert gap via models of decision-making: A case study on remediating math mistakes. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 2174–2199, Mexico City, Mexico. Association for Computational Linguistics.

Zihan Wang, Kangrui Wang, Qineng Wang, Pingyue Zhang, Linjie Li, Zhengyuan Yang, Kefan Yu, Minh Nhat Nguyen, Licheng Liu, Eli Gottlieb, Monica Lam, Yiping Lu, Kyunghyun Cho, Jiajun Wu, Li Fei-Fei, Lijuan Wang, Yejin Choi, and
Manling Li. 2025. RAGEN: Understanding self-evolution in LLM agents via multi-turn reinforcement learning. Preprint, arXiv:2504.20073.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems.

Wei Xiong, Chengshuai Shi, Jiaming Shen, Aviv Rosenberg, Zhen Qin, Daniele Calandriello, Misha Khalman, Rishabh Joshi, Bilal Piot, Mohammad Saleh, Chi Jin, Tong Zhang, and Tianqi Liu. 2025. Building math agents with multi-turn iterative preference learning. In The Thirteenth International Conference on Learning Representations.

A Implementation Details

A.1 Compute Resources

All GRPO runs were conducted using 4 A100 80GB GPUs over approximately 48 hours per run. Each run covered roughly 20% of the training data and involved around 300 policy updates. At an estimated cost of $2 per GPU hour, each full RL training run costs approximately $400.

A.2 Configuration

We adapt the standard GRPOTrainer from the TRL library (von Werra et al., 2020) to support our multi-agent tutor-student interaction setting. For each problem instance P, we randomly select one of the two tutoring scenarios supported by our environment, either student-initiated or tutor-initiated, and apply it uniformly across all rollouts in the corresponding batch. To compute the student solve rate, we set K = 8. All dialog rollouts start from an empty dialog history with only the problem P as input. The key hyperparameters are:

• Learning rate: $5 \times 10^{-7}$
• KL coefficient: $\beta = 0.001$
• Gradient steps per batch: $\mu = 2$
• Batch size: 16 problems per batch, each with 8 rollouts
• Sampling temperature: $T = 1.0$

We use the paged_adamw_8bit optimizer (Dettmers et al., 2022) to reduce memory usage.

A.3 Baselines: SFT and MDPO

To generate data for the MDPO and SFT baselines, we sample 30% of the full dataset and generate 8 rollouts (conversations) $D = (u_1, \ldots, u_T)$ per problem. For MDPO, we construct within-group preference pairs $(D_{\mathrm{acc}}, D_{\mathrm{rej}})$ such that $r(D_{\mathrm{acc}}) > r(D_{\mathrm{rej}})$, resulting in 36k preference pairs; a minimal sketch of this construction is given below, after A.4. For SFT, we filter the MDPO data to keep only accepted responses, remove duplicates, and obtain approximately 14k accepted samples.

Training hyperparameters for the baselines:

• SFT: batch size 16, learning rate $2 \times 10^{-5}$, trained for 1 epoch
• MDPO: batch size 32, learning rate $2 \times 10^{-7}$, trained for 1 epoch (all settings follow the original MDPO paper (Xiong et al., 2025))

A.4 Inference and Quantization

To enable efficient tutor-student-judge simulation at scale, we serve all models through the vLLM library (Kwon et al., 2023), which enables fast batched decoding with KV caching. To reduce the memory footprint and inference latency, we also employ quantization. The student model is quantized using FP8, enabling fast inference without noticeably degrading performance. The judge model is quantized using 4-bit Activation-aware Weight Quantization (AWQ) (Lin et al., 2024), significantly reducing compute cost.
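The within-group preference-pair construction in A.3 can be sketched as follows; this is a minimal illustration with an assumed data representation (a list of (dialog, reward) rollouts per problem), not the released code.

```python
from itertools import combinations

# Minimal sketch of the within-group preference-pair construction in A.3;
# `rollouts` is a list of (dialog, reward) tuples for one problem.
def build_preference_pairs(rollouts):
    pairs = []
    for (d1, r1), (d2, r2) in combinations(rollouts, 2):
        if r1 > r2:
            pairs.append({"chosen": d1, "rejected": d2})
        elif r2 > r1:
            pairs.append({"chosen": d2, "rejected": d1})
        # ties carry no preference signal and are skipped
    return pairs
```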
B Dataset Details

Details of the subset of BigMath (Albalak et al., 2025) we use are in Table 4. Figure 4 shows the distribution of difficulties in our dataset. On average, our student model Llama-3.1-8B-Instruct achieves an accuracy of 25%, while the tutor model Qwen2.5-7B-Instruct achieves 66% on the dataset.

Figure 4: Distribution of problem difficulties in our dataset (solve-rate buckets obtained with Llama-3.1-8B-Instruct; number of problems per bucket: 1-10%: 4352, 11-20%: 2022, 21-30%: 1802, 31-40%: 1264, 41-50%: 1319, 51-60%: 1186). The dataset contains mostly hard (1-10% solve rate) problems. This ensures each item requires meaningful guidance from the tutor model rather than being trivial for our student model.

C Template Reward

In addition to the primary pedagogical and correctness rewards, we incorporate several template-based auxiliary rewards inspired by prior work (DeepSeek-AI et al., 2025). These rewards encourage structured and concise tutor interactions and penalize incorrect use of format tags and conversation mechanics.

C.1 Thinking Tag Usage Reward

To promote transparent and interpretable internal reasoning by the tutor, we explicitly reward correctly formatted thinking tags. Each tutor turn can include structured reasoning enclosed within tags of the format <think>...</think>. We compute the reward as follows:

$$r_{\mathrm{think}}(a_T \mid s_T) = c \times \frac{|\{u_i \in D \mid u_i \text{ has correct tags}\}|}{|\{u_i \in D\}|},$$

where the $u_i$ are individual tutor utterances and $c$ is a constant that we set to 0.5. Correct formatting means that tags are both opened and properly closed without structural errors.

C.2 Penalty for Incorrect Thinking Tag Formatting

To enforce correct thinking-tag formatting and ensure structured output, we penalize the model for each incorrectly formatted or unclosed thinking tag:

$$p_{\mathrm{misuse}}(a_T \mid s_T) = c \times (\text{\# of wrong tags in } D).$$

This includes scenarios where a thinking tag is opened but not closed, or a thinking tag is malformed or incorrectly structured.

C.3 End-of-Conversation Reward

To encourage the tutor model to conclude dialogs efficiently and naturally, we reward explicit use of the special termination tag ⟨end_of_conversation⟩. Only the tutor is permitted to terminate the conversation by generating this special token. The reward is defined as:

$$r_{\mathrm{end}}(a_T \mid s_T) = \begin{cases} 0.1, & \text{if the dialog is ended early} \\ 0, & \text{otherwise.} \end{cases}$$

This incentivizes concise, purposeful interactions and discourages overly long dialogs.

C.4 Penalty for Exceeding Max Tokens per Turn

We set a maximum number of tokens allowed per tutor turn. If any tutor turn exceeds this limit (thus failing to generate the EOS token within the maximum length), we apply a fixed penalty:

$$p_{\mathrm{len}}(a_T \mid s_T) = \begin{cases} 0.5, & \text{if no EOS token is generated} \\ 0, & \text{otherwise.} \end{cases}$$

This penalty ensures the tutor generates concise and complete responses without truncation, promoting conversational coherence.

Training Set                              Test Set
Dataset       Samples   Solve Rate (%)    Dataset       Samples   Solve Rate (%)
Big_math      3360      23.56             Big_math      177       24.86
Cn_k12        3324      22.11             Cn_k12        168       22.34
Math          1264      27.40             Math          57        23.93
Aops_forum    1263      10.13             Aops_forum    56        10.07
Omnimath      374       12.57             Omnimath      22        15.41
Openmath      315       38.18             Openmath      13        36.30
Gsm8k         100       36.30             Gsm8k         7         32.14
Total         10,000                      Total         500

Table 4: Composition of the training and test datasets with student-model (pre-dialog) solve rates.

C.5 Combined Template Reward

The combined auxiliary reward incorporating all these components is:

$$r_{\mathrm{templ}}(a_T \mid s_T) = r_{\mathrm{think}}(a_T \mid s_T) + r_{\mathrm{end}}(a_T \mid s_T) - p_{\mathrm{misuse}}(a_T \mid s_T) - p_{\mathrm{len}}(a_T \mid s_T).$$
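A minimal sketch of how the components in C.1-C.5 combine is given below; the tag-validity checks and the token-count input are simplified assumptions, not the paper's implementation.

```python
import re

# Minimal sketch of the combined template reward of Appendix C; the
# tag-validity checks are simplified approximations of the paper's rules.
# Constants follow the text (c = 0.5, end bonus 0.1, truncation penalty 0.5).
THINK_RE = re.compile(r"<think>.*?</think>", re.DOTALL)

def template_reward(tutor_turns, turn_token_counts, max_tokens, ended_early):
    c = 0.5
    correct = sum(
        1 for t in tutor_turns
        if THINK_RE.search(t) and t.count("<think>") == t.count("</think>")
    )
    wrong = sum(abs(t.count("<think>") - t.count("</think>")) for t in tutor_turns)
    r_think = c * correct / max(len(tutor_turns), 1)                          # C.1
    p_misuse = c * wrong                                                      # C.2
    r_end = 0.1 if ended_early else 0.0                                       # C.3
    p_len = 0.5 if any(n >= max_tokens for n in turn_token_counts) else 0.0   # C.4
    return r_think + r_end - p_misuse - p_len                                 # C.5
```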
D Prompts

The pre-dialog solution by a student is computed using the prompt in Figure 5 and the post-dialog solution using the prompt in Figure 6. The student and tutor system prompts used during a conversation are in Figure 7 and Figure 8. The exact prompt for judging whether the teacher leaked the solution is in Figure 9, and Figure 10 shows the prompt for judging the helpfulness of the tutor response.

E Example Conversations

Examples of conversations from our model are in Figure 11, Figure 12, Figure 13, Figure 14, and Figure 15.

Please reason step by step, and put your final answer within \boxed{}.
{{ problem }}

Figure 5: Prompt for pre-dialog student solution, where problem is a placeholder for a math problem.

{{ conversation }}
The conversation with the teacher has ended. It is time you create a step by step complete solution including everything. It is essential you include the final answer in the \boxed{} format. You will not miss any steps such that this solution is complete without the past conversation.

Figure 6: Prompt for post-dialog student solution, where conversation is a placeholder for the simulated tutor-student conversation.

You will act as a student in a conversation with a teacher in training. You will need to act as much like a student as possible. If possible do not respond with overly long messages. The conversation with the teacher will be about this math problem:
{{ problem }}
You may or may not know how to solve it already, let the teacher guide you to the correct understanding. You will be tested at the end and scored thus it is best if you collaborate with the teacher as it has more experience in math than you.

Figure 7: A student system prompt used in a dialog with a teacher.

You are tasked with being a teacher and helping a student with a math problem. You must not reveal the answer to the problem to the student at any point in time. Your task is to guide the student to have a complete understanding of the problem. Even if the student is already able to solve the problem, you should help them understand and improve the solution so that they get as high of a grade as possible. If possible, do not respond with overly long responses to the student. You can end a conversation by writing <end_of_conversation>, please try to end conversations as soon as they are finished instead of prolonging them if not needed. But do not end them prematurely either. Here is the math problem:
{{ problem }}

Figure 8: A teacher system prompt used during a simulated conversation.
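The prompts in Figures 5-8 contain {{ problem }} and {{ conversation }} placeholders. The paper does not specify its templating mechanism, so the sketch below assumes plain string replacement for illustration.

```python
# Minimal sketch of filling the placeholders in Figures 5-8; plain string
# replacement is an assumption, not the paper's templating engine.
PRE_DIALOG_PROMPT = (
    "Please reason step by step, and put your final answer within \\boxed{}.\n"
    "{{ problem }}"
)

def render(template: str, **fields: str) -> str:
    for name, value in fields.items():
        template = template.replace("{{ " + name + " }}", value)
    return template

# Example: render(PRE_DIALOG_PROMPT, problem="What is 12 * 7?")
```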
You are tasked with inspecting a conversation between a student and a teacher (tutor). Your goal is to determine whether the teacher properly guided the student to the solution or whether they revealed too much information too early, such as directly giving away the final answer or solving key steps of the problem without involving the student.

The teacher should:
- Guide the student interactively, using questions, hints, or feedback.
- Encourage the student to reason through the problem themselves.
- Avoid solving major parts of the problem immediately or presenting large chunks of the final solution too early.
- Be allowed to confirm correct reasoning, correct mistakes, and clarify concepts gradually.
- Clarify certain parts in more detail if the student does not get it.

It is **not acceptable** for the teacher to:
- Provide the full answer upfront or within the first message.
- Reveal all key steps in a single message without student involvement.
- Walk through the entire solution without engaging the student meaningfully.

## **Examples of correct and incorrect guidance**:

### Example 1 – Good Guidance (Integration by parts):
- Student: How do I integrate x * e^(2x)?
- Teacher: Good question! Can you think of an integration technique useful when integrating a product of functions?
- Student: Integration by parts?
- Teacher: Exactly. Can you identify what you'd set as 'u' and what as 'dv' here?
- Student: u = x and dv = e^(2x) dx?
- Teacher: Nice job. Can you find du and v from here?

{ "reasoning": "The tutor carefully guided the student through recalling and applying the integration by parts technique without revealing key intermediate or final results prematurely.", "decision": "OK" }

[1 more reject example]

### Example 3 – Immediate Final Answer (Limit computation):
- Student: How can I compute the limit as x → 0 of sin(5x)/x?
- Teacher: The limit is simply \boxed{5}.

{ "reasoning": "The tutor immediately gave away the answer without encouraging the student to recall or reason about the relevant limit properties.", "decision": "REJECT" }

[1 more ok example]

## **Evaluate the following conversation**:
==BEGIN CONVERSATION==
[conversation]
==END CONVERSATION==

Please output a JSON evaluation following exactly this schema:
{ "reasoning": "your explanation here", "decision": "OK or REJECT" }

Now write the JSON evaluation for the conversation above, make sure the JSON is valid:

Figure 9: Prompt for judging whether the tutor leaked the answer.

Your task is to inspect a conversation between a **student** and a **teacher**. Evaluate the **style and appropriateness** of the teacher's messages, ensuring the conversation is realistic, natural, and educationally effective.

### **Acceptable Teacher Style**:
- Messages are concise and easy to understand.
- Teacher patiently and respectfully engages the student.
- **Most of the talking is done by the student**, with the teacher primarily responding or briefly clarifying.
- Conversation feels natural and believable (e.g., conversational tone, realistic student-teacher interaction).

### **You must REJECT the conversation if any of these occur**:
- **Overly Long Messages**: Teacher messages that are excessively long, detailed, or overwhelming with unnecessary information. *(Example: Teacher sends multiple paragraphs of explanation when a brief, clear sentence would suffice.)*
- **Teacher Dominates Conversation**: The teacher speaks significantly more than the student or overly controls the conversation. *(Example: Teacher repeatedly interrupts or unnecessarily elaborates when the student could continue.)*
- **Language Mixing/Switching**: The conversation must remain entirely in **English**. Switching or mixing languages, even briefly, is not allowed. *(Example: Teacher says something partially or completely in Spanish.)*
- **Unrealistic/Unnatural Interaction**: Conversation doesn't feel believable between a real teacher and student (awkward phrasing, unnatural dialog, abrupt changes in tone). *(Example: Student abruptly discusses unrelated personal issues mid-explanation.)*
- **Incomplete or Empty**: Teacher sends incomplete, empty, or abruptly cut-off messages. *(Example: Teacher's message suddenly ends mid-sentence.)*

## **Examples:**

### **Good Example (OK)**:
- Student: I don't get the second step of this solution.
- Teacher: Could you tell me exactly which part confuses you?
- Student: It's the part where you simplify the fraction.
- Teacher: Sure! Let's focus on that simplification. Do you remember how we reduce fractions?

### **Bad Example (REJECT - Overly Long)**:
- Student: I'm confused about simplifying fractions.
- Teacher: Simplifying fractions involves finding the greatest common divisor (GCD) of the numerator and denominator and dividing both by it. For example, if you have 12/16, the greatest common divisor is 4, so dividing numerator and denominator by 4 gives you 3/4. Another example is 10/20; the GCD here is 10, so this reduces to 1/2. You must always look carefully to identify the largest divisor...
*(Reason: Excessively long and detailed explanation.)*

### **Bad Example (REJECT - Language Mixing)**:
- Student: Should I divide both sides next?
- Teacher: Yes, exactly! Divide ambos lados por dos.
*(Reason: Mixing English and Spanish.)*

Carefully inspect the conversation below based **only** on the style, conciseness, language consistency, realism, and appropriateness outlined above.

**Conversation to evaluate**:
==BEGIN CONVERSATION==
[conversation]
==END CONVERSATION==

Please output a JSON evaluation following exactly this schema:
{ "reasoning": "your explanation here", "decision": "OK or REJECT" }

Now write the JSON evaluation for the conversation above, make sure the JSON is valid:

Figure 10: Prompt for judging helpfulness, which covers constructive support and teacher tone.
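Both judge prompts request a strict JSON object with "reasoning" and "decision" fields. A minimal sketch of turning such a completion into the binary accept signal used by $r_{\mathrm{ped}}$ follows; the extraction heuristics are assumptions, not the paper's code.

```python
import json

# Minimal sketch of parsing a judge completion (Figures 9-10) into the
# binary accept signal; the brace-extraction heuristic is an assumption.
def judge_accepts(judge_output: str) -> bool:
    start, end = judge_output.find("{"), judge_output.rfind("}")
    if start == -1 or end == -1:
        return False  # malformed output counts as a rejection
    try:
        verdict = json.loads(judge_output[start : end + 1])
    except json.JSONDecodeError:
        return False
    return verdict.get("decision", "").strip().upper() == "OK"
```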
Good Example: Teacher Guides Student (Qwen2.5-7B-RL, λ = 0.75)
Problem: A student uses a calculator to find an answer but instead of pressing the $x^2$ key, presses the $\sqrt{x}$ key by mistake. The student's answer was 9. What should the answer have been if the student had pressed the correct key? Express your answer as a whole number.
Teacher: Sam, let's think about this step-by-step. The student pressed the $\sqrt{x}$ key and got 9. What does this tell us about the number they originally entered?
Student: So if the student pressed the $\sqrt{x}$ key and got 9, that means that the number they entered must be a perfect square because when you take the square root of a perfect square, you get the original number back.
Teacher: Very good, Sam! You're on the right track. If pressing $\sqrt{x}$ gave them 9, then the original number must be $9^2$. Can you calculate what $9^2$ is?
Student: $9^2$ is 81.
Teacher: Exactly! So if the student had pressed the $x^2$ key instead of $\sqrt{x}$, the correct answer would have been 81. But remember, we're not giving the answer away; we want you to figure it out yourself. So, what should the answer have been if the student had pressed the correct key?
Student: So the answer should be the original number they entered, which is... 81.
Teacher: Perfect, Sam! You've figured it out on your own. The answer should have been 81 if the student had pressed the $x^2$ key instead of $\sqrt{x}$. Great job working through this problem and understanding the concept!

Figure 11: Good Example: Teacher guides the student without directly giving the answer.

Bad Example: Teacher Solves the Problem Fully (Qwen2.5-7B-RL, λ = 0.0)
Problem: Calculate the sum of all values of $x$ that satisfy the equation $(x^2 - 5x + 5)^{(x^2 + 4x - 60)} = 1$. Consider the conditions for $a^b = 1$ and provide the exact sum of valid $x$ values.
Teacher: Hello! I'm glad you're working on this problem. Let's break it down together. We have the equation $(x^2 - 5x + 5)^{x^2 + 4x - 60} = 1$. For $a^b = 1$, there are three main cases to consider:
• $a = 1$ (for any $b$)
• $a = -1$ and $b$ is even
• $a \neq 0$ and $b = 0$
Let's apply these cases to our equation.
Case 1: $x^2 - 5x + 5 = 1$. Solve: $x^2 - 5x + 4 = 0 \Rightarrow (x - 1)(x - 4) = 0$. So, $x = 1$ or $x = 4$.
Case 2: [... many rows omitted]
Sum of valid $x$ values: $1 + 4 + 2 + (-10) + 6 = 3$. So, the sum of all valid $x$ values is 3.

Figure 12: Bad Example: Teacher explains too much and gives the full solution.

Guided Reasoning with Thinking (Qwen2.5-7B-RL-hard, λ = 1.0, +think)
Problem: Given vectors $\vec{a} = (x - 1, 2)$ and $\vec{b} = (4, y)$, if $\vec{a} \perp \vec{b}$, then the minimum value of $9^x + 3^y$ is:
Teacher: Thinking: Given that the vectors are perpendicular, their dot product is zero. So we can set up the equation $(x - 1) \cdot 4 + 2 \cdot y = 0$. From there, we can express one variable in terms of the other and then substitute into $9^x + 3^y$. This gives a single-variable function we can minimize.
Can you start by finding a relationship between $x$ and $y$ using the fact that the vectors are perpendicular? Remember, for two vectors to be perpendicular, their dot product must be zero.
[conversation continues]

Figure 13: Example with structured reasoning and no solution leak.
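Figure 13 shows the tutor's planning content that, per Section 5.1, is never shown to the student. A minimal sketch of that hiding step follows; the regex handling is an assumption about how the tags are stripped.

```python
import re

# Minimal sketch of hiding the tutor's planning from the student
# (Section 5.1, illustrated in Figure 13): everything inside
# <think>...</think> is removed before the student LLM sees the utterance.
THINK_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def strip_thinking(tutor_utterance: str) -> str:
    return THINK_RE.sub("", tutor_utterance).strip()

# strip_thinking("<think>plan: ask for the dot product</think>Can you start...")
# -> "Can you start..."
```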
Bad Example: Teacher Solves the Problem Fully (DeepSeek-V3-0324)
Problem: A student uses a calculator to find an answer but instead of pressing the $x^2$ key presses the $\sqrt{x}$ key by mistake. The student's answer was 9. What should the answer have been if the student had pressed the correct key? Express your answer as a whole number.
Teacher: Alright, let's break this problem down step by step. First, the student meant to press the $x^2$ key (which squares a number), but instead pressed the $\sqrt{x}$ key (which takes the square root of a number). The incorrect answer they got was 9. So, the student took the square root of some number and got 9. Let's call the original number $x$. That means: $\sqrt{x} = 9 \Rightarrow x = 81$. Now, the student was supposed to square this number, so: $x^2 = 81^2 = 6561$. But wait, that seems like a very large number. Maybe I misunderstood. Let me think differently. The student intended to square a number, say $y$, but instead took its square root and got 9. $\sqrt{y} = 9 \Rightarrow y = 81 \Rightarrow y^2 = 6561$. [... many lines omitted] Final answer: 6561
Student: .. so the correct answer is 6561. I think I understand now. If the student pressed the $\sqrt{x}$ key by mistake and got 9, it means that the original number was actually 81, and if they had pressed the correct key, they would have gotten 6561.
Teacher: Exactly! You've got it. The key steps are: The student took the square root of some number ($x$) and got 9: $\sqrt{x} = 9 \Rightarrow x = 81$. If they had pressed the correct key ($x^2$), they would have squared 81: $81^2 = 6561$. Great job understanding the problem!

Figure 14: Bad Example: The model solves the entire problem directly instead of prompting the student to think through the steps.

Mistake Analysis in Thinking Tags (Qwen2.5-7B-RL, λ = 1.5, +think)
Problem: Roll two dice, labeled A and B, one after the other. Let $a$ and $b$ represent the numbers rolled on dice A and B, respectively. If the point $M(a, b)$ falls within the region defined by the inequality $x^2 + y^2 \leq m$ (where $m$ is a constant), and this event is denoted as event C, find the minimum value of $m$ that makes the probability of event C, $P(C)$, equal to 1.
Student: Here is my attempt at this problem: To solve this problem, we need to determine the region defined by the inequality $x^2 + y^2 \leq m$ and find the minimum value of $m$ that makes the probability of event C equal to 1. Since we are rolling two dice, the possible values for $a$ and $b$ are 1 to 6. The point $M(a, b)$ can be any point in a $6 \times 6$ grid. The inequality $x^2 + y^2 \leq m$ represents a circle with radius $\sqrt{m}$ centered at the origin $(0, 0)$. To make the probability of event C equal to 1, the circle must cover the entire $6 \times 6$ grid. The maximum distance from the origin in the $6 \times 6$ grid is $\sqrt{36} = 6$ (when $a = 6$ and $b = 6$). So, to cover the entire grid, the radius of the circle must be at least 6. Therefore, the minimum value of $m$ is the square of the radius, which is $m = 6^2 = 36$. However, this is not the final answer because we need to consider the edge cases where $a = 6$ and $b = 1$ or $a = 1$ and $b = 6$. In these cases, the distance from the origin is $\sqrt{37}$, which is greater than 6. To cover these edge cases, we need a circle with a radius of at least $\sqrt{37}$: $m = (\sqrt{37})^2 = 37$. Note that the probability of event C is not actually 1, but we've made the circle large enough to cover all possible points in the $6 \times 6$ grid. 37
Teacher: Thinking: While the student's reasoning is on the right track, there's a misunderstanding about the region and the probability. The student correctly identified the need for the circle to cover the $6 \times 6$ grid but made an incorrect assumption about the edge cases. I should guide them to recheck the maximum distance from the origin to any point $(a, b)$ where $a$ and $b$ are between 1 and 6.
That's a great start, Riley! You're on the right track by considering the $6 \times 6$ grid and the circle that needs to cover it. However, there's a small mistake in your reasoning. Let's think about the maximum distance from the origin to any point $(a, b)$ where $a$ and $b$ are the numbers rolled on the dice. Can you recheck the maximum distance and why it's important for the probability of event C to be 1?
[conversation continues]

Figure 15: Example where the teacher analyses the mistake in the student's attempt inside the thinking tags without revealing a large part of the solution.
https://arxiv.org/abs/2505.15607v1
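The corner-point geometry the teacher nudges the student toward in Figure 15 is easy to verify numerically; the following snippet is purely illustrative and comes from neither paper:

```python
# The circle x^2 + y^2 <= m must contain every lattice point (a, b) with
# a, b in {1, ..., 6}, so the minimal m is the maximum of a^2 + b^2 over
# the grid -- the quantity the student miscomputed as 36 and then 37.
max_sq_dist = max(a**2 + b**2 for a in range(1, 7) for b in range(1, 7))
print(max_sq_dist)  # 72, attained at the corner point (6, 6)
```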
Learn to Reason Efficiently with Adaptive Length-based Reward Shaping

Wei Liu1*, Ruochen Zhou2, Yiyun Deng1, Yuzhen Huang1, Junteng Liu1, Yuntian Deng3, Yizhe Zhang4, Junxian He1*
1The Hong Kong University of Science and Technology, 2City University of Hong Kong, 3University of Waterloo, 4Apple

Abstract

Large Reasoning Models (LRMs) have shown remarkable capabilities in solving complex problems through reinforcement learning (RL), particularly by generating long reasoning traces. However, these extended outputs often exhibit substantial redundancy, which limits the efficiency of LRMs. In this paper, we investigate RL-based approaches to promote reasoning efficiency. Specifically, we first present a unified framework that formulates various efficient reasoning methods through the lens of length-based reward shaping. Building on this perspective, we propose a novel Length-bAsed StEp Reward shaping method (LASER), which employs a step function as the reward based on a target length. LASER surpasses previous methods, achieving a superior Pareto-optimal balance between performance and efficiency. Next, we further extend LASER based on two key intuitions: (1) the reasoning behavior of the model evolves dynamically during training, necessitating reward specifications that are also adaptive and dynamic; (2) rather than uniformly encouraging shorter or longer chains of thought (CoT), we posit that length-based reward shaping should be difficulty-aware, i.e., it should penalize lengthy CoTs more for easy queries. This approach is expected to facilitate a combination of fast and slow thinking, leading to a better overall tradeoff. The resulting method is termed LASER-D (Dynamic and Difficulty-aware). Experiments on DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, and DeepSeek-R1-Distill-Qwen-32B demonstrate that our approach significantly enhances both reasoning performance and response length efficiency. For instance, LASER-D and its variant achieve a +6.1 improvement on AIME2024 while reducing token usage by 63%. Further analysis reveals that our RL-based compression produces more concise reasoning patterns with less redundant "self-reflection". All resources (models, code, data) are available at https://github.com/hkust-nlp/Laser.

1 Introduction

Recent advancements leveraging reinforcement learning (RL) [4, 11, 15, 28] demonstrate that LLMs can evolve into powerful Large Reasoning Models (LRMs), capable of producing extended chains of thought (CoT) to enhance their reasoning abilities. However, these longer reasoning trajectories come at the cost of increased token usage and potentially incorporate more compounding errors. Many of the generated tokens tend to be unnecessarily verbose. For example, LRMs may output thousands of tokens to solve elementary math problems that could otherwise be addressed within just a few hundred tokens, as shown in Figure 1 (right). This phenomenon is commonly referred to as the over-thinking issue [3].

*Correspondence to Wei Liu (wliucn@cse.ust.hk) and Junxian He (junxianh@cse.ust.hk).

[Figure 1, left panel: scatter plot of accuracy vs. average tokens on AIME2024, comparing the Original Model (15,956 average tokens at 28.9% accuracy), Truncation, Group-based, ThinkPrune, LASER (Ours), LASER-D (Ours), and LASER-DE (Ours); plot data omitted. Right panel: for the question "What is the answer of 1+1?", the original model produces a 1,490-token trace: "<think>Alright, so I have this math problem here: 1 plus 1. Hmm, okay, let me think about how to solve this. I remember from school that adding is combining numbers together. So, if I have one object and I add another one, how many do I have in total? Let me visualize it. Imagine I have a single apple. I pick up another apple from my friend. Now, how many apples do I have? I count them: one, two. So, that must be two apples in total. Therefore, 1 plus 1 equals 2. Wait, is there another way to think about this? Maybe using objects or fingers. If I hold up one finger and then add another finger, how many fingers are up? That's two. Yeah, that still makes sense. So, regardless of the method, whether it's counting objects or using my fingers, the result is the same. ... **Answer:** 2". After training, the model answers in 76 tokens: "<think>Okay, so I need to figure out what 1+1 is. Hmm, I think it's pretty straightforward. I remember that in basic arithmetic, adding one and one should give me two. Yeah, that's right. So the answer should be 2.</think> The sum of 1 and 1 is 2. \boxed{2}".]

Figure 1: Left: Accuracy and response length on AIME2024. For the figure of more benchmarks, please refer to Appendix A. Each point represents a single training run with different hyper-parameters. Given the high computational cost of obtaining this figure, the base model used is DeepSeek-R1-Distill-Qwen-1.5B. Results on 7B and 32B models are in §6.3. Our methods, LASER, LASER-D, and LASER-DE, achieve a Pareto-optimal trade-off compared to all other methods. Notably, they yield a +6.1 improvement in accuracy while reducing token usage by 63% compared to the original model. Right: Example of a reasoning process after LASER-D training. While the original model produces meaningless "self-reflection" repeatedly for trivial questions like "1+1=?", LRMs after LASER-D training efficiently recognize such questions during thinking and provide the answer directly.

Previous work [16, 22] has explored various approaches to improving reasoning efficiency in LRMs, including continuous chain-of-thought (CoT) reasoning [6, 18], supervised fine-tuning (SFT) [24, 14], and reinforcement learning [11, 2, 1, 10]. Typically, substantial reductions in token usage are accompanied by significant decreases in reasoning accuracy, suggesting a trade-off between efficiency and reasoning performance. Recently, RL-based approaches have demonstrated the most favorable balance between token efficiency and accuracy [11, 2, 1, 10].

In this paper, we study RL-based CoT compression, beginning with a simple baseline that yields surprisingly effective results (§3). Specifically, we further train long-CoT reasoning models using RL with a rule-based correctness reward, while restricting the context window size to a smaller value than the model's typical generation length, so that long responses are truncated. This approach can substantially improve token efficiency with only a modest reduction in accuracy.

To better understand the effectiveness of this truncation method and to connect it with other RL approaches that incorporate length-based rewards [11, 2, 1], we introduce a unified length-based reward shaping perspective that encompasses various RL strategies for mitigating overthinking (§4). Building on this reward shaping formulation, we extend the truncation approach into a novel reward shaping method (§4.3) that employs a step function as the reward, guided by a desired target length. We refer to this approach as LASER (Length-bAsed StEp Reward). LASER achieves the best trade-off between reasoning performance and efficiency among all evaluated baselines.
Next, we identify two key points that are lacking in LASER: (1) the desired reasoning length should evolve during training as the model's reasoning behaviors dynamically change, and (2) rather than uniformly encouraging short or long CoT, length-based reward shaping should be difficulty-aware, allowing harder questions a higher token limit while constraining easier questions to fewer tokens. To this end, we propose a Dynamic and Difficulty-aware Length-bAsed StEp Reward for RL (LASER-D), which adaptively applies different length reward shaping based on problem difficulty. Notably, our algorithm is fully automated with an integrated automatic adapting module, eliminating the need for manual procedural tuning. We also introduce a variant of LASER-D, called LASER-DE, which explicitly encourages additional exploration of incorrect responses, enabling models to discover potentially correct reasoning patterns through extended deliberation.

We conduct comprehensive experiments on three reasoning models ranging from 1.5B to 32B parameters, across four challenging benchmarks: MATH500, AIME2024, AMC2023, and OlympiadBench. Our extensive evaluations demonstrate that our proposed LASER series outperforms existing works, while LASER-D and its variant LASER-DE achieve the best Pareto-optimal balance between accuracy and token efficiency. Unlike methods that improve token efficiency at the expense of accuracy, our proposed approaches deliver substantial gains in both dimensions, even on the challenging AIME2024 benchmark. For example, applying LASER-D/LASER-DE to DeepSeek-R1-Distill-Qwen-1.5B improves accuracy by +6.1 percentage points while reducing token usage by 63% on AIME24. Our further analysis reveals that after these RL-based CoT compressions, the reasoning behaviors of LRMs become more concise and demonstrate improved quality with fewer redundant and unhelpful instances of "self-reflection".

2 Preliminary

Enhancing Reasoning via RL: RL has been demonstrated to be an effective way to train strong large reasoning models [4, 15, 11] across different domains such as math [28], coding [4], and agentic tasks [15]. For example, using a simple rule-based outcome reward [4, 28], the mathematical reasoning capabilities of models can be substantially improved after RL training, often accompanied by the emergence of "self-reflection"-style thinking patterns. Following these previous works, we leverage a rule-based reward designed as a simple scoring system [4]: +1 for correct responses, -0.5 for incorrect responses, and -1 for responses with invalid format. Let $x$ be the question and $y$ the response generated by the model; the KL-constrained optimization objective [19] in RL can be formulated as:

$$\pi_\theta^* = \arg\max_\theta \ \mathbb{E}_{x \sim \mathcal{D}}\Big[\mathbb{E}_{y \sim \pi(\cdot|x)}[R(x, y)] - \beta \, D_{\mathrm{KL}}\big[\pi_\theta(\cdot|x) \,\|\, \pi_{\mathrm{ref}}(\cdot|x)\big]\Big] \quad (1)$$

where $R(x, y)$ represents the reward of the entire trajectory, and $\pi_{\mathrm{ref}}$ is the reference model, i.e., the model prior to the RL training phase. $\beta$ is the parameter that controls the balance between the two optimization targets. In this paper, we utilize GRPO [20] to optimize this objective.
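As a concrete illustration, the scoring system above can be sketched as follows; `has_valid_format` and `is_correct` are hypothetical stand-ins for the actual rule-based verifier, which the paper does not spell out:

```python
def has_valid_format(response: str) -> bool:
    # Hypothetical format check: the prompt asks for a \boxed{...} final answer.
    return "\\boxed{" in response

def is_correct(response: str, gold_answer: str) -> bool:
    # Hypothetical matcher standing in for the paper's rule-based verifier.
    return gold_answer in response

def rule_based_reward(response: str, gold_answer: str) -> float:
    """Rule-based outcome reward R(x, y) from Section 2:
    +1 for correct, -0.5 for incorrect, -1 for invalid format."""
    if not has_valid_format(response):
        return -1.0
    return 1.0 if is_correct(response, gold_answer) else -0.5
```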
RL for Efficient Reasoning: In addition to enhancing reasoning capabilities, RL also holds promise for improving token efficiency in LRMs [11, 16]. Several approaches have been proposed to this end. Most methods involve reward shaping, where models are incentivized to produce shorter responses by associating higher rewards with more concise outputs [11, 1, 2].

3 Truncation: A Simple Yet Effective Baseline

In this section, we start from a simple yet effective baseline: we set the maximum generation length to a value significantly smaller than the model's original context window during RL training (for example, 8,192 tokens versus 32,768 in DeepSeek-R1-Distilled models). Intuitively, this approach truncates responses that exceed the context window and regards them as incorrect, which pushes the model to generate accurate yet more concise responses under strict token constraints. This baseline has been explored recently in concurrent works [10, 12]. In our experiments, we adopt DeepSeek-R1-Distill-Qwen-1.5B as the base model and investigate the effects of truncation by limiting maximum generation lengths to 4,096, 6,144, and 8,192 tokens.

Table 1: Results of the baseline truncation method with different context windows. $T_k$ denotes the model after RL training with context window $k$. "Original" denotes the original DeepSeek-R1-Distill-Qwen-1.5B.

Accuracy (%):
| Model | MATH500 | AIME | AMC | OlympiadBench | Avg. |
|---|---|---|---|---|---|
| Original | 83.9 | 28.9 | 71.6 | 43.3 | 56.9 |
| T8192 | 81.8 | 24.8 | 70.9 | 43.9 | 55.3 |
| T6144 | 80.9 | 20.2 | 66.2 | 42.1 | 52.3 |
| T4096 | 77.7 | 19.2 | 62.2 | 38.5 | 49.4 |

Generation length (tokens):
| Model | MATH500 | AIME | AMC | OlympiadBench | Avg. |
|---|---|---|---|---|---|
| Original | 5042 | 15956 | 8202 | 11510 | 10177 |
| T8192 | 1795 | 4465 | 2560 | 2841 | 2915 |
| T6144 | 1351 | 2821 | 1917 | 1947 | 2009 |
| T4096 | 1054 | 2481 | 1484 | 1564 | 1646 |

Effectiveness of Truncation: Table 1 presents the performance of models across various benchmarks under different truncation sizes. Surprisingly, compared to the original model, RL training with a context window of 8,192 tokens achieves a substantial 71% improvement in token efficiency while maintaining competitive accuracy, with a 1.6 absolute-point degradation on average. This demonstrates that truncation is a simple yet effective approach for enhancing reasoning efficiency in LRMs.

Efficacy-Efficiency Trade-off: Although truncation proves effective on average across benchmarks, its impact varies significantly with task difficulty. A closer look at the results reveals that performance on the most challenging benchmark, AIME, suffers a notable 4.1-point drop in accuracy under the 8,192-token limit. When the context window is further reduced to 4,096, the accuracy on AIME deteriorates even more sharply, with a 9.7-point decline, by far the largest drop observed, compared to only a 7% decrease on MATH500. This highlights that the benefits of truncation involve a trade-off: while it improves efficiency overall, it may disproportionately affect harder tasks.

To better understand this disproportionate performance drop on harder benchmarks, we note that the truncation ratio during training is initially very high (Figure 6), exceeding 45%, and remains above 10% even after 200 rollout steps. Specifically, for the AIME dataset, over 75% of responses exceed 8,192 tokens, compared to only 15% for MATH500. This indicates that truncation disproportionately impacts more complex tasks like AIME, where long reasoning trajectories are often necessary. Next, we formulate the truncation baseline from the reward shaping perspective and connect it with related works.
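Under the unified view introduced next, this truncation baseline reduces to a one-line reward rule; the sketch below is illustrative (Appendix D sets the out-of-window reward $\rho$ to 0), not the paper's released code:

```python
def truncation_reward(r_xy: float, length: int,
                      context_window: int = 8192, rho: float = 0.0) -> float:
    """Vanilla truncation baseline: a response that does not finish within
    the RL context window is cut off and treated as incorrect, receiving
    rho; otherwise the usual rule-based reward R(x, y) is passed through."""
    return r_xy if length <= context_window else rho
```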
4 A Unified View on Efficient Reasoning with RL

In this section, we aim to understand the truncation baseline and other RL-based efficient reasoning approaches through a unified perspective. We first connect them via length-based reward shaping, and then derive new alternatives from this view.

4.1 The Unified Formulation

Here we first present a unified formulation, and then we show how the truncation baseline and other works fit into it. Specifically, we define the reward function with two parts: a correctness term $C(y)$ and a length-based term $S(y)$ controlled by a control variable $\lambda(y)$:

$$\hat{R}(x, y) = C(y) + \lambda(y) \cdot S(y) \quad (2)$$

In most length-reward methods, $C(y) = R(x, y)$, representing the original rule-based reward for correctness. However, in truncation-based approaches, $C(y) = 0$, as we discuss below. The term $S(y)$ denotes the length reward, which varies across methods.

Formulating the Truncation Baseline: As shown in Table 2, truncation is a special case of the length reward with $C(y) = 0$, where the target length $L_T$ is enforced by the context window. ThinkPrune [10] is another truncation-based approach, which extends vanilla truncation by introducing an adaptive target length $L_A$ to replace the fixed target length $L_T$. They iteratively choose $L_A$ and separate their training into three stages. Table 2 also outlines the other formulations; we introduce them individually in the following sections.

4.2 Connecting Previous Efforts Together

In this part, we build on our unified formulation in Eq. 2 and aim to connect previous approaches; below we describe their main categories. Table 2 expresses different length-based reward shaping schemes via different designs for $C(y)$, $\lambda(y)$, and $S(y)$. The parameter $\alpha$ is a hyperparameter coefficient that controls the magnitude of the length reward $S(y)$. We provide detailed explanations for each formulation in Appendix D.

Group-based Reward: In group-based rewards, the length reward $S(y)$ is designed to encourage brevity by assigning higher scores to shorter responses within a rollout group, as in Efficient Reasoning [2] and Kimi-k1.5 [11], formulated in Table 2. However, this comparison-based approach can lead to reward hacking: models tend to exploit $S(y)$ by generating overly concise responses, particularly for simpler questions. We demonstrate this phenomenon for the Efficient Reasoning baseline in Figure 7a and Figure 7b in Appendix C, where training accuracy initially decreases while the total reward increases. Additionally, Table 3 shows a more significant drop in MATH500 accuracy compared to other methods, further supporting this observation.

Table 2: Formulation of different approaches based on Eq. 2. $C(y)$ is mainly for correctness, $S(y)$ is the length reward, and $\lambda(y)$ is a control variable that determines how the length reward is applied. $I(R)$ stands for $I(R(x, y) = 1)$ and $I(\cdot)$ is an indicator function. $\rho$ is the negative reward given for incorrect responses. $L(y)$ is the length of the generated response. $\alpha$ is the coefficient that controls the magnitude of the length reward. [The original table also visualizes each reward's shape as a function of response length, with separate curves for correct and incorrect responses; for ThinkPrune, LASER-D, and LASER-DE, multiple curves reflect the dynamic reward realized by different $L_A$ values. Details of the visualization are available in Appendix J.]

| Name | $C(y)$, $\lambda(y)$ | $S(y)$ |
|---|---|---|
| Vanilla Truncation | $0$, $1$ | $R(x, y)$ if $L(y) \le L_T$; $\rho$ if $L(y) > L_T$ |
| ThinkPrune [10] | $0$, $1$ | $R(x, y)$ if $L(y) \le L_A$; $\rho$ if $L(y) > L_A$ |
| Efficient Reasoning [2] | $R(x, y)$, $I(R)$ | $-\alpha \cdot \sigma\big((L(y) - \mathrm{Mean}(L)) / \mathrm{STD}(L)\big)$ |
| Kimi-k1.5 [11] | $R(x, y)$, $1$ | $0.5 - \frac{L(y) - L_{\min}}{L_{\max} - L_{\min}}$ if $I(R) = 1$; $\min\big(0,\ 0.5 - \frac{L(y) - L_{\min}}{L_{\max} - L_{\min}}\big)$ if $I(R) = 0$ |
| L1-Exact [1] | $R(x, y)$, $1$ | $-\alpha \cdot \lvert L(y) - L_T \rvert$ |
| L1-Max [1] | $0$, $I(R)$ | $\mathrm{clip}(\alpha \cdot (L(y) - L_T) + \delta,\ 0,\ 1)$ |
| LASER | $R(x, y)$, $I(R)$ | $\alpha \cdot I(L(y) \le L_T)$ |
| LASER-D | $R(x, y)$, $I(R)$ | $\alpha \cdot I(L(y) \le L_A)$ |
| LASER-DE | $R(x, y)$, $1$ | $\alpha \cdot I(R) \cdot I(L(y) \le L_A) + \alpha \cdot (1 - I(R)) \cdot I(L(y) > L_A)$ |

Budget-based Reward: Budget-based rewards use query-specific target lengths (budgets) and penalize responses that deviate from these instructions. While this mitigates the reward hacking seen in group-based schemes, it can destabilize training. Models require exposure to diverse budgets, but in large context windows (e.g., 16,384 tokens), these targets become sparsely distributed, causing reward fluctuations. Figure 7b illustrates this instability. Using L1-Max as a representative method, we observe that with smaller contexts (4,096 tokens), it achieves stable reward increases comparable to other methods. However, with 16,384-token contexts, rewards fluctuate significantly and underperform alternative approaches.

4.3 Bridging the Gap: Length-based Step Reward

As shown in Eq. 2 and visualized in Table 2, a key limitation of the truncation method is that it assigns the same penalties to overlong responses as it does to incorrect ones, which may over-penalize long but correct explorations. To address this issue, we extend it into a novel reward shaping approach called Length-bAsed StEp Reward (LASER), which adopts a step reward function guided by a target length rather than performing hard truncation. Specifically, we design the length reward term $S(y)$ as an indicator function based on a target length $L_T$. This function assigns a length-based bonus to responses shorter than $L_T$. We also set the context window significantly larger than the target length $L_T$ (e.g., 16,384 vs. 4,096), so that truncation rarely happens. The length reward term $S(y)$ is only activated when responses are correct, thereby improving the efficacy-efficiency trade-off. As visualized in Table 2, LASER closely resembles the vanilla truncation approach; the only difference is that, instead of truncating long responses, it awards bonus rewards to correct responses that do not exceed the target length. To balance the correctness reward $C(y)$ and length reward $S(y)$, we follow a typical setting and set $\alpha$ to 0.5.

Empirical results are presented in Figure 1 and Table 3: training with the LASER reward achieves improved Pareto-optimality compared to all previous methods. Notably, it is the first approach to simultaneously deliver significant improvements in both accuracy and token efficiency on the challenging AIME24 benchmark. These results establish LASER as a promising reward design framework for enhancing the balance between efficacy and efficiency.
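A minimal sketch of LASER under Eq. 2, following its row in Table 2 ($C(y) = R(x, y)$, $\lambda(y) = I(R)$, $S(y) = \alpha \cdot I(L(y) \le L_T)$); the function signature and names are illustrative, not the paper's released code:

```python
def laser_reward(r_xy: float, length: int, target_len: int,
                 alpha: float = 0.5) -> float:
    """LASER reward under the unified view of Eq. 2: a correct response
    earns a bonus alpha only if it stays within the target length L_T;
    incorrect responses are never penalized for length."""
    is_right = 1.0 if r_xy == 1.0 else 0.0           # lambda(y) = I(R)
    step_bonus = alpha * (1.0 if length <= target_len else 0.0)  # S(y)
    return r_xy + is_right * step_bonus              # C(y) = R(x, y)
```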
5 Adaptive Length-based Step Reward Shaping

5.1 Design Principles

We highlight two key limitations not addressed in the design of LASER: (1) LASER requires specifying a fixed target length prior to training; however, as the model evolves during training, the optimal response length may also change and should ideally adapt dynamically. (2) Different questions demand reasoning traces of varying lengths: simple questions may be effectively addressed with shorter reasoning, while more complex questions benefit from longer, more detailed deliberation. Therefore, we extend LASER to be Dynamic and Difficulty-aware, which we term LASER-D. Rather than using a single fixed target length, our approach dynamically adjusts the target length throughout training and tailors it to questions of varying difficulty.

Concretely, LASER-D decouples the target length hyperparameter across different queries, allowing distinct target lengths to be assigned to different queries. Moreover, these target length hyperparameters are dynamically adjusted throughout training. We separate queries into three buckets of easy, medium, and hard difficulty, based on the correctness rates within the rollout batch: for each question, we have $k$ rollouts and use thresholds $k/3$ and $2k/3$ to separate them. As such, we have three distinct target length hyperparameters for these three query groups. Notably, we perform difficulty assessment for the queries during real-time RL training using the training rollout batch, so it incurs only negligible computational overhead.

Being dynamic and difficulty-aware raises one challenge: how to set the dynamic schedules of the decoupled target length hyperparameters. Next, we introduce an automatic adapting mechanism that adapts them without any manual intervention.

5.2 Automatic Adapting Mechanism

LASER-D is driven by an automatic adapting mechanism that periodically evaluates and adjusts the target length parameters ($L_A$ in Table 2) for each difficulty level. Specifically, we first extract a small monitoring dataset $D_M$ (e.g., 500 samples) from the training data that mirrors its distribution. Every $N$ training steps (e.g., 20), our approach searches for and sets the target length hyperparameters based on this monitoring dataset. Denoting the three-class difficulty level of a query as $d$, we propose a metric called Expected Correct Responses (ECR) to determine the target length hyperparameters; it estimates how many complete, correct responses we can expect for each difficulty level given a response length limit. Formally, we sample $K$ responses for each query in the monitoring set,² and ECR is computed as

$$\mathrm{ECR}_d = P_{l,d} \cdot |C_d| \quad (3)$$

²Practically, $K$ is set to be the same as the rollout size used during training, in order to maintain consistency with the training scenario.

where $P_{l,d}$ is the coverage ratio (the proportion of responses that fit within a given token length $l$). The value $|C_d|$ is fixed for each difficulty group. Since we use the ratio of correct responses within each rollout group to determine the difficulty level, there is a minimum number of correct responses for each group (e.g., 6, 3, and 1 correct responses for the easy, medium, and hard levels, respectively, when $K = 8$). We set $|C_d|$ to these minimum values for each group.

The monitoring module enumerates potential target lengths from the lower-bound target length $L_T$ up to the maximum context window (16,384 tokens) with an interval of $I$, computing coverage ratios $P_{l,d}$ for each length. We select the smallest target length satisfying $\mathrm{ECR}_d \ge 1$ as the adaptive target length $L_A$ for each difficulty level $d$, ensuring at least one complete and correct response. Intuitively, this mechanism sets the target length to the minimal generation length at which at least one rollout response is expected to be correct. This is reasonable because generating sequences shorter than this length would likely be detrimental, as correct responses become less probable; conversely, generating longer sequences may be redundant, since correct responses can already be obtained with a shorter generation length.
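The adapting mechanism can be sketched as follows. The $|C_d|$ minimums (6/3/1 for $K = 8$) and the search over candidate lengths follow the text; aggregating correct-response lengths per difficulty bucket into a single coverage ratio is a simplification of the per-query procedure:

```python
def adapt_target_lengths(lengths_by_difficulty: dict, lower_bound: int = 1024,
                         context_window: int = 16384, interval: int = 512) -> dict:
    """Automatic adapting mechanism (Section 5.2): for each difficulty bucket
    d, pick the smallest candidate length l with ECR_d = P_{l,d} * |C_d| >= 1,
    where P_{l,d} is the fraction of sampled correct responses fitting within
    l tokens. |C_d| uses the paper's minimum correct counts for K = 8."""
    min_correct = {"easy": 6, "medium": 3, "hard": 1}   # |C_d| for K = 8
    targets = {}
    for d, lengths in lengths_by_difficulty.items():    # correct-response lengths
        for l in range(lower_bound, context_window + 1, interval):
            coverage = sum(x <= l for x in lengths) / max(len(lengths), 1)
            if coverage * min_correct[d] >= 1.0:        # ECR_d >= 1
                targets[d] = l                          # smallest feasible L_A
                break
        else:
            targets[d] = context_window                 # fall back to full window
    return targets
```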
Dynamic and Difficulty-Aware Reward: During training, we apply these monitoring-derived parameters to implement dynamic and difficulty-aware rewards. Each training question's difficulty level is determined using the same classification method described earlier. Easier questions receive smaller target lengths (i.e., a smaller scaling factor $\beta$), while harder questions receive larger ones (i.e., a larger scaling factor $\beta$). Since monitoring runs every $N$ steps, the difficulty-dependent target lengths are automatically adapted to the evolving policy model.

Computational Efficiency: This automatic adapting mechanism adds minimal computational overhead. By using a small monitoring dataset and evaluating only periodically, our method increases computation by just 3.5% in our experiments.

5.3 LASER-DE: A Variant of LASER-D to Encourage Exploration

Previous works [13, 10] find that with more test-time compute, the reasoning ability of models improves. Meanwhile, other works [28] show that incorrect responses tend to produce more tokens. Both findings relate to the exploration behavior of policy models, where models explore by consuming more compute to reach correct answers. Therefore, we further propose a variant of LASER-D, named LASER-DE, to encourage the exploration of policy models on incorrect responses. The only difference in LASER-DE is that it applies reduced penalties to incorrect responses that exceed the target length, encouraging them to be explored further in search of correct reasoning patterns. The form of LASER-DE is given in Table 2.
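Following their rows in Table 2, the difficulty bucketing and the two adaptive rewards can be sketched as below; the function names and defaults are illustrative:

```python
def difficulty_bucket(num_correct: int, k: int = 8) -> str:
    """Bucket a query by its rollout correctness count (Section 5.1),
    using the thresholds k/3 and 2k/3 to split easy / medium / hard."""
    if num_correct >= 2 * k / 3:
        return "easy"
    if num_correct >= k / 3:
        return "medium"
    return "hard"

def laser_d_reward(r_xy: float, length: int, adaptive_target: int,
                   alpha: float = 0.5, explore: bool = False) -> float:
    """LASER-D and, with explore=True, LASER-DE (Table 2).
    LASER-D: bonus alpha for correct responses within the adaptive L_A.
    LASER-DE additionally grants alpha to incorrect responses that exceed
    L_A, softening their penalty to encourage further exploration."""
    is_right = 1.0 if r_xy == 1.0 else 0.0
    within = 1.0 if length <= adaptive_target else 0.0
    bonus = alpha * is_right * within
    if explore:
        bonus += alpha * (1.0 - is_right) * (1.0 - within)
    return r_xy + bonus
```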
6 Experiments

6.1 Experimental Setup

Setup: We experiment with three capable and representative LRMs across three different sizes, all known for their overthinking tendencies: DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, and DeepSeek-R1-Distill-Qwen-32B. We adhere to the original prompt from DeepSeek-R1 [4], with the full prompt available in Appendix E.1. We train these models on the DeepScaleR-Preview-Dataset [12], a high-quality mathematics dataset containing 40K competition-level question-answer pairs. We evaluate the models on four benchmarks of varying difficulty: MATH500 [9], OlympiadBench [7], AIME 2024, and AMC 2023. We set $\alpha = 0.5$ for our methods in all experiments to balance the trade-off between correctness rewards and solution length penalties. $L_T$ is a hyper-parameter for our approaches because the automatic adapting mechanism enumerates target lengths from $L_T$ up to the context window size to select the adaptive target lengths $L_A$, as described in §5.2. Parameter settings for baseline methods are provided in Appendix E.3, and full details of our training procedure and evaluation methodology can be found in Appendix E.2.

Baselines: Following Table 2, we train models using the different types of length-reward designs and compare our LASER, LASER-D, and LASER-DE to previous works. Considering the high computational cost of RL training, we select Efficient Reasoning [2] and L1-Max [1] as representatives, since they achieve better accuracy than the other methods within the same group and are closest to our setting. For ThinkPrune [10], we re-evaluate their open-sourced models.

Table 3: Accuracy (%) and average token usage for each dataset and method. The most important results in this table are visualized in Figure 1 and in Figure 5 in Appendix A. The base model is DeepSeek-R1-Distill-Qwen-1.5B. "Original" denotes the original model. $T_k$ is the truncation method with context window $k$. "Group" denotes Efficient Reasoning [2] with different $\alpha$. Due to space limits, we only show the three most representative truncation results here; for the full results, please refer to Table 6 in Appendix H.

Accuracy (%):
| Method | MATH500 | AIME | AMC | OlympiadBench | Avg. |
|---|---|---|---|---|---|
| Original | 83.9 | 28.9 | 71.6 | 43.3 | 56.9 |
| T8192 | 81.8 | 24.8 | 70.9 | 43.9 | 55.3 |
| T6144 | 80.9 | 20.2 | 66.2 | 42.1 | 52.3 |
| T4096 | 77.7 | 19.2 | 62.2 | 38.5 | 49.4 |
| Group α=0.4 | 74.6 | 25.0 | 69.2 | 43.1 | 53.0 |
| Group α=0.2 | 78.1 | 28.1 | 68.0 | 44.4 | 54.7 |
| Group α=0.1 | 77.0 | 29.0 | 69.5 | 44.9 | 55.1 |
| Group α=0.05 | 74.4 | 30.2 | 65.5 | 43.1 | 53.3 |
| L1-Max-1024 | 76.4 | 15.0 | 59.4 | 39.1 | 47.5 |
| L1-Max-4096 | 79.7 | 20.0 | 65.0 | 41.0 | 51.4 |
| LASER LT=2048 | 83.6 | 29.2 | 71.6 | 44.1 | 57.1 |
| LASER LT=4096 | 83.9 | 31.0 | 74.1 | 45.7 | 58.7 |
| LASER LT=8192 | 85.6 | 31.5 | 75.9 | 47.7 | 60.2 |
| LASER-D LT=1024 | 83.0 | 30.6 | 72.8 | 43.7 | 57.5 |
| LASER-D LT=2048 | 82.2 | 31.0 | 73.3 | 46.2 | 58.2 |
| LASER-D LT=4096 | 84.2 | 34.2 | 75.3 | 47.3 | 60.3 |
| LASER-DE LT=1024 | 82.1 | 33.8 | 72.2 | 43.7 | 58.0 |
| LASER-DE LT=2048 | 83.9 | 31.5 | 75.3 | 46.4 | 59.3 |
| LASER-DE LT=4096 | 83.5 | 35.0 | 73.3 | 46.0 | 59.5 |

Generation length (tokens):
| Method | MATH500 | AIME | AMC | OlympiadBench | Avg. |
|---|---|---|---|---|---|
| Original | 5042 | 15956 | 8202 | 11510 | 10177 |
| T8192 | 1795 | 4465 | 2560 | 2841 | 2915 |
| T6144 | 1351 | 2821 | 1917 | 1947 | 2009 |
| T4096 | 1054 | 2481 | 1484 | 1564 | 1646 |
| Group α=0.4 | 1069 | 4747 | 2162 | 2536 | 2629 |
| Group α=0.2 | 1135 | 5628 | 2635 | 2944 | 3085 |
| Group α=0.1 | 1228 | 6301 | 2808 | 3271 | 3402 |
| Group α=0.05 | 1193 | 4839 | 2457 | 2703 | 2798 |
| L1-Max-1024 | 661 | 1303 | 933 | 938 | 959 |
| L1-Max-4096 | 875 | 1718 | 1159 | 1229 | 1245 |
| LASER LT=2048 | 1913 | 4815 | 2493 | 2767 | 2895 |
| LASER LT=4096 | 1914 | 5915 | 3136 | 3579 | 3636 |
| LASER LT=8192 | 2736 | 6589 | 4162 | 4547 | 4509 |
| LASER-D LT=1024 | 1362 | 4991 | 256 | 2837 | 2862 |
| LASER-D LT=2048 | 1623 | 5158 | 2572 | 2960 | 3059 |
| LASER-D LT=4096 | 1872 | 5750 | 2981 | 3474 | 3520 |
| LASER-DE LT=1024 | 1350 | 4794 | 2254 | 2654 | 2763 |
| LASER-DE LT=2048 | 1456 | 5263 | 2679 | 2971 | 3092 |
| LASER-DE LT=4096 | 1949 | 5789 | 3080 | 3488 | 3577 |

6.2 Efficacy-Efficiency Trade-off

Since there is a trade-off between accuracy and response length, one of the best ways to evaluate different methods is to compare their Pareto-optimal frontiers. We start with the DeepSeek-R1-Distill-Qwen-1.5B model, as its small size allows us to run multiple experiments investigating the trade-offs of different approaches. To fully evaluate the potential of each method, we adjust the key parameters ($\alpha$ for the group-based reward, $L_T$ for the other methods) to explore different points along the accuracy-length trade-off curve. Full details of the hyper-parameters for the different methods can be found in Table 5. As a result, each point in Figure 1 and Figure 5 represents a separate experiment with a fully trained model using a distinct hyperparameter configuration. We also list the results on the individual benchmarks in Table 3; due to space limits, we leave some results of the truncation methods to Table 6.

As shown in Figure 1, both LASER-D and LASER-DE achieve better Pareto-optimal frontiers compared to all other methods. On the AIME2024 benchmark, LASER-DE attains the highest accuracy of 35% using just over 5,500 tokens, a substantial reduction of 63%. Meanwhile, LASER-D still achieves 34% accuracy with only 4,600+ tokens, underscoring its strong trade-off. Across all benchmarks (Figure 5), LASER-DE achieves the most favorable trade-off when the average token usage is below 3,500, while LASER-D performs best in higher token regimes. Specifically, LASER-D achieves 60.3% accuracy with only 3,520 tokens on average, a substantial reduction from the 10,177 tokens used by the original model. Compared to the LASER method, both LASER-D and LASER-DE achieve significant improvements, demonstrating that incorporating a dynamic and difficulty-aware mechanism greatly enhances the efficacy-efficiency trade-off. Compared to the other baseline methods, LASER still exhibits a more favorable trade-off.
6.3 Experiments on Larger Models

To further evaluate the effectiveness of our proposed methods LASER, LASER-D, and LASER-DE, we conduct experiments on DeepSeek-R1-Distill-Qwen-7B, as shown in Table 4. Given the computational cost of larger models, we set key hyperparameters for each method to achieve an appropriate trade-off. Specifically, we set $\alpha = 0.2$ for the group-based reward, $L_T = 8192$ for the truncation method and LASER, and $L_T = 4096$ for LASER-D and LASER-DE. Notably, we do not tune $\alpha$ for our methods; it is fixed at 0.5 in all experiments. As shown in Table 4, LASER-D achieves the best trade-off, with better accuracy and significantly fewer tokens. On the AIME dataset, it achieves an accuracy of 58.3%, a gain of +5.2 points, while using only 5,379 tokens, substantially fewer than the 13,414 tokens used by the original model. Compared to other methods, LASER, LASER-D, and LASER-DE also attain better trade-offs on most benchmarks, particularly the more challenging ones.

Table 4: Accuracy (%) and average token usage for each dataset and method using 7B and 32B models. "Original" denotes the original model. $T_k$ is the truncation method with context window $k$.

DeepSeek-R1-Distill-Qwen-7B, Accuracy (%):
| Method | MATH500 | AIME | AMC | OlympiadBench | Avg. |
|---|---|---|---|---|---|
| Original | 92.6 | 53.1 | 88.4 | 58.9 | 73.3 |
| T8192 | 92.0 | 51.9 | 88.3 | 56.4 | 72.2 |
| Group | 89.4 | 48.1 | 82.8 | 53.7 | 68.5 |
| LASER | 92.2 | 54.4 | 89.7 | 58.1 | 73.6 |
| LASER-D | 92.2 | 58.3 | 90.0 | 61.0 | 75.4 |
| LASER-DE | 92.0 | 55.8 | 89.1 | 58.9 | 74.0 |

DeepSeek-R1-Distill-Qwen-7B, Generation length (tokens):
| Method | MATH500 | AIME | AMC | OlympiadBench | Avg. |
|---|---|---|---|---|---|
| Original | 4017 | 13414 | 6433 | 8987 | 8213 |
| T8192 | 1972 | 5655 | 3159 | 3606 | 3598 |
| Group | 780 | 4271 | 1693 | 2348 | 2273 |
| LASER | 2317 | 6320 | 3733 | 4262 | 4158 |
| LASER-D | 1836 | 5379 | 2694 | 3350 | 3315 |
| LASER-DE | 1658 | 4969 | 2612 | 3157 | 3099 |

DeepSeek-R1-Distill-Qwen-32B, Accuracy (%) / Generation length (tokens):
| Method | MATH500 | AIME | AMC | OlympiadBench | Avg. | MATH500 | AIME | AMC | OlympiadBench | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|
| Original | 94.4 | 71.7 | 93.1 | 64.6 | 80.95 | 3553 | 10335 | 6177 | 7697 | 6941 |
| LASER-DE | 93.2 | 70.8 | 93.1 | 62.2 | 79.83 | 2314 | 6785 | 3545 | 4608 | 4313 |

For the 32B model, due to computational constraints, we compare the LASER-DE-trained model with the original baseline under this larger setting and set $L_T = 8192$. LASER-DE achieves competitive accuracy with only a minor drop (1%), while still significantly reducing output length. Notably, the accuracy of DeepSeek-R1-Distill-Qwen-32B on our training dataset is already very high (over 76%), leaving little room for further improvement. We speculate that with more challenging and diverse training data, LASER-DE could yield further accuracy gains.

6.4 Experiments on Out-of-Domain Benchmarks

We evaluate whether LASER, LASER-D, and LASER-DE can generalize to domains outside the RL training distribution. We select three out-of-domain benchmarks: GPQA [17], LSAT [23], and MMLU [8], following the evaluation settings established by L1 [1]. Figure 2 illustrates the efficacy-efficiency trade-off on GPQA and the average performance across all benchmarks.

Figure 2: Performance on out-of-domain benchmarks: GPQA and average performance across all three benchmarks (GPQA, MMLU, LSAT). [Plots omitted: accuracy vs. average tokens for the Original Model (11,200.8 tokens at 33.62% accuracy on GPQA; 9,771.2 tokens at 34.59% averaged across benchmarks), Truncation, Group-based, LASER, LASER-D, and LASER-DE.]
Compared to the original model, LASER, LASER-D, and LASER-DE consistently achieve significant improvements in both accuracy and token usage, demonstrating robust generalization capabilities. Moreover, LASER-D and LASER-DE maintain the best trade-off even when compared to LASER.

7 Analysis

Using DeepSeek-R1-Distill-Qwen-1.5B as the backbone, we conduct a comprehensive analysis that includes budget-forcing inference, dynamics of adaptive target lengths, shifts in reasoning patterns, and qualitative evaluations. Due to space limits, please refer to Appendix F for budget-forcing inference and Appendix G for the dynamics of adaptive target lengths.

7.1 Changes of Thinking Patterns

To better understand the changes in response length, we analyze how thinking patterns change over RL iterations on AIME2024, with 16 samples per question. We analyze them through two approaches: keyword counts [26] and reasoning behavior ratios [28].

Shifts in "Self-Reflection" Keywords: "Self-reflection" or "aha-moment" reasoning has emerged as an intriguing behavior in LRMs [4]. Following previous work [26], we track this behavior by monitoring seven representative keywords: ["recheck", "rethink", "try again", "wait", "alternatively", "retry", "however"]. As shown in Figure 3, the average amount of these keywords (occurrences per token) declines notably as response length decreases across all methods. This suggests that RL may reduce instances of spurious "self-reflection", previously identified as a contributor to over-thinking [3]. Interestingly, as training progresses, we observe an increased keyword amount while outputs remain short, indicating that models develop more efficient self-reflection behaviors without producing verbose responses.

Changes in Thinking Behaviors: To further investigate the changes in reasoning patterns beyond keyword statistics, we employ gpt-4.1-mini to perform a more fine-grained analysis of cognitive behaviors throughout the training process. Specifically, we adopt the cognitive behavior framework proposed by [5], which identifies reasoning-related behaviors such as Backtracking, Verification, Subgoal Setting, and Enumeration. We report the proportion of each behavior relative to the total number of behaviors, focusing on these four representative categories. The complete list of behaviors and implementation details are provided in Appendix K.

As shown in Figure 4, the proportion of Backtracking behavior decreases significantly, from over 30% to just above 10%, as the response length is reduced. This trend aligns with the keyword statistics, as many of the tracked keywords (e.g., "recheck", "retry", "rethink") are indicative of Backtracking. While Backtracking declines during training, the proportions of the other reasoning behaviors, Verification, Enumeration, and Subgoal Setting, remain stable, with a slight increase observed in Subgoal Setting. These results suggest that reducing response length does not degrade the model into a non-reasoning baseline. On the contrary, core reasoning behaviors are preserved, while unnecessary backtracking is minimized, indicating more efficient reasoning in the refined models.
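For concreteness, a minimal sketch of this keyword monitoring; the keyword list is the paper's, while lower-casing and whitespace tokenization for the per-token normalization are assumptions:

```python
REFLECTION_KEYWORDS = ["recheck", "rethink", "try again", "wait",
                       "alternatively", "retry", "however"]

def keyword_rate(response: str) -> float:
    """Occurrences of 'self-reflection' keywords per token (Section 7.1).
    Whitespace tokenization is a simplification; the paper does not
    specify the tokenizer used for normalization."""
    text = response.lower()
    hits = sum(text.count(kw) for kw in REFLECTION_KEYWORDS)
    num_tokens = max(len(text.split()), 1)
    return hits / num_tokens
```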
7.2 Qualitative Analysis

We conduct a qualitative analysis on the trivial question "1+1=?" and the MATH500 dataset to understand how RL improves reasoning efficiency. Comparing the original DeepSeek-R1-Distill-Qwen-1.5B model with the LASER-D-trained version, Figure 1 illustrates how the original model generates repetitive "self-reflection" even for trivial questions, while the trained model directly provides the answer. Our analysis of MATH500 (detailed in Appendix L) reveals that the original model tends towards verbose, redundant explanations of single ideas. In contrast, the LASER-D-trained model expresses the same concepts more succinctly using structured formulas, significantly improving token efficiency. This suggests that our RL-based approach not only reduces unproductive backtracking but also encourages a shift towards more concise and direct expression.

8 Conclusion

In this paper, we propose a unified view of RL-based CoT compression, unifying various reward-shaping and truncation methods. Building on this view, we introduce new approaches with adaptive, length-based reward shaping. Extensive experiments demonstrate that our proposed methods achieve superior Pareto-optimality and significant improvements in both accuracy and token efficiency. Our analysis of reasoning behaviors reveals that our RL-based CoT compression effectively encourages models to reason more concisely and productively.

Figure 3: Average keyword amount and response length over RL training on AIME24. The truncation method uses an 8192-token context window. LASER, LASER-D, and LASER-DE employ a target length of $L_T = 2048$. [Plots omitted: average keyword counts and average tokens vs. training steps for each method.]

Figure 4: Changes in reasoning behavior ratios and response length over RL training iterations on AIME2024. The figure shows how LASER-DE's thinking patterns change during training with a target length of $L_T = 2048$. [Plots omitted: frequencies of Backtracking, Verification, Enumeration, and Subgoal Setting, together with average tokens, vs. training steps.]

References

[1] P. Aggarwal and S. Welleck. L1: Controlling how long a reasoning model thinks with reinforcement learning, 2025. URL https://arxiv.org/abs/2503.04697.
[2] D. Arora and A. Zanette. Training language models to reason efficiently, 2025. URL https://arxiv.org/abs/2502.04463.
[3] X. Chen, J. Xu, T. Liang, Z. He, J. Pang, D. Yu, L. Song, Q. Liu, M. Zhou, Z. Zhang, R. Wang, Z. Tu, H. Mi, and D. Yu. Do not think that much for 2+3=? On the overthinking of o1-like LLMs, 2025. URL https://arxiv.org/abs/2412.21187.
[4] DeepSeek-AI, D. Guo, D. Yang, H. Zhang, et al. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning, 2025. URL https://arxiv.org/abs/2501.12948.
[5] K. Gandhi, A. Chakravarthy, A. Singh, N. Lile, and N. D. Goodman. Cognitive behaviors that enable self-improving reasoners, or, four habits of highly effective STaRs, 2025. URL https://arxiv.org/abs/2503.01307.
[6] S. Hao, S. Sukhbaatar, D. Su, X. Li, Z. Hu, J. Weston, and Y. Tian. Training large language models to reason in a continuous latent space, 2024. URL https://arxiv.org/abs/2412.06769.
[7] C. He, R. Luo, Y. Bai, S. Hu, Z. Thai, J. Shen, J. Hu, X. Han, Y. Huang, Y. Zhang, J. Liu, L. Qi, Z. Liu, and M. Sun. OlympiadBench: A challenging benchmark for promoting AGI with olympiad-level bilingual multimodal scientific problems. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3828-3850, Bangkok, Thailand, Aug. 2024. doi: 10.18653/v1/2024.acl-long.211. URL https://aclanthology.org/2024.acl-long.211/.
[8] D. Hendrycks, C. Burns, S. Basart, A. Zou, M. Mazeika, D. Song, and J. Steinhardt. Measuring massive multitask language understanding, 2021. URL https://arxiv.org/abs/2009.03300.
[9] D. Hendrycks, C. Burns, S. Kadavath, A. Arora, S. Basart, E. Tang, D. Song, and J. Steinhardt. Measuring mathematical problem solving with the MATH dataset. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021.
[10] B. Hou, Y. Zhang, J. Ji, Y. Liu, K. Qian, J. Andreas, and S. Chang. ThinkPrune: Pruning long chain-of-thought of LLMs via reinforcement learning, 2025. URL https://arxiv.org/abs/2504.01296.
[11] Kimi, A. Du, B. Gao, B. Xing, et al. Kimi k1.5: Scaling reinforcement learning with LLMs, 2025. URL https://arxiv.org/abs/2501.12599.
[12] M. Luo, S. Tan, J. Wong, X. Shi, W. Y. Tang, M. Roongta, C. Cai, J. Luo, L. E. Li, R. A. Popa, and I. Stoica. DeepScaleR: Surpassing o1-preview with a 1.5B model by scaling RL. Notion Blog, 2025. URL https://pretty-radio-b75.notion.site/DeepScaleR-Surpassing-O1-Preview-with-a-1-5B-Model-by-Scaling-RL-19681902c1468005bed8ca303013a4e2.
[13] N. Muennighoff, Z. Yang, W. Shi, X. L. Li, L. Fei-Fei, H. Hajishirzi, L. Zettlemoyer, P. Liang, E. Candès, and T. Hashimoto. s1: Simple test-time scaling, 2025. URL https://arxiv.org/abs/2501.19393.
[14] T. Munkhbat, N. Ho, S. H. Kim, Y. Yang, Y. Kim, and S.-Y. Yun. Self-training elicits concise reasoning in large language models, 2025. URL https://arxiv.org/abs/2502.20122.
[15] OpenAI, A. Jaech, A. Kalai, A. Lerer, et al. OpenAI o1 system card, 2024. URL https://arxiv.org/abs/2412.16720.
[16] X. Qu, Y. Li, Z. Su, W. Sun, J. Yan, D. Liu, G. Cui, D. Liu, S. Liang, J. He, P. Li, W. Wei, J. Shao, C. Lu, Y. Zhang, X.-S. Hua, B. Zhou, and Y. Cheng. A survey of efficient reasoning for large reasoning models: Language, multimodality, and beyond, 2025. URL https://arxiv.org/abs/2503.21614.
[17] D. Rein, B. L. Hou, A. C. Stickland, J. Petty, R. Y. Pang, J. Dirani, J. Michael, and S. R. Bowman. GPQA: A graduate-level Google-proof Q&A benchmark, 2023. URL https://arxiv.org/abs/2311.12022.
[18] Y. Ruan, N. Band, C. J. Maddison, and T. Hashimoto. Reasoning to learn from latent thoughts, 2025. URL https://arxiv.org/abs/2503.18866.
[19] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
[20] Z. Shao, P. Wang, Q. Zhu, R. Xu, J. Song, M. Zhang, Y. Li, Y. Wu, and D. Guo. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024.
[21] G. Sheng, C. Zhang, Z. Ye, X. Wu, W. Zhang, R. Zhang, Y. Peng, H. Lin, and C. Wu. HybridFlow: A flexible and efficient RLHF framework. arXiv preprint arXiv:2409.19256, 2024.
[22] R. Wang, H. Wang, B. Xue, J. Pang, S. Liu, Y. Chen, J. Qiu, D. F. Wong, H. Ji, and K.-F. Wong. Harnessing the reasoning economy: A survey of efficient reasoning for large language models, 2025. URL https://arxiv.org/abs/2503.24377.
[23] S. Wang, Z. Liu, W. Zhong, M. Zhou, Z. Wei, Z. Chen, and N. Duan. From LSAT: The progress and challenges of complex reasoning. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2022.
[24] H. Xia, Y. Li, C. T. Leong, W. Wang, and W. Li. TokenSkip: Controllable chain-of-thought compression in LLMs, 2025. URL https://arxiv.org/abs/2502.12067.
[25] A. Yang, B. Zhang, B. Hui, B. Gao, B. Yu, C. Li, D. Liu, J. Tu, J. Zhou, J. Lin, K. Lu, M. Xue, R. Lin, T. Liu, X. Ren, and Z. Zhang. Qwen2.5-Math technical report: Toward mathematical expert model via self-improvement, 2024. URL https://arxiv.org/abs/2409.12122.
[26] E. Yeo, Y. Tong, M. Niu, G. Neubig, and X. Yue. Demystifying long chain-of-thought reasoning in LLMs, 2025. URL https://arxiv.org/abs/2502.03373.
[27] Q. Yu, Z. Zhang, R. Zhu, Y. Yuan, X. Zuo, Y. Yue, et al. DAPO: An open-source LLM reinforcement learning system at scale, 2025. URL https://arxiv.org/abs/2503.14476.
[28] W. Zeng, Y. Huang, Q. Liu, W. Liu, K. He, Z. Ma, and J. He. SimpleRL-Zoo: Investigating and taming zero reinforcement learning for open base models in the wild, 2025. URL https://arxiv.org/abs/2503.18892.
[29] W. Zhong, S. Wang, D. Tang, Z. Xu, D. Guo, J. Wang, J. Yin, M. Zhou, and N. Duan. AR-LSAT: Investigating analytical reasoning of text, 2021.

A Pareto-Optimality

We illustrate the efficacy-efficiency trade-off in Figure 5. Our proposed methods, LASER, LASER-D, and LASER-DE, demonstrate significant improvements in both accuracy and token usage across all benchmarks, particularly the most challenging ones. Notably, LASER-D and LASER-DE achieve a Pareto-optimal trade-off compared to all other methods.

Figure 5: Pareto-optimal trade-off between accuracy and response length across various methods. Each point represents a single training run with different hyper-parameters. Our methods, LASER-DE, LASER-D, and LASER, achieve a Pareto-optimal trade-off compared to all other methods. (a) Accuracy and response length on all benchmarks (MATH500, AIME2024, AMC2023, OlympiadBench); the original model sits at 10,177 average tokens and 56.92% accuracy. (b) Accuracy and response length on AIME2024; the original model sits at 15,956 average tokens and 28.9% accuracy. [Plot data omitted.]

B Ratio of Truncated Responses During Training with Truncation

We analyze the ratio of truncated responses when applying an 8192-token limit during training. Our findings show that the proportion of truncated responses is initially very high, exceeding 45%, and remains substantial (above 10%) even after 200 rollout steps. This high truncation rate highlights that the context-window constraint during training is sub-optimal.

Figure 6: The ratio of truncated responses in the training data with an 8192-token limit. [Plot omitted: truncation ratio vs. training step, over 0 to 700 steps.]

C Dynamics of Accuracy and Rewards Throughout Training

We present the accuracy and rewards for various methods across training iterations in Figure 7a and Figure 7b. As discussed in §4, group-based rewards tend to exploit the length reward $S(y)$ while causing a significant drop in accuracy. Budget-based rewards such as L1-Max-16384 [1] suffer from unstable training when the context window is large. In contrast, the other methods, including the truncation methods, LASER, LASER-D, and LASER-DE, demonstrate a simultaneous increase in both rewards and accuracy throughout the training process.

Figure 7: (a) Accuracy on the training dataset across training iterations for different methods. (b) Rewards across training iterations for different methods. [Plot data omitted.]

D Supplementary Details: Length-based Reward Shaping Formulations

In this section, we provide additional details on the various formulations of length-based reward shaping presented in Table 2. These formulations can be viewed as different variants of our unified framework in Eq. 2, implemented by making specific design choices for its three key components: $C(y)$, $\lambda(y)$, and $S(y)$. For convenience, we restate Eq. 2:

$$\hat{R}(x, y) = C(y) + \lambda(y) \cdot S(y)$$

D.1 Truncation

Vanilla Truncation: As discussed in §4, truncation is a special case of the length reward with $C(y) = 0$, where the target length $L_T$ is enforced by the context window; $\rho$ is set to 0. It follows the design:

$$C(y) = 0, \quad \lambda(y) = 1, \quad S(y) = \begin{cases} R(x, y) & \text{if } L(y) \le L_T \\ \rho & \text{if } L(y) > L_T \end{cases}$$

ThinkPrune: ThinkPrune [10] is another truncation-based approach, which extends vanilla truncation by introducing an adaptive target length $L_A$ to replace the fixed target length $L_T$; $\rho$ is set to 0. The design follows:

$$C(y) = 0, \quad \lambda(y) = 1, \quad S(y) = \begin{cases} R(x, y) & \text{if } L(y) \le L_A \\ \rho & \text{if } L(y) > L_A \end{cases}$$

Their training methodology employs a progressive three-stage process with iterative refinement of $L_A$.
Each subsequent stage initializes from the checkpoint of the previous stage
https://arxiv.org/abs/2505.15612v1
while mannually 15 reducing the value of LA. Specifically, they progressively decrease LAthrough values of 4096 ,3072 , and2048 across the three stages. D.2 Group-based Rewards In the context of group-based rewards, the length reward S(y)is specifically designed to promote brevity by assigning higher scores to shorter responses within a rollout group. This mechanism functions as a comparison-based reward system that inherently favors more concise responses. Most of them follow the design C(y) =R(x, y)to keep the accuracy performance of models. Efficient Reasoning Efficient Reasoning [ 2] follows the principle of group-based reward by specifically encouraging conciseness within correct responses. The mean and variance scalars are computed exclusively from the subset of correct responses, ensuring appropriate statistical distributions. By selectively rewarding conciseness only when answers are correct, this approach maintains higher accuracy compared to Kimi-k1.5 [ 11], which encourages wrong responses to be shorter. Considering the similarity between the two approaches and the better efficacy-efficiency trade-off, we select Efficient Reasoning as the representative group-based reward in this paper. The corresponding design can be formulated as follows: C(y) =R(x, y) λ(y) =I(R) S(y) =−α·σL(y)−Mean (y) STD (L) Kimi-k1.5 The design of Kimi-k1.5 is similar to Efficient Reasoning [ 2], with two main differences. First, the scalar factors are computed using the minimum response length and the difference between maximum response length and maximum length within a rollout group. Second, Kimi-k1.5 encour- ages all responses to be shorter, rather than focusing solely on shortening correct responses. Such a design has the potential to intensify reward hacking, as models may exploit the reward function by favoring shorter responses to maximize their scores. The designs follows: C(y) =R(x, y) λ(y) = 1 S(y) =( 0.5−L(y)−Lmin Lmax−LminifI(R) = 1 min 0,0.5−L(y)−Lmin Lmax−Lmin ifI(R) = 0 D.3 Budget-based Reward Budget-based rewards use query-specific target lengths (budgets) and penalize responses that deviate from these instructions. And the coefficient αcontrols the trade-off between length reward and correctness reward. They come in two flavors: exact mode and max mode. We follow same settings as L1 [1] and set α= 0.0003 for exact mode, α= 0.01for max mode. Exact Mode In exact mode, the model must hit the specified target length LTexactly, and any deviation (even shorter outputs) is penalized. The design can be formulated as: C(y) =R(x, y) λ(y) = 1 S(y) =−α· |L(y)−LT| Max Mode In max mode, only outputs that exceed LTincur a penalty. The designs follow: 16 C(y) = 0 λ(y) =I(R) S(y) = clip( α·(L(y)−LT) +δ,0,1) E Training Configurations We leverage the prompt from DeepSeek-AI et al. [4], which is shown in Figure 8. And the mark for thinking is “<think>...</think>”. E.1 Training Prompt We list our training prompt in Figure 8, which follows the prompt from DeepSeek-R1 [4]. <|User|>{input}Please reason step by step, and put your final answer within \boxed{}.<|Assistant|>Prompt: Figure 8: Training prompt for our training. E.2 Training and Evaluation Details We employ the verl [ 21] framework for model training and Qwen-Math-Eval [ 25] for evaluation. During training, we set the rollout batch size to 128, conduct 8 rollouts per prompt, use a temperature
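To make these design choices concrete, here is a minimal Python sketch of the unified reward ˆR(x, y) = C(y) + λ(y) · S(y) instantiated for three of the variants above. The function names, the binary correctness reward R, and the δ default are our own simplifications, not the authors' implementation.

```python
# Minimal sketch of the unified reward R_hat(x, y) = C(y) + lambda(y) * S(y),
# instantiated for three variants from this section.
from math import exp

def truncation_reward(R: float, length: int, L_T: int, rho: float = 0.0) -> float:
    # Vanilla truncation: C(y) = 0, lambda(y) = 1,
    # S(y) = R(x, y) if L(y) <= L_T else rho.
    return R if length <= L_T else rho

def efficient_reasoning_reward(R: float, length: int, mean_len: float,
                               std_len: float, alpha: float = 0.2) -> float:
    # Group-based: C(y) = R(x, y), lambda(y) = I(R); the sigmoid-normalized
    # length penalty uses statistics of the correct responses in the group.
    sigmoid = lambda z: 1.0 / (1.0 + exp(-z))
    S = -alpha * sigmoid((length - mean_len) / std_len)
    return R + (1.0 if R > 0 else 0.0) * S

def l1_max_reward(R: float, length: int, L_T: int,
                  alpha: float = 0.01, delta: float = 0.5) -> float:
    # Budget-based max mode: C(y) = 0, lambda(y) = I(R),
    # S(y) = clip(alpha * (L(y) - L_T) + delta, 0, 1), as written above.
    # delta's value is not specified in this section; 0.5 is a placeholder.
    S = min(1.0, max(0.0, alpha * (length - L_T) + delta))
    return (1.0 if R > 0 else 0.0) * S

# A short correct answer under truncation keeps its full reward:
print(truncation_reward(R=1.0, length=1500, L_T=2048))  # 1.0
```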
E Training Configurations

We leverage the prompt from DeepSeek-AI et al. [4], which is shown in Figure 8. The thinking span is marked with "<think>...</think>".

E.1 Training Prompt

We list our training prompt in Figure 8, which follows the prompt of DeepSeek-R1 [4].

Prompt: <|User|>{input}Please reason step by step, and put your final answer within \boxed{}.<|Assistant|>

Figure 8: Training prompt for our training.

E.2 Training and Evaluation Details

We employ the verl [21] framework for model training and Qwen-Math-Eval [25] for evaluation. During training, we set the rollout batch size to 128, conduct 8 rollouts per prompt, use a temperature of 0.6, and train with a mini-batch size of 64. In our preliminary experiments, we found that long-to-short RL benefits from the clip-higher strategy [27], so we follow DAPO [27] and set ϵ_high to 0.28. For evaluation, we maintain a sampling temperature of 0.6 and permit a maximum of 32,768 generated tokens. The number of samples during evaluation depends on the dataset size: 4 samples per question for MATH500 and OlympiadBench, and 16 samples for AIME 2024 and AMC 2023.

E.3 Full Hyper-Parameter List for Different Length-based Rewards

We list all hyper-parameters for L_T, α, and L_A in Table 5.

Table 5: The details of key hyper-parameters for different methods
Methods | Hyper-Parameters
Truncation | L_T = [10240, 8192, 7168, 6144, 4098, 2048]
ThinkPrune | L_A = [4096, 3072, 2048]
Group-Based Rewards | α = [0.4, 0.2, 0.1, 0.05]
L1-Max | α = 0.01
LASER | L_T = [8192, 4096, 2048]
LASER-D | L_T = [4096, 2048, 1024]
LASER-DE | L_T = [4096, 2048, 1024]

F Budget-Forcing Inference

To further analyze the impact of different length rewards, we conduct experiments using the budget-forcing setup introduced in S1 [13], which forces the model to stop reasoning after a fixed number of tokens B. We adopt their experimental setting and evaluate across B = [500, 1000, 2000, 4000, 8000], following the budget-forcing implementations of Muennighoff et al. [13] and Hou et al. [10]. Specifically, we follow their implementations and modify the Qwen-Math-Eval codebase: we stop the thinking process of LRMs by appending "</think> \n\n **Final Answer.**", since DeepSeek-R1-Distill-Qwen-1.5B empirically tends to begin its final-answer summary with "\n\n **Final Answer.**". We use the same settings as in our other evaluations, sampling responses multiple times with temperature 0.6; a sketch of this stopping mechanism follows Figure 9 below.

As shown in Figures 9a and 9b, despite not being explicitly trained with any budget-forcing mechanism, LASER-D and LASER-DE consistently achieve strong trade-offs between accuracy and token efficiency, particularly on harder questions or when inference budgets are moderately constrained. While LASER performs competitively on average, it lags behind LASER-D and LASER-DE under strict token budgets or on more challenging examples. L1-Max, specifically trained to meet varying budget constraints, performs best under extremely tight budgets, demonstrating the strength of budget-specific optimization. However, its performance plateaus when more budget is available, limiting its ability to improve on harder tasks and resulting in a suboptimal trade-off, as shown in Figure 9b. Group-based methods are also effective in low-budget scenarios because their reward structure favors shorter outputs, though this often leads to overly brief responses. ThinkPrune performs comparably to LASER under looser budgets but inherits the limitations of truncation-based approaches, struggling on difficult problems even when more tokens are available.

Figure 9: Budget-forcing inference with different methods. (a) Average accuracy under different output budgets across all benchmarks. (b) Accuracy of different methods on AIME2024 under different output budgets.
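The following sketch illustrates the budget-forcing mechanism described above; the generate() interface is a placeholder assumption, not the actual Qwen-Math-Eval API.

```python
# Illustrative budget-forcing sketch. `model.generate` is a hypothetical
# interface standing in for the real evaluation harness.

STOP_MARKER = "</think>\n\n **Final Answer.**"

def budget_forced_answer(model, prompt: str, budget: int) -> str:
    # Phase 1: let the model think for at most `budget` tokens.
    thinking = model.generate(prompt, max_new_tokens=budget, stop=["</think>"])
    if "</think>" in thinking:
        # The model finished thinking on its own within the budget.
        return thinking + model.generate(prompt + thinking, max_new_tokens=512)
    # Phase 2: thinking was cut off; close it and force the answer summary.
    forced = prompt + thinking + STOP_MARKER
    return thinking + STOP_MARKER + model.generate(forced, max_new_tokens=512)

# Evaluated budgets in this section: B in [500, 1000, 2000, 4000, 8000].
```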
G Dynamics of Adaptive Target Lengths

In this section, we analyze the dynamics of the adaptive target length during the training of LASER-D and LASER-DE. Figure 10 shows how the adaptive target length L_A changes over training iterations for both methods. As demonstrated in Figure 10, our method dynamically selects appropriate target lengths based on problem difficulty. For easy problems (left panel), the model quickly identifies that short target lengths are sufficient. For medium-difficulty problems (middle panel), the model begins with longer target lengths (10,000+) and gradually reduces them to 3000-4000 as training continues. For difficult problems (right panel), the model consistently maintains target lengths near the maximum context window size, with some fluctuations attributable to computational precision issues. This adaptive behavior highlights the effectiveness of our approach in efficiently allocating computational resources based on problem complexity; a hypothetical selection rule is sketched below.
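The exact selection rule is not spelled out in this appendix. Purely as an illustration, the sketch below implements one plausible difficulty-aware rule (our own assumption, not the LASER-D/LASER-DE algorithm): pick the smallest candidate target length under which a sufficient fraction of recent correct rollouts would still fit.

```python
# Hypothetical difficulty-aware target-length selection. This is our own
# illustrative rule, NOT the paper's algorithm: choose the smallest candidate
# L_A such that at least `coverage` of recent correct rollout lengths fit
# within it; with no (or only long) correct rollouts, stay near the window.
from typing import List, Tuple

def select_adaptive_target(correct_lengths: List[int],
                           candidates: Tuple[int, ...] = (1024, 2048, 4096, 8192),
                           coverage: float = 0.5,
                           context_window: int = 16384) -> int:
    if not correct_lengths:            # no correct rollouts: treat as hard
        return context_window
    for L_A in sorted(candidates):
        fits = sum(1 for n in correct_lengths if n <= L_A)
        if fits / len(correct_lengths) >= coverage:
            return L_A
    return context_window

# Easy problems (short correct rollouts) get small targets; hard ones stay
# near the context window, mirroring the trends in Figure 10.
print(select_adaptive_target([600, 800, 950]))        # 1024
print(select_adaptive_target([9000, 12000, 15000]))   # 16384
```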
H Full Main Results

We list the full results of the different methods in Table 6.

I Full Experimental Results on Out-of-Domain Benchmarks

Figure 11 illustrates the performance of various methods on out-of-domain benchmarks, including GPQA [17], LSAT [29, 23], and MMLU [8]. Across all benchmarks, LASER, LASER-D, and LASER-DE consistently demonstrate significant improvements in both accuracy and efficiency. Notably, these improvements extend even to the knowledge-intensive MMLU benchmark, highlighting the robust generalization capabilities of our proposed methods.

Figure 10: Dynamics of adaptive target lengths during the training process of LASER-D and LASER-DE. The figure shows how the adaptive target length L_A changes over training iterations for problems of different difficulty levels (easy, medium, hard; curves for LASER-D and LASER-DE with L_T ∈ {1024, 2048, 4096}). For easy problems, the model selects short target lengths; for medium problems, it gradually decreases from higher initial values; and for hard problems, it maintains consistently high target lengths near the context window limit. This demonstrates the methods' ability to adaptively allocate computational resources based on problem complexity, unlike fixed-length approaches.

Table 6: Full results of accuracy (%) with average token usage for each dataset and different methods. The base model is DeepSeek-R1-Distill-Qwen-1.5B. "Original" denotes the original model. T_k is the truncation method with context window k. "Group" denotes Efficient Reasoning [2] with different α. Due to the space limit, we only show the most representative results of the truncation method here. The first five columns report accuracy (%); the last five report generation length (tokens).

Method | MATH500 | AIME | AMC | OlympiadBench | Avg. | MATH500 | AIME | AMC | OlympiadBench | Avg.
Original | 83.9 | 28.9 | 71.6 | 43.3 | 56.9 | 5042 | 15956 | 8202 | 11510 | 10177
T10240 | 82.7 | 26.9 | 73.1 | 44.1 | 56.7 | 2056 | 5458 | 3036 | 3405 | 3489
T8192 | 81.8 | 24.8 | 70.9 | 43.9 | 55.35 | 1795 | 4465 | 2560 | 2841 | 2915
T7168 | 81.8 | 23.3 | 68.6 | 43.0 | 54.18 | 1553 | 3726 | 2251 | 2323 | 2463
T6144 | 80.9 | 20.2 | 66.2 | 42.1 | 52.35 | 1351 | 2821 | 1917 | 1947 | 2009
T4096 | 77.7 | 19.2 | 62.2 | 38.5 | 49.4 | 1054 | 2481 | 1484 | 1564 | 1646
T2048 | 73.2 | 15.8 | 56.9 | 35.9 | 45.45 | 721 | 1029 | 936 | 1084 | 943
Group α=0.4 | 74.6 | 25.0 | 69.2 | 43.1 | 53.0 | 1069 | 4747 | 2162 | 2536 | 2629
Group α=0.2 | 78.1 | 28.1 | 68.0 | 44.4 | 54.7 | 1135 | 5628 | 2635 | 2944 | 3085
Group α=0.1 | 77.0 | 29.0 | 69.5 | 44.9 | 55.1 | 1228 | 6301 | 2808 | 3271 | 3402
Group α=0.05 | 74.4 | 30.2 | 65.5 | 43.1 | 53.3 | 1193 | 4839 | 2457 | 2703 | 2798
L1-Max-1024 | 76.4 | 15.0 | 59.4 | 39.1 | 47.5 | 661 | 1303 | 933 | 938 | 959
L1-Max-4096 | 79.7 | 20.0 | 65.0 | 41.0 | 51.4 | 875 | 1718 | 1159 | 1229 | 1245
LASER L_T=2048 | 83.6 | 29.2 | 71.6 | 44.1 | 57.1 | 1913 | 4815 | 2493 | 2767 | 2895
LASER L_T=4096 | 83.9 | 31.0 | 74.1 | 45.7 | 58.7 | 1914 | 5915 | 3136 | 3579 | 3636
LASER L_T=8192 | 85.6 | 31.5 | 75.9 | 47.7 | 60.2 | 2736 | 6589 | 4162 | 4547 | 4509
LASER-D L_T=1024 | 83.0 | 30.6 | 72.8 | 43.7 | 57.5 | 1362 | 4991 | 2556 | 2837 | 2862
LASER-D L_T=2048 | 82.2 | 31.0 | 73.3 | 46.2 | 58.2 | 1623 | 5158 | 2572 | 2960 | 3059
LASER-D L_T=4096 | 84.2 | 34.2 | 75.3 | 47.3 | 60.3 | 1872 | 5750 | 2981 | 3474 | 3520
LASER-DE L_T=1024 | 82.1 | 33.8 | 72.2 | 43.7 | 58.0 | 1350 | 4794 | 2254 | 2654 | 2763
LASER-DE L_T=2048 | 83.9 | 31.5 | 75.3 | 46.4 | 59.3 | 1456 | 5263 | 2679 | 2971 | 3092
LASER-DE L_T=4096 | 83.5 | 35.0 | 73.3 | 46.0 | 59.5 | 1949 | 5789 | 3080 | 3488 | 3577

J Visualization Details

In this appendix, we provide details about the visualization of the different reward functions depicted in Table 2. These visualizations illustrate how each method computes rewards as a function of response length.

Figure 11: Performance on out-of-domain benchmarks including GPQA [17], LSAT [29, 23], and MMLU [8]. (a) GPQA (b) LSAT (c) MMLU (d) Average across all benchmarks.

J.1 Visualization Parameters

Each visualization captures the relationship between response length and reward value with the following specifications:
• X-axis: L(y), the response length, ranging from 0 to 20 tokens.
• Y-axis: reward value, with different ranges depending on the method.
• Line styles: solid lines represent rewards for correct responses (blue), while dashed lines represent rewards for incorrect responses (red).
• Target length (L_T): set to 10 tokens for all methods.
The visualizations were generated using a high-resolution grid of 400 points between 0 and 20 tokens.

J.2 Unified Reward Formulation

Each method can be represented using the unified reward formula:

ˆR(x, y) = C(y) + λ(y) · S(y)

We implement the specific components for each method in this simulation as follows. Note that these parameters are chosen only for clearer visualization and differ from those used in the practical experiments.

Vanilla Truncation: C(y) = 0, λ(y) = 1, S(y) = R(x, y) if L(y) ≤ L_T, else ρ; where L_T = 10 and ρ = 0.

ThinkPrune: C(y) = 0, λ(y) = 1, S(y) = R(x, y) if L(y) ≤ L_A, else ρ; where L_A ∈ {10, 7.5, 5}.

Efficient Reasoning: C(y) = R(x, y), λ(y) = I(R), S(y) = −α · σ((L(y) − Mean(L)) / STD(L)), where Mean(L) = 10 and STD(L) = 2.

Kimi-k1.5: C(y) = R(x, y), λ(y) = 1, S(y) = 0.5 − (L(y) − L_min)/(L_max − L_min) if I(R) = 1, and S(y) = min(0, 0.5 − (L(y) − L_min)/(L_max − L_min)) if I(R) = 0; where L_min = 2.5 and L_max = 20.

L1-Exact: C(y) = R(x, y), λ(y) = 1, S(y) = −α · |L(y) − L_T|, where α = 0.03 and L_T = 10.
L1-Max: C(y) = 0, λ(y) = I(R), S(y) = clip(α · (L(y) − L_T) + δ, 0, 1), where α = 0.03 and L_T = 10.

LASER: C(y) = R(x, y), λ(y) = I(R), S(y) = α · I(L(y) < L_T), where L_T = 10.

LASER-D: C(y) = R(x, y), λ(y) = I(R), S(y) = α · I(L(y) < L_A), where L_A ∈ {10, 7.5, 5}.

LASER-DE: C(y) = R(x, y), λ(y) = 1, S(y) = α · I(R) · I(L(y) ≤ L_A) + α · (1 − I(R)) · I(L(y) > L_A), where L_A ∈ {12.5, 10, 7.5}.

For methods with multiple adaptive target length L_A values (ThinkPrune, LASER-D, and LASER-DE), different shades of the base colors were used:
• Correct responses (blue): RGB(26,71,142), RGB(62,101,184), RGB(125,154,230)
• Incorrect responses (red): RGB(139,0,0), RGB(183,50,40), RGB(224,93,86)
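As a rough reconstruction of this setup (assuming matplotlib; this is not the authors' plotting script), the following sketch draws the LASER reward curves with the line-style conventions of §J.1:

```python
# Sketch of the visualization above for the LASER reward: solid blue for
# correct and dashed red for incorrect responses. alpha is chosen only for
# plotting, as in §J.2.
import numpy as np
import matplotlib.pyplot as plt

L_T, alpha = 10.0, 0.5
lengths = np.linspace(0, 20, 400)   # the 400-point grid mentioned in J.1

def laser_reward(L, correct):
    R = 1.0 if correct else 0.0               # correctness reward R(x, y)
    S = alpha * (L < L_T).astype(float)       # S(y) = alpha * I(L(y) < L_T)
    lam = 1.0 if correct else 0.0             # lambda(y) = I(R)
    return R + lam * S                        # C(y) + lambda(y) * S(y)

plt.plot(lengths, laser_reward(lengths, True), "b-", label="correct")
plt.plot(lengths, laser_reward(lengths, False), "r--", label="incorrect")
plt.xlabel("L(y)")
plt.ylabel("reward")
plt.legend()
plt.show()
```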
K Analysis of Reasoning Behaviors

We apply the cognitive behavior framework proposed by Gandhi et al. [5] to conduct a detailed analysis of how reasoning behaviors change during our long-to-short RL. We use gpt-4.1-mini to perform a fine-grained analysis of cognitive behaviors throughout the training process. Following Zeng et al. [28], we use the prompt shown in Figure 12 to instruct gpt-4.1-mini to identify and analyze reasoning behaviors. We analyze these behaviors on AIME2024 by sampling each question 16 times, resulting in 480 responses for analysis. Since we start from an LRM, reasoning behaviors such as backtracking naturally appear in every response, especially on challenging benchmarks. We specifically track four key behaviors: Backtracking, Verification, Enumeration, and Subgoal Setting. For each behavior, we calculate its frequency ratio relative to all behaviors and report how these ratios change throughout the training process; a small counting sketch is given below. The complete list of all reasoning behaviors analyzed is provided in Table 7.

Table 7: Complete list of reasoning behaviors
Reasoning Behavior
Subgoal Setting
Enumeration
Verification
Backtracking
Creative Analogy and Abstraction
Abstraction and Parametrization
Analytical Insight via Asymptotic Analysis
Creative Abstraction / Coordinate Setup
Use of Multiple Mathematical Tools and Identities
Creative Analogies and Insightful Generalizations
Algebraic Manipulation and Insightful Generalization
Abstraction to Modular Arithmetic and Divisibility
Creative Analogies and Abstractions
Insightful Generalization / Alternative Modeling
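As referenced above, here is a minimal sketch of the frequency-ratio computation (our own helper, assuming behavior labels have already been parsed from the judge's JSON output):

```python
# Sketch of the frequency-ratio computation for tracked behaviors. Each
# analyzed response yields a list of behavior labels extracted from
# gpt-4.1-mini's JSON output.
from collections import Counter
from typing import Dict, List

TRACKED = ["Backtracking", "Verification", "Enumeration", "Subgoal Setting"]

def behavior_ratios(labels_per_response: List[List[str]]) -> Dict[str, float]:
    """Ratio of each tracked behavior relative to all observed behaviors."""
    counts = Counter(label for labels in labels_per_response for label in labels)
    total = sum(counts.values()) or 1
    return {b: counts[b] / total for b in TRACKED}

# Example with three responses:
responses = [["Backtracking", "Verification"], ["Verification"], ["Enumeration"]]
print(behavior_ratios(responses))
# {'Backtracking': 0.25, 'Verification': 0.5, 'Enumeration': 0.25, 'Subgoal Setting': 0.0}
```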
L Qualitative Analysis on Efficient Reasoning

We conduct a qualitative analysis on the trivial question "1+1=?" and on the MATH500 dataset to understand how RL improves reasoning efficiency. Comparing the original DeepSeek-R1-Distill-Qwen-1.5B model with the LASER-D-trained version, Figure 1 illustrates how the original model generates repetitive "self-reflection" even for trivial questions, while the trained model directly provides the answer. Our analysis of MATH500 (detailed in Figures 13, 14, and 15) reveals that the original model tends towards verbose, redundant explanations of single ideas. In contrast, the LASER-D-trained model expresses the same concepts more succinctly using structured formulas, significantly improving token efficiency. This suggests our RL-based approach not only reduces unproductive backtracking but also encourages a shift towards more concise and direct expression.
"This approach won't work because..."). (2) Verification: Systematically checking intermediate results or reasoning steps (e.g., "Let's verify this result by..."). (3) Subgoal Setting: Breaking down complex problems into smaller, manageable steps (e.g., "To solve this, we first need to..."). (4) Enumeration: Solving problems by exhaustively considering multiple cases or possibilities. Additionally, remain attentive to and encourage the identification of other beneficial behaviors not explicitly listed here, such as creative analogies, abstraction to simpler cases, or insightful generalizations. Important: Clearly specify each beneficial behavior you identify. Provide explicit examples from the reasoning chain. If no beneficial behaviors are observed, explicitly return an empty list. Provide your evaluation clearly, formatted as follows: ```json { "behaviour": "", "example": "" } ```Prompt:Figure 12: Prompt used to identify and analyze reasoning behaviors with gpt-4.1-mini M Limitations Despite our work’s effective improvements in performance and efficiency, limitations remain. Our and most previous works focus primarily on the math as it provides an excellent verification environment and testbed for validating new methodologies. We believe further validation in code generation and agentic tasks would be valuable to determine if similar favorable trade-offs can be achieved in these contexts. Importantly, our methods were not specifically designed for mathematical tasks but were developed as domain-agnostic approaches that should naturally extend to other areas. In future work, we plan to explore more realistic scenario tasks, particularly those involving agentic reasoning, to further validate our approach and improve the efficacy-efficiency trade-off in more broader areas. 23 <think> Okay, so I need to find the distance between the two points (2, -6) and (-4, 3). Hmm, I remember there's a formula for the distance between two points in a coordinate plane. It's something like the square root of the sum of the squares of the differences in each coordinate. Let me write that down. The formula is d = sqrt[(x2 - x1)^2 + (y2 - y1)^2]. So I can plug in the coordinates into this formula. Let me assign (x1, y1) as (2, -6) and (x2, y2) as (-4, 3). Then the differences would be x2 - x1 = -4 - 2 = -6 and y2 - y1 = 3 - (-6) = 9. So now I square those differences: (-6)^2 = 36 and 9^2 = 81. Then I add those together: 36 + 81 = 117. Finally, the distance is the square root of 117. I should check if this can be simplified. The factors of 117 are 9*13, so sqrt(9*13) = 3*sqrt(13). So the distance is 3Ã13 units. **Final Answer** The distance between the points is \boxed{3\sqrt{13}} units. </think> To find the distance between the points \((2, -6)\) and \((-4, 3)\), we use the distance formula: \[ d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2} \] Substituting the given points \((2, -6)\) as \((x_1, y_1)\) and \((-4, 3)\) as \((x_2, y_2)\), we calculate the differences: \[ x_2 - x_1 = -4 - 2 = -6 \] \[ y_2 - y_1 = 3 - (-6) = 9 \] Squaring these differences and adding them: \[ (-6)^2 + 9^2 = 36 + 81 =
https://arxiv.org/abs/2505.15612v1
117 \] Taking the square root: \[ \sqrt{117} = 3\sqrt{13} \] Thus, the distance between the points is \(\boxed{3\sqrt{13}}\) units.<think> Okay, so I need to find the distance between two points: (2, -6) and (-4, 3). Hmm, I remember from school that there's a formula for this kind of problem. Let me think. Is it something with the distance formula? Yeah, I think it's called the distance formula. It's used to find the distance between two points in a coordinate plane. Let me try to recall how that formula works. I think it's something like taking the difference in the x-coordinates, squaring it, taking the difference in the y-coordinates, squaring that, adding them together, and then taking the square root. Yeah, that sounds right. So, if I have two points, let's say (x1, y1) and (x2, y2), the distance d between them is calculated by the formula: d = sqrt[(x2 - x1)^2 + (y2 - y1)^2] Alright, let me write that down step by step so I don't get confused. First, I need to identify the coordinates of both points. The first point is (2, -6), so that means x1 is 2 and y1 is -6. The second point is (-4, 3), so x2 is -4 and y2 is 3. Now, I need to find the differences in the x and y coordinates. So, subtract x1 from x2, which is (-4 - 2). Let me calculate that. Hmm, -4 minus 2 is -6. So, that part is - 6. Next, subtract y1 from y2, which is (3 - (-6)). Wait, subtracting a negative is like adding the positive, so that would be 3 + 6, which is 9. So now, I have the differences: -6 for the x-coordinate and 9 for the y-coordinate. But since we're going to square them, the negative sign doesn't matter because squaring a negative number makes it positive. So, (-6)^2 is 36, and 9^2 is 81. Now, I need to add those squared differences together. 36 plus 81 is... let me add that. 36 plus 80 is 116, and then plus 1 more is 117. So, the sum inside the square root is 117. Therefore, the distance d is the square root of 117. Now, I need to simplify that radical. Let me see if 117 has any square factors. Hmm, 117 divided by 9 is 13. So, 9 is a perfect square, and 13 is a prime number. So, I can write sqrt(117) as sqrt(9 * 13). Since sqrt(9) is 3, that becomes 3*sqrt(13). Wait, let me make sure I did that correctly. So, 9 times 13 is 117, and sqrt(9) is 3, so yes, that would make sense. So, putting it together, the distance is 3 times the square root of 13. Let me just verify the steps again to make sure I didn't make a mistake. Coordinates: (2, -6) and (-4, 3). Compute x2 - x1: -4 - 2 = -6. y2 - y1: 3 - (-6) = 9. Square them: (- 6)^2 = 36; 9^2 = 81. Sum: 36 + 81 = 117.
https://arxiv.org/abs/2505.15612v1
Square root: sqrt(117). Factor: 117 = 9 * 13, so sqrt(9*13) = 3*sqrt(13). Yeah, that all looks correct. Is there another way to approach this problem? Maybe by plotting the points on a graph and using the Pythagorean theorem? Let me think. If I plot (2, -6) and (-4, 3), the distance between them would be the hypotenuse of a right-angled triangle where the other two sides are the differences in x and y coordinates. So, the horizontal leg would be the distance between x = 2 and x = -4, which is indeed 6 units. The vertical leg would be the distance between y = -6 and y = 3, which is 9 units. Then, using Pythagoras, the hypotenuse would be sqrt(6^2 + 9^2) = sqrt(36 + 81) = sqrt(117). So, same result as before, which is reassuring. Another thought: Could I have mixed up the order of subtraction? Like, if I had done x1 - x2 instead of x2 - x1, would that change anything? Let me see. If I do x1 - x2, that would be 2 - (-4) = 2 + 4 = 6, same as before. And y1 - y2 is -6 - 3 = -9, which when squared is also 81. So, same result: 6^2 + (-9)^2 = 36 +81 = 117. So, that's consistent. ...Question: If $f(x) = \frac{3x-2}{x-2}$, what is the value of $f(-2) +f(-1)+f(0)$? Express your answer as a common fraction. [2145 Tokens][550 Tokens]Figure 13: The full example of Figure 1 24 <think> Okay, so I have this problem where I need to express a double sum in terms of p and q, which are known as the sums of reciprocals of squares and cubes respectively. Let me write down what I need to find: The sum is over all j and k from 1 to infinity of 1/(j + k)^3. So, S = ∑_{j=1}^∞ ∑_{k=1}^∞ 1/(j + k)^3. And p = ∑_{k=1}^∞ 1/k², q = ∑_{k=1}^∞ 1/k³. ... Therefore, S = ∑_{m=2}^∞ (m - 1)/m³. So S = ∑_{m=2}^∞ (m - 1)/m³ = ∑_{m=2}^∞ (1/m² - 1/m³). Which can be written as: S = ∑_{m=2}^∞ 1/m² - ∑_{m=2}^∞ 1/m³. We know p = ∑_{m=1}^∞ 1/m², so ∑_{m=2}^∞ 1/m² = p - 1. Similarly, q = ∑_{m=1}^∞ 1/m³, so ∑_{m=2}^∞ 1/m³ = q - 1. Therefore, S = (p - 1) - (q - 1) = p - q. ... Thus, the expression becomes: \[ (p - 1) - (q - 1) = p - q \] The final answer is: \[ \boxed{p - q} \]<think> Okay, so I have this problem where I need to express a double sum involving reciprocals of cubes as a combination of two known sums, p and q. Let me write down what I know. First, p is defined as the sum from k = 1 to infinity of 1/k². I remember that p is known as the Basel problem, and its value isπ²/6. But I don't know if I need the exact value of p, or if I can just keep it as p for now.
https://arxiv.org/abs/2505.15612v1
Similarly, q is the sum from k = 1 to infinity of 1/k³, and I think that's called the Apery's constant. I don't recall its exact value, but again, maybe I don't need to compute it exactly. The main goal is to find an expression for the double sum: S = ∑_{j=1}^∞ ∑_{k=1}^∞ 1/(j + k)³. And express this S in terms of p and q. Hmm, okay. So, I need to work with S, which is a double sum over j and k starting at 1, and each term is 1 over (j + k) cubed. I remember that sometimes, double sums can be transformed into single sums by changing variables or using some symmetry. Maybe I can find a substitution that simplifies this. Let me think: in the double sum, the indices j and k are symmetric. So perhaps I can make a substitution where I set n = j + k, and then see how many times each n appears in the double sum. That is, for each n ≥ 2 (since j and k start at 1), the number of pairs (j, k) such that j + k = n is (n - 1). For example, when n=2, only (1,1); when n=3, (1,2) and (2,1); for n=4, (1,3), (2,2), (3,1), etc. Therefore, I can rewrite the double sum S as: S = ∑_{n=2}^∞ (n - 1) * [1/n³]. Because for each n, there are (n - 1) pairs (j, k) where j + k = n, and each of these contributes 1/n³ to the sum. So now, S can be written as: S = ∑_{n=2}^∞ (n - 1)/n³. Let me simplify this expression. Let's split the fraction: (n - 1)/n³ = n/n³ - 1/n³ = 1/n² - 1/n³. Therefore, S = ∑_{n=2}^∞ (1/n² - 1/n³). Now, let's write this as: S = ∑_{n=2}^∞ 1/n² - ∑_{n=2}^∞ 1/n³. But wait, the original p is ∑_{n=1}^∞ 1/n², which is 1 + ∑_{n=2}^∞ 1/n². Similarly, q is ∑_{n=1}^∞ 1/n³, which is 1 + ∑_{n=2}^∞ 1/n³. So, if I subtract 1 from p, I get ∑_{n=2}^∞ 1/n² = p - 1. Similarly, subtracting 1 from q gives ∑_{n=2}^∞ 1/n³ = q - 1. Therefore, substituting back into S: S = (p - 1) - (q - 1) = p - 1 - q + 1 = p - q. Wait, but that can't be right. Because when I split the sum S = ∑_{n=2}^∞ (1/n² - 1/n³) = ∑_{n=2}^∞ 1/n² - ∑_{n=2}^∞ 1/n³ = (p - 1) - (q - 1) = p - q. So, is S = p - q? But hold on, let me check that. Let me compute S numerically for a few terms to see if this holds. ... Thus, substituting these into the expression for \( S \), we get: \[ S = (p - 1) - (q - 1) = p - q. \] Therefore, the final answer is: \[ \boxed{p - q} \]Question: Define \[p = \sum_{k = 1}^\infty \frac{1}{k^2} \quad \text{and} \quad q = \sum_{k = 1}^\infty \frac{1}{k^3}.\]Find a way to write \[\sum_{j
https://arxiv.org/abs/2505.15612v1
= 1}^\infty \sum_{k = 1}^\infty \frac{1}{(j + k)^3}\]in terms of $p$ and $q.$ [Total Tokens: 17523][Total Tokens: 1114]Figure 14: Additional case study demonstrating the evolution of reasoning efficiency. In this example, the original model required over 17K tokens to solve a question from the MATH500 dataset, while our trained model accomplished the same task using only 1K+ tokens. 25 Okay, so I need to find f(-2) + f(-1) + f(0) where f(x) is given by (3x - 2)/(x - 2). Hmm, let's start by plugging in each value step by step. First, f(-2): f(-2) = (3*(-2) - 2)/(-2 - 2) = (-6 - 2)/(-4) = (-8)/(-4) = 2. Next, f(-1): f(-1) = (3*(-1) - 2)/(-1 - 2) = (-3 - 2)/(-3) = (-5)/(-3) = 5/3. Then, f(0): f(0) = (3*0 - 2)/(0 - 2) = (-2)/(-2) = 1. Now, add them together: 2 + 5/3 + 1. That's 3 + 5/3. Convert 3 to thirds: 9/3 + 5/3 = 14/3. The value of f(-2) + f(-1) + f(0) is \boxed{\dfrac{14}{3}}. To find \( f(-2) + f(-1) + f(0) \) where \( f(x) = \frac{3x - 2} {x - 2} \): 1. Calculate \( f(-2) \):\[f(-2) = \frac{3(-2) - 2}{-2 - 2} = \frac{-6 - 2}{-4} = \frac{-8}{-4} = 2\]2. Calculate \( f(-1) \):\[f(-1) = \frac{3(-1) - 2}{-1 - 2} = \frac{-3 - 2}{-3} = \frac{-5}{-3} = \frac{5}{3}\]3. Calculate \( f(0) \):\[f(0) = \frac{3(0) - 2}{0 - 2} = \frac{-2}{-2} = 1\]Add the results together:\[2 + \frac{5}{3} + 1 = 3 + \frac{5}{3} = \frac{9}{3} + \frac{5}{3} = \frac{14}{3}\]The value is \(\boxed{\dfrac{14}{3}}\).Okay, so I have this function f(x) which is equal to (3x - 2) divided by (x - 2). The question is asking me to find the value of f(-2) plus f(-1) plus f(0) and express the answer as a common fraction. Hmm, let me break this down step by step because I want to make sure I don't make any mistakes. First, I know that to find f(-2), f(-1), and f(0), I need to substitute each of these x-values into the function f(x). So let me write down each one separately. Starting with f(-2): Let me compute the numerator and denominator separately to avoid mistakes. The numerator is 3*(-2) which is -6, then subtract 2, so -6 - 2 is -8. The denominator is (-2) - 2, which is -4. So putting it together, f(-2) is (-8)/(-4). Hmm, dividing two negatives gives a positive, so that's 2. Got that, f(-2) is 2. Next, let me compute f(-1): f(-1) = (3*(-1) - 2) / ((-1) - 2) Now, f(0):f(0) = (3*0 - 2) / (0 - 2) Compute numerator and denominator. Numerator: 3*0 is 0, minus 2 is -2. Denominator: 0 - 2 is -2. So f(0) is (-2)/(-2), which is 1. So f(0) is 1.Now, I need to add these three values together: f(-2) + f(-1) + f(0) which is 2 + (5/3) + 1. Hmm, let me convert all numbers to fractions to make it easier. 2 is equal to 6/3, 1 is equal to 3/3, and 5/3 is already
https://arxiv.org/abs/2505.15612v1
a fraction. So adding them up:6/3 + 5/3 + 3/3.Adding the numerators: 6 + 5 + 3 is 14, so the total is 14/3.Wait, hold on, let me double-check my calculations because 2 + 5/3 + 1 is 3 + 5/3, which is 14/3. Yeah, that seems right.But let me make sure I didn't make any mistakes in computing each f(x). Maybe I should check each substitution again.Starting with f(-2):Numerator: 3*(-2) is indeed -6, minus 2 is -8. Denominator: (-2) - 2 is -4. So -8/-4 is 2. That seems correct.f(-1):Numerator: 3*(-1) is -3, minus 2 is -5. Denominator: (-1) - 2 is -3. So -5/-3 is 5/3. That's correct. f(0):Numerator: 3*0 is 0, minus 2 is -2. Denominator: 0 - 2 is -2. So -2/-2 is 1. Correct.Adding 2 + 5/3 + 1: 2 is 6/3, 5/3 is 5/3, and 1 is 3/3. So 6/3 + 5/3 + 3/3 is 14/3. That is 4 and 2/3. Hmm, I wonder if I can express that as a mixed number, but the question says to express as a common fraction, so 14/3 is the answer.Wait, but just to make sure I didn't make a mistake in the initial function. The function is f(x) = (3x - 2)/(x - 2). Yeah, that looks right. So for x = -2, x = -1, x = 0, I substituted correctly.Wait a second, but let me make sure about f(0). If I plug in x = 0, it's (0 - 2)/(0 - 2) which is (-2)/(-2) = 1. Yeah, that's correct.Wait, but just to make sure I didn't make any computation errors: 3*(-2) is - 6, minus 2 is -8, divided by (-4) is 2. Correct.3*(-1) is -3, minus 2 is -5, divided by (-3) is 5/3. Correct.3*0 is 0, minus 2 is -2, divided by (-2) is 1. Correct.So adding them together: 2 is 6/3, 5/3 is 5/3, and 1 is 3/3. So 6/3 + 5/3 + 3/3 is 14/3. Yes, that seems right.Wait, but I just thought, is there any chance that x = 2 is a problem? Because the function is undefined at x = 2, but in this case, we're plugging in -2, -1, and 0, so x = 2 isn't an issue here. So all three f(x) are defined, and we can safely compute them. Therefore, the sum is 14/3.But just to be absolutely thorough, let me compute each term again:For f(-2): Numerator: 3*(-2) = -6, minus 2 is -8.Denominator: (-2) - 2 = -4.So -8 divided by -4 is 2. Correct.For f(-1): Numerator: 3*(-1) = -3, minus 2 is -5.Denominator: (-1) - 2 = -3.So -5 divided by -3 is 5/3. Correct.For f(0): Numerator: 3*0 is 0, minus 2 is -2.Denominator: 0 - 2 is - 2.So -2 divided by -2 is 1. Correct.Adding them together: 2 + 5/3 + 1.2 is 6/3, 1 is 3/3, so 6/3 + 5/3 + 3/3 = (6 + 5 + 3)/3 = 14/3. So yes, 14/3 is the correct answer.Wait, but just to make sure, maybe I can represent 14/3 as a mixed number?
https://arxiv.org/abs/2505.15612v1
14 divided by 3 is 4 with a remainder of 2, so that's 4 and 2/3. But the question says to express as a common fraction, so 14/3 is fine.Alternatively, if I were to write it as a decimal, it's approximately 4.666..., but the question wants an exact fraction, so 14/3 is the way to go. I think that's thorough. I don't see any mistakes in my calculations, so I'm confident that 14/3 is the correct answer.**Final Answer**The value of \( f(-2) + f(-1) + f(0) \) is \boxed{\dfrac{14}{3}}.Question: If $f(x) = \frac{3x-2}{x-2}$, what is the value of $f(-2) +f(-1)+f(0)$? Express your answer as a common fraction. [2095 Tokens][567 Tokens]Figure 15: Further example demonstrating improvements in reasoning approach 26
Can LLMs understand Math? Exploring the Pitfalls in Mathematical Reasoning

Tiasa Singha Roy∗, Aditeya Baral∗, Ayush Rajesh Jhaveri, Yusuf Baig
New York University
{ts5478, ab12057, aj4332, yb2510}@nyu.edu

Abstract

Large language models (LLMs) demonstrate considerable potential in various natural language tasks but face significant challenges in mathematical reasoning, particularly in executing precise, multi-step logic. However, current evaluation frameworks judge their performance solely based on accuracy, which only accounts for the final answer. This study explores these pitfalls by employing a novel evaluation framework. We propose an evaluation metric called the MAPLE score, which holistically quantifies reasoning misalignment by integrating error rates, redundancy, and validity.

1 Introduction

Large Language Models (LLMs) have demonstrated impressive capabilities across tasks such as text generation, language translation, question answering, and sentiment analysis. However, their performance diminishes in complex reasoning tasks, particularly within the mathematical domain. While LLMs perform adequately on elementary math problems, they often struggle with tasks requiring precise, step-by-step reasoning, leading to errors in solution validity and logical consistency. These limitations underscore the need for a holistic evaluation of their mathematical reasoning abilities to identify weaknesses and guide targeted improvements.

This study introduces a multi-stage evaluation methodology to systematically assess LLMs' mathematical reasoning capabilities. We prompt various LLMs to generate solutions using the MATH dataset. The process involves iterative self-reflection to evaluate reasoning steps, identify misalignments, and compile error labels such as calculation errors, misinterpretations, and incoherent outputs. We propose the MAPLE (Mathematical Pitfalls and Logical Evaluation) score as a holistic metric to quantify reasoning misalignment. This score incorporates error rates, redundancy, and validity, offering a comprehensive evaluation. Our findings reveal patterns of errors and limitations across different mathematical topics and levels, providing insights into the challenges LLMs face in complex reasoning tasks.

2 Related Work

We draw on insights from ReasonEval [1], which argues that relying solely on final-answer accuracy can mask the use of unnecessary or incorrect intermediate steps in the mathematical reasoning process. It introduces a methodology highlighting the importance of going beyond accuracy when evaluating LLM performance on mathematical reasoning. To extend this methodology, we leverage ideas from self-reflection [2], which proposes a method for using LLMs to self-correct their own reasoning. We take motivation from this method to use the LLM to identify pitfalls and patterns in its reasoning evaluation. However, these methods depend on external sources for effective self-improvement. Our work builds on this by using oracle labels directly within the LLM, allowing it to identify and analyze patterns in its reasoning failures autonomously. Furthermore, while past studies such as [3][4] argue that using oracle labels for self-correction may not be realistic for all applications, we propose employing them here in a self-feedback context.

3 Approach

Figure 1: Architecture of our LLM Agent evaluation and identification of errors.
The LLM's generated answer is evaluated in a multi-turn setup to identify the failing points in the generated response using self-reflection and clustering.

3.1 Stage 1 - Evaluating the Final Answer and Approach

As shown in Figure 1, we initialize the LLM agent to generate an answer a′_i based on the initial prompt p_i and question q_i. For each a′_i, we create a multi-turn setup in which we provide the agent with the correct solution to induce self-checking. We give the LLM the correct final answer value a_{f_i} to check against. This approach verifies whether the final answers (a′_{f_i}, a_{f_i}) match, in order to identify the cases the LLM agent gets wrong. For these incorrect responses, we invoke self-reflection: this step uses the generated and correct response pair (a′_i, a_i) to produce a generation analysis which highlights the points of misalignment between the reasoning steps and the actual solution. We use BERT [5] based embeddings to encode these failing points and perform clustering to compile a consistent set of error labels L that encompasses issues across the samples.

Figure 2: Architecture of our Judge LLM Agent and MAPLE Score. The Judge LLM provides step-wise analysis to compute the MAPLE score using label frequencies and label weights.

3.2 Stage 2 - LLM as a Judge and Computing Incorrectness

While the compiled error labels L help provide a broad classification of the mathematical mistakes made while solving a problem, it is also crucial to identify and correlate each mathematical reasoning step with these labels. As shown in Figure 2, we initialize a judge LLM and prompt it with the error labels L, the question q_i, the correct solution a_i, and the generated solution a′_i to create a set of error labels S corresponding to each erroneous reasoning step. These step-wise error labels are then used to compute the degree of incorrectness of the incorrect solution.

3.3 Stage 3 - Computing MAPLE Score

Given a step-wise collection of error labels S = [[l_1, l_2, ..., l_n], [l_1, l_2, ..., l_n], ...], we first compute the frequency f_l of each label l ∈ L. This represents the relevance of a particular label to an incorrectly generated sample. We take the logarithm of the frequencies instead of the raw frequencies to reduce sensitivity to large frequency values, with a +1 term added to ensure numerical stability when no errors are made. We then compute the error rate e as the weighted average of the frequencies f_l, weighted by the penalty score w_l for each label. The penalty score for each label, as shown in Section A.1, was aggregated over the results of a human survey which ranked the error labels in increasing order of incorrectness. The penalty score allows us to individually weigh the contribution of each error label to the MAPLE score.

e = ( Σ_{l∈L} w_l · log(1 + f_l) ) / ( Σ_{l∈L} w_l )    (1)

The redundancy r and validity v of the overall solution are computed using ReasonEval [1], with values in the range r, v ∈ [0, 1]. The MAPLE score decreases with an increase in validity and increases with an increase in redundancy of the solution. Finally, we express the error metric e by applying the tanh function such that MAPLE score ∈ [0, 1]:

MAPLE score = tanh( e · v / r )    (2)
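To make Eqs. (1) and (2) concrete, here is a small numerical sketch; the label names and penalty weights are illustrative placeholders, not the survey-derived values from Section A.1.

```python
# Minimal sketch of the MAPLE score from Eqs. (1)-(2). Labels and penalty
# weights below are made-up placeholders for illustration only.
from math import log, tanh
from typing import Dict

def maple_score(freqs: Dict[str, int], weights: Dict[str, float],
                validity: float, redundancy: float) -> float:
    # Eq. (1): error rate e, a penalty-weighted average of log(1 + f_l).
    num = sum(weights[l] * log(1 + freqs.get(l, 0)) for l in weights)
    den = sum(weights.values())
    e = num / den
    # Eq. (2): squash into [0, 1) with tanh, combining e with the
    # ReasonEval validity and redundancy scores.
    return tanh(e * validity / redundancy)

freqs = {"calculation error": 3, "misinterpretation": 1}
weights = {"calculation error": 0.8, "misinterpretation": 0.6, "incoherent": 1.0}
print(maple_score(freqs, weights, validity=0.7, redundancy=0.4))  # ~0.80
```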
4 Experiments

4.1 Data

We use the MATH [6] dataset, which comprises 12,500 competition mathematics problems. The problems vary in complexity from levels 1 through 5 and span mathematics categories consisting of Intermediate Algebra, Precalculus, Algebra, Prealgebra, Geometry, Counting and Probability, and Number Theory.