from the GoEmotions dataset paired with rich descriptive annotations generated by GPT-4 (see Section 3.1). GoEmotions is selected for training due to its diverse emotional content and its fine-grained human annotation schema, which allows a controlled comparison between human and GPT-4 supervision. We evaluate the trained model's zero-shot performance on three additional datasets using their test splits, each with a distinct label space.

GoEmotions. GoEmotions (Demszky et al., 2020) contains approximately 58k English Reddit comments annotated with one or more of 27 fine-grained emotion classes. For training, we use only the raw text from the training split (N = 43.4k), paired with GPT-4 generated descriptive emotion labels (not the original human annotations).

SemEval. We use the data from SemEval-2018 Task 1: Affect in Tweets (Mohammad et al., 2018). It also provides multi-label categorical emotion annotations. This setup is most similar to GoEmotions, but it has only 11 emotion classes.

ISEAR. The International Survey on Emotion Antecedents and Reactions dataset (Wallbott and Scherer, 1986) contains short self-reported descriptions of emotional experiences collected across different countries. Each instance is labeled in a single-label manner with one of seven emotions.

EmoBank. EmoBank (Buechel and Hahn, 2017) is the only dataset we use with dimensional emotion annotations. It consists of approximately 10k English sentences from a variety of genres, including news articles, blogs, and fiction. Each sample is annotated by readers along three dimensions on a 5-point scale. In this study, we focus on predicting valence and activation scores.

4.2 Baseline Models

Upper-bound models. We contextualize our model's performance with two upper-bound baselines. First, we evaluate GPT-4 zero-shot (GPT-ZS) performance by prompting GPT-4 to perform zero-shot ER.
The prompts are adapted from prior work (Niu et al., 2024), formatted similarly to our generation prompts (Section 3.1) but modified to elicit dataset-specific outputs. Given that GPT-4 is over 10k times larger than BERT¹, and our model is distilled from its outputs, we consider GPT-ZS a performance upper bound. Second, we include a finetuned BERT model (BERT-FT), which shares the same backbone as our model but is trained with direct supervision. We train a separate model for each evaluation dataset and select the best checkpoint based on validation performance (macro-F1 for classification; PCC for regression).

¹ Estimate based on public parameter counts: GPT-4 >100B vs. BERT-base 110M.

Baseline Models. We compare our model against three baseline models. We use BERT as the base model across the baselines and our methods. The first is a BERT model finetuned on GoEmotions and directly applied to other datasets without further adaptation (BERT-ZS). For classification tasks (SemEval and ISEAR), we reuse the prediction heads for overlapping emotion labels and randomly initialize new heads for unseen labels. For regression tasks (EmoBank), the prediction heads are entirely randomly initialized. This serves as a conservative and relatively weak baseline. We also include two strong zero-shot baselines with comparable model sizes from prior work. The first is a similarity-based method (Olah et al., 2021), where cosine similarity is computed between the embedding of the input text and each emotion label augmented with its definition. The second is an entailment-based method (Bareiß et al., 2024), where a BERT model is finetuned on multiple NLU tasks and predicts the plausibility of the crafted input "[text] This text expresses [emotion]."

https://arxiv.org/abs/2505.18040v1

4.3 Experimental Setup and Metrics

Training. To ensure that no validation or test samples from GoEmotions are seen during the training of our model, we reserve 20% of the GoEmotions training set for model selection. Additionally, to assess model fit on the training task itself, we report micro-averaged F1 scores on the GoEmotions test set. (Macro-averaging is not feasible due to the extensive number of training labels.)

Evaluation. For classification datasets, we evaluate performance using the macro-averaged F1-score to reflect overall classification effectiveness across classes. For regression tasks, we report the Pearson correlation coefficient (PCC) to measure the linear relationship between predicted and ground-truth scores, and Spearman's rank correlation coefficient (ρ) to assess monotonic relationships. For EmoBank, we report regression performance separately for valence and activation dimensions.

It is important to note that comparisons on the GoEmotions test set are not fully controlled, as models differ in the additional information available during training: our model is exposed to the text content in the GoEmotions training set, while the entailment-based baseline is finetuned on a broad range of NLU tasks, including some emotion-related ones. The similarity-based baseline relies on emotion label definitions from WordNet.
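The two correlation metrics reported above are standard. As a reference sketch, both can be computed with NumPy alone (this rank-based Spearman assumes no tied scores; the toy score vectors are illustrative only, not from our experiments):

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation coefficient (PCC) between two score vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    ac, bc = a - a.mean(), b - b.mean()
    return float(ac @ bc / (np.linalg.norm(ac) * np.linalg.norm(bc)))

def spearman(a, b):
    """Spearman's rho: Pearson correlation of the ranks (no tie handling)."""
    rank = lambda x: np.argsort(np.argsort(x)).astype(float)
    return pearson(rank(a), rank(b))

# Toy valence predictions vs. ground truth on a 5-point-style scale.
pred = [2.1, 3.4, 3.0, 4.8]
gold = [2.0, 3.5, 3.1, 4.9]
pcc, rho = pearson(pred, gold), spearman(pred, gold)
```

Predictions that are monotonically but nonlinearly related to the ground truth keep ρ at 1.0 while lowering PCC, which is why both measures are reported.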
We still report their performance as zero-shot since none of the models has seen the GoEmotions human annotations or test set text samples, but we note that the results should be interpreted with care.

Model | GoEmotions macro-F1 | SemEval macro-F1 | ISEAR macro-F1 | EmoBank-V Rho (ρ) | EmoBank-V PCC | EmoBank-A Rho (ρ) | EmoBank-A PCC
BERT-FT | 0.477 | 0.563 | 0.677 | 0.764 | 0.790 | 0.470 | 0.556
GPT-4-ZS | 0.319 | 0.486 | 0.728 | 0.678 | 0.718 | 0.369 | 0.324
BERT-ZS | / | 0.205 | 0.336 | -0.033 | -0.004 | -0.011 | -0.022
Similarity | 0.137 | 0.391 | 0.504 | 0.428 | 0.401 | 0.110 | 0.119
Entailment | 0.073 | 0.430 | 0.222 | 0.534 | 0.485 | 0.102 | 0.105
Ours | 0.299±0.005 | 0.480±0.003 | 0.479±0.015 | 0.613±0.008 | 0.593±0.003 | 0.155±0.028 | 0.202±0.040

Table 1: Zero-shot performance across four datasets. For our model, we train the model with five random seeds and report the mean and standard deviation of the performance metrics. The best overall performance is underlined. The best zero-shot performance among comparably sized models is highlighted in bold.

For multi-label classification tasks, we perform an additional threshold calibration step on a validation set for our model and the baseline methods. Specifically, we determine the optimal threshold for each emotion class by searching between 0 and 1 (in increments of 0.05), selecting the value that yields the best performance on the validation split of each dataset. While this setup is not strictly zero-shot, it is consistent with prior zero-shot SER work (Olah et al., 2021). We believe this step is necessary because of the nature of multi-label classification: in this setup, the goal is not to output the most likely emotion, but instead to identify all emotions that are present. In this case,
a threshold is needed to make this judgment. We further discuss this limitation in the Limitations section. The regression and single-label classification tasks are evaluated in a strictly zero-shot manner.

For all baseline models, we report results based on our own replication to ensure consistency across datasets and evaluation setups. Our replicated results are comparable to those reported in the original papers. Additional training and hyperparameter details can be found in our released code.

5 Results

5.1 Overall Performance

We first compare the zero-shot performance of our model against all baseline methods in Table 1. First, comparing the two upper-bound models, we find that dataset-specific fine-tuning provides significant benefits: BERT-FT achieves the best performance on most benchmarks. Although GPT-4 is substantially larger, it only outperforms BERT-FT on the ISEAR dataset. This is possibly due to the specificity of emotion datasets: label sets, annotation instructions, and cultural or contextual assumptions can vary widely, making it difficult even for powerful general-purpose models to fit specific emotion distributions without adaptation.

Our model shows competitive performance on multi-label classification tasks. It outperforms both strong zero-shot baselines and approaches GPT-4 performance (e.g., 0.486 for GPT-4 vs. 0.480 for ours on SemEval). Although our model has been exposed to GoEmotions training texts (but not the label space, see Section 4.3) and may be more familiar with the text domain, the substantial performance margin over baselines suggests a genuine ability to generalize to new label spaces. This is further validated on SemEval, where all models are fully zero-shot, and ours achieves 0.480 F1, outperforming 0.391 for the similarity-based method and 0.430 for the entailment-based method.
We observe that the performance of the entailment-based method varies substantially across datasets (e.g., strong on SemEval but nearly random on GoEmotions), likely due to its sensitivity to domain shifts and prompt formulations (Yin et al., 2019).

Our model performs slightly worse than the similarity-based baseline on the single-label ISEAR dataset (macro-F1: Ours 0.479, Similarity 0.504, Entailment 0.222). We suspect this is due to our model's multi-label training setup, which encourages capturing multiple plausible emotions rather than selecting the most dominant one. For instance, it often confuses guilt and shame, which do naturally co-occur. Further analysis and targeted experiments may help clarify this behavior or improve the model for single-label scenarios.

Dim | Train | G | S | I
50 | 0.428 | 0.274 | 0.477 | 0.451
100 | 0.443 | 0.290 | 0.475 | 0.449
200 | 0.447 | 0.293 | 0.479 | 0.486
768 | 0.454 | 0.296 | 0.476 | 0.488

Table 2: Comparison of emotion space dimensionality. The "Train" column shows the performance on the GoEmotions test set with GPT-4 labels, measured by micro-F1. "G", "S", and "I" refer to performance on GoEmotions, SemEval, and ISEAR respectively, measured by macro-F1.

Finally, although not explicitly trained for regression tasks, ours and both baseline models achieve surprisingly strong results on valence regression. Our model obtains a Spearman correlation of 0.613 and a PCC of 0.593, outperforming 0.534 ρ and 0.485 PCC for the entailment-based model, and 0.428 ρ and 0.401 PCC for the similarity-based model. Performance on activation prediction is notably lower across all models. Activation is generally more difficult
to infer from text alone (Buechel and Hahn, 2017; Wagner et al., 2023), and activation-related terms such as "high activation" or "low activation" are less commonly included in both human-annotated labels and GPT-4-generated descriptions. Overall, our approach shows encouraging results for zero-shot regression tasks, but further research is needed to close its performance gap with supervised models.

5.2 Ablation Studies

Since the standard deviation of performance metrics was found to be small during training (see Table 1), we use a fixed random seed (42) for all ablation studies to reduce computational cost.

Dimension Size. We investigate the impact of the emotion space dimension d. In Table 1, we used d = 768 to maintain consistency with the baseline models for fair comparison. However, a smaller d is desirable for downstream applications due to lower computational costs and smaller model sizes. To explore this trade-off, we conducted additional experiments with d ∈ {200, 100, 50} across the classification datasets. As shown in Table 2, performance generally drops slightly as the dimension decreases, but the decline remains small even under aggressive reductions (e.g., d = 50). This suggests that our model can maintain strong performance even with a compact emotion space, making it suitable for resource-efficient applications.

GPT-4 Supervision. We next probe the effect of using GPT-4 generated labels for supervision. We compare our model with a variant trained on the same samples but supervised with human annotations, rather than GPT-4 generated annotations.

Model | Train | G | S | I
Ours (GPT-4 labels) | 0.454 | 0.296 | 0.476 | 0.488
Human labels (overall) | 0.587 | 0.475 | 0.414 | 0.410
Human labels (seen classes) | / | 0.475 | 0.509 | 0.488
Human labels (unseen classes) | / | / | 0.161 | 0.215

Table 3: Comparison of models trained with GPT-4 generated labels versus human labels. For models trained with human labels, we also report separate results on classes seen in the training dataset (8 out of 11 in SemEval, 5 out of 7 in ISEAR). Note that GoEmotions performance under human supervision reflects a supervised setting, while all others are zero-shot evaluations.

As Table 3 shows, models trained on human labels perform better on the GoEmotions dataset itself—as expected, due to direct supervision—but exhibit worse zero-shot generalization to SemEval and ISEAR. The model trained on human labels performs well on seen classes, even outperforming our GPT-4-distilled model on SemEval, but its performance drops sharply on unseen classes, with macro-F1 scores of only 0.161 for SemEval and 0.215 for ISEAR. Yet, both models share the same contrastive architecture, making it theoretically possible for the human-supervised model to generalize to unseen labels because of BERT's existing semantic embedding space. However, the richness of the supervision makes a substantial difference. Notably, GoEmotions already provides one of the most extensive categorical label sets among text-based ER datasets. These results suggest that it remains difficult to learn a generalizable emotion space from a fixed and limited set of labels, underscoring the advantage of distilling from rich, descriptive annotations.

5.3 Emotion Space Probing

Finally, to interpret the learned emotion space, we examine its nearest-neighbor structure. We use all 27 classes from the GoEmotions
dataset as target emotions and GPT-4-generated emotion description terms on its test split as the candidate pool (N = 684). We only use the test set to ensure that the retrieved structure reflects generalization rather than overfitting to the training data. We compare our model against a BERT encoder baseline. For BERT, we extract the [CLS] token embeddings of all emotion labels/terms. For our model, we encode these terms using our trained Label Encoder and Projector (as shown in Figure 2). Since the Label Encoder is initialized with the BERT encoder and frozen during training, our encoder only differs from BERT by one linear layer. For each target emotion, we retrieve the top-4 most similar terms from the candidate pool using cosine similarity.

Target Emotion | BERT (768D) | Ours (768D) | Ours (50D)
Admiration | Sympathy, Gratitude | Reverence, Amazement | Accomplishment, Reverence
Gratitude | Satisfaction, Admiration | Appreciation, Grateful | Appreciation, Grateful
Approval | Disapproval, Recommendation | Positive Surprise, Positive | Positive Interest, Positive Surprise
Annoyance | Distraction, Reluctance | Irritation, Annoyed | Irritation, Annoyed
Curiosity | Suspicion, Surprise | Interest, Intrigue | Interest, Slight Hopefulness

Table 4: Top-2 most similar GPT-4-generated emotion terms retrieved for each of the five most frequent emotion labels in the GoEmotions test set, using BERT text embeddings, our full model, and a reduced 50D version.

Due to space constraints, Table 4 shows the five most frequent target emotions and their top-2 neighbors; full results are provided in Appendix B. Manual inspection suggests our model retrieves more emotionally aligned neighbors compared to BERT. For instance, for "Admiration", our model returns "Reverence" and "Amazement", whereas BERT returns "Sympathy" and "Gratitude". We also observe that BERT tends to prioritize part-of-speech consistency, e.g., failing to retrieve "annoyed" for "annoyance" or "grateful" for "gratitude".
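The retrieval step itself is plain top-k cosine similarity over term embeddings. A minimal sketch with toy 2-D vectors (real queries are 768D or 50D encoder outputs; the vectors and term list here are illustrative stand-ins, not actual embeddings):

```python
import numpy as np

def top_k_neighbors(query, pool, names, k=2):
    """Return the names of the k candidate terms most cosine-similar to the query."""
    q = query / np.linalg.norm(query)
    P = pool / np.linalg.norm(pool, axis=1, keepdims=True)
    order = np.argsort(-(P @ q))[:k]          # indices by descending similarity
    return [names[i] for i in order]

names = ["Reverence", "Amazement", "Sympathy"]
pool = np.array([[1.0, 0.1],                  # toy stand-ins for encoder outputs
                 [0.9, 0.2],
                 [0.1, 1.0]])
query = np.array([1.0, 0.0])                  # stand-in for a target-emotion embedding
neighbors = top_k_neighbors(query, pool, names)
```

Because all vectors are unit-normalized before the dot product, the ranking depends only on direction, not magnitude, which matches the cosine-similarity retrieval used in our probing.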
In some cases, it even retrieves antonyms, such as "Disapproval" for "Approval". These behaviors are likely due to semantic relatedness in the general language space, but are unfavorable for emotion-specific use cases. Our contrastive training helps mitigate these effects. Additionally, we conduct the same experiment using our 50D model, aggressively compressing the emotion space. The retrieved neighbors remain largely consistent with those from the 768D model and, in our judgment, still align more closely with the intended target emotions than BERT's. These results demonstrate that our approach can learn a more efficient representation space that better preserves emotion nuances compared to language-focused embeddings.

6 Discussion

Zero-shot generalization is a highly desirable capability for ER systems, as it enables flexible adaptation to applications without the need for extra data collection or retraining. In this work, we design a compact model that distills emotional knowledge from GPT-4, achieving zero-shot generalization without the prohibitive scale of LLMs.

Our results are encouraging. First, we demonstrate that our contrastive learning framework enables the model to handle diverse ER setups, including both multi- and single-label classification, as well as regression. Second, through a nearest-neighbor retrieval analysis, we show that our model captures emotional saliency from general language representations, and this structure remains largely preserved even when the embedding dimensionality is reduced from 768 to 50. Together,
these results suggest that our approach yields compact, generalizable emotion representations that can be readily applied to a variety of downstream tasks. In settings where further (few-shot) tuning is possible, these representations provide a strong and efficient starting point for adaptation.

Our results also invite reflection on what makes an effective representation space for emotions. While LLMs demonstrate strong emotion understanding (Section 2.2), they operate in the full language space, which encodes a wide range of information beyond emotion. As a result, they can be larger and more resource-intensive than necessary for emotion tasks. In contrast, our approach distills a dedicated emotion space that focuses on emotionally salient features, showing that strong emotion understanding can be achieved with significantly less computation. We hope these findings encourage future research in this domain.

7 Conclusion

We present a contrastive distillation framework that extracts emotional knowledge from LLMs into a compact, BERT-sized model. Our method learns to map both text inputs and GPT-4 generated label descriptors into a joint representation space without the need for human annotations, and it enables zero-shot prediction across diverse label sets and task types. Experiments show strong performance across datasets and label spaces, outperforming comparable zero-shot baselines and approaching GPT-4 zero-shot performance while remaining far smaller in size. We discuss practical directions to further improve the model, as well as potential ethical considerations surrounding emotion annotation and model fairness.

Limitations

For zero-shot inference on multi-label classification tasks, we calibrate prediction thresholds on a validation set for both our model and baseline models.
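The per-class calibration described in Section 4.3 is a simple grid search over candidate thresholds. A sketch under the stated setup (0 to 1 in steps of 0.05, maximizing validation F1; the scores and labels below are toy values, not from our data):

```python
import numpy as np

def binary_f1(pred, gold):
    """F1 for one emotion class given boolean prediction/gold vectors."""
    tp = int(np.sum(pred & gold))
    fp = int(np.sum(pred & ~gold))
    fn = int(np.sum(~pred & gold))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def calibrate_threshold(scores, gold, grid=np.arange(0.0, 1.0001, 0.05)):
    """Pick the threshold on [0, 1] (step 0.05) that maximizes validation F1."""
    return float(max(grid, key=lambda t: binary_f1(scores >= t, gold)))

# Toy validation scores for a single class.
scores = np.array([0.9, 0.6, 0.2, 0.1])
gold = np.array([True, True, False, False])
threshold = calibrate_threshold(scores, gold)
```

In the full procedure this search runs once per emotion class per dataset, on the validation split only; the test split is never used for calibration.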
We also ran experiments without calibration, where all models showed significant drops in F1 scores, but they remained above the random baseline and the relative ranking among the models stayed the same. We think that threshold calibration is an essential step for multi-label classification under the current setups, where the model independently predicts each label without considering the full label set. However, human judgments are influenced by the full set of available alternatives. For example, in ISEAR, all positive samples are typically labeled as "joy", as it is the only available positive category. If given more fine-grained options, annotators may choose more accurate descriptions like "proud" or "excited" while dropping "joy". The calibration step serves as weak supervision to help models adjust to these differences. To remove this constraint, future work could explore methods that consider all label options jointly for each sample.

While our framework shows strong potential, there are several practical directions for further improving its generalizability and robustness. First, this work focuses on generalization to unseen labels, while new text domains can also pose challenges. Instead of using GoEmotions as the sole source of text for supervision, future work could incorporate more diverse textual sources to improve generalization across varied contexts. Second, although our model is compatible with multiple emotion label spaces, the current loss design is best aligned with multi-label classification—where we also observe the strongest empirical performance. Future work could explore multi-stage training strategies to better
prepare the model for specific downstream applications while preserving its zero-shot generalization ability.

Ethical Considerations

Emotion recognition inherently involves subjectivity, as emotional expressions and interpretations can vary significantly across individuals and cultural backgrounds (Zhang et al., 2022; Scherer et al., 2011). As such, bias and fairness are persistent concerns in emotion annotation and modeling (Mao et al., 2023; Zhang et al., 2022; Xu et al., 2020). In our work, we distill emotion supervision from GPT-4. This approach has potential benefits: LLMs like GPT-4 are trained on large-scale, diverse data and are explicitly designed with fairness considerations in mind (Mu et al., 2024), which may reduce some forms of annotator-specific bias. However, any representational or linguistic biases in GPT-4 will propagate into the distilled model. In our annotations, we also observe that GPT-4 tends to generate emotion terms that are more complex or infrequent, sometimes diverging from how many people naturally express emotions in everyday settings. While such richness can enhance expressiveness, it may also reflect linguistic or cultural preferences that do not generalize across populations. These patterns underscore the need for careful reflection on the sources and implications of emotion supervision, especially when deploying models in sensitive applications.

Acknowledgments

References

Abdullah Al Maruf, Fahima Khanam, Md Mahmudul Haque, Zakaria Masud Jiyad, Muhammad Firoz Mridha, and Zeyar Aung. 2024. Challenges and opportunities of text-based emotion detection: a survey. IEEE Access, 12:18416–18450.

Iqra Ameer, Necva Bölücü, Muhammad Hammad Fahim Siddiqui, Burcu Can, Grigori Sidorov, and Alexander Gelbukh. 2023. Multi-label emotion classification in texts using transfer learning. Expert Systems with Applications, 213:118534.

Patrick Bareiß, Roman Klinger, and Jeremy Barnes. 2024.
English prompts are better for NLI-based zero-shot emotion classification than target-language prompts. In Companion Proceedings of the ACM Web Conference 2024, pages 1318–1326.

Ankita Bhaumik and Tomek Strzalkowski. 2024a. Towards a generative approach for emotion detection and reasoning. arXiv [cs.CL].

Ankita Bhaumik and Tomek Strzalkowski. 2024b. Towards a generative approach for emotion detection and reasoning. arXiv preprint arXiv:2408.04906.

Su Bo-Hao, Shreya G Upadhyay, and Lee Chi-Chun. 2025. Toward zero-shot speech emotion recognition using LLMs in the absence of target data. In ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5. IEEE.

Sven Buechel and Udo Hahn. 2017. EmoBank: Studying the impact of annotation perspective and representation format on dimensional emotion analysis. In Proceedings of the 15th Conference of EACL, pages 578–585, Valencia, Spain.

Sven Buechel, Luise Modersohn, and Udo Hahn. 2021. Towards label-agnostic emotion embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9231–9249.

Georgios Chochlakis, Gireesh Mahajan, Sabyasachee Baruah, Keith Burghardt, Kristina Lerman, and Shrikanth Narayanan. 2023. Using emotion embeddings to transfer knowledge between emotions, languages, and annotation formats. In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5. IEEE.

Roddy Cowie and Randolph R Cornelius. 2003. Describing the emotional states that are expressed in speech. Speech Communication
, 40(1-2):5–32.

Roddy Cowie, Ellen Douglas-Cowie, Nicolas Tsapatsoulis, George Votsis, Stefanos Kollias, Winfried Fellenz, and John G Taylor. 2001. Emotion recognition in human-computer interaction. IEEE Signal Processing Magazine, 18(1):32–80.

Flor Miriam Plaza Del Arco, María-Teresa Martín-Valdivia, and Roman Klinger. 2022. Natural language inference prompts for zero-shot emotion classification in text across corpora. In Proceedings of the 29th International Conference on Computational Linguistics, pages 6805–6817.

Dorottya Demszky, Dana Movshovitz-Attias, Jeongwoo Ko, Alan Cowen, Gaurav Nemade, and Sujith Ravi. 2020. GoEmotions: A dataset of fine-grained emotions. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4040–4054.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186.

Kexin Feng and Theodora Chaspari. 2020. A review of generalizable transfer learning in automatic emotion recognition. Frontiers in Computer Science, 2:9.

Yuan Gao, Longbiao Wang, Jiaxing Liu, Jianwu Dang, and Shogo Okada. 2023. Adversarial domain generalized transformer for cross-corpus speech emotion recognition. IEEE Transactions on Affective Computing, 15(2):697–708.

Katie Hoemann, Evan Warfel, Caitlin Mills, Laura Allen, Peter Kuppens, and Jolie B Wormwood. 2024. Using freely generated labels instead of rating scales to assess emotion in everyday life. Assessment, page 10731911241283623.

Chul Min Lee and Shrikanth S Narayanan. 2005. Toward detecting emotions in spoken dialogs. IEEE Transactions on Speech and Audio Processing, 13(2):293–303.

Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. 2023.
BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International Conference on Machine Learning, pages 19730–19742. PMLR.

Zheng Lian, Licai Sun, Haiyang Sun, Kang Chen, Zhuofan Wen, Hao Gu, Bin Liu, and Jianhua Tao. 2024. GPT-4V with emotion: A zero-shot benchmark for generalized emotion recognition. Information Fusion, 108:102367.

Yuanyuan Liu, Ke Wang, Lin Wei, Jingying Chen, Yibing Zhan, Dapeng Tao, and Zhe Chen. 2024a. Affective computing for healthcare: Recent trends, applications, challenges, and beyond. arXiv preprint arXiv:2402.13589.

Zhiwei Liu, Kailai Yang, Qianqian Xie, Tianlin Zhang, and Sophia Ananiadou. 2024b. EmoLLMs: A series of emotional large language models and annotation tools for comprehensive affective analysis. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 5487–5496.

Rui Mao, Qian Liu, Kai He, Wei Li, and Erik Cambria. 2023. The biases of pre-trained language models: An empirical study on prompt-based sentiment analysis and emotion detection. IEEE Transactions on Affective Computing, 14(3):1743–1753.

Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. SemEval-2018 task 1: Affect in tweets. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 1–17, Stroudsburg, PA, USA. Association for Computational Linguistics.

Tong Mu, Alec Helyar, Johannes Heidecke, Joshua Achiam, Andrea Vallone, Ian Kivlichan, Molly Lin, Alex Beutel, John Schulman, and Lilian Weng. 2024. Rule based rewards for
language model safety. arXiv preprint arXiv:2411.01111.

Minxue Niu, Mimansa Jaiswal, and Emily Mower Provost. 2024. From text to emotion: Unveiling the emotion annotation capabilities of LLMs. In Proc. Interspeech 2024, pages 2650–2654.

Justin Olah, Sabyasachee Baruah, Digbalay Bose, and Shrikanth Narayanan. 2021. Cross domain emotion recognition using few shot knowledge transfer. arXiv [cs.CL].

Mirosław Płaza, Robert Kazała, Zbigniew Koruba, Marcin Kozłowski, Małgorzata Lucińska, Kamil Sitek, and Jarosław Spyrka. 2022. Emotion recognition method for call/contact centre systems. Applied Sciences, 12(21):10951.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, and 1 others. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763.

Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards empathetic open-domain conversation models: A new benchmark and dataset. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5370–5381.

Abdur Rasool, Saba Aslam, Naeem Hussain, Sharjeel Imtiaz, and Waqar Riaz. 2025. nBERT: Harnessing NLP for emotion recognition in psychotherapy to transform mental health care. Information, 16(4):301.

Klaus R Scherer, Elizabeth Clark-Polner, and Marcello Mortillaro. 2011. In the eye of the beholder? Universality and cultural specificity in the expression and perception of emotion. International Journal of Psychology, 46(6):401–435.

Eimear Stanley, Eric DeMattos, Anita Klementiev, Piotr Ozimek, Georgia Clarke, Michael Berger, and Dimitri Palaz. 2023. Emotion label encoding using word embeddings for speech emotion recognition. In Proceedings of the INTERSPEECH, pages 2418–2422.

Ala N Tak and Jonathan Gratch. 2023. Is GPT a computational model of emotion?
In 2023 11th International Conference on Affective Computing and Intelligent Interaction (ACII), pages 1–8. IEEE.

Ala N Tak and Jonathan Gratch. 2024. GPT-4 emulates average-human emotional cognition from a third-person perspective. arXiv preprint arXiv:2408.13718.

Senait Gebremichael Tesfagergish, Jurgita Kapočiūtė-Dzikienė, and Robertas Damaševičius. 2022. Zero-shot emotion detection for semi-supervised sentiment analysis using sentence transformers and ensemble learning. Applied Sciences, 12(17):8662.

Marcel Trotzek, Sven Koitka, and Christoph M Friedrich. 2018. Utilizing neural networks and linguistic metadata for early detection of depression indications in text sequences. IEEE Transactions on Knowledge and Data Engineering, 32(3):588–601.

Manh Tu Vu, Marie Beurton-Aimar, and Serge Marchand. 2021. Multitask multi-database emotion recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3637–3644.

Johannes Wagner, Andreas Triantafyllopoulos, Hagen Wierstorf, Maximilian Schmitt, Felix Burkhardt, Florian Eyben, and Björn W Schuller. 2023. Dawn of the transformer era in speech emotion recognition: closing the valence gap. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(9):10745–10759.

Harald G Wallbott and Klaus R Scherer. 1986. How universal and specific is emotional experience? Evidence from 27 countries on five continents. Social Science Information, 25(4):763–795.

Xuena Wang, Xueting Li, Zi Yin, Yue Wu, and Jia Liu. 2023. Emotional intelligence of large language models. Journal of Pacific Rim Psychology, 17.

Congying Xia, Chen
Xing, Jiangshu Du, Xinyi Yang, Yihao Feng, Ran Xu, Wenpeng Yin, and Caiming Xiong. 2024. FOFO: A benchmark to evaluate LLMs' format-following capability. In 62nd Annual Meeting of the Association for Computational Linguistics, pages 680–699.

Tian Xu, Jennifer White, Sinan Kalkan, and Hatice Gunes. 2020. Investigating bias and fairness in facial expression recognition. In Computer Vision – ECCV 2020 Workshops: Glasgow, UK, August 23–28, 2020, Proceedings, Part VI 16, pages 506–523. Springer.

Yifan Yao, Jinhao Duan, Kaidi Xu, Yuanfang Cai, Zhibo Sun, and Yue Zhang. 2024. A survey on large language model (LLM) security and privacy: The good, the bad, and the ugly. High-Confidence Computing, page 100211.

Wenpeng Yin, Jamaal Hay, and Dan Roth. 2019. Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach. In 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, pages 3914–3923. Association for Computational Linguistics.

Yüksel Yurtay, Hüseyin Demirci, Hüseyin Tiryaki, and Tekin Altun. 2024. Emotion recognition on call center voice data. Applied Sciences, 14(20):9458.

Chi Zhan, Dongyu She, Sicheng Zhao, Ming-Ming Cheng, and Jufeng Yang. 2019. Zero-shot emotion recognition via affective structural embedding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1151–1160.

Guanhong Zhang, Sophia Ananiadou, and 1 others. 2022. Examining and mitigating gender bias in text emotion detection task. Neurocomputing, 493:422–434.

Wenxuan Zhang, Yue Deng, Bing Liu, Sinno Pan, and Lidong Bing. 2024. Sentiment analysis in the era of large language models: A reality check. In Findings of the Association for Computational Linguistics: NAACL, pages 3881–3906.

Weixiang Zhao, Yanyan Zhao, Xin Lu, Shilong Wang, Yanpeng Tong, and Bing Qin. 2023. Is ChatGPT equipped with emotional dialogue capabilities?
A Examples of human vs. GPT-4 generated labels on GoEmotions

| Text | Human Label | GPT-4 Label |
| At least they can make some good pizza | Neutral | Contentment, Satisfaction |
| Embrace the feels my friend. Glad you found happiness even if it is fleeting. | Joy | Supportive, Glad, Accepting |
| Congrats! Vegan baking is still daunting to me but I will conquer it one day! | Gratitude | Encouragement, Determination |
| I AM CALLING THE POLICE | Neutral | Urgency, Fear, Anger |
| Whew lad they're bleeding employees, reminds me of [NAME] before they bit the dust | Neutral | Concern, Reminiscence, Foreboding |
| Refrigerators? That's cool | Neutral | Amusement, Pun-Intended |
| I'm not sure what you mean by 18th century family here but otherwise thanks. | Confusion, Gratitude | Neutral |
| Thanks for the link. She actually likes some of the stuff! | Gratitude | Appreciation, Happiness |
| It sounds like you're the one who is afraid of the internet. Relax, bud. You're on r/cringe | Fear | Condescension, Annoyance |
| Forget new buildings, I work at the Chase Plaza building and am a little bummed that Seagram Tower made it but this didn't. | Disappointment, Neutral | Bemused, A Little Bummed |

Table 5: Ten random samples selected from the GoEmotions training set, comparing human-annotated emotion labels and GPT-4 generated descriptive labels.

B Full Emotion Neighbor Retrieval Results

| Target Emotion | BERT (768D) | Ours (768D) | Ours (50D) |
| Admiration | Sympathy, Gratitude, Satisfaction, Reluctant Admiration | Reverence, Amazement, Awe, Attraction | Accomplishment, Reverence, Amazement, Pride |
| Gratitude | Satisfaction, Admiration, Affection, Jealousy | Appreciation, Grateful, Thankful, Politeness | Appreciation, Grateful, Thankful, Recognition |
| Approval | Disapproval, Recommendation, Fairness, Acceptance | Positive Surprise, Positive, Validation, Recommendation | Positive Interest, Positive Surprise, Positive, Favorability |
| Annoyance | Distraction, Reluctance, Frustrated, Avoidance | Irritation, Annoyed, Irritated, Exasperation | Irritation, Annoyed, Reluctance, Irritated |
| Curiosity | Suspicion, Surprise, Aggression, Distraction | Interest, Intrigue, Speculation, Curious | Interest, Slight Hopefulness, Curious, Fascination |
| Disapproval | Annoyance, Reluctance, Welcoming, Dismay | Judgment, Criticism, Condemnation, Dismay | Negative Opinion, Judgment, Criticism, Disagreement |
| Amusement | Laughter, Reluctant Admiration, Mock Frustration, Amused | Mild Amusement, Lack of Amusement, Lightheartedness, Mild Sarcasm | Mild Amusement, Lack of Amusement, Mock Frustration, Mock Seriousness |
| Love | Passion, Joy, Happiness, Pain | Affection, Passion, Adoration, Fondness | Affection, Joy, Good Wishes, Lust |
| Anger | Rage, Jealousy, Resentment, Contempt | Rage, Hostility, Hatred, Outrage | Rage, Hostility, Indignation, Hatred |
| Optimism | Cautious Optimism, Calmness, Hopefulness, Reassurance | Hopefulness, Hope, Optimistic, Hopeful | Hopefulness, Hope, Cautious Optimism, Hopeful |
| Joy | Hope, Grief, Calm, Celebration | Happiness, Celebration, Delight, Joyful | Happiness, Adoration, Celebration, Excitement |
| Sadness | Grief, Sad, Heartbreak, Disappointment | Sorrow, Heartbreak, Sad, Grief | Melancholy, Sad, Sorrow, Heartbreak |
| Confusion | Horror, Resentment, Jealousy, Impatience | Seeking Clarification, Seeking Help, Seeking Assistance, Seeking Advice | Confused, Awkwardness, Historical Trauma, Puzzlement |
| Disappointment | Regret, Frustration, Guilt, Relief | Dissatisfaction, Regret, Mild Disappointment, Disappointed | Dissatisfaction, Regret, Disappointed, Heartbreak |
| Realization | Disbelief, Distraction, Shock, Desperation | Surprise, Reflection, Denial, Acknowledgment | Denial, Shock, Focus, Shame |
| Surprise | Curiosity, Astonishment, Surprised, Realization | Realization, Astonishment, Mild Surprise, Shock | Mild Surprise, Shock, Astonishment, Realization |
| Caring | Denial, Kindness, Compassion, Protective | Empathetic, Considerate, Reassuring, Supportive | Considerate, Self-Assured, Affectionate, Accepting |
| Disgust | Irritation, Disgusted, Shame, Paranoia | Loathing, Disgusted, Horror, Hatred | Outrage, Disgusted, Condemnation, Dismay |
| Excitement | Delight, Anticipation, Panic, Astonishment | Joy, Delight, Enthusiasm, Triumph | Joy, Delight, Enthusiasm, Laughter |
| Desire | Lust, Jealousy, Longing, Affection | Lust, Longing, Craving, Eagerness | Longing, Lust, Passion, Misery |
| Fear | Panic, Distrust, Dread, Worry | Anxiety, Dread, Terror, Panic | Anxiety, Dread, Terror, Alarm |
| Remorse | Regret, Apology, Compassion, Guilt | Regret, Apology, Regretful, Apologetic | Regret, Apology, Regretful, Guilt |
| Embarrassment | Humiliation, Guilt, Concern, Frustration | Guilt, Humiliation, Self-Irritation, Shame | Guilt, Humiliation, Discomfort, Aversion |
| Nervousness | Tiredness, Helplessness, Awkwardness, Eagerness | Anxiety, Fear of Embarrassment, Numbness, Tiredness | Numbness, Desire for Emotional Relief, Fear of Embarrassment, Anxiety |
| Pride | Jealousy, Satisfaction, Empathy, Affection | Accomplishment, Confidence, Loyalty, Triumph | Reverence, Accomplishment, Belief, Authenticity |
| Relief | Relieved, Disappointment, Irritation, Grief | Gladness, Relieved, Glad, Satisfaction | Good, Relieved, Glad, Gladness |
| Grief | Heartbreak, Sadness, Desperation, Relief | Sorrow, Heartbreak, Sadness, Misery | Sorrow, Heartbreak, Sadness, Pain |

Table 6: Top-4 most similar emotion terms retrieved for each target emotion using cosine similarity, using the BERT encoder, our contrastively distilled emotion model (768D), and its reduced 50D version.

https://arxiv.org/abs/2505.18040v1
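The neighbor lists in Table 6 come from plain cosine-similarity retrieval over emotion-term embeddings. A minimal sketch in pure Python, where the toy 3-D vectors and the `vocab` dictionary are illustrative stand-ins for the 768-D/50-D encoder outputs:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def top_k_neighbors(target, vocab, k=4):
    """Return the k emotion terms whose embeddings are most similar
    to the target term's embedding (the target itself is excluded)."""
    scores = [(term, cosine(vocab[target], vec))
              for term, vec in vocab.items() if term != target]
    scores.sort(key=lambda tv: tv[1], reverse=True)
    return [term for term, _ in scores[:k]]

# Toy 3-D embeddings for illustration only.
vocab = {
    "anger":   [0.90, 0.10, 0.00],
    "rage":    [0.88, 0.12, 0.02],
    "joy":     [0.10, 0.90, 0.10],
    "delight": [0.12, 0.85, 0.15],
    "fear":    [0.50, 0.10, 0.80],
}
print(top_k_neighbors("anger", vocab, k=2))  # -> ['rage', 'fear']
```

With real encoder embeddings the same routine produces one row of the table per target emotion.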
MathEDU: Towards Adaptive Feedback for Student Mathematical Problem-Solving

Wei-Ling Hsu, Yu-Chien Tang, An-Zi Yen
Department of Computer Science, National Yang Ming Chiao Tung University, Taiwan
weiling.hsu.cs11@nycu.edu.tw, tommytyc.cs10@nycu.edu.tw, azyen@nycu.edu.tw

Abstract

Online learning enhances educational accessibility, offering students the flexibility to learn anytime, anywhere. However, a key limitation is the lack of immediate, personalized feedback, particularly in helping students correct errors in math problem-solving. Several studies have investigated the applications of large language models (LLMs) in educational contexts. In this paper, we explore the capabilities of LLMs to assess students' math problem-solving processes and provide adaptive feedback. The MathEDU dataset is introduced, comprising authentic student solutions annotated with teacher feedback. We evaluate the model's ability to support personalized learning in two scenarios: one where the model has access to students' prior answer histories, and another simulating a cold-start context. Experimental results show that the fine-tuned model performs well in identifying correctness. However, the model still faces challenges in generating detailed feedback for pedagogical purposes.

1 Introduction

In the post-pandemic era, online learning has emerged as one of the mainstream methods of education (Alqahtani and Rajkhan, 2020; Jafar et al., 2022). Online educational platforms enable students to study anytime, anywhere, offering extensive question banks to assess understanding. However, without immediate teacher support, students may struggle to correct mistakes if they lack a clear grasp of underlying concepts. Recent advancements in LLMs have prompted numerous studies (Tack and Piech, 2022; Kasneci et al., 2023) to explore their applications in the field of education.
Some works (Shen et al., 2021; Yu et al., 2021; Jie et al., 2022) have explored the math word problem-solving capabilities of LLMs, yielding promising results. Leveraging the advanced natural language understanding and generation capabilities of LLMs, we explore their potential in mathematical problem-solving by focusing on the generation of free-text rationalizations.

For effective assessment of student performance, the LLM must possess a robust understanding of mathematical concepts to identify and analyze errors accurately. Additionally, it should provide clear explanations and deliver constructive, adaptive feedback tailored to the student's reasoning. Yen and Hsu (2023) have employed GPT-3.5 (Ouyang et al., 2022) to simulate both student and teacher roles in order to examine the model's behavior and grading capabilities. While LLMs can generate comprehensive explanations, findings indicate that they often struggle to accurately interpret students' mathematical problem-solving processes, leading to incorrect assessments.

To better understand these challenges and explore ways to improve LLMs' evaluative capabilities, we collect real-world data on student problem-solving processes alongside teacher-written feedback. We extend the existing MathQA dataset (Amini et al., 2019), which contains GRE-level questions requiring advanced mathematical knowledge, by incorporating detailed annotations of students' problem-solving processes and corresponding teacher feedback. We invited six students to solve these problems, and their answers were reviewed and graded by three mathematics experts. This process resulted in a comprehensive dataset of 4,048 annotated entries. Table 1 provides an example of the annotations, including error types (e.g., incorrect
https://arxiv.org/abs/2505.18056v1
operations), incorrect steps, and teacher feedback addressing misunderstandings. Details of the dataset construction are provided in the following section.

Rather than focusing solely on whether the final answer matches a predefined solution, this paper investigates how LLMs can evaluate students' reasoning, recognizing multiple valid approaches. To facilitate this exploration, the task was divided into three subtasks: (1) answer accuracy assessment, (2) error identification, and (3) feedback generation.

arXiv:2505.18056v1 [cs.CL] 23 May 2025

Problem: Two trains of equal length are running on parallel lines in the same directions at 46 km/hr and 36 km/hr. The faster train passes the slower train in 144 seconds. The length of each train is:
Student Process: 46 − 36 = 10; 10 km/hr × (5/18) = 100/36 m/s; (100/36) × 144 = 400 m
Error Type: Wrong Mathematical Operation/Concept
Error Equation: (100/36) × 144 = 400 m
Teacher Feedback: To overtake the other train, you need to travel the combined length of both trains. Since both trains are of the same length, you need to divide by 2 to get the answer.

Table 1: Example of Student Problem-Solving Process with Teacher Grading and Feedback.

We investigate the capabilities of LLMs using both direct prompting and fine-tuning approaches. During fine-tuning, we employed three training strategies: single-task training, multi-task training, and end-to-end training. To evaluate the model's performance in real-world scenarios, experiments were conducted under two conditions: one with access to a student's prior answer history and another simulating a cold-start scenario without any prior data.

In sum, our contributions are threefold: (1) This study investigates the challenge of assessing students' reasoning processes by exploring the use of LLMs in the context of mathematical education. (2) We present the MathEDU dataset,1 designed to address students' mistakes in math word problem-solving and provide personalized adaptive feedback.
(3) We evaluate LLM performance with prompting and LoRA (Hu et al., 2021) fine-tuning. Experimental results show that fine-tuning improves accuracy in identifying correctness but remains limited in generating adaptive feedback.

1 The dataset will be released upon acceptance. An anonymized version is available at: https://anonymous.4open.science/r/MathEDU-4628/

2 Related Work

2.1 Large Language Models for Education

| Dataset | #Students | #Questions | #Answers | Solving Process | Teacher's Feedback |
| ASSISTments 2009 | 4,217 | 26,688 | 346,860 | | |
| ASSISTments 2009 | 46,674 | 179,999 | 6,123,270 | | |
| ASSISTments 2015 | 19,917 | - | 708,631 | | |
| ASSISTments 2017 | 1,709 | 3,162 | 942,816 | | |
| Algebra 2005-2006 | 575 | 1,084 | 813,661 | | |
| Algebra 2006-2007 | 1,840 | 90,831 | 2,289,726 | | |
| Bridge to Algebra | 1,146 | 19,258 | 3,686,871 | | |
| EdNet-KT1 | 784,309 | 13,169 | 95,293,926 | | |
| EdNet-KT2 | 297,444 | 13,169 | 56,360,602 | | |
| EdNet-KT3 | 297,915 | 13,169 | 89,270,654 | | |
| EdNet-KT4 | 297,915 | 13,169 | 131,441,538 | | |
| Junyi | 72,630 | 25,785 | 16,217,311 | | |
| Eedi | 118,971 | 27,613 | 15,867,850 | | |
| MATHDIAL | - | 2,861 | 2,861 | ✓ | ✓ |
| MathEDU | 6 | 4,048 | 4,048 | ✓ | ✓ |

Table 2: Comparison of Existing Datasets.

Teachers play a crucial role in identifying and explaining student errors, significantly enhancing understanding and learning efficiency (Robinson and Loeb, 2021; Zhang et al., 2023). To address the lack of interaction and feedback in online learning, researchers have explored automated systems, such as integrating Cognitive Task Analysis (CTA) with LLMs to support student remediation (Graesser et al., 2004; Wang et al., 2024). However, the resource-intensive nature of CTA, requiring expert involvement and
complex step design, limits its scalability in routine educational contexts. Some studies leverage LLMs to support instruction by generating distractors for multiple-choice questions (Dave et al., 2021), analyzing students' answer histories (Gao et al., 2021), and creating personalized instructional materials (He-Yueya et al., 2024). Others use LLMs to simulate students in interactive chats, enhancing classroom simulations for teacher training (Markel et al., 2023). LLMs have also been deployed as virtual tutors, such as generating code explanations in computer science classrooms (MacNeil et al., 2023).

2.2 Mathematics Education Datasets

Table 2 presents a comparison of existing datasets. The ASSISTments datasets,2 collected from the free online tutoring platform ASSISTments, focus on addressing the knowledge tracing problem. They consist of grade school math exercises, featuring various question types, including multiple choice, text, and open-ended questions. Multiple versions of the datasets have been released, with data collected during different periods. The Algebra 2005-2006 and Algebra 2006-2007 datasets, collected from the Cognitive Tutors system, capture student responses to Algebra problems.

2 https://www.etrialstestbed.org/resources/featured-studies/dataset-papers

The EdNet dataset (Choi et al., 2020) is a hierarchical dataset designed for educational research. It consists of four levels (KT1 to KT4) that progressively capture more detailed student interactions. These range from question-solving logs to comprehensive action sequences, including behaviors like watching lectures or reading materials. The Junyi dataset (Pojen et al., 2020) consists of student exercise attempt logs collected during 2018-2019. It focuses on high school level Math exercise practice in Taiwan. The Eedi dataset (Wang et al., 2020) collected students' responses to questions between 2018 and 2020 and spans from primary to high school levels.
It supports tasks such as predicting student answer correctness and includes additional tasks for assessing question quality and generating personalized question sequences.

Previous datasets for knowledge tracing typically span over a year of diverse student behavioral data, providing valuable insights into student engagement. However, they lack detailed records of students' step-by-step problem-solving processes in mathematics. Furthermore, they do not include teacher feedback on student errors. To address this limitation, the MATHDIAL dataset (Macina et al., 2023) utilizes LLMs to simulate student responses in mathematical problem-solving, with teachers providing targeted instructional feedback to create educational dialogues.

Although LLMs are capable of simulating student responses, some studies (Aher et al., 2023; Markel et al., 2023) have indicated that these simulated answers are often unrealistic, particularly in maintaining a consistent knowledge level. Consequently, collecting real student problem-solving processes along with the corresponding teacher feedback is essential.

3 Dataset construction

3.1 Collection of Real-World Student Math Problem-Solving Processes

3 The dataset selection criteria are detailed in Appendix A.

We construct the MathEDU dataset based on MathQA.3 We removed the multiple-choice options provided by MathQA, presenting the questions as open-ended tasks. To comprehensively assess students' mathematical abilities and collect problem-solving processes from individuals with varying skill levels, we recruited six students from different university departments. Before the annotation process, students were asked to attempt 10 questions to ensure their responses adhered to the required format. This step
helped avoid issues such as incomplete equations or disorganized layouts, facilitating smoother data collection.

To ensure that each student answered a balanced number of questions across all types (i.e., general, gain, physics, geometry, probability, and other), we sampled 4,500 unique questions from the six types in MathQA, maintaining the proportional distribution of the original dataset. Each student was assigned 750 questions, with the distribution of question types consistent across students. We found that the dataset contained errors such as unrecognizable symbols, unclear problem definitions, and missing accompanying images, which could hinder accurate interpretation. Therefore, students were allowed to note these issues and skip such questions. The missing questions were then supplemented until each student completed 750 questions with problem-solving processes.

Students wrote their problem-solving processes on paper, and their responses were then transcribed into LaTeX format4 to ensure machine readability. We requested problem-solving steps to be as complete as possible to ensure that the teachers grading their answers could clearly understand their reasoning. If needed, students could also provide verbal explanations to elaborate on their reasoning. For questions they could not solve, students were required to note the reasons. If their handwriting was unclear, we asked them to revise it to ensure legibility for scanning and conversion to LaTeX format.5 6

3.2 Annotation for Error Identification and Teacher Feedback

After collecting the students' problem-solving results, we proceeded to label detailed grading information for each answer. We invited three experts in mathematics education to serve as teachers for grading and reviewing student responses. The three invited mathematics experts annotated the specific steps where errors occurred and provided explanations to clarify the reasons for these mistakes.
They also categorized the errors based on these annotated steps. Inspired by Wijaya et al. (2014), we define three error categories: “Wrong Mathematical Operation/Concept”, “Calculation Error”, and “Incomplete Answer”. Additionally, to address instances where students either made careless mistakes or were completely unable to answer a question, we introduced two additional error categories: “Careless Error” and “Lack of Necessary Mathematical Concepts”.7

4 https://mathpix.com/
5 The annotation guidelines are elaborated in Appendix B.1.
6 The question distribution and academic backgrounds of student annotators are detailed in Appendix B.2.

| Student | All | General | Gain | Physics | Geometry | Probability | Other | Avg. Equations/Problem | Avg. Words/Problem |
| Student 1 | 70.57% | 67.75% | 72.73% | 74.73% | 69.39% | 28.57% | 72.97% | 3.9516 | 0.6442 |
| Student 2 | 87.59% | 86.53% | 87.22% | 88.83% | 87.18% | 100.00% | 91.67% | 7.3124 | 6.7532 |
| Student 3 | 71.09% | 70.11% | 65.44% | 77.78% | 64.29% | 42.86% | 81.25% | 5.5958 | 5.0943 |
| Student 4 | 75.30% | 77.24% | 66.43% | 77.84% | 78.26% | 87.50% | 77.42% | 6.7212 | 4.0303 |
| Student 5 | 67.01% | 66.55% | 61.15% | 73.12% | 65.85% | 75.00% | 60.61% | 4.9237 | 2.8871 |
| Student 6 | 80.61% | 77.48% | 78.74% | 88.00% | 72.73% | 85.71% | 85.29% | 4.1484 | 0.3106 |

Table 3: Correctness and Problem-Solving Characteristics of Students.

Each student's answer was annotated by three experts. In addition to labeling the types of errors and incorrect problem-solving steps, experts provided suggestions for students, including explanations for students' mistakes and suggestions for correcting them. The questions were reviewed, and erroneous ones that led to unrecognizable problem-solving processes were removed, resulting in a
dataset of 4,048 real student answers. To assess the quality of the annotations, we calculated the inter-rater agreement for error category annotations using Krippendorff's alpha. The result was 0.7818, indicating a substantial level of agreement among the experts in their annotations. When a disagreement occurred, we followed the majority rule to determine the final annotation. In cases where a majority decision could not be reached, the instance was revisited by the three experts to reach a final consensus. Finally, authors with expertise in mathematics education reviewed and refined the labeled data to ensure quality. As a result, our dataset includes 3,050 correct answers and 998 incorrect ones.

7 Descriptions and examples of these error categories are provided in Appendix C.

4 Data Analysis

4.1 Student Mathematical Abilities

Table 3 presents the correctness of students' problem-solving processes, highlighting the diverse data we collected even from just six participants. Correctness varied significantly, with the most proficient student achieving up to 87.59%, while those with lower proficiency scored as low as 67.01%. Furthermore, individual strengths differed across areas, with some students reaching 100% accuracy in probability questions, whereas others struggled, achieving only 28.57%.

4.2 Student Problem-Solving Styles

We further analyze variations in students' problem-solving strategies. Since quantifying these strategies is challenging, we indirectly assess them by evaluating the number of mathematical expressions used by each student. Table 3 shows the average use of equations and written explanations per problem, presented in the last two columns, providing insights into their approaches. This analysis reveals that Students 1, 5, and 6 tend to have more concise problem-solving processes, while Students 2, 3, and 4 have more detailed and complex processes.
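Per-problem equation and word counts of the kind reported in Table 3 can be approximated with a simple counter. The heuristic below, which treats each `$...$` span in a LaTeX-transcribed solution as one expression, is our own assumption for illustration, not the paper's exact counting procedure:

```python
import re

def solution_stats(latex_solution):
    """Rough heuristic: count math expressions and plain-English words
    in a LaTeX-transcribed student solution."""
    # Treat each $...$ span as one equation/expression.
    equations = re.findall(r"\$[^$]+\$", latex_solution)
    # Words = alphabetic tokens outside the math spans.
    prose = re.sub(r"\$[^$]+\$", " ", latex_solution)
    words = re.findall(r"[A-Za-z]+", prose)
    return len(equations), len(words)

sample = r"the relative speed is $46 - 36 = 10$ so $\frac{100}{36} \times 144 = 400$"
print(solution_stats(sample))  # -> (2, 5)
```

Averaging these two counts over a student's 750 solutions yields per-student figures comparable to the table's last two columns.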
Notably, Student 2 used an average of 7.3124 equations per problem, the highest among the group. We also analyzed the average number of English words used, excluding algebraic symbols and LaTeX expressions, as students could provide verbal explanations to support their solutions. The results reveal significant differences in text usage: Student 6 rarely used words, averaging 0.3106 per problem, while Student 2 provided detailed explanations, averaging 6.7532 words per problem.

5 Methodology

In this section, we focus on how the tutor model is developed to analyze students' problem-solving processes. The model identifies specific steps where errors occur in incorrect answers and provides targeted feedback. The i-th input data consists of the math word question q_i, the rationale r_i provided in MathQA, and the student's answering process s_i, which includes both equations and written descriptions of their reasoning. Based on this input, the model performs three main tasks:

Answer Accuracy Assessment: This task determines whether the student's overall answer is correct based on their problem-solving process, treated as a binary classification with the output label c_i.

Problem-Solving Error Identification: This task identifies incorrect equations in the process. The output E_i = {e_{i,1}, e_{i,2}, ..., e_{i,α}} consists of α identified incorrect equations.

Feedback Generation: This task generates feedback, including explanations or suggestions, to help students understand and correct their mistakes. The
feedback is denoted as T_i.

We employ LoRA to fine-tune an LLM.8 Three fine-tuning settings are explored. In single-task training, separate models were fine-tuned for each of the three subtasks. In multi-task training, a single model was trained on data annotated for all three subtasks, using distinct prompts to guide the model in performing specific tasks. End-to-end training required the model to sequentially analyze the problem statement and evaluate the student's answer, providing a complete assessment without dividing the tasks explicitly.

6 Experiments

6.1 Experimental Setup

We examined the performances of Llama3 8B (Touvron et al., 2023), Llama3 70B, and GPT-3.5 with few-shot prompting.9 We further fine-tuned Llama3 8B with a learning rate of 2e-4 and a LoRA rank of 16.10 During inference, the temperature was set to 0, with a maximum output of 512 tokens, while other parameters remained at their default settings.

Data Splitting. To evaluate the model's performance across different application scenarios, we designed two dataset splitting settings. (1) The first setting employs a time-series split, simulating a situation where the model has access to a student's prior answer history. Student records were organized chronologically, with part of the records serving as the training set (answer history) and the remaining records as the validation and test sets in a 70:15:15 ratio. This approach assesses whether the model can leverage past responses to identify a student's strengths and weaknesses and apply this knowledge to grade new answers. (2) The second setting also uses a time-series split but applies a leave-one-out strategy to simulate the model encountering a new student without prior history. In this setup, one student's data is designated as the test set, while the data from the remaining students is proportionally divided into training and validation sets in a 9:1 ratio.
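The two splitting schemes can be sketched as follows; the `timestamp` field and the dictionary layout are hypothetical stand-ins for the dataset's actual record format:

```python
def time_series_split(records, train=0.70, val=0.15):
    """Chronological 70:15:15 split of one student's answer records,
    assuming each record carries a sortable 'timestamp' field."""
    ordered = sorted(records, key=lambda r: r["timestamp"])
    n = len(ordered)
    n_train = int(n * train)
    n_val = int(n * val)
    return (ordered[:n_train],
            ordered[n_train:n_train + n_val],
            ordered[n_train + n_val:])

def leave_one_out(records_by_student, held_out, train=0.9):
    """Cold-start split: the held-out student's records become the test
    set; the remaining students' records are split 9:1 train/validation."""
    test = records_by_student[held_out]
    rest = [r for s, recs in records_by_student.items()
            if s != held_out for r in recs]
    n_train = int(len(rest) * train)
    return rest[:n_train], rest[n_train:], test
```

In the first scheme the test portion is always the chronologically latest slice, so the model never sees a student's future answers during training.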
This scenario, akin to the cold-start setting, evaluates the model's adaptability to diverse problem-solving styles.11

8 The details of the methods are reported in Appendix D.
9 The version we used is gpt-3.5-turbo-0125.
10 All the prompts are shown in Appendix E.
11 The data splitting statistics are in Appendix F.

Inference Prompt. For the few-shot prompting, six examples were provided, consisting of three correct and three incorrect student responses. In the time-series split setting, we randomly selected the student's prior answer records from the training set as few-shot demonstrations. In the leave-one-out setting, few-shot demonstrations were randomly sampled from the answer records of other students. For the fine-tuned model, which was extensively trained on the task, we employed zero-shot prompting to generate results directly.

6.2 Experimental Results

Answer Accuracy Assessment. We evaluated the ability of LLMs to determine the correctness of student answers. Accuracy is adopted as the evaluation metric. The results are shown in Table 4. We tested whether the detailed rationales in MathQA assist LLMs in grading student answers; “w/o r” and “w/ r” indicate whether the input excludes or includes the rationale of each question, respectively. In the time-series split setting, Llama3 70B achieved 83.58% accuracy, significantly outperforming Llama3 8B (62.02%) by 21.56%, highlighting the importance of parameter
size in mathematical computation and understanding. GPT-3.5's lower accuracy stemmed from its strict evaluation of omitted calculations. For example, if a student's final answer was correct but some intermediate steps were skipped, GPT-3.5 often classified the answer as incorrect.

The bottom three rows of Table 4 represent our fine-tuned models, which demonstrate higher accuracy in determining correctness compared to the original, non-fine-tuned Llama3 8B model. Among these, the end-to-end training method performs better than the multi-task training method. This improvement may be attributed to the end-to-end approach, which requires the model to generate grading results sequentially, enabling it to perform more coherent reasoning and achieve better outcomes.

The impact of including rationales in the input is evident in the results shown in Table 4. For instance, Llama3 8B's accuracy increased from 62.02% to 70.15%. Notably, Llama3 70B exhibited a slight decrease in accuracy from 83.58% to 81.76%. In general, the inclusion of rationales allowed most models to better assess correctness by providing additional contextual support, though the ability to interpret and utilize these rationales varied across models. Fine-tuned models, particularly the end-to-end model, demonstrated a stronger ability to effectively leverage rationales.

Time Series Split:
| Model | All | General | Gain | Physics | Geo. | Prob. | Other |
| Llama3 8B (w/o r) | 62.02% | 64.66% | 64.86% | 55.56% | 77.50% | 62.50% | 48.48% |
| Llama3 8B (w/ r) | 70.15% | 71.55% | 68.47% | 68.42% | 80.00% | 75.00% | 60.61% |
| Llama3 70B (w/o r) | 83.58% | 82.76% | 88.29% | 79.53% | 97.50% | 56.25% | 90.91% |
| Llama3 70B (w/ r) | 81.76% | 78.88% | 87.39% | 79.53% | 97.50% | 62.50% | 84.85% |
| GPT-3.5 (w/o r) | 40.96% | 40.95% | 45.05% | 38.01% | 52.50% | 50.00% | 24.24% |
| GPT-3.5 (w/ r) | 50.41% | 50.00% | 54.95% | 49.12% | 57.50% | 43.75% | 39.39% |
| Single-task (w/o r) | 87.23% | 85.77% | 90.09% | 88.30% | 90.00% | 68.75% | 87.88% |
| Single-task (w/ r) | 92.70% | 90.09% | 95.50% | 92.98% | 95.00% | 87.50% | 100.00% |
| Multi-task (w/o r) | 73.13% | 70.26% | 71.17% | 80.70% | 67.50% | 56.25% | 75.76% |
| Multi-task (w/ r) | 78.11% | 74.57% | 80.18% | 83.63% | 75.00% | 62.50% | 78.79% |
| End-to-end (w/o r) | 87.73% | 84.05% | 93.69% | 90.06% | 90.00% | 62.50% | 90.91% |
| End-to-end (w/ r) | 91.54% | 88.79% | 95.50% | 93.57% | 95.00% | 62.50% | 96.97% |

Leave-One-Out:
| Model | All | General | Gain | Physics | Geo. | Prob. | Other |
| Llama3 8B (w/o r) | 63.53% | 66.57% | 64.61% | 58.50% | 69.21% | 55.06% | 50.31% |
| Llama3 8B (w/ r) | 68.58% | 71.59% | 68.02% | 63.81% | 75.47% | 61.31% | 61.45% |
| Llama3 70B (w/o r) | 85.25% | 86.54% | 87.90% | 80.89% | 87.53% | 74.70% | 84.92% |
| Llama3 70B (w/ r) | 84.12% | 84.57% | 87.92% | 81.01% | 82.66% | 63.69% | 85.29% |
| GPT-3.5 (w/o r) | 41.70% | 43.45% | 46.03% | 36.91% | 41.67% | 37.50% | 32.76% |
| GPT-3.5 (w/ r) | 49.10% | 51.65% | 52.90% | 43.61% | 48.25% | 59.23% | 37.68% |
| Single-task (w/o r) | 71.93% | 71.43% | 72.00% | 71.98% | 72.51% | 72.32% | 73.62% |
| Single-task (w/ r) | 92.45% | 92.45% | 93.32% | 91.73% | 92.42% | 88.99% | 93.57% |
| Multi-task (w/o r) | 43.07% | 42.41% | 41.95% | 45.03% | 44.03% | 40.48% | 41.91% |
| Multi-task (w/ r) | 54.77% | 53.71% | 53.87% | 55.78% | 58.00% | 64.88% | 55.99% |
| End-to-end (w/o r) | 82.91% | 82.29% | 83.80% | 83.67% | 83.42% | 75.00% | 80.15% |
| End-to-end (w/ r) | 93.13% | 93.01% | 93.31% | 92.53% | 93.97% | 95.83% | 94.61% |

Table 4: Results of Answer Accuracy Assessment.

| Model | EM (↑) TS | Distance (↓) TS | EM (↑) LOO | Distance (↓) LOO |
| Llama3 8B (w/o r) | 28.40% | 68.44 | 10.58% | 96.56 |
| Llama3 8B (w/ r) | 11.73% | 97.73 | 8.80% | 102.12 |
| Llama3 70B (w/o r) | 32.10% | 67.35 | 23.30% | 78.33 |
| Llama3 70B (w/ r) | 29.63% | 73.38 | 24.34% | 78.55 |
| GPT-3.5 (w/o r) | 30.25% | 57.93 | 27.81% | 63.07 |
| GPT-3.5 (w/ r) | 27.78% | 62.32 | 29.46% | 63.26 |
| Single-task (w/o r) | 24.69% | 95.64 | 23.41% | 96.85 |
| Single-task (w/ r) | 23.46% | 97.21 | 21.98% | 99.09 |
| Multi-task (w/o r) | 23.46% | 95.70 | 22.59% | 93.77 |
| Multi-task (w/ r) | 17.28% | 96.35 | 22.11% | 97.92 |
| End-to-end (w/o r) | 30.25% | 71.48 | 29.39% | 74.01 |
| End-to-end (w/ r) | 40.12% | 51.78 | 42.81% | 52.21 |

Table 5: Results of Problem-Solving Error Identification (TS = time-series split; LOO = leave-one-out).

In the leave-one-out setting, non-fine-tuned LLMs achieved accuracy
consistent with the time-series split. In contrast, fine-tuned multi-task models saw a 20% accuracy drop and struggled with unseen problem-solving approaches, occasionally failing to produce outputs. However, the end-to-end model remained stable, achieving the highest accuracy of 93.13%, adapting effectively to solving styles excluded from training. We further conducted McNemar's test to assess the significance of differences between our fine-tuned models and the baseline Llama3 70B without rationale. In the time-series split setting, the single-task model with rationale significantly outperformed the baseline, with p < 0.001. Similarly, in the leave-one-out setting, the end-to-end model with rationale also achieved significantly better performance than the baseline, with p < 0.001.

Although the end-to-end model seems to perform worse in the time-series split setting than in the leave-one-out setting, this likely reflects differences in data distributions, making direct comparison invalid. Moreover, the lack of user embeddings and effective encoding of individual records may limit the model's ability to capture answering patterns, which we leave as future work.

Problem-Solving Error Identification. To evaluate the model's performance in identifying erroneous equations in cases where a student's answer is incorrect, we employ two metrics. The first is the exact match (EM) ratio, which measures the correspondence between the model-detected step and the manually annotated step. The second metric is the distance, defined as the textual distance between the model-detected errors and the manually labeled errors. Textual distance measures the character difference between the predicted and annotated errors, reflecting the model's prediction deviation.12

The results are shown in Table 5. In the time series split setting, non-fine-tuned LLMs achieved an EM ratio of around 30% without rationale.
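The two metrics just described can be sketched as follows; the whitespace-normalized exact-match rule and the use of character-level edit (Levenshtein) distance for the "textual distance" are our reading, since the precise definition is deferred to the paper's Appendix G:

```python
def exact_match(predicted, annotated):
    """EM: 1 if the predicted error step equals the annotated step
    after whitespace normalization, else 0 (assumed matching rule)."""
    return int(" ".join(predicted.split()) == " ".join(annotated.split()))

def char_distance(a, b):
    """Character-level Levenshtein distance, one plausible instantiation
    of the paper's 'textual distance'."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]
```

Under this reading, EM rewards only exact recovery of the annotated error step, while the distance metric gives partial credit for near-miss predictions.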
However, providing rationale reduced the EM ratio for all models, with Llama3 8B showing the largest drop of 16.67 percentage points. This may result from format differences between the rationale and the student's actual process (e.g., LaTeX-formatted rationales), which increase the complexity of error identification.

Among the fine-tuned models, the single-task model performed the worst without rationale. It rarely identified errors correctly, often gave irrelevant responses, and sometimes failed to output answers. Its performance declined further when rationale was included. The multi-task model showed a similar pattern. In contrast, the end-to-end model outperformed Llama3 8B without rationale and showed noticeable improvements with rationale. This result indicates that fine-tuning with this approach helps the model better understand and utilize rationale for error identification.

In the leave-one-out setting, Llama3 8B with few-shot prompting performed poorly. The other LLMs performed similarly to their results under the time-series split. This result indicates the limitation of smaller models in adapting to different scenarios. Among the fine-tuned models, the end-to-end model performed consistently, adapting well to unseen question types and problem-solving styles.

12 The details of the distance metric are in Appendix G.

                    Time Series Split                                     Leave-One-Out
                    ROUGE-1  ROUGE-2  ROUGE-L  BERTScore  LLM Rating     ROUGE-1  ROUGE-2  ROUGE-L  BERTScore  LLM Rating
Llama3 8B   w/o r   0.2593   0.0809   0.2391   0.7654     2.68           0.2103   0.0615   0.1924   0.6851     2.38
            w/r     0.1819   0.0557   0.1632   0.5515     2.14           0.1738   0.0515   0.1594   0.5600     1.97
Llama3 70B  w/o r   0.2587   0.0881   0.2439   0.7147     3.61           0.2412   0.0761   0.2254   0.7219     3.77
            w/r     0.2456   0.0876   0.2311   0.7108     3.31           0.2367   0.0720   0.2201   0.7205     3.56
GPT-3.5     w/o r   0.2897   0.1041   0.2695   0.8395     3.70           0.2824   0.0978   0.2601   0.8477     3.39
            w/r     0.2848   0.1025   0.2597   0.8366     3.63           0.2759   0.0925   0.2533   0.8451     3.31
Single-task w/o r   0.1661   0.0730   0.1594   0.4836     0.63           0.0886   0.0359   0.0841   0.2639     0.47
            w/r     0.0790   0.0378   0.0750   0.2182     0.39           0.0594   0.0205   0.0561   0.1828     0.33
Multi-task  w/o r   0.0130   0.0049   0.0124   0.0437     0.08           0.0600   0.0237   0.0577   0.1832     0.31
            w/r     0.1650   0.0719   0.1552   0.4372     0.55           0.0244   0.0068   0.0230   0.0807     0.22
End-to-end  w/o r   0.1862   0.0751   0.1761   0.5327     0.99           0.1808   0.0637   0.1720   0.5726     2.14
            w/r     0.2842   0.1418   0.2703   0.6786     1.91           0.2779   0.1089   0.2614   0.7940     2.28

Table 6: Results of Feedback Generation.

Feedback Generation. We evaluate the quality of the generated suggestions using five metrics: ROUGE-1, ROUGE-2, ROUGE-L (Lin, 2004), BERTScore (Zhang et al., 2019), and LLM Rating (Liu et al., 2023). The LLM Rating, using GPT-4 as the evaluator, compares the model-generated feedback to the ground truth and assigns a score ranging from 0 to 5, with higher scores indicating better quality. The details of the scoring criteria are in Appendix G. The results in Table 6 show that non-fine-tuned language models perform consistently across the time-series and leave-one-out splits. Their inability to effectively utilize additional rationales for math problems limits their ability to generate accurate suggestions.

Among our fine-tuned models, the single-task and multi-task models performed poorly, rarely identifying errors or generating feedback. Although the end-to-end models achieved ROUGE and BERTScore results comparable to Llama3 70B, they received lower LLM ratings. This reflects the complexity of feedback generation, which requires larger models with stronger reasoning and language generation capabilities.
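Of these metrics, ROUGE-L is easy to state precisely: it scores the longest common subsequence (LCS) of tokens shared by the candidate and the reference. A minimal pure-Python sketch is shown below; the numbers reported in Table 6 come from the standard toolkits, not from this re-implementation.

```python
def rouge_l_f1(candidate: str, reference: str) -> float:
    """ROUGE-L F1 from the token-level longest common subsequence.

    Illustrative re-implementation with whitespace tokenization; the
    standard ROUGE toolkit (Lin, 2004) applies additional preprocessing.
    """
    cand, ref = candidate.split(), reference.split()
    # LCS length via dynamic programming over the two token sequences.
    dp = [[0] * (len(ref) + 1) for _ in range(len(cand) + 1)]
    for i, c in enumerate(cand, start=1):
        for j, r in enumerate(ref, start=1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if c == r else max(dp[i - 1][j], dp[i][j - 1])
    lcs = dp[-1][-1]
    if lcs == 0:
        return 0.0
    precision = lcs / len(cand)
    recall = lcs / len(ref)
    return 2 * precision * recall / (precision + recall)
```

Because the LCS preserves word order without requiring contiguity, ROUGE-L rewards feedback that follows the reference's overall structure even when exact n-grams differ.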
Even with fine-tuning, Llama3 8B appears to have reached its performance limit. We also explored the capabilities of o1-mini but conducted only partial experiments due to high financial costs. The main experiments for each task were completed, excluding the leave-one-out and without-rationale settings. Results and discussion are in Appendix H.

              Answer Acc.   Error Ident.   Feedback Gen.
Llama3 8B       1.0000        0.2191         0.8819
Llama3 70B      0.4401        0.3414         0.7407
GPT-3.5         0.8410        0.8538         0.8640
End-to-end      0.3766        0.8852         0.5567

Table 7: Correlation between LLMs' Math Word Problem Solving Ability and Grading Ability (p-values of the Chi-square tests).

7 Discussion

7.1 Relationship Between Problem-Solving Ability and Grading Ability

One concern about using LLMs in a math tutoring system is that they may behave differently when solving problems and when grading students' answers. To address this issue, we investigate the correlation between LLMs' mathematical problem-solving abilities and their accuracy in grading students' processes.13 A Chi-square test was conducted using predictions from the time-series split setting, excluding rationales so as to rely solely on the models' intrinsic capabilities. Since the Chi-square test requires categorical variables, we defined task-specific criteria for statistical validity. Answer accuracy assessment is a binary classification task, allowing direct application of the Chi-square test. Error identification is an extraction task in which only exact matches with the ground truth are counted as correct and all others as incorrect, ensuring the results
are treated as categorical variables. Feedback generation lacks an absolute correctness measure, so we computed ROUGE-L scores for all generated feedback and set the average score as the threshold, considering a piece of feedback correct if its score exceeded this value. We report the corresponding p-values in Table 7.

The results show no significant correlation between a model's problem-solving ability and its performance on the grading tasks. Grading requires more than aligning solutions; it involves mathematical reasoning and step-by-step analysis to assess each part of the student's process. Effective feedback generation further demands understanding the error and its cause, and providing accurate, context-specific corrections. These findings emphasize the necessity of advanced reasoning and a thorough understanding of mathematical concepts. We also analyzed whether different problem-solving styles among students affect the models' grading performance. The results show that LLMs perform better when solutions include more textual explanations. Detailed discussions are provided in Appendix J.

13 The models' performance is presented in Appendix I.

             All    Wrong M.   Calc.   Incomp.   Lack of M.   Careless
Llama3 70B   1.61     1.59      1.30     1.33       2.20        2.00
GPT-3.5      1.59     1.46      1.35     1.61       2.30        2.20
o1-mini      1.82     1.79      1.65     1.44       2.55        1.60
End-to-end   1.01     0.81      1.10     0.77       2.00        1.40

Table 8: Human Evaluation of Generated Feedback from Different Models. "Wrong M.," "Calc.," "Incomp.," "Lack of M.," and "Careless" correspond to "Wrong Mathematical Operation/Concept," "Calculation Error," "Incomplete Answer," "Lack of Necessary Mathematical Concepts," and "Careless Error," respectively.

7.2 Types of Student Errors that Models Excel at Resolving

To further evaluate the model responses, we selected representative results from Llama3 70B without rationale, GPT-3.5 without rationale, and an end-to-end model with rationale under the time-series split setting.
Additionally, we included the o1-mini model, which uses few-shot prompts without rationale, to further enrich the comparison and analysis of model performance. Three invited mathematics experts were asked to evaluate the quality of the generated feedback. The evaluation criteria range from 0 (not helpful) to 3 (clear and effective guidance).14

The results are shown in Table 8.15 We assessed the inter-rater agreement on the ratings of the models' outputs using Krippendorff's Alpha, yielding a score of 0.7628, which reflects a moderate level of agreement among the experts. The responses from the language models remain below the ideal score of 3, and the fine-tuned end-to-end model underperformed compared to the larger models. All models excelled at addressing "Lack of Necessary Mathematical Concepts" errors. The reason may be that such errors arise when students cannot solve the problem at all, requiring the model only to solve it directly and explain, without analyzing the student's process.

14 The evaluation criteria are elaborated in Appendix K.
15 The examples and the corresponding analyses are presented in Appendix L.

All models struggled with identifying "Calculation Errors," often failing even on simple arithmetic mistakes. Interestingly, they performed better on "Wrong Mathematical Operations/Concepts." Models tended to assume student results were correct and frequently misattributed errors to incorrect methods. In some cases, they even revised correct processes, producing inaccurate feedback. Our fine-tuned end-to-end model performed worse than the larger models in "Wrong
Mathematical Operations/Concepts," "Incomplete Answer," and "Careless Error," likely due to its limited reasoning and process-analysis capabilities. This led to inaccurate judgments on errors requiring deeper understanding. However, the performance gap narrowed in categories demanding less reasoning, such as "Calculation Errors" and "Lack of Necessary Mathematical Concepts."

Although GPT-3.5, o1-mini, and Llama3 70B have stronger reasoning and generation capabilities, they struggle to identify student errors and provide focused feedback. Larger models, like o1-mini, often produce verbose responses with inaccuracies, which the experts found overwhelming for students and ineffective at addressing specific mistakes. In contrast, our fine-tuned 8B end-to-end model provides concise feedback that directly targets errors but performs poorly on unfamiliar problem types outside its training data. Large models (e.g., GPT-3.5) also tend to solve problems themselves rather than offering adaptive feedback tailored to the student's reasoning process, even when prompts emphasize error analysis. These results highlight the need for further refinement of LLMs, particularly in handling the more nuanced aspects of students' problem-solving processes.

8 Conclusion

Assessment in mathematics education is critical. However, the grading process is often time-consuming and labor-intensive for teachers, particularly when it requires detailed analysis of each student's individual problem-solving steps. This study constructed the MathEdu dataset, which includes real student problem-solving processes and teacher feedback for GRE-level math problems. We explored the use of LLMs to automate grading and provide adaptive feedback. While LLMs perform well in identifying correctness and errors, they struggle with generating personalized suggestions. An advanced method to enhance LLMs' ability to understand and interpret mathematical reasoning is left as future work.
Limitations

Lack of Student Evaluation. In the current evaluation process, the original students are not involved in reviewing the grading results generated by the LLMs; only peer students provide feedback. This lack of direct student input may mean the most relevant insights are lost. Additionally, since the expert annotators evaluating the grading results do not interact with the students, it is difficult to determine to what extent the teacher-style feedback helps students understand their mistakes or improve their learning. In future work, we plan to gather opinions directly from students to determine whether the feedback generated by the model is genuinely helpful to them. By involving students in responding to the model's grading and recommendations, we can better understand whether the feedback effectively helps them comprehend and correct their mistakes. Such input will not only improve the model's performance but also enhance its applicability in educational contexts.

Limited to Mathematics. While we have conducted several experiments to discuss and analyze the capabilities of LLMs in mathematics tutoring, evaluating their potential advantages and disadvantages in real-world scenarios, our study is constrained to the mathematics domain due to the limitations of our dataset. Research in other fields remains significantly underexplored. In the future, we plan to explore the pedagogical capabilities of LLMs in different educational areas, such as programming instruction, to further uncover their potential in
diverse subjects.

Limited Dataset Size. The dataset is constrained to 4,048 entries due to the complexity and effort required for detailed data annotation, limiting its size and potentially reducing its effectiveness for fine-tuning smaller models. Despite this limitation, the dataset offers authentic student problem-solving processes at the GRE difficulty level, accompanied by detailed feedback from expert teachers. We expect this resource to support future research in mathematics education, particularly in advancing the automated assessment of student answers.

Limited Model and Method Exploration. Our study focused on Llama3 8B, Llama3 70B, and GPT-3.5, with experiments limited to few-shot prompting and LoRA-based fine-tuning. Other models, such as Gemma 2 (Team et al., 2024), Qwen 2.5 (Yang et al., 2024), and Phi-3 (Abdin et al., 2024), as well as newer versions of Llama, remain unexplored. Additionally, our fine-tuning methods did not incorporate students' individual learning progress, such as tracking their mastered concepts. Despite these limitations, we hope our work contributes to the growing exploration of LLM applications in mathematics education and encourages further research to refine and expand these methods.

Ethics Statement

To address potential privacy concerns associated with the MathEdu dataset, this section outlines the measures taken to protect annotators' privacy and the ethical considerations in releasing the dataset. The MathEdu dataset contains problem-solving records annotated by students from diverse academic backgrounds. However, no personally identifiable information, such as names or IDs, was collected. Each record is assigned an anonymized identifier to group entries from the same student. While the dataset includes the students' academic backgrounds (i.e., majors), this information is provided solely for analytical purposes.
As highlighted in previous sections, understanding how problem-solving styles vary with mathematical ability and expertise is a valuable area of research. All student annotators were informed that their personal data would remain confidential and that only their academic background would be disclosed. This consent was obtained prior to participation in the annotation task. The dataset's structure ensures that individual privacy is preserved while facilitating research in educational applications. In alignment with ethical guidelines, the dataset will only be released for research purposes.

References

Marah Abdin, Jyoti Aneja, Hany Awadalla, Ahmed Awadallah, Ammar Ahmad Awan, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Jianmin Bao, Harkirat Behl, et al. 2024. Phi-3 technical report: A highly capable language model locally on your phone. arXiv preprint arXiv:2404.14219.

Gati V Aher, Rosa I Arriaga, and Adam Tauman Kalai. 2023. Using large language models to simulate multiple humans and replicate human subject studies. In International Conference on Machine Learning, pages 337–371. PMLR.

Ammar Y Alqahtani and Albraa A Rajkhan. 2020. E-learning critical success factors during the COVID-19 pandemic: A comprehensive analysis of e-learning managerial perspectives. Education Sciences, 10(9):216.

Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. 2019. MathQA: Towards interpretable math word problem solving with operation-based formalisms. In Proceedings of NAACL-HLT, pages 2357–2367.

Youngduck Choi, Youngnam Lee, Dongmin Shin, Junghyun Cho, Seoyon Park,
Seewoo Lee, Jineon Baek, Chan Bae, Byungsoo Kim, and Jaewe Heo. 2020. EdNet: A large-scale hierarchical dataset in education. In Artificial Intelligence in Education: 21st International Conference, AIED 2020, Ifrane, Morocco, July 6–10, 2020, Proceedings, Part II 21, pages 69–73. Springer.

Neisarg Dave, Riley Bakes, Barton Pursel, and C Lee Giles. 2021. Math multiple choice question solving and distractor generation with attentional GRU networks. International Educational Data Mining Society.

Weibo Gao, Qi Liu, Zhenya Huang, Yu Yin, Haoyang Bi, Mu-Chun Wang, Jianhui Ma, Shijin Wang, and Yu Su. 2021. RCD: Relation map driven cognitive diagnosis for intelligent education systems. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 501–510.

Arthur C Graesser, Shulan Lu, George Tanner Jackson, Heather Hite Mitchell, Mathew Ventura, Andrew Olney, and Max M Louwerse. 2004. AutoTutor: A tutor with dialogue in natural language. Behavior Research Methods, Instruments, & Computers, 36:180–192.

Joy He-Yueya, Noah D Goodman, and Emma Brunskill. 2024. Evaluating and optimizing educational content with large language model judgments. arXiv preprint arXiv:2403.02795.

Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. 2021. LoRA: Low-rank adaptation of large language models. In ICLR.

Adi Jafar, Ramli Dollah, Nordin Sakke, Mohammad Tahir Mapa, Ang Kean Hua, Oliver Valentine Eboy, Eko Prayitno Joko, Diana Hassan, and Chong Vun Hung. 2022. Assessing the challenges of e-learning in Malaysia during the pandemic of COVID-19 using the geo-spatial approach. Scientific Reports, 12(1):17316.

Zhanming Jie, Jierui Li, and Wei Lu. 2022. Learning to reason deductively: Math word problem solving as complex relation extraction.
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5944–5955.

Enkelejda Kasneci, Kathrin Seßler, Stefan Küchemann, Maria Bannert, Daryna Dementieva, Frank Fischer, Urs Gasser, Georg Groh, Stephan Günnemann, Eyke Hüllermeier, Stephan Krusche, Gitta Kutyniok, Tilman Michaeli, Claudia Nerdel, Jürgen Pfeffer, Oleksandra Poquet, Michael Sailer, Albrecht Schmidt, Tina Seidel, Matthias Stadler, Jochen Weller, Jochen Kuhn, and Gjergji Kasneci. 2023. ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103:102274.

Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi. 2016. MAWPS: A math word problem repository. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1152–1157.

Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81.

Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023. G-Eval: NLG evaluation using GPT-4 with better human alignment. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 2511–2522.

Jakub Macina, Nico Daheim, Sankalan Chowdhury, Tanmay Sinha, Manu Kapur, Iryna Gurevych, and Mrinmaya Sachan. 2023. MathDial: A dialogue tutoring dataset with rich pedagogical properties grounded in math reasoning problems. Findings of the Association for Computational Linguistics: EMNLP 2023.
Stephen MacNeil, Andrew Tran, Arto Hellas, Joanne Kim, Sami Sarsa, Paul Denny, Seth Bernstein, and Juho Leinonen. 2023. Experiences from using code explanations generated by large language models in a web software development e-book. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1, pages 931–937.

Julia M Markel, Steven G Opferman, James A Landay, and Chris Piech. 2023. GPTeach: Interactive TA training with GPT-based students. In Proceedings of the Tenth ACM Conference on Learning @ Scale, pages 226–236.

Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744.

Arkil Patel, Satwik Bhattamishra, and Navin Goyal. 2021. Are NLP models really able to solve simple math word problems? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2080–2094.

Chen Pojen, Hsieh Mingen, and Tsai Tzuyang. 2020. Junyi Academy online learning activity dataset: A large-scale public online learning activity dataset from elementary to senior high school students. Dataset available from https://www.kaggle.com/junyiacademy/learning-activity-public-dataset-by-junyi-academy.

Carly D Robinson and Susanna Loeb. 2021. High-impact tutoring: State of the research and priorities for future learning. National Student Support Accelerator, 21(284):1–53.

Jianhao Shen, Yichun Yin, Lin Li, Lifeng Shang, Xin Jiang, Ming Zhang, and Qun Liu. 2021. Generate & rank: A multi-task framework for math word problems.
In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 2269–2279.

Anaïs Tack and Chris Piech. 2022. The AI Teacher Test: Measuring the pedagogical ability of Blender and GPT-3 in educational dialogues. In Proceedings of the 15th International Conference on Educational Data Mining, page 522.

Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, et al. 2024. Gemma 2: Improving open language models at a practical size. arXiv preprint arXiv:2408.00118.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.

Rose Wang, Qingyang Zhang, Carly Robinson, Susanna Loeb, and Dorottya Demszky. 2024. Bridging the novice-expert gap via models of decision-making: A case study on remediating math mistakes. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 2174–2199.

Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017. Deep neural solver for math word problems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 845–854.

Zichao Wang, Angus Lamb, Evgeny Saveliev, Pashmina Cameron, Yordan Zaykov, José Miguel Hernández-Lobato, Richard E Turner, Richard G Baraniuk, Craig Barton, Simon Peyton Jones,
et al. 2020. Instructions and guide for diagnostic questions: The NeurIPS 2020 education challenge. arXiv preprint arXiv:2007.12061.

Ariyadi Wijaya, Marja van den Heuvel-Panhuizen, Michiel Doorman, and Alexander Robitzsch. 2014. Difficulties in solving context-based PISA mathematics tasks: An analysis of students' errors. The Mathematics Enthusiast, 11(3):555–584.

An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. 2024. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115.

An-Zi Yen and Wei-Ling Hsu. 2023. Three questions concerning the use of large language models to facilitate mathematics learning. In The 2023 Conference on Empirical Methods in Natural Language Processing.

Weijiang Yu, Yingpeng Wen, Fudan Zheng, and Nong Xiao. 2021. Improving math word problems with pre-trained knowledge and hierarchical reasoning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3384–3394.

Sarah J Zhang, Samuel Florin, Ariel N Lee, Eamon Niknafs, Andrei Marginean, Annie Wang, Keith Tyser, Zad Chin, Yann Hicke, Nikhil Singh, et al. 2023. Exploring the MIT mathematics and EECS curriculum using large language models. arXiv preprint arXiv:2306.08997.

Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. In International Conference on Learning Representations.

A Math Word Problem Dataset Selection

Existing open datasets in mathematical domains include MAWPS (Koncel-Kedziorski et al., 2016), Math23k (Wang et al., 2017), SVAMP (Patel et al., 2021), and MathQA (Amini et al., 2019). These datasets provide numerous mathematical problems and reference rationales.
To gather high-quality student processes, we have two main requirements for the mathematical problems in our dataset:

1. Problem Difficulty: We aim for the dataset to include both problems that can be solved with brief processes and those that are challenging and require complex solution paths. These varying levels of difficulty allow us to obtain diverse problem-solving approaches and to explore effective methods for assessing the correctness of these differing strategies.

2. Problem Diversity: We seek a dataset that spans multiple domains, such as probability, geometry, and algebra. This diversity enables us to comprehensively assess students' mathematical abilities and further explore techniques applicable to grading problems across different domains.

Considering these factors, we utilize the MathQA dataset (Amini et al., 2019). MathQA includes problems from six distinct domains, covering a range of difficulties suitable for GRE-level problems. Furthermore, MathQA not only provides answers to the problems but also includes rationales, i.e., correct solution processes that aid teachers in evaluating student processes.

Goal: The goal is to obtain students' problem-solving processes when they make errors in solving math problems, which will help train language models for applications in math education.

Annotation Process: A Google Sheet link will be provided to the students, containing both the Chinese and English versions of the questions. Students can choose one of the following methods to complete their solutions:
• 1: Solve on a tablet.
• 2: Solve on paper and take a picture afterward (one picture per question).
After completing the solutions, students should name the file as <problem_id> (e.g., 2456.jpg) and upload it to the cloud drive. The timeline for completing 750 questions will be discussed with students to set a deadline (estimated to be within two months). Weekly meetings will be scheduled to report progress and understand the students' problem-solving status. There will be no weekly minimum requirement; students can allocate their time as they see fit.

Annotation Guidelines: All problems are junior high school-level math problems. Students must show the full solution process and the final answer, with the following requirements:
- Clearly write the solution (top-down).
- Include simple English explanations, no Chinese.
- Example 1:
  Assume lent X
  X × 0.08 × 8 = 0.4X
  0.6X = 480
  X = 800
- Example 2:
  Thief speed: 50 km/hr
  Owner speed: 60 km/hr
  50 × 0.5 = 25 km
  The thief drove 25 km in half an hour.
  25 / (60 − 50) = 2.5
- Round to 2 decimal places. Use 3.14 for pi. Simplify fractions. No calculators allowed.
- If you encounter a question you cannot solve, note the problem number and the type of error, then skip it. Error types include:
• 1: Don't know how to solve it.
• 2: Problem definition is unclear.
• 3: Unrecognizable symbols or notation.
• 4: Problem requires a diagram, but none is provided.
• 5: Other...
- If you feel the instructions are unclear or have any other questions, please ask the responsible staff immediately.

Table 9: Math Problem-Solving Process Guidelines.

B Supplementary Details for Dataset Construction

B.1 Annotation Guidelines

Table 9 presents the guidelines we used for collecting students' math word problem solving results. The guidelines begin by explaining the rationale for annotating answers and provide instructions for submitting their work, whether by solving problems on a tablet or solving on paper and uploading a photo.
We emphasize the importance of clear annotations and encourage students to include explanations in their problem-solving process. Additionally, we provide examples of proper annotation and, finally, instructions on how to handle problems they find unclear.

B.2 Academic Backgrounds and Question Distribution of Student Annotators

To collect data on math problem solving, we invited six university students to participate in the annotation process. Table 10 presents the majors of the student annotators and the distribution of questions they answered. We aimed for diversity by including individuals from different academic backgrounds, such as Japanese Studies and Applied Mathematics, to capture a range of math proficiency levels in the problem-solving process. All annotators involved in the task, including the students and the mathematics experts serving as teachers, were compensated based on the minimum hourly wage standards of their respective countries.

C Supplementary Information for Each Error Category

Detailed descriptions of the error categories are provided in Table 11. The following are examples of each error category.

Wrong Mathematical Operation/Concept: This category refers to errors where the student applies an incorrect mathematical operation or employs an inappropriate mathematical concept when attempting to solve a problem.
Such mistakes often occur when students misunderstand the nature of the mathematical task or misinterpret key elements of the problem. These errors can stem from a variety of issues, including a failure to correctly identify the mathematical procedure required or the application of an irrelevant concept to the problem at hand. Additionally, this category encompasses cases where the student misinterprets critical keywords or phrases, leading to the incorrect selection of operations. For example, misunderstanding terms such as "per," "rate," or "difference" can result in choosing the wrong formula or approach. Furthermore, errors in selecting or using relevant information from the problem, either by focusing on irrelevant data or neglecting essential variables, fall under this category. Such errors reflect deeper issues in comprehension or in the application of mathematical principles. By recognizing and addressing these types of mistakes, it becomes possible to better understand the student's reasoning process and to provide more targeted feedback for improving their problem-solving abilities.

In Table 12, we provide an example of this type of error. The problem involves two trains of the same length traveling in the same direction, with the speeds of each train and the time it takes for the faster train to catch up to the slower one provided. The question asks for the length of the trains. While the student correctly calculated that the distance covered by the faster train during the overtaking process was 400 meters, they failed to account for the fact that the faster train needs to cover the combined length of both trains to complete the overtaking maneuver, which led to an incorrect answer.

Calculation Error: This category captures mistakes made during the computational phases of problem-solving, where the execution of arithmetic or algebraic operations is incorrect.
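Steps that equate an arithmetic expression with a written result can in principle be re-checked mechanically. The sketch below is a hypothetical illustration of such a check, not part of the paper's grading pipeline; the `check_step` helper and its "expression = result" step format are our own assumptions.

```python
import ast
import operator

# Supported binary operators for safe arithmetic evaluation.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}


def _eval(node):
    """Recursively evaluate a parsed arithmetic expression tree."""
    if isinstance(node, ast.Constant):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](_eval(node.left), _eval(node.right))
    if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
        return -_eval(node.operand)
    raise ValueError("unsupported expression")


def check_step(step: str, tol: float = 1e-6):
    """Check one written step of the form '<expression> = <result>'.

    Returns (is_correct, recomputed_value). Hypothetical helper for
    flagging calculation errors in a student's written steps.
    """
    lhs, rhs = step.split("=")
    value = _eval(ast.parse(lhs, mode="eval").body)
    return abs(value - float(rhs)) <= tol, value


# A miscalculated step: the written result 25 does not match the
# recomputed value of the left-hand side, 15.0.
ok, value = check_step("(825 - 750) / 5 = 25")  # ok == False, value == 15.0
```

Restricting evaluation to a whitelist of `ast` node types keeps the checker safe on arbitrary student input, unlike a bare `eval`.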
Such errors can range from simple miscalculations in basic arithmetic to more complex mistakes involving the manipulation of algebraic expressions or functions. These errors do not typically stem from a misunderstanding of the problem's structure or concept, but rather from faulty execution during the calculation process. Table 13 reports an example of this type of error. The student made a calculation mistake when evaluating (825 − 750)/5, arriving at 25 instead of the correct value of 15, which led to an incorrect answer for the problem.

Incomplete Answer: The "Incomplete Answer" category refers to instances where the student begins solving a problem using the correct formula or procedure but does not carry the solution through to completion. Although the initial steps of the problem-solving process may be accurate and aligned with the required methodology, the student ultimately halts the process prematurely. This may result from a partial understanding of the problem or a lack of familiarity with the subsequent steps needed to finalize the solution. Table 14 illustrates an example of this error. The problem asks for the average number of apples sold per hour over a two-hour period. However, the student calculated only the total number of apples sold during those two hours
https://arxiv.org/abs/2505.18056v1
and stopped there, failing to continue to calculate the average, which led to an incorrect answer.

Careless Error: The "Careless Error" category encompasses mistakes that arise not from a misunderstanding of the mathematical concepts or procedures, but rather from inattentiveness or lapses in concentration during the problem-solving process. These errors often result from avoidable mistakes, such as substituting incorrect numbers into a formula or writing the wrong number. Table 15 shows an example of this type of error. The problem provides the number 78; however, the student mistakenly interprets this number as 70 during the calculations, leading to an incorrect result.

Lack of Necessary Mathematical Concepts: The "Lack of Necessary Mathematical Concepts" category refers to errors that occur when students lack fundamental mathematical knowledge or techniques needed to solve a problem. These errors often result in students being unable to attempt the problem at all, as they may not possess the requisite understanding of essential concepts such as fractions, percentages, or algebraic expressions. Table 16 presents an example of this type of error. The question involved calculating compound interest, but the student responded with "Do not know how to calculate compound interest," indicating a lack of knowledge on how to perform this calculation. As a result, the student was unable to arrive at the correct solution for the problem.

Student | Major | Total | General | Gain | Physics | Geometry | Probability | Other
Student 1 | Applied Mathematics | 683 | 276 (40.41%) | 132 (19.33%) | 182 (26.64%) | 49 (7.17%) | 7 (1.02%) | 37 (5.42%)
Student 2 | Finance | 685 | 297 (43.36%) | 133 (19.42%) | 188 (27.45%) | 39 (5.69%) | 4 (0.58%) | 24 (3.50%)
Student 3 | Japanese | 678 | 281 (41.45%) | 136 (20.06%) | 180 (26.55%) | 42 (6.19%) | 7 (1.03%) | 32 (4.72%)
Student 4 | Information Management | 660 | 268 (40.60%) | 140 (21.21%) | 167 (25.30%) | 46 (6.97%) | 8 (1.21%) | 31 (4.70%)
Student 5 | Mathematics Education | 682 | 275 (40.32%) | 139 (20.38%) | 186 (27.27%) | 41 (6.01%) | 8 (1.17%) | 33 (4.84%)
Student 6 | Physics | 660 | 262 (39.70%) | 127 (19.24%) | 175 (26.52%) | 55 (8.33%) | 7 (1.06%) | 34 (5.15%)
Table 10: Student Majors and Distribution of Questions Answered.

Error Type | Explanation
Wrong Mathematical Operation/Concept | Student applies an incorrect mathematical operation or uses an inappropriate mathematical concept to solve a problem.
Calculation Error | Mistakes in calculations, such as errors in solving equations, arithmetic mistakes, and incorrect unit conversions.
Incomplete Answer | Student used a correct formula or procedure but did not complete it.
Careless Error | Errors caused by students' carelessness in answering, including number substitution errors and missing digits.
Lack of Necessary Mathematical Concepts | Errors in answering caused by a lack of essential mathematical knowledge or techniques.
Table 11: Error Types and Explanations.

Problem: Two trains of equal length are running on parallel lines in the same direction at 46 km/hr and 36 km/hr. The faster train passes the slower train in 144 seconds. The length of each train is:
Student Process:
46 − 36 = 10
10 km/hr × 5/18 = 100/36 m/s
(100/36) × 144 = 400 m
Error Type: Wrong Mathematical Operation/Concept
Error Equation: (100/36) × 144 = 400 m
Teacher Feedback: To overtake the other train, you need to travel the combined length of both trains. Since both trains are of the same length, you need to divide by 2 to get the answer.
Table 12: Example of Wrong Mathematical Operation/Concept.

Problem: at what rate percent on simple interest will rs . 750 amount to rs . 825 in 5 years ?
Student Process:
(825 − 750)/5 = 25
25/750 = 1/30 ≈ 0.03333 (or 3.33%)
Error Type: Calculation Error
Error Equation: (825 − 750)/5 = 25
Teacher Feedback: Incorrect calculation. The correct answer is 15.
Table 13: Example of Calculation Error.

D Details of Fine-Tuning Methods

We employ LoRA fine-tuning to train an LLM M, parameterized by Φ.

Problem: maria sold 10 kg of apples in her first hour at the market , but only 2 kg of apples in the second hour . on average , how many kg of apples did she sell in two hours at the market ?
Student Process: 10 + 2 = 12
Error Type: Incomplete Answer
Error Equation: None
Teacher Feedback: After calculating the total number of apples sold in two hours, you still need to divide by the time to get the average sales per hour. Therefore, the answer is 12/2 = 6.
Table 14: Example of Incomplete Answer.

Problem: Peter's average (arithmetic mean) test score on 4 tests is 78. What must be Peter's score on the 5th test for his average score on the 5 tests to be 80?
Student Process: 80 × 5 − 70 × 4 = 120
Error Type: Careless Error
Error Equation: 80 × 5 − 70 × 4 = 120
Teacher Feedback: Carelessly misreading the numbers in the problem, the original average was 78 instead of 70. Therefore, the correct process is 80 × 5 − 78 × 4 = 88.
Table 15: Example of Careless Error.

The training data used in LoRA fine-tuning is denoted as Z = {(x_i, y_i)}_{i=1,...,N}.
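LoRA freezes the pretrained weights and trains only a low-rank increment on top of them. The following numpy sketch illustrates the idea; the shapes, rank, and initialization scale here are hypothetical placeholders, not the paper's actual configuration.

```python
import numpy as np

# Hypothetical dimensions and rank for illustration only.
d_in, d_out, r = 64, 64, 4

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight (part of Phi_0)
A = rng.standard_normal((r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))                   # initialized to zero, so the increment starts at 0

def lora_forward(x):
    # Effective weight is W + B @ A; only A and B are updated during fine-tuning.
    return x @ (W + B @ A).T

x = rng.standard_normal((2, d_in))
# With B = 0, the adapted model reproduces the frozen base model exactly.
assert np.allclose(lora_forward(x), x @ W.T)
```

Because B starts at zero, fine-tuning begins from the pretrained model's behavior, and each adapted matrix contributes only r·(d_in + d_out) trainable parameters instead of d_in·d_out.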
Given the task-specific parameter increment ΔΦ = ΔΦ(Θ), we optimize over Θ:

max_Θ Σ_{(x,y)∈Z} Σ_{t=1}^{|y|} log p_{Φ_0 + ΔΦ(Θ)}(y_t | x, y_{<t}).  (1)

Single-task and multi-task with rationale: In these two approaches, the training data used in LoRA fine-tuning is denoted as Z = {(x_i, c_i), (x_i, E_i), (x_i, T_i)}_{i=1,...,N}, where x_i = P_A(q_i, p_i, r_i). Here, P_A represents the prompt that takes q_i, r_i, and p_i to generate grading results.

Single-task and multi-task without rationale: Similar to the "Single-task and multi-task with rationale" approaches, these methods use the same training approach but exclude the rationale from the input data. The training data used in LoRA fine-tuning is denoted as Z = {(x_i, c_i), (x_i, E_i), (x_i, T_i)}_{i=1,...,N}, where x_i = P_B(q_i, p_i). Here, P_B represents the prompt that takes q_i and p_i to generate grading results.

End-to-end with rationale: In this approach, the training data used in LoRA fine-tuning is denoted as Z = {(x_i, y_i)}_{i=1,...,N}, where x_i = P_C(q_i, p_i, r_i) and y_i = (c_i, E_i, T_i). Here, P_C represents the prompt that takes q_i, r_i, and p_i to generate grading results.

End-to-end without rationale: Similar to the "End-to-end with rationale" approach, this method also uses an end-to-end training approach but excludes the rationale from the input data. The training data used in LoRA fine-tuning is denoted as Z = {(x_i, y_i)}_{i=1,...,N}, where x_i = P_D(q_i, p_i) and y_i = (c_i, E_i, T_i). Here, P_D represents the prompt that takes q_i and p_i to generate grading results.

Problem: If $5000 is invested in an account that earns 12% interest compounded semi-annually, then the interest earned after one year would be how much greater than if the $5000 had been invested at 8% simple yearly interest?
Student Process: Do not know how to calculate compound interest.
Error Type: Lack of Necessary Mathematical Concepts
Error Equation: None
Teacher Feedback: When calculating compound interest, you also need to consider the interest generated from the previous year's interest. The formula is A = P(1 + r/n)^{nt}, where P is the principal, r is the annual interest rate, n is the number of times interest is compounded per year, and t is the number of years the money is invested or borrowed. Therefore, the calculation for compound interest in this question is A = 5000(1 + 0.12/2)^2.
Table 16: Example of Lack of Necessary Mathematical Concepts.

E Input Format

This section presents the few-shot prompts and fine-tuning prompts used for the answer accuracy assessment task. The prompt for rating the generated feedback is also included. Table 17 presents the few-shot prompts. Since we examined the models both with and without rationale, the prompts are presented in two formats. In these prompts, "few-shot examples" denotes the position for examples, where six samples are randomly selected from the training dataset, comprising three correct and three incorrect responses.

Table 18 presents the end-to-end prompts. Since the fine-tuned models are trained and inferred using a zero-shot approach, there is no "few-shot examples" field included. The prompt content has

Without Rationale:
You are a math teacher. According to the [Question], please indicate whether the [Student's Answer] is correct or not. If the [Student's Answer] is incorrect, identify where the student went wrong and provide explanations and advice.
For correct answers, respond: The student's answer is correct.
For incorrect answers, respond: The student's answer is incorrect. [wrong equation] ... [Teacher's explanations and advice] ...
{few-shot examples}
[Question]: {question} Option: {options}
[Student's Answer]: {student process}

With Rationale:
You are a math teacher. According to the [Question] and [Rationale], please indicate whether the [Student's Answer] is correct or not. If the [Student's Answer] is incorrect, identify where the student went wrong and provide explanations and advice.
For correct answers, respond: The student's answer is correct.
For incorrect answers, respond: The student's answer is incorrect. [wrong equation] ... [Teacher's explanations and advice] ...
{few-shot examples}
[Question]: {question} Option: {options}
[Rationale]: {rationale}
[Student's Answer]: {student process}
Table 17: Few-Shot Prompts for Grading Student Problem-Solving Results.

been adjusted to meet the format required by Llama. Table 19 presents the prompts used for the single-task and multi-task models. Because we separated the tasks of answer accuracy assessment, problem-solving error identification, and feedback generation, we created three distinct prompts for each task. Table 20 presents the prompts used for rating the generated feedback, where we first outline the rating criteria, with scores ranging from 0 to 5, indicating poor to excellent. Additionally, we specify that the output should follow the format "The rating for this feedback
is:" to facilitate the result collection process.

F Data Splitting Strategies and Statistics

Table 21 presents the data counts for the time-series split and leave-one-out methods for different students in the train, validation, and test sets.

G Distance Metric

The exact match metric is overly strict, so we also consider cases where the model's predicted errors slightly deviate from the annotated errors. For example, minor character differences, though not entirely correct, may still be acceptable. Similarly, the model might predict the step immediately before or after the actual error. The distance metric is designed to distinguish these minor deviations from predictions that are entirely misaligned with the annotated errors, providing a more comprehensive evaluation of the model's ability to detect errors accurately.

To evaluate the discrepancy between the predicted erroneous equations P_i and the annotated erroneous equations G_i, we compute the distance D(P_i, G_i) for each pair. The positions of these equations within the student's solution are represented by P_start and P_end for P_i, and G_start and G_end for G_i. Specifically, P_start is the starting character position of P_i, and P_end is the ending position. Similarly, G_start is the starting position of G_i, and G_end is the ending position.

A distance of 0 is assigned when the model accurately identifies the error. If the model incorrectly classifies an incorrect answer as correct, a penalty of 127 is applied, representing the average length of the student's solution. Specifically, the calculation of D(P_i, G_i) is defined as follows:

1. If both P_i and G_i are "None" (indicating no errors in the equation), the distance D(P_i, G_i) = 0.
2. If only one of P_i or G_i is "None," the distance is set to the penalty value D_penalty.
3. If the start and end positions of P_i and G_i perfectly match, the distance D(P_i, G_i) = 0.
4. When P_i and G_i do not overlap, the distance is the character difference between their boundaries:
• If P_end ≤ G_start, then D(P_i, G_i) = G_start − P_end.
• If G_end ≤ P_start, then D(P_i, G_i) = P_start − G_end.
5. When P_i and G_i partially overlap, the distance is the sum of the non-overlapping character differences:
• If G_start ≤ P_start ≤ P_end ≤ G_end, then D(P_i, G_i) = (P_start − G_start) + (G_end − P_end).
• If P_start ≤ G_start ≤ G_end ≤ P_end, then D(P_i, G_i) = (G_start − P_start) + (P_end − G_end).

The distance is calculated for each pair of P_i and G_i, and the minimum distance is taken as the distance for the corresponding equation pair. Formally, the distance is measured as follows:

Without Rationale:
### Instruction: You are a math teacher. According to the [Question], please indicate whether the [Student's Answer] is correct or not. If the [Student's Answer] is incorrect, identify where the student went wrong and provide explanations and advice.
For correct answers, respond: The student's answer is correct.
For incorrect answers, respond: The student's answer is incorrect. [wrong equation] ... [Teacher's explanations and advice] ...
### Input:
[Question]: {question} Option: {options}
[Student's Answer]: {student process}
### Response:

With Rationale:
### Instruction: You are a math teacher. According to the [Question] and [Rationale], please indicate whether the [Student's Answer] is correct or not. If the [Student's Answer] is incorrect, identify where the student went wrong and provide explanations and advice.
For correct answers, respond: The student's answer is correct.
For incorrect
answers, respond: The student's answer is incorrect. [wrong equation] ... [Teacher's explanations and advice] ...
### Input:
[Question]: {question} Option: {options}
[Rationale]: {rationale}
[Student's Answer]: {student process}
### Response:
Table 18: End-to-End Fine-Tuning Prompt for Grading Student Problem-Solving Results.

D(P_i, G_i) =
  0, if both P_i and G_i are "None"
  min(D_penalty, ret_dis), if one of P_i or G_i is "None"
  0, if P_i and G_i perfectly match
  G_start − P_end, if P_end ≤ G_start
  P_start − G_end, if G_end ≤ P_start
  (P_start − G_start) + (G_end − P_end), if G_start ≤ P_start ≤ P_end ≤ G_end
  (G_start − P_start) + (P_end − G_end), if P_start ≤ G_start ≤ G_end ≤ P_end
  min(D_penalty, ret_dis), otherwise.  (2)

Distance = (1/n) Σ_{i=1}^{n} D(P_i, G_i)  (3)

where n is the number of instances.

H o1-mini Result

The o1-mini model, developed by OpenAI, is renowned for its strong mathematical reasoning capabilities. To evaluate its performance, we conducted experiments using a time-series split setting, where inference was performed using few-shot prompts without rationales. We analyzed its performance across three tasks and compared the results with our fine-tuned end-to-end model.

The results of the answer accuracy assessment task are presented in Table 22. Notably, o1-mini demonstrates exceptional mathematical capabilities, achieving an overall accuracy of 92.54% in identifying the correctness of the student's answers. This surpasses the best-performing end-to-end model with rationale.

The results of the error identification task are shown in Table 23. o1-mini demonstrated stronger mathematical reasoning and understanding abilities, achieving a 41.98% exact match ratio, surpassing the end-to-end model. However, it performed worse in terms of the distance metric. Upon reviewing o1-mini's outputs, we found that in cases where errors were due to unfinished problems, the student's calculations were correct. Teachers typically did not mark any formulas as incorrect.
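The case analysis of Equation (2) in Section G can be written out as a short function. This is an illustrative reimplementation, not the paper's released code; the `ret_dis` term is not defined in this excerpt, so it is exposed as a parameter that defaults to the penalty value.

```python
D_PENALTY = 127  # average length of a student's solution (Section G)

def span_distance(pred, gold, ret_dis=D_PENALTY):
    """Distance between a predicted and an annotated error span.

    pred / gold are (start, end) character positions, or None when no
    erroneous equation is predicted / annotated.
    """
    if pred is None and gold is None:          # both "None": no error, no prediction
        return 0
    if pred is None or gold is None:           # only one is "None"
        return min(D_PENALTY, ret_dis)
    p_start, p_end = pred
    g_start, g_end = gold
    if (p_start, p_end) == (g_start, g_end):   # exact positional match
        return 0
    if p_end <= g_start:                       # disjoint, prediction before gold
        return g_start - p_end
    if g_end <= p_start:                       # disjoint, gold before prediction
        return p_start - g_end
    if g_start <= p_start <= p_end <= g_end:   # prediction nested inside gold
        return (p_start - g_start) + (g_end - p_end)
    if p_start <= g_start <= g_end <= p_end:   # gold nested inside prediction
        return (g_start - p_start) + (p_end - g_end)
    return min(D_PENALTY, ret_dis)             # any other partial overlap

def mean_distance(pairs):
    # Equation (3): average D over all n instances.
    return sum(span_distance(p, g) for p, g in pairs) / len(pairs)
```

For example, a prediction spanning characters 0-5 against a gold span at 10-20 yields a distance of 5, while predicting no error against that gold span yields the full penalty of 127.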
Despite this, o1-mini still flagged incorrect formulas in these cases, which introduced significant penalties in the distance metric, leading to poorer performance in this aspect.

The results of the feedback generation task are shown in Table 24. R-1, R-2, R-L, and Rating represent ROUGE-1, ROUGE-2, ROUGE-L, and LLM Rating metrics, respectively. o1-mini outperformed the end-to-end model in terms of BERTScore. However, its responses were often overly lengthy, generating additional irrelevant information, which impacted the evaluation results and limited its performance compared to GPT-3.5. In the LLM rating conducted using GPT-4, o1-mini received an almost perfect score of 4.70. By contrast, in the evaluation conducted by human annotators (i.e., Table 8), o1-mini's performance was less satisfactory in generating adaptive feedback, as its responses were often overly lengthy and lacked focus, reducing their effectiveness in addressing specific student errors.

I Mathematical Problem-Solving Ability of the Model

To evaluate the problem-solving capabilities of the models used in the baseline with respect to the questions in our dataset, we employed the test set from the time-series split method. The results, presented in Table 25, show that even the smallest model, Llama3 8B, achieved an overall accuracy of 75.95%. In contrast, Llama3 70B attained an impressive overall accuracy of 92.37%, surpassing all of our students. This demonstrates that the LLMs we utilized are capable of understanding and solving the problems present in this task.

Category | Without Rationale | With Rationale
Answer Accuracy
Assessment:

Without Rationale:
### Instruction: You are a math teacher. According to the [Question], please indicate whether the [Student's Answer] is correct or not.
For correct answers, respond: The student's answer is correct.
For incorrect answers, respond: The student's answer is incorrect.
### Input:
[Question]: {question} Option: {options}
[Student's Answer]: {student process}
### Response:

With Rationale:
### Instruction: You are a math teacher. According to the [Question] and [Rationale], please indicate whether the [Student's Answer] is correct or not.
For correct answers, respond: The student's answer is correct.
For incorrect answers, respond: The student's answer is incorrect.
### Input:
[Question]: {question} Option: {options}
[Rationale]: {rationale}
[Student's Answer]: {student process}
### Response:

Problem-Solving Error Identification:

Without Rationale:
### Instruction: You are a math teacher. According to the [Question], please indicate whether the [Student's Answer] contains any wrong equations.
For correct answers, respond: The student's answer doesn't contain wrong equations.
For incorrect answers with wrong equations, respond: [wrong equation] ...
### Input:
[Question]: {question} Option: {options}
[Student's Answer]: {student process}
### Response:

With Rationale:
### Instruction: You are a math teacher. According to the [Question] and [Rationale], please indicate whether the [Student's Answer] contains any wrong equations.
For correct answers, respond: The student's answer doesn't contain wrong equations.
For incorrect answers with wrong equations, respond: [wrong equation] ...
### Input:
[Question]: {question} Option: {options}
[Rationale]: {rationale}
[Student's Answer]: {student process}
### Response:

Feedback Generation:

Without Rationale:
### Instruction: You are a math teacher. According to the [Question], please indicate whether the [Student's Answer] is correct or not. If the [Student's Answer] is incorrect, explain why the student went wrong and provide advice on how to correct their mistake.
For correct answers, respond: The student's answer is correct and doesn't need advice.
For incorrect answers, respond: [Teacher's explanations and advice] ...
### Input:
[Question]: {question} Option: {options}
[Student's Answer]: {student process}
### Response:

With Rationale:
### Instruction: You are a math teacher. According to the [Question] and [Rationale], please indicate whether the [Student's Answer] is correct or not. If the [Student's Answer] is incorrect, explain why the student went wrong and provide advice on how to correct their mistake.
For correct answers, respond: The student's answer is correct and doesn't need advice.
For incorrect answers, respond: [Teacher's explanations and advice] ...
### Input:
[Question]: {question} Option: {options}
[Rationale]: {rationale}
[Student's Answer]: {student process}
### Response:
Table 19: Single-Task and Multi-Task Learning Prompts for Grading Student Problem-Solving Results.

J Impact of Variations in Student Problem-Solving Styles on Model Grading

The problem-solving results in our dataset exhibit considerable diversity among students, as detailed in Table 3. Students 2, 3, and 4 provided more detailed and complex solutions, often including natural language explanations. In contrast, Students 1, 5, and 6 used fewer equations and offered more abbreviated responses, with Students 1 and 6 almost entirely omitting natural language descriptions of their processes. The variation in student response styles raises the question of whether this diversity impacts the models' grading performance.

We can examine this question by looking at the experimental
results in Table 26 under the leave-one-out data split method. For the non-finetuned models, the number of parameters plays a significant role in their ability to interpret and assess students' problem-solving results. The Llama3 8B model, with fewer parameters, struggled to accurately grade the concise and less descriptive responses provided by Students 1 and 6, resulting in lower performance. Conversely, the Llama3 70B model, although still affected by these abbreviated responses, demonstrated greater resilience due to its enhanced comprehension capabilities. Similarly, GPT-3.5, despite lower overall accuracy due to its more stringent evaluation of student processes, exhibited comparable trends across the different students, aligning with Llama3 in terms of accuracy variation.

Given a problem, student's process and the teacher's manual suggestions, please rate the following feedback generated by different models on a scale from 0 to 5. The criteria for the rating are:
• 0: Completely irrelevant or nonsensical suggestions.
• 1: Very poor suggestions with little relevance or helpfulness.
• 2: Poor suggestions with some relevance but mostly unhelpful.
• 3: Average suggestions that are somewhat relevant and somewhat helpful.
• 4: Good suggestions that are relevant and helpful but could be improved.
• 5: Excellent suggestions that are highly relevant, helpful, and well-aligned with the teacher's manual suggestions.
Please only answer in the following format: The rating for this feedback is:
Table 20: Rating Prompt.

Split | Train | Validation | Test
Time Series Split | 2,836 | 609 | 603
Leave-One-Out S1 | 3,028 | 337 | 683
Leave-One-Out S2 | 3,026 | 337 | 685
Leave-One-Out S3 | 3,033 | 337 | 678
Leave-One-Out S4 | 3,049 | 339 | 660
Leave-One-Out S5 | 3,029 | 337 | 682
Leave-One-Out S6 | 3,049 | 339 | 660
Table 21: Data Count for Various Data Splitting Strategies.

For the fine-tuned models, those trained with the multi-task learning method demonstrated less robustness. We found that these models occasionally failed to provide any output when confronted with
particularly brief responses, leading to instances of zero accuracy. Moreover, the fine-tuned models showed reduced performance when assessing more detailed and complex solutions, such as those from Student 2, likely due to limitations in their ability to process and understand intricate reasoning. However, when models were allowed to reference rationales during answer accuracy assessment, their overall performance improved, and discrepancies in grading accuracy across different problem-solving styles were mitigated. This suggests that incorporating rationales can assist models in navigating the challenges posed by varied student responses, enabling them to more accurately determine whether a student's problem-solving process is correct.

To sum up, the experiment indicates that the diversity in students' problem-solving styles affects the models' answer accuracy assessment performance. If the student's solution includes more detailed explanations or even natural language descriptions, the LLMs are better able to evaluate the solution accurately, leading to more reliable grading outcomes.

K Evaluation Criteria for Generated Feedback Quality

The evaluation criteria are based on a 0 to 3 scale, as follows:
• 0: The model's feedback did not help the student at all.
• 1: The model's feedback pointed in the right direction but contained errors, making it unhelpful to the student.
• 2:
The model's feedback correctly indicated how to correct the student's mistake but lacked clarity, potentially causing the student difficulty in understanding.
• 3: The model's feedback effectively guided the student on how to correct the mistake, with clear and detailed explanations.

L Analysis of Model-Generated Feedback

In this section, we present examples of feedback generated by the models to analyze their distinct behaviors. Table 27 shows a relatively simple math word problem, specifically asking how many apples Maria sells on average per hour over a two-hour period. The student only calculated the total number of apples sold during the two hours without determining the average, resulting in an incorrect answer.

Llama3 70B accurately pointed out that the student only calculated the total number of apples sold rather than the average and provided guidance on how to proceed with the calculation, offering excellent advice. GPT-3.5 correctly noted that the student calculated the total number of apples sold instead of the average and explained how to perform the subsequent calculations. However, while its suggestions were detailed, the content was overwhelming for the student, making it difficult for them to quickly understand how to correct their answer. On the other hand, our fine-tuned end-to-end model effectively highlighted that the student needed to divide the total by 2 to arrive at the required average number of apples sold, providing a valuable suggestion.

Model | All | General | Gain | Physics | Geo. | Prob. | Other
o1-mini w/o r | 92.54% | 92.24% | 92.79% | 92.40% | 90.00% | 93.75% | 96.97%
End-to-end w/o r | 87.73% | 84.05% | 93.69% | 90.06% | 90.00% | 62.50% | 90.91%
End-to-end w/ r | 91.54% | 88.79% | 95.50% | 93.57% | 95.00% | 62.50% | 96.97%
Table 22: o1-mini's Results of Answer Accuracy Assessment.

In Table 28, another example is presented where the question asks which orientation of a water tank
placed in a crate measuring 8 by 12 by 14 feet would yield the largest volume. The tank is to be placed upright, and the student mistakenly used the smallest dimension divided by 2 as the radius, leading to an error. The problem requires consideration of different orientations of the crate, ultimately determining that placing the crate on the 12 by 14 face yields the maximum tank volume, with the correct radius being 6 feet.

Model | EM (↑) | Distance (↓)
o1-mini w/o r | 41.98% | 56.30
End-to-end w/o r | 30.25% | 71.48
End-to-end w/ r | 40.12% | 51.78
Table 23: o1-mini's Results of Problem-Solving Error Identification.

Model | R-1 | R-2 | R-L | BERTScore | Rating
o1-mini w/o r | 0.1956 | 0.0633 | 0.1839 | 0.7986 | 4.70
End-to-end w/o r | 0.1862 | 0.0751 | 0.1761 | 0.5327 | 0.99
End-to-end w/ r | 0.2842 | 0.1418 | 0.2703 | 0.6786 | 1.91
Table 24: o1-mini's Results of Feedback Generation.

Llama3 70B recognized that the student made an error but ended up performing the same calculation as the student, arriving at the same incorrect answer, which was unhelpful. Similarly, GPT-3.5 also failed to consider the varying tank volumes based on different crate orientations. It acknowledged the student's mistake but provided the same incorrect answer, rendering it similarly unhelpful. In contrast, o1-mini demonstrated the strongest mathematical reasoning abilities, correctly identifying the solution to the problem. It provided a detailed
explanation of each step and clarified the reasons for the student's mistake. Although the explanation was somewhat lengthy, it still constituted a helpful suggestion. The end-to-end model not only failed to offer a correct correction but also misinterpreted the student's process, mistakenly assuming the student was calculating the crate's dimensions. This model performed the worst in this instance.

Category | Questions | LLaMA3 8B | LLaMA3 70B | GPT-3.5
All | 603 | 75.95% | 92.37% | 82.59%
General | 232 | 75.43% | 90.51% | 84.91%
Gain | 111 | 80.18% | 93.69% | 88.29%
Physics | 171 | 73.68% | 95.32% | 78.36%
Geometry | 40 | 77.50% | 87.50% | 77.50%
Probability | 16 | 75.00% | 100.00% | 81.25%
Other | 33 | 75.76% | 87.88% | 75.76%
Table 25: Performance of Different Models Across Various Problem Categories.

Model | S1 | S2 | S3 | S4 | S5 | S6
Llama3 8B w/o r | 59.59% | 70.36% | 61.50% | 67.88% | 66.72% | 55.15%
Llama3 8B w/ r | 62.37% | 72.26% | 64.90% | 71.06% | 73.90% | 66.97%
Llama3 70B w/o r | 84.77% | 88.91% | 84.96% | 82.88% | 87.24% | 82.73%
Llama3 70B w/ r | 87.70% | 86.13% | 83.78% | 81.67% | 81.67% | 83.79%
GPT-3.5 w/o r | 40.85% | 47.45% | 45.28% | 36.36% | 51.32% | 28.94%
GPT-3.5 w/ r | 50.07% | 65.99% | 50.74% | 47.12% | 52.05% | 29.55%
Single-task w/o r | 20.06% | 87.88% | 85.25% | 67.73% | 85.19% | 85.45%
Single-task w/ r | 93.85% | 94.01% | 92.04% | 89.85% | 92.08% | 92.88%
Multi-task w/o r | 0.00% | 30.80% | 66.08% | 70.61% | 90.91% | 0.00%
Multi-task w/ r | 89.75% | 88.03% | 47.20% | 0.00% | 85.34% | 18.33%
End-to-end w/o r | 87.12% | 57.96% | 85.55% | 87.73% | 90.03% | 89.09%
End-to-end w/ r | 94.58% | 95.33% | 90.86% | 89.85% | 96.48% | 91.67%
Table 26: Model Performance on Answer Accuracy Assessment Across Different Students.

M Information About Use of AI Assistants

We utilized generative AI tools solely for grammar and language refinement. Following the use of these tools, the author(s) carefully reviewed and revised the content as necessary, assuming full responsibility for the final publication.

Problem: maria sold 10 kg of apples in her first hour at the market , but only 2 kg of apples in the second hour . on average , how many kg of apples did she sell in two hours at the market ?
Student Process: 10 + 2 = 12
Human: After calculating the total number of apples sold in two hours, you still need to divide by the time to get the average sales per hour. Therefore, the answer is 12/2 = 6.
Llama3 70B: The student has calculated the total amount of apples sold, but not the average. To find the average, divide the total amount by the number of hours: (10 + 2)/2 = 12/2 = 6.
GPT-3.5: To find the average, you need to add the total amount of apples sold in both hours and then divide by the total number of hours. In this case, Maria sold 10 kg in the first hour and 2 kg in the second hour, so the total amount of apples sold in two hours is 10 kg + 2 kg = 12 kg. To find the average, you need to divide this total by 2 (the number of hours), so the average amount of apples sold in two hours is 12 kg / 2 = 6 kg. Therefore, Maria sold an average of 6 kg of apples in two hours at the market.
o1-mini: The student correctly calculated the total amount of apples sold by adding the quantities from each hour: 10 kg + 2 kg = 12 kg. However, the question
asks for the average amount of apples sold per hour, not the total. To find the average:
1. Total apples sold: 10 kg + 2 kg = 12 kg
2. Total time: 2 hours
3. Average per hour: 12 kg / 2 hours = 6 kg/hour
Advice: Always pay close attention to whether a question is asking for a total, an average, or another specific calculation. In this case, dividing the total by the number of hours provides the correct average.
End-to-end: The question asks for the average number of kilograms of apples sold in two hours, not the total number of kilograms of apples sold in two hours. Therefore, you need to divide the total number of kilograms of apples sold in two hours by 2 to get the average number of kilograms of apples sold in two hours.
Table 27: Comparison of Model-Generated Suggestions on an Arithmetic Problem.

Problem: a certain rectangular crate measures 8 feet by 12 feet by 14 feet . a cylindrical gas tank is to be made for shipment in the crate and will stand upright when the crate is placed on one of its six faces . what should the radius of the tank be if it is to be of the largest possible volume ?
Student Process: 8/2 = 4 feet
Human: The suggestion from the teacher is that simply taking the shortest edge of the crate to calculate the radius is incorrect. It is necessary to consider the different orientations of the crate. Since the gas tank will be placed inside the crate, the radius of the gas tank will depend on the shorter edge of the face on which the crate is placed. By calculating the scenarios for the three different faces, we can determine that placing the gas tank on the 12 feet by 14 feet face will yield the largest volume, calculated as (12/2) × (12/2) × π × 8. Therefore, the radius is 6.
Llama3 70B: The student seems to be trying to find the radius of the cylinder, but the calculation is incorrect.
Since the crate measures 8 feet by 12 feet by 14 feet, the largest possible radius of the cylindrical tank would be the smallest dimension of the crate, which is 8 feet, divided by 2, which is 4 feet.
GPT-3.5: To find the largest possible volume for the cylindrical gas tank that can fit inside the rectangular crate, the radius of the tank should be half of the smaller dimension of the crate. In this case, the radius should be half of 8 feet, which is 4 feet.
o1-mini: The student only considered one possible orientation of the crate by taking the dimension 8 feet and dividing it by 2 to get a radius of 4 feet. However, to achieve the largest possible volume for the cylindrical gas tank, it's essential to evaluate all possible orientations of the crate. Here's how to approach the problem correctly:
1. Identify all orientations:
(a) First Orientation: Placing the crate on the 8 ft × 12 ft face.
• Height of cylinder: 14 ft
• Maximum diameter: 8 ft (since the cylinder must
fit within the 8 ft width)
• Radius: 8/2 = 4 ft
• Volume: π × 4² × 14 = 224π cubic feet
(b) Second Orientation: Placing the crate on the 12 ft × 14 ft face.
• Height of cylinder: 8 ft
• Maximum diameter: 12 ft
• Radius: 12/2 = 6 ft
• Volume: π × 6² × 8 = 288π cubic feet
(c) Third Orientation: Placing the crate on the 8 ft × 14 ft face.
• Height of cylinder: 12 ft
• Maximum diameter: 8 ft
• Radius: 8/2 = 4 ft
• Volume: π × 4² × 12 = 192π cubic feet
2. Determine the optimal orientation:
• The second orientation (placing the crate on the 12 ft × 14 ft face) provides the largest volume for the cylinder, with a radius of 6 feet.
Conclusion: To maximize the volume of the cylindrical gas tank, the crate should be placed on the 12 ft × 14 ft face, allowing the tank to have a radius of 6 feet.
End-to-end: The question is misunderstood. The question asks for the radius of the tank, not the length of the crate.
Table 28: Comparison of Model-Generated Suggestions on a Geometric Problem Involving Volume Calculation.
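The orientation-by-orientation check that o1-mini spells out above can also be verified programmatically. A minimal sketch (the function name and structure are illustrative, not from the paper): for each way of standing the crate on a face, the cylinder's height is the remaining dimension and its diameter is capped by the shorter edge of the base face.

```python
import math
from itertools import permutations

def best_cylinder(dims):
    """Return (radius, height, volume) of the largest upright cylinder
    that fits in a box with the given dimensions, trying every face."""
    best = None
    seen = set()
    for a, b, h in permutations(dims):
        key = (tuple(sorted((a, b))), h)
        if key in seen:            # skip duplicate base orientations
            continue
        seen.add(key)
        r = min(a, b) / 2          # diameter limited by shorter base edge
        v = math.pi * r ** 2 * h
        if best is None or v > best[2]:
            best = (r, h, v)
    return best

r, h, v = best_cylinder((8, 12, 14))
# r = 6.0, h = 8 (crate resting on the 12 ft x 14 ft face), v = 288*pi
```

This reproduces the comparison of the three candidate volumes (224π, 288π, 192π) and returns the winning radius of 6 feet.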
arXiv:2505.18071v1 [cs.CL] 23 May 2025

Extended Inductive Reasoning for Personalized Preference Inference from Behavioral Signals

Jia-Nan Li¹,²* Jian Guan²* Wei Wu²† Rui Yan¹†
¹Gaoling School of Artificial Intelligence, Renmin University of China ²Ant Group
{lijianan, ruiyan}@ruc.edu.cn {jianguanthu, wuwei19850318}@gmail.com

Abstract

Large language models (LLMs) have demonstrated significant success in complex reasoning tasks such as math and coding. In contrast to these tasks, where deductive reasoning predominates, inductive reasoning, the ability to derive general rules from incomplete evidence, remains underexplored. This paper investigates extended inductive reasoning in LLMs through the lens of personalized preference inference, a critical challenge in LLM alignment where current approaches struggle to capture diverse user preferences. The task demands strong inductive reasoning capabilities, as user preferences are typically embedded implicitly across various interaction forms, requiring models to synthesize consistent preference patterns from scattered signals. We propose AlignXplore, a model that leverages extended reasoning chains to enable systematic preference inference from behavioral signals in users' interaction histories. We develop AlignXplore by combining cold-start training based on synthetic data with subsequent online reinforcement learning. Through extensive experiments, we demonstrate that AlignXplore achieves substantial improvements over the backbone model by an average of 11.05% on in-domain and out-of-domain benchmarks, while maintaining strong generalization ability across different input formats and downstream models. Further analyses establish best practices for preference inference learning through systematic comparison of reward modeling strategies, while revealing the emergence of human-like inductive reasoning patterns during training.
1 Introduction

Recent advances in large language models (LLMs) have demonstrated remarkable success in complex reasoning tasks through extended reasoning chains [45, 12], particularly in domains such as code generation [7] and mathematical problem-solving [66, 39] where deductive reasoning predominates [41, 10]. However, inductive reasoning, i.e., the ability to derive rules from specific observations and make predictions about novel cases [21], presents unique challenges in probabilistic generalization from incomplete evidence. As a core cognitive ability [22], inductive reasoning has long been a key component in human intelligence tests [14] and scientific research [31]. Nevertheless, the extension of LLMs' reasoning abilities to complex inductive tasks remains largely unexplored.

In this work, we investigate extended inductive reasoning through the lens of personalized preference inference [38, 77], a challenging task that demands strong inductive capabilities to synthesize explicit preference patterns from implicit signals for aligning LLMs with individual preferences. The importance of this investigation is twofold. First, preference inference addresses a critical challenge in LLM alignment, where current approaches primarily focus on universal values such as helpfulness, honesty, and harmlessness [2, 46, 3, 1, 61] while struggling to capture the diversity of individual user preferences [32]. This limitation has led to reduced user satisfaction and potential systematic biases [58, 18], particularly when serving diverse user populations [63]. Second, preference inference exemplifies the complexities of inductive reasoning. In reality, users rarely explicitly express their preferences, i.e., positive or negative stances towards specific attributes such as cultural sensitivity, during interactions with LLMs [37].

*Equal contribution. †Corresponding authors: Wei Wu (wuwei19850318@gmail.com) and Rui Yan (ruiyan@ruc.edu.cn).
Preprint. Under review.
Instead, these preferences
are implicitly embedded in various forms of user-generated content (e.g., user posts [68]), behavioral signals (e.g., comparative judgments [46]), and demographic attributes (e.g., age, gender [76]). Preference inference requires models to identify consistent preference patterns across such multiple diverse interactions and generalize them to novel contexts, as exemplified in Figure 1. Despite the critical importance of preference inference, most existing personalization approaches bypass this crucial step, opting instead for direct mappings that incorporate implicit signals as prompts [69, 37], trainable parameters [29, 60], or encoded hidden representations [49, 43]. The absence of explicit preference inference not only renders the preference modeling process opaque and uncontrollable but also leads to suboptimal personalization performance [37]. These limitations underscore the need for a principled approach that can systematically infer and articulate user preferences.

To address these challenges, we propose AlignXplore, a model that leverages extended reasoning chains to enable systematic inductive reasoning from behavioral signals in users' interaction histories. To this end, we develop a two-stage framework that combines synthetic data training with reinforcement learning optimization. First, we address the cold-start challenge by leveraging advanced LLMs to generate high-quality training data that demonstrates the process of preference inference through extended reasoning. We then enhance the model's reasoning capabilities through reinforcement learning, where the reward signal is designed to encourage both accurate preference inference and coherent reasoning processes.
Through extensive experiments on both in-domain and out-of-domain benchmarks, we demonstrate that AlignXplore achieves substantial improvements in preference inference accuracy, outperforming the backbone model by 11.05% and showing competitive performance against significantly larger models, including GPT-4 [1] and DeepSeek-R1-671B [12]. Notably, AlignXplore exhibits strong generalization ability across different input formats and downstream models, and maintains robust performance under preference reversal. This is attributed to the extended reasoning process, which helps the model develop more systematic and transferable inductive reasoning patterns rather than learning superficial correlations. Further analysis reveals two key findings: (1) comparing different reward modeling approaches reveals that directly optimizing for preference judging leads to more stable training than optimizing response generation, establishing best practices for training preference inference models; and (2) our two-stage training approach demonstrates progressive enhancement of inductive reasoning capabilities, where cold-start training helps establish basic preference characterization abilities, while RL further refines these into actionable hypotheses through iterative testing and refinement, mirroring human approaches to inductive reasoning [22].

The main contributions of this work are as follows:
I. We present the first systematic investigation of extended inductive reasoning in LLMs through the lens of personalized preference inference, demonstrating how structured reasoning processes enable LLMs to derive generalizable preference patterns from implicit behavioral signals.
II. We develop AlignXplore, a novel two-stage framework that combines synthetic data training with reinforcement learning to enhance LLMs' preference inference capabilities. We open-source our model and training framework to facilitate future research in personalized alignment.³
III.
We conduct comprehensive evaluations across diverse benchmarks, demonstrating substantial improvements over existing approaches while maintaining strong generalization ability and robustness. Our analyses provide valuable insights into reward modeling
strategies and the progressive development of inductive reasoning capabilities.

³Code is available at https://github.com/JinaLeejnl/AlignXplore.

Figure 1: Top: Preference inference task overview. Our model performs human-like inductive reasoning for preference inference by progressively refining its preference hypotheses through iterative testing and validation. These inferred preferences can then guide diverse downstream personalization tasks. Bottom: Two-stage training process of AlignXplore, which combines cold-start training using synthetic data from teacher models with reinforcement learning optimization to enhance the model's reasoning capabilities.

2 Related works

Inductive reasoning. Unlike deductive reasoning, where conclusions follow deterministically from premises, inductive reasoning involves making probabilistic generalizations from incomplete
evidence [35, 21], which is crucial for various cognitive activities from categorization to scientific discovery [23]. This capability has gained renewed attention through the Abstract Reasoning Corpus (ARC) [10, 42] in evaluating LLMs like OpenAI o3 [48]. While existing research [65] primarily focuses on few-shot generalization [51, 4], preference inference presents three distinct challenges: reasoning over unstructured language instead of formal languages [50, 70], handling heterogeneous forms of preference signals that may significantly differ from test-time user tasks, and necessitating reasoning about negative examples that reveal undesired preferences [36]. Our framework provides a principled solution to these challenges while maintaining interpretability.

Extended reasoning in LLMs. Traditional Chain-of-Thought approaches [66] are limited by shallow, linear reasoning steps. Recent advances in extended reasoning [45, 8] have significantly improved LLMs' performance through three key mechanisms: (1) in-depth logical chains that maintain extended reasoning through various formats, including natural language [64], formal language [67], and latent space reasoning [20]; (2) systematic exploration of solution spaces, implemented via internal mechanisms trained by reinforcement learning [12] or external frameworks like Monte Carlo tree search [74] and beam search
[72, 59]; and (3) iterative self-reflection that enables models to verify and correct reasoning paths through supervised fine-tuning [61, 16] or reinforcement learning with verifiable rewards [12, 73]. The integration of these mechanisms has led to significant improvements in complex reasoning tasks such as math [25], coding [26], scientific question-answering [55], reward modeling [9], and multimodal reasoning [56]. We extend this paradigm to preference inference, a domain that poses unique challenges due to its requirement for strong inductive reasoning capabilities.

Personalized alignment. Recent studies highlight limitations of one-size-fits-all alignment [2, 32, 58], motivating personalized alignment, i.e., adapting LLM behaviors to individual preferences [28, 44]. Key challenges include: (1) preference inference from implicit signals [68, 46, 76], which requires sophisticated reasoning to synthesize scattered signals [18]; current works primarily focus on retrieving preference-relevant contexts [77, 47, 75] while overlooking explicit preference inference, leading to limited alignment accuracy [38]. (2) Preference modeling through prompts [69, 37], model parameters [29, 60], or latent representations [49, 43]. We focus on prompt-based methods for their interpretability and model-agnostic nature. (3) Feedback-driven alignment that updates LLMs during training [27, 19, 33] or guides generation at inference [57, 6, 54]. In contrast to existing approaches, we present the first study incorporating both extended reasoning for accurate preference inference and efficient mechanisms for handling evolving user interactions [5].
3 Methodology

In this section, we first formulate the preference inference task and evaluation methods in §3.1, then detail our two-phase training strategy to develop the preference inference model: an initial cold-start phase to develop basic reasoning capabilities (§3.2), followed by a reinforcement learning phase that directly optimizes for the reward (§3.3). Figure 1 illustrates the training recipe.

3.1 Task formulation

We first formulate the preference inference task as follows: given a collection of behavioral signals $\mathcal{E} = \{e_1, e_2, \ldots, e_T\}$ with multiple interaction examples of user $U$,⁴ the model $\mathcal{M}$ should generate an explicit preference description $d$ in natural language with an extended reasoning chain $r$:

$$r, d = \mathcal{M}(\mathcal{E}), \quad (1)$$

where $d$ typically manifests as positive or negative attitudes of $U$ towards specific dimensions (e.g., cultural sensitivity, formality). The inferred preference description $d$ should be model-agnostic, enabling it to condition any general-purpose LLM $\mathcal{R}$ for personalization realization [37, 38].

Evaluation framework. The quality of preference inference $d$ can be assessed by how well it guides $\mathcal{R}$ to align with user preferences. Ideally, this could be measured through an online reward:

$$R_{\text{online}} = \mathbb{E}_{o \sim \mathcal{R}(\cdot \mid x, d)}\,\text{Align}(o, U), \quad (2)$$

where $o$ represents $\mathcal{R}$'s output on a new post $x$ of user $U$ and $\text{Align}(\cdot)$ measures its alignment with the user. However, this approach requires costly online sampling and user feedback. To enable efficient and scalable evaluation while avoiding such overhead, we leverage offline user-specific comparative judgment data. Specifically, given a post $x$ from user $U$ and two responses $y_w$ and $y_l$, where $y_w$ is preferred over $y_l$ by $U$, we define:

$$R_{\text{offline}} = \mathbb{1}\left[f_{\mathcal{R}}(y_w \mid x, \cdot) > f_{\mathcal{R}}(y_l \mid x, \cdot)\right] \cdot R_{\text{format}}, \quad (3)$$

$$R_{\text{format}} = \mathbb{1}\left[r, d \text{ satisfy the generation format}\right], \quad (4)$$

where $f_{\mathcal{R}}(y_{w/l} \mid x, \cdot)$ measures the model's preference scores for the two responses, and $R_{\text{format}}$ ensures the structural validity of both $r$ and $d$ (see Appendix B for format specifications).
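The offline reward of Eqs. 3–4 reduces to a format-gated pairwise comparison under some scoring function f. A minimal sketch with the scorer left pluggable (function names and the toy scorer here are illustrative, not the paper's code):

```python
def offline_reward(score_fn, x, d, y_w, y_l, format_ok):
    """Eq. 3: reward is 1 iff the downstream model, conditioned on the
    inferred preference description d, scores the human-preferred
    response y_w above y_l; Eq. 4 gates everything on format validity."""
    if not format_ok:          # R_format = 0 -> whole reward is 0
        return 0
    return 1 if score_fn(y_w, x, d) > score_fn(y_l, x, d) else 0

# Toy scorer standing in for f_R: e.g. a log-probability margin (R_gen)
# or a judge model's preference probability (R_jud).
toy_scores = {"concise answer": 0.9, "verbose answer": 0.4}
score = lambda y, x, d: toy_scores[y]

reward = offline_reward(score, x="user post", d="prefers brevity",
                        y_w="concise answer", y_l="verbose answer",
                        format_ok=True)
# reward == 1: the preferred response is ranked first
```

The two instantiations described next (response generation vs. preference judging) differ only in how `score_fn` is computed.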
Reward instantiation. The above evaluation framework can be instantiated by modeling
preference scores $f_{\mathcal{R}}(y_{w/l} \mid x, \cdot)$ in different ways. For example, when the downstream model $\mathcal{R}$ is repurposed as a response generation model (denoted as $\mathcal{R}_{\text{gen}}$) [52], it measures the preference for a response $y_{w/l}$ through the change in response log-probability when conditioned on $d$ compared to the unconditional case. The offline reward, denoted as $R_{\text{gen}}$, then compares the log-probability changes between $y_w$ and $y_l$, where a larger positive margin indicates better preference alignment:

$$R_{\text{gen}} = \mathbb{1}\left[\log\frac{\mathcal{R}_{\text{gen}}(y_w \mid x, d)}{\mathcal{R}_{\text{gen}}(y_w \mid x)} > \log\frac{\mathcal{R}_{\text{gen}}(y_l \mid x, d)}{\mathcal{R}_{\text{gen}}(y_l \mid x)}\right] \cdot R_{\text{format}}. \quad (5)$$

When $\mathcal{R}$ serves as a preference judging model (denoted as $\mathcal{R}_{\text{jud}}$) [78], it directly models the preference score using the probability of a response $y_{w/l}$ being preferred under the inferred preference description $d$. The corresponding offline reward, denoted as $R_{\text{jud}}$, is computed based on the probability difference between $y_w$ and $y_l$. Specifically:

$$R_{\text{jud}} = \mathbb{1}\left[\mathcal{R}_{\text{jud}}(y_w \mid x, d, y_w, y_l) > \mathcal{R}_{\text{jud}}(y_l \mid x, d, y_w, y_l)\right] \cdot R_{\text{format}}. \quad (6)$$

⁴For simplicity, our main experiments use comparative judgments (a user post with preferred/less-preferred responses) as preference signals, though our method is agnostic to both the source platforms and signal formats, readily accommodating various forms of implicit signals such as user posts, reviews, or interaction histories.

Our evaluation framework can be further instantiated with other types of $\mathcal{R}$, such as using raw response log-probabilities directly as preference scores [40]. We leave the exploration of these alternative reward formulations as future work. In our main experiments, we primarily use $R_{\text{jud}}$ for both training and evaluation, while analyzing $R_{\text{gen}}$ in subsequent ablation studies.

3.2 Cold-start training

The primary challenge in training preference inference models lies in the inherent difficulty for small models to perform complex preference inference following instructions alone without proper initialization.
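The cold-start recipe described next filters teacher generations by outcome reward. A minimal sketch of that rejection-filtering step, where `teacher_sample` and `reward_fn` are hypothetical callables standing in for the teacher model T and the reward R:

```python
def build_cold_start(signals_batch, teacher_sample, reward_fn, G=8):
    """For each user's behavioral signals E, sample G (reasoning chain,
    preference description) pairs from the teacher and keep only those
    passing outcome-based verification (reward == 1)."""
    d_cold = []
    for E in signals_batch:
        for _ in range(G):
            r, d = teacher_sample(E)       # {r_i, d_i} ~ T(. | E, phi)
            if reward_fn(r, d, E) == 1:    # outcome-based filter
                d_cold.append((E, r, d))
    return d_cold

# Toy usage with stand-in callables:
signals = [["post A", "judgment: B > A"]]
demo = build_cold_start(
    signals,
    teacher_sample=lambda E: ("step-by-step reasoning", "prefers practical answers"),
    reward_fn=lambda r, d, E: 1,   # stand-in verifier that accepts everything
    G=4,
)
# all 4 rollouts pass verification here, so demo has 4 training triples
```

In practice the verifier would be one of the offline rewards above, so only teacher samples whose inferred preference actually ranks the preferred response first survive into the cold-start set.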
To address this, we develop a synthetic data generation pipeline leveraging advanced LLMs to create high-quality training examples with detailed reasoning processes. Specifically, we employ a two-stage data synthesis process. For each example $e_i \in \mathcal{E}$ in the original implicit preference signals, we first identify key preference dimensions $\phi$ expressed in natural language that potentially reveal user preferences, which serve as analytical guidance for subsequent preference inference. We then prompt an advanced teacher model $\mathcal{T}$ with both these identified dimensions $\phi$ and the original implicit signals to generate $G$ reasoning chains and preference descriptions (see Appendix B for prompt templates): $\{r_i, d_i\}_{i=1}^{G} \sim \mathcal{T}(r, d \mid \mathcal{E}, \phi)$. Then, we filter these generations through outcome-based verification, selecting only the samples that achieve optimal reward scores. The filtered dataset $\mathcal{D}_{\text{cold}}$ is constructed as:

$$\mathcal{D}_{\text{cold}} = \{(\mathcal{E}, r_i, d_i) \mid R(r_i, d_i) = 1, \; i \in [1, G]\}, \quad (7)$$

where $R(\cdot)$ denotes either $R_{\text{gen}}$ or $R_{\text{jud}}$ as defined in Equations 5 and 6, respectively. The training objective of the preference inference model $\mathcal{M}$ is to maximize the likelihood of generating both correct reasoning chains and accurate preference descriptions:

$$\mathcal{L}_{\text{cold}} = \mathbb{E}_{(\mathcal{E}, r, d) \sim \mathcal{D}_{\text{cold}}}\left[-\frac{1}{|r| + |d|}\sum_{t=1}^{T}\log p(r, d \mid \mathcal{E})\right], \quad (8)$$

where $p(\cdot \mid \mathcal{E})$ denotes the conditional probability distribution modeled by $\mathcal{M}$.

3.3 Reinforcement learning

While cold-start training establishes basic reasoning capabilities, RL further enhances the model's ability to generate high-quality preference descriptions through extended reasoning. We adopt the Group Relative Policy Optimization (GRPO) algorithm [12], which has demonstrated effectiveness in optimizing long-horizon reasoning processes. Specifically, for each training instance, we sample multiple reasoning paths and optimize them collectively using the reward signal defined
in Eq. 3. Following [25], we remove the KL penalty term from the original GRPO formulation for more effective optimization:

$$\mathcal{L}_{\text{RL}} = \mathbb{E}_{\mathcal{E} \sim \mathcal{D}_{\text{rl}},\; \{(r_i, d_i)\}_{i=1}^{G} \sim p_{\text{old}}(\cdot \mid \mathcal{E})}\left[-\frac{1}{G}\sum_{i=1}^{G}\frac{1}{|r_i| + |d_i|}\rho_i\right], \quad (9)$$

$$\rho_i = \sum_t \min\left(\frac{p(\{r_i, d_i\}_t \mid \mathcal{E})}{p_{\text{old}}(\{r_i, d_i\}_t \mid \mathcal{E})} A_i,\; \text{clip}\left(\frac{p(\{r_i, d_i\}_t \mid \mathcal{E})}{p_{\text{old}}(\{r_i, d_i\}_t \mid \mathcal{E})}, 1 - \epsilon, 1 + \epsilon\right) A_i\right), \quad (10)$$

$$A_i = \frac{R_i - \text{mean}(\{R_j\}_{j=1}^{G})}{\text{std}(\{R_j\}_{j=1}^{G})}, \quad (11)$$

where $p_{\text{old}}$ is the old policy model, $G$ is the number of sampled outputs, $\{r_i, d_i\}_t$ is the $t$-th token in the generated sequence, and $R_i$ is the reward of the $i$-th output, computed using Eq. 5 or 6. The advantage term $A_i$ normalizes rewards across different paths to reduce variance in training.

4 Experiments

4.1 Experimental setup

Implementation details. We adopt DeepSeek-R1-Distill-Qwen-7B [12] as our backbone model and conduct training on AlignX [38], a comprehensive personalized alignment dataset spanning 90 preference dimensions with balanced positive and negative examples. Each test instance contains 4 preference pairs as input behavioral signals. We create two non-overlapping sets from AlignX: 3,980 instances for cold-start training and 7,000 instances for reinforcement learning, using R_jud (Eq. 6) as the reward function. For RL training, we use a prompt batch size of 128 with 4 rollouts per prompt. During inference, we combine nucleus sampling (p = 0.95) [24] with top-k sampling (k = 10) [13] and set the temperature to 0.9 [17]. Appendix A.1 shows more implementation details.

Table 1: Summary of evaluation benchmarks. For preference directions, ↑ and ↓ represent preferred and non-preferred examples, respectively, with their quantities shown in parentheses. The "In-domain" column (✓/✗) indicates whether the benchmark's preference dimensions are seen during training.

| Benchmark | Dimensions and #Examples | In-domain |
|---|---|---|
| AlignX_test | 90 preference dimensions (3,000 examples in total, ~1:1 ratio for ↑/↓ preferences) | ✓ |
| P-Soups | "Expertise" (↑: 300, ↓: 300); "Informativeness" (↑: 300, ↓: 300); "Style" (↑: 300, ↓: 300) | ✗ |

Table 2: Offline preference inference evaluation results (ACC_jud, %) using Qwen2.5-7B-Instruct as the preference judging model. Extended Reasoning: whether the model generates preference descriptions with extended reasoning. For Qwen3-32B, we compare thinking (extended reasoning) and non-thinking (concise reasoning) modes. Among non-gray-shaded methods, bold and underlined numbers indicate the best and second-best results. Gray-shaded rows represent golden preference or large-sized models, where italic numbers indicate performance below the best result in bold. * indicates that the best result is significantly better than the others (p < 0.05 with pairwise t-test).

| Method | Extended Reasoning | AlignX_test | P-Soups Informativeness | P-Soups Style | P-Soups Expertise |
|---|---|---|---|---|---|
| Directly given preference descriptions | | | | | |
| Null | N/A | 51.37* | 45.85* | 17.00* | 36.00* |
| E | N/A | 50.33* | 41.03* | 37.33* | 36.00* |
| Golden Preference | N/A | 64.63 | 68.94 | 84.50 | 90.17 |
| Previous specialized methods for inductive reasoning and personalization | | | | | |
| LMInductReason [50] | N/A | 51.80* | 44.35* | 27.50* | 38.17* |
| VPL [49] | N/A | 51.20* | 43.69* | 47.17* | 52.67* |
| PBA [38] | N/A | 62.77* | 53.65* | 31.33* | 50.50* |
| Preference descriptions generated by state-of-the-art LLMs | | | | | |
| Qwen2.5-7B-Instruct [61] | ✗ | 56.33* | 53.82* | 59.00* | 65.17* |
| DS-R1-Distill-Qwen-7B [12] | ✓ | 57.63* | 51.16* | 45.83* | 56.67* |
| Qwen3-32B non-thinking [71] | ✗ | 57.60 | 54.98 | 61.50 | 66.67 |
| GPT-4 [1] | ✗ | 66.10 | 53.82 | 73.33 | 71.83 |
| QwQ-32B [62] | ✓ | 65.70 | 58.14 | 72.17 | 71.50 |
| Qwen3-32B thinking [71] | ✓ | 65.03 | 57.14 | 71.67 | 73.83 |
| DeepSeek-R1-671B [12] | ✓ | 70.47 | 55.48 | 79.66 | 76.17 |
| Preference descriptions generated by our preference inference model | | | | | |
| AlignXplore-7B | ✓ | 65.33 | 54.32 | 69.67 | 63.83 |
| AlignXplore-7B w/o RL | ✓ | 61.80 | 52.82 | 54.00 | 59.83 |
| AlignXplore-7B w/o Cold-start | ✓ | 62.80 | 56.64 | 64.83 | 59.50 |

Benchmarks.
We use two benchmarks: (1) AlignX_test [38], the official test set of AlignX, where each test case contains 4 preference pairs as input; (2) P-Soups [27], which focuses on three preference dimensions, "expertise," "informativeness," and "style," with both positive and negative preference directions. For P-Soups, we treat each preference pair as a test case and sample 4 additional pairs with matching dimension and direction to form $\mathcal{E}$. Table 1 summarizes the statistics.

Evaluation metrics. Due to the inherent difficulty of directly evaluating preference inference quality, we employ both offline and online metrics for indirect evaluation. (1) Offline evaluation: we measure ACC_gen and ACC_jud following Eq. 5 and 6, which assess preference-guided response generation and preference judging accuracy, respectively. We primarily focus on ACC_jud as it aligns with our training objective. (2) Online evaluation: we introduce the GPT-4 Win Rate,⁵ where GPT-4, conditioned on the ground-truth preferences (provided by the benchmarks), compares responses generated given preference descriptions from different models [34, 27].

Table 3: Online preference inference evaluation results (GPT-4 win rate, %, row model against column model) using Qwen2.5-7B-Instruct as the personalized response generation model. We randomly select 400 test cases per benchmark for evaluation. M1: Qwen2.5-7B-Instruct; M2: DS-R1-Distill-Qwen-7B; M3: AlignXplore-7B.

| AlignX_test | M1 | M2 | M3 |
|---|---|---|---|
| M1 | - | 43.00 | 37.00 |
| M2 | 57.00 | - | 43.00 |
| M3 | 63.00 | 57.00 | - |

| P-Soups | M1 | M2 | M3 |
|---|---|---|---|
| M1 | - | 51.33 | 42.33 |
| M2 | 48.67 | - | 46.67 |
| M3 | 57.67 | 53.33 | - |

Baselines. We compare our approach with three groups of baselines: (1) Direct preference descriptions: Null (no description), E (raw behavioral signals), and Golden Preference (ground-truth descriptions from the benchmarks⁶). (2) Specialized methods: LMInductReason [50] for inductive reasoning, VPL [49] for preference modeling, and PBA [38] for structured preference prediction.
(3) State-of-the-art LLMs: small models (Qwen2.5-7B-Instruct [61], DS-R1-Distill-Qwen-7B [12]) and large models (QwQ-32B [62], Qwen3-32B [71], GPT-4 [1], DeepSeek-R1-671B [12]). We also evaluate ablated versions of our model (w/o RL and w/o Cold-start) to verify the effectiveness of each training stage. See Appendix A.2 for baseline implementation details.

4.2 Main results

Offline evaluation. Table 2 presents the offline preference inference evaluation results, from which we draw five key findings. (1) Necessity of preference inference: direct utilization of behavioral signals (E) performs similarly to the "Null" setting and substantially worse than the golden preference, validating the necessity of preference inference. (2) Limitations of previous methods: LMInductReason and VPL show poor performance, suggesting the inadequacy of prompting- and latent-variable-based approaches. While PBA performs better through predefined preference modeling, its significant performance drop on P-Soups reveals limited generalization capability. (3) Superiority of extended reasoning: models with extended reasoning consistently outperform their concise counterparts, as shown by Qwen3-32B thinking vs. non-thinking (65.03% vs. 57.60%) and DeepSeek-R1-671B vs. GPT-4 (70.47% vs. 66.10%). (4) Effectiveness of AlignXplore: our model outperforms comparable-sized baselines on both in-domain and out-of-domain tasks, while achieving competitive performance with larger models like Qwen3-32B and GPT-4, even surpassing the golden preference on AlignX_test. (5) Dominant impact of RL: while both stages contribute to performance, removing RL causes more significant degradation than removing cold-start training, indicating RL's critical role in
optimizing preference alignment.

Online evaluation. Using GPT-4 as a judge for pairwise comparison of personalized response generation conditioned on the generated preference descriptions, Table 3 shows that AlignXplore-7B achieves competitive win rates against baselines in both in-domain and out-of-domain scenarios, further validating its effectiveness in preference inference.

4.3 Generalization ability assessment

⁵We use OpenAI's API "gpt-4-turbo-2024-04-09" for all our subsequent experiments.
⁶Note that golden preference descriptions, while semantically accurate, may not necessarily lead to optimal downstream personalization performance due to potential gaps in model compatibility.

Table 4: Generalization and robustness evaluation (ACC_jud, %). Generalization: (1) input-format generalization: inferring preferences from user-generated content (UGC), shown in the "AlignX_test w/ UGC" column; (2) cross-model transferability: personalizing different preference judging models (columns) using generated preference descriptions (rows) on the original AlignX_test benchmark. Robustness: evaluating model performance when preference directions are reversed in both behavioral signals and test pairs. The values in parentheses in the last two columns indicate performance changes compared to the original results in Table 2. Takeaway: AlignXplore-7B shows strong generalization abilities in both aspects while maintaining robust performance under preference reversal, suggesting it captures fundamental preference patterns rather than learning fixed biases.

| Method | Extended Reasoning | AlignX_test w/ UGC | R_jud: Qwen2.5-7B-Instruct | R_jud: QwQ-32B | R_jud: DeepSeek-R1-671B | AlignX_test (Reverse) | P-Soups (Reverse) |
|---|---|---|---|---|---|---|---|
| E | N/A | 52.17 | 50.33 | 49.03 | 50.12 | 48.67 (−1.7) | 36.57 (−1.6) |
| Golden Preference | N/A | 69.87 | 64.63 | 74.30 | 78.97 | 61.83 (−2.8) | 67.42 (−13.8) |
| Qwen2.5-7B-Instruct | ✗ | 57.57 | 56.33 | 56.90 | 58.15 | 47.27 (−9.1) | 68.33 (+9.0) |
| DS-R1-Distill-Qwen-7B | ✓ | 58.30 | 57.63 | 58.70 | 59.61 | 53.40 (−4.2) | 67.83 (+16.6) |
| DeepSeek-R1-671B | ✓ | 61.97 | 70.47 | 73.73 | 74.00 | 61.53 (−8.9) | 73.33 (+2.9) |
| AlignXplore-7B | ✓ | 61.97 | 65.33 | 68.53 | 67.59 | 62.13 (−3.2) | 71.27 (+8.6) |

Table 5: Comparison of different reward functions for training preference inference models. R_jud (used in all previous experiments) and R_gen denote rewards from preference judging and response generation, respectively. Takeaway: R_jud leads to better overall performance, improving both ACC_jud and ACC_gen despite optimizing only for judging accuracy.

| Method | Extended Reasoning | AlignX_test ACC_jud | AlignX_test ACC_gen | P-Soups ACC_jud | P-Soups ACC_gen |
|---|---|---|---|---|---|
| E | N/A | 50.33 | 48.13 | 38.12 | 69.49 |
| Golden Preference | N/A | 64.63 | 53.03 | 81.20 | 86.92 |
| Qwen2.5-7B-Instruct | ✗ | 56.33 | 48.53 | 59.33 | 72.22 |
| DS-R1-Distill-Qwen-7B | ✓ | 57.63 | 48.60 | 51.22 | 69.87 |
| DeepSeek-R1-671B | ✓ | 70.47 | 50.65 | 70.44 | 80.42 |
| AlignXplore-7B (R_jud) | ✓ | 65.33 | 49.30 | 62.61 | 78.98 |
| AlignXplore-7B (R_gen) | ✓ | 61.67 | 49.40 | 56.94 | 71.82 |

We evaluate our model's generalization abilities from both input and output perspectives, as shown in Table 4. (1) Input-form generalization: we evaluate models by replacing preference pairs with user-generated content (UGC) in the input signals, reflecting real-world scenarios where preferences must be inferred from diverse sources like reviews and social media posts. Our AlignXplore-7B exhibits strong generalization to different input formats, achieving 61.97% accuracy and significantly outperforming baseline models. (2) Cross-model generalization: we investigate the transferability of generated preference descriptions for personalizing different preference judging models, which is essential for broader adoption of preference-guided personalization systems. Our AlignXplore-7B demonstrates robust cross-model generalization, consistently outperforming baseline models of comparable size. We attribute this superior
https://arxiv.org/abs/2505.18071v1
transferability to our extended reasoning mechanism, which encourages learning fundamental, model-agnostic preference patterns rather than surface-level correlations, yielding descriptions that generalize across different downstream models.

4.4 Robustness assessment

A key challenge for preference inference systems is maintaining consistent performance when user preferences differ significantly from training patterns. We evaluate this robustness through preference reversal [38], reversing all preference directions in both the behavioral signals and the test pairs (e.g., changing "y_w ≻ y_l" to "y_w ≺ y_l"). This tests whether the model truly learns to infer preferences rather than merely capturing fixed biases. As shown in Table 4, AlignXplore-7B demonstrates strong robustness with relatively small performance changes, notably outperforming both comparable-sized baselines and golden preferences. Even compared to DeepSeek-R1-671B, our model achieves competitive results, suggesting it learns to identify and adapt to preference patterns flexibly rather than relying on dataset biases.

4.5 Further analysis

Our further analysis focuses on two aspects: (1) comparing different reward functions (Finding 1), and (2) examining how two-stage training progressively enhances preference description quality (Finding 2).

[Figure 3: Word clouds of generated preference descriptions from model variants on ALIGNX_test. Panels: DS-R1-Distill-Qwen-7B; AlignXplore-7B (R_jud) w/o RL; AlignXplore-7B (R_jud) w/o Cold-start; AlignXplore-7B (R_jud). Terms in bounding boxes represent frequently occurring words characterizing each model's generation patterns. Takeaway: the evolution demonstrates how cold-start training helps identify preference dimensions, while RL learns to determine preference directions and aggregate signals across examples into actionable guidance, mirroring human inductive reasoning.]
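The preference-reversal protocol of §4.4 is a purely mechanical transformation of the evaluation data. A minimal sketch, assuming a simple chosen/rejected pair representation (the `PreferencePair` container and function names are illustrative, not from our released code):

```python
# Sketch of the preference-reversal robustness check (Sec. 4.4): swap the
# chosen/rejected responses in BOTH the behavioral signals and the test pair,
# turning every y_w > y_l into y_w < y_l. All names here are illustrative.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class PreferencePair:
    prompt: str
    chosen: str    # y_w, the preferred response
    rejected: str  # y_l, the dispreferred response

def flip(p: PreferencePair) -> PreferencePair:
    # The previously rejected response becomes the chosen one, and vice versa.
    return PreferencePair(p.prompt, p.rejected, p.chosen)

def reverse_example(history: List[PreferencePair],
                    test_pair: PreferencePair
                    ) -> Tuple[List[PreferencePair], PreferencePair]:
    """Reverse all preference directions in one evaluation example."""
    return [flip(p) for p in history], flip(test_pair)
```

A model that merely memorized dataset-level biases would keep predicting the original direction on reversed examples; a model that actually reads the behavioral signals should adapt its prediction accordingly.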
Additional analyses of RL training dynamics and detailed case studies are provided in Appendix A.3 and A.4, respectively.

Finding 1: Optimizing for preference judging accuracy outperforms response generation rewards. We investigate how different reward sources affect model performance by comparing R_jud and R_gen, as shown in Table 5. R_jud achieves better performance across most metrics, including even response generation (ACC_gen), suggesting that accurate preference inference naturally facilitates better personalized generation.

[Figure 2: RL training curves of AlignXplore-7B with different reward functions. Panels: AlignXplore-7B (R_jud) w/o Cold-start; AlignXplore-7B (R_jud); AlignXplore-7B (R_gen); each plots reward against training steps. Takeaway: R_jud provides more stable and effective training signals, showing consistent improvement over time, while R_gen exhibits high variance and limited improvement.]

We attribute this superiority to more informative training signals. As shown in Figure 2, R_jud demonstrates steady improvement even without cold-start training, while R_gen fluctuates around the random level (0.5). The ineffectiveness of R_gen stems from two factors: (1) confounding factors in response probability estimation (e.g., language fluency, response length), and (2) inherently noisy reward computation from offline responses. While online examples might help, they would require prohibitively expensive real-time user feedback. In contrast, R_jud provides direct feedback about preference understanding, enabling stable training even from random initialization.

Finding 2: Cold-start and RL training progressively enhance preference description quality. Figure 3 shows word clouds of generated preference descriptions. The backbone model generates mainly general descriptions (e.g., "historical," "situation").
Cold-start training enables identification of specific preference dimensions (e.g., "communication style," "age group") but shows limited synthesis capability. RL alone also yields limited improvement, with manual inspection
revealing a focus on generic dimensions (e.g., "helpfulness"). Combining both stages leads to more actionable guidance with diverse preference dimensions and concrete actions (e.g., "avoid," "prioritize," "leans toward"). This evolution mirrors human inductive reasoning [22, 15], evidenced by the increasing use of synthesis phrases such as "putting together" during training. Without explicit supervision, our framework naturally encourages this iterative refinement process, progressively moving from general observations to specific, actionable preference hypotheses.

5 Conclusion

This work presents the first systematic investigation of extended inductive reasoning in LLMs through the lens of personalized preference inference. Our proposed model, AlignXplore, demonstrates that extended reasoning can effectively bridge the gap between implicit behavioral signals and explicit preferences. Through comprehensive experiments, we show that AlignXplore not only achieves substantial improvements in preference inference accuracy but also exhibits strong generalization ability and robustness. The success of our two-stage training strategy provides valuable insights into developing LLMs' inductive reasoning capabilities, suggesting that combining synthetic demonstrations with reinforcement learning can effectively guide models to learn generalizable reasoning patterns rather than superficial correlations. Our findings also point to promising directions for future research, such as extending our approach from preference inference to other inductive reasoning tasks, including scientific hypothesis generation and pattern discovery in unstructured data.

References

[1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
[2] Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, et al. A general language assistant as a laboratory for alignment. arXiv preprint arXiv:2112.00861, 2021.

[3] Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022.

[4] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020.

[5] Murali Chandrashekaran, Beth A Walker, James C Ward, and Peter H Reingen. Modeling individual preference evolution and choice in a dynamic group setting. Journal of Marketing Research, 33(2):211–223, 1996.

[6] Daiwei Chen, Yi Chen, Aniket Rege, Zhi Wang, and Ramya Korlakai Vinayak. PAL: Sample-efficient personalized reward modeling for pluralistic alignment. In The Thirteenth International Conference on Learning Representations, 2025.

[7] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa,
Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code, 2021.

[8] Qiguang Chen, Libo Qin, Jinhao Liu, Dengyun Peng, Jiannan Guan, Peng Wang, Mengkang Hu, Yuhang Zhou, Te Gao, and Wanxiang Che. Towards reasoning era: A survey of long chain-of-thought for reasoning large language models. arXiv preprint arXiv:2503.09567, 2025.

[9] Xiusi Chen, Gaotang Li, Ziqi Wang, Bowen Jin, Cheng Qian, Yu Wang, Hongru Wang, Yu Zhang, Denghui Zhang, Tong Zhang, Hanghang Tong, and Heng Ji. RM-R1: Reward modeling as reasoning, 2025.

[10] François Chollet. On the measure of intelligence. arXiv preprint arXiv:1911.01547, 2019.

[11] Tri Dao. FlashAttention-2: Faster attention with better parallelism and work partitioning. arXiv preprint arXiv:2307.08691, 2023.

[12] DeepSeek-AI. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning, 2025.

[13] Angela Fan, Mike Lewis, and Yann Dauphin. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889–898, 2018.

[14] Roberta A Ferrara, Ann L Brown, and Joseph C Campione. Children's learning and transfer of inductive reasoning rules: Studies of proximal development. Child Development, pages 1087–1099, 1986.

[15] Jan-Philipp Fränken, Nikos C. Theodoropoulos, and Neil R. Bramley. Algorithms of adaptation in inductive inference. Cognitive Psychology, 137:101506, 2022.

[16] Kanishk Gandhi, Ayush Chakravarthy, Anikait Singh, Nathan Lile, and Noah D Goodman. Cognitive behaviors that enable self-improving reasoners, or, four habits of highly effective STaRs. arXiv preprint arXiv:2503.01307, 2025.

[17] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets.
In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.

[18] Jian Guan, Junfei Wu, Jia-Nan Li, Chuanqi Cheng, and Wei Wu. A survey on personalized alignment – the missing piece for large language models in real-world applications, 2025.

[19] Jian Guan, Wei Wu, Zujie Wen, Peng Xu, Hongning Wang, and Minlie Huang. AMOR: A recipe for building adaptable modular knowledge agents through process feedback. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.

[20] Shibo Hao, Sainbayar Sukhbaatar, DiJia Su, Xian Li, Zhiting Hu, Jason Weston, and Yuandong Tian. Training large language models to reason in a continuous latent space, 2024.

[21] Brett K Hayes, Evan Heit, and Haruka Swendsen. Inductive reasoning. Wiley Interdisciplinary Reviews: Cognitive Science, 1(2):278–292, 2010.

[22] Evan Heit. Properties of inductive reasoning. Psychonomic Bulletin & Review, 7:569–592, 2000.

[23] John H Holland. Induction: Processes of Inference, Learning, and Discovery. MIT Press, 1986.

[24] Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration. In International Conference on Learning Representations, 2020.

[25] Jingcheng Hu, Yinmin Zhang, Qi Han, Daxin Jiang, Xiangyu Zhang, and Heung-Yeung Shum. Open-Reasoner-Zero: An open source approach to scaling up reinforcement learning on the base model, 2025.

[26] Naman Jain, King Han, Alex Gu, Wen-Ding Li, Fanjia Yan, Tianjun Zhang, Sida Wang, Armando Solar-Lezama, Koushik Sen, and Ion Stoica. LiveCodeBench: Holistic and contamination free evaluation of large language models for code. In The Thirteenth International Conference on Learning Representations.

[27] Joel Jang,
Seungone Kim, Bill Yuchen Lin, Yizhong Wang, Jack Hessel, Luke Zettlemoyer, Hannaneh Hajishirzi, Yejin Choi, and Prithviraj Ammanabrolu. Personalized soups: Personalized large language model alignment via post-hoc parameter merging. arXiv preprint arXiv:2310.11564, 2023.

[28] Ehud Kalai and Meir Smorodinsky. Other solutions to Nash's bargaining problem. Econometrica: Journal of the Econometric Society, pages 513–518, 1975.

[29] Wang-Cheng Kang, Jianmo Ni, Nikhil Mehta, Maheswaran Sathiamoorthy, Lichan Hong, Ed Chi, and Derek Zhiyuan Cheng. Do LLMs understand user preferences? Evaluating LLMs on user rating prediction. arXiv preprint arXiv:2305.06474, 2023.

[30] Diederik P Kingma. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

[31] Kinshuk, Taiyu Lin, and Paul McNab. Cognitive trait modelling: The case of inductive reasoning ability. Innovations in Education and Teaching International, 43(2):151–161, 2006.

[32] Hannah Rose Kirk, Andrew Michael Bean, Bertie Vidgen, Paul Rottger, and Scott A. Hale. The past, present and better future of feedback learning in large language models for subjective human preferences and values. In The 2023 Conference on Empirical Methods in Natural Language Processing, 2023.

[33] Weirui Kuang, Bingchen Qian, Zitao Li, Daoyuan Chen, Dawei Gao, Xuchen Pan, Yuexiang Xie, Yaliang Li, Bolin Ding, and Jingren Zhou. FederatedScope-LLM: A comprehensive package for fine-tuning large language models in federated learning. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 5260–5271, 2024.

[34] Sachin Kumar, Chan Young Park, Yulia Tsvetkov, Noah A Smith, and Hannaneh Hajishirzi. ComPO: Community preferences for language model personalization. arXiv preprint arXiv:2410.16027, 2024.

[35] Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. Building machines that learn and think like people. Behavioral and Brain Sciences, 40:e253, 2017.
[36] Michael Laskin, Luyu Wang, Junhyuk Oh, Emilio Parisotto, Stephen Spencer, Richie Steigerwald, DJ Strouse, Steven Stenberg Hansen, Angelos Filos, Ethan Brooks, Maxime Gazeau, Himanshu Sahni, Satinder Singh, and Volodymyr Mnih. In-context reinforcement learning with algorithm distillation. In The Eleventh International Conference on Learning Representations, 2023.

[37] Seongyun Lee, Sue Hyun Park, Seungone Kim, and Minjoon Seo. Aligning to thousands of preferences via system message generalization. arXiv preprint arXiv:2405.17977, 2024.

[38] Jia-Nan Li, Jian Guan, Songhao Wu, Wei Wu, and Rui Yan. From 1,000,000 users to every user: Scaling up personalized preference for user-level alignment, 2025.

[39] Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. arXiv preprint arXiv:2305.20050, 2023.

[40] Yu Meng, Mengzhou Xia, and Danqi Chen. SimPO: Simple preference optimization with a reference-free reward. arXiv preprint arXiv:2405.14734, 2024.

[41] Kinga Morsanyi, Teresa McCormack, and Eileen O'Mahony. The link between deductive reasoning and mathematics. Thinking & Reasoning, 24(2):234–257, 2018.

[42] Arsenii Kirillovich Moskvichev, Victor Vikram Odouard, and Melanie Mitchell. The ConceptARC benchmark: Evaluating understanding and generalization in the ARC domain. Transactions on Machine Learning Research, 2023.

[43] Lin Ning, Luyang Liu, Jiaxing Wu, Neo Wu, Devora Berlowitz, Sushant Prakash, Bradley Green, Shawn O'Banion, and Jun Xie. User-LLM: Efficient LLM contextualization with user embeddings. arXiv preprint arXiv:2402.13598, 2024.

[44] Ninell Oldenburg and Tan Zhi-Xuan. Learning and sustaining shared normative systems via Bayesian rule induction in Markov games. In Proceedings
of the 23rd International Conference on Autonomous Agents and Multiagent Systems, pages 1510–1520, 2024.

[45] OpenAI. Introducing OpenAI o1-preview. https://openai.com/index/introducing-openai-o1-preview/, 2024.

[46] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.

[47] Zhuoshi Pan, Qianhui Wu, Huiqiang Jiang, Xufang Luo, Hao Cheng, Dongsheng Li, Yuqing Yang, Chin-Yew Lin, H. Vicky Zhao, Lili Qiu, and Jianfeng Gao. SeCom: On memory construction and retrieval for personalized conversational agents. In The Thirteenth International Conference on Learning Representations, 2025.

[48] Rolf Pfister and Hansueli Jud. Understanding and benchmarking artificial intelligence: OpenAI's o3 is not AGI, 2025.

[49] Sriyash Poddar, Yanming Wan, Hamish Ivison, Abhishek Gupta, and Natasha Jaques. Personalizing reinforcement learning from human feedback with variational preference learning. arXiv preprint arXiv:2408.10075, 2024.

[50] Linlu Qiu, Liwei Jiang, Ximing Lu, Melanie Sclar, Valentina Pyatkin, Chandra Bhagavatula, Bailin Wang, Yoon Kim, Yejin Choi, Nouha Dziri, and Xiang Ren. Phenomenal yet puzzling: Testing inductive reasoning capabilities of language models with hypothesis refinement. In The Twelfth International Conference on Learning Representations, 2024.

[51] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding with unsupervised learning. 2018.

[52] Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36, 2024.

[53] Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He.
ZeRO: Memory optimizations toward training trillion parameter models. In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1–16. IEEE, 2020.

[54] Alexandre Rame, Guillaume Couairon, Corentin Dancette, Jean-Baptiste Gaya, Mustafa Shukor, Laure Soulier, and Matthieu Cord. Rewarded soups: Towards Pareto-optimal alignment by interpolating weights fine-tuned on diverse rewards. In Thirty-seventh Conference on Neural Information Processing Systems, 2023.

[55] David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R Bowman. GPQA: A graduate-level Google-proof Q&A benchmark. In First Conference on Language Modeling, 2024.

[56] Haozhan Shen, Peng Liu, Jingcheng Li, Chunxin Fang, Yibo Ma, Jiajia Liao, Qiaoli Shen, Zilun Zhang, Kangjia Zhao, Qianqian Zhang, et al. VLM-R1: A stable and generalizable R1-style large vision-language model. arXiv preprint arXiv:2504.07615, 2025.

[57] Ruizhe Shi, Yifang Chen, Yushi Hu, Alisa Liu, Hannaneh Hajishirzi, Noah A Smith, and Simon S Du. Decoding-time language model alignment with multiple objectives. arXiv preprint arXiv:2406.18853, 2024.

[58] Anand Siththaranjan, Cassidy Laidlaw, and Dylan Hadfield-Menell. Distributional preference learning: Understanding and accounting for hidden context in RLHF. In The Twelfth International Conference on Learning Representations, 2024.

[59] Charlie Victor Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling LLM test-time compute optimally can be more effective than scaling parameters for reasoning. In The Thirteenth International Conference on Learning Representations, 2025.

[60] Zhaoxuan Tan, Qingkai Zeng, Yijun Tian, Zheyuan Liu, Bing Yin, and Meng Jiang. Democratizing large language models via personalized parameter-efficient fine-tuning. arXiv preprint arXiv:2402.04401, 2024.

[61] Qwen Team. Qwen2.5: A party of foundation models, September 2024.

[62] Qwen Team.
QwQ-32B: Embracing
the power of reinforcement learning, March 2025.

[63] A Tong. Exclusive: ChatGPT traffic slips again for third month in a row. Reuters, 2023.

[64] Evan Z Wang, Federico Cassano, Catherine Wu, Yunfeng Bai, William Song, Vaskar Nath, Ziwen Han, Sean M Hendryx, Summer Yue, and Hugh Zhang. Planning in natural language improves LLM search for code generation. In The First Workshop on System-2 Reasoning at Scale, NeurIPS'24, 2024.

[65] Ruocheng Wang, Eric Zelikman, Gabriel Poesia, Yewen Pu, Nick Haber, and Noah Goodman. Hypothesis search: Inductive reasoning with language models. In The Twelfth International Conference on Learning Representations, 2024.

[66] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.

[67] Jiaxin Wen, Jian Guan, Hongning Wang, Wei Wu, and Minlie Huang. CodePlan: Unlocking reasoning potential in large language models by scaling code-form planning. In The Thirteenth International Conference on Learning Representations, 2025.

[68] Shujin Wu, Yi R. Fung, Cheng Qian, Jeonghwan Kim, Dilek Hakkani-Tur, and Heng Ji. Aligning LLMs with individual preferences via interaction. In Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, and Steven Schockaert, editors, Proceedings of the 31st International Conference on Computational Linguistics, pages 7648–7662, Abu Dhabi, UAE, January 2025. Association for Computational Linguistics.

[69] Jing Xu, Arthur Szlam, and Jason Weston. Beyond goldfish memory: Long-term open-domain conversation. In Smaranda Muresan, Preslav Nakov, and Aline Villavicencio, editors, Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5180–5197, Dublin, Ireland, May 2022. Association for Computational Linguistics.
[70] Kai Yan, Zhan Ling, Kang Liu, Yifan Yang, Ting-Han Fan, Lingfeng Shen, Zhengyin Du, and Jiecao Chen. MIR-Bench: Benchmarking LLMs' long-context intelligence via many-shot in-context inductive reasoning. In Workshop on Reasoning and Planning for Large Language Models.

[71] An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, Chujie Zheng, Dayiheng Liu, Fan Zhou, Fei Huang, Feng Hu, Hao Ge, Haoran Wei, Huan Lin, Jialong Tang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jing Zhou, Jingren Zhou, Junyang Lin, Kai Dang, Keqin Bao, Kexin Yang, Le Yu, Lianghao Deng, Mei Li, Mingfeng Xue, Mingze Li, Pei Zhang, Peng Wang, Qin Zhu, Rui Men, Ruize Gao, Shixuan Liu, Shuang Luo, Tianhao Li, Tianyi Tang, Wenbiao Yin, Xingzhang Ren, Xinyu Wang, Xinyu Zhang, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yinger Zhang, Yu Wan, Yuqiong Liu, Zekun Wang, Zeyu Cui, Zhenru Zhang, Zhipeng Zhou, and Zihan Qiu. Qwen3 technical report, 2025.

[72] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of Thoughts: Deliberate problem solving with large language models. Advances in Neural Information Processing Systems, 36, 2024.

[73] Qiying Yu, Zheng Zhang, Ruofei Zhu, Yufeng Yuan, Xiaochen Zuo, Yu Yue, Tiantian Fan, Gaohong Liu, Lingjun Liu, Xin Liu, et al. DAPO: An open-source LLM reinforcement learning system at scale. arXiv preprint arXiv:2503.14476, 2025.

[74] Dan Zhang, Sining Zhoubian, Ziniu Hu, Yisong Yue, Yuxiao Dong, and Jie Tang. ReST-MCTS*: LLM self-training via process
reward guided tree search. Advances in Neural Information Processing Systems, 37:64735–64772, 2024.

[75] Gangyi Zhang. User-centric conversational recommendation: Adapting the need of user with large language models. In Proceedings of the 17th ACM Conference on Recommender Systems, RecSys '23, pages 1349–1354, New York, NY, USA, 2023. Association for Computing Machinery.

[76] Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. Personalizing dialogue agents: I have a dog, do you have pets too? arXiv preprint arXiv:1801.07243, 2018.

[77] Siyan Zhao, Mingyi Hong, Yang Liu, Devamanyu Hazarika, and Kaixiang Lin. Do LLMs recognize your preferences? Evaluating personalized preference following in LLMs. In The Thirteenth International Conference on Learning Representations, 2025.

[78] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. Advances in Neural Information Processing Systems, 36:46595–46623, 2023.

A Experiments

A.1 Implementation details

Our training and test sets are derived from ALIGNX, which proposes a 90-dimensional preference space (incorporating universal values, basic human needs, and prevalent interest tags). The dataset uses forum interactions and human-LLM interactions to construct 1.3 million examples, making it currently the largest and most comprehensive dataset for personalized alignment. However, preference signals in the original user interactions are relatively sparse, which previously hindered effective preference inference. To address this issue, we introduce a refined data construction approach. Specifically, we ensure that each target pair is associated with at least five preference dimensions, that all interaction history demonstrates consistent, non-neutral preference directions, and that there are no conflicting preferences across the other dimensions.
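The consistency constraint above can be sketched as a simple filter. The per-interaction dimension-to-direction encoding (+1 / −1 / 0 for prefer / disprefer / neutral) and all names below are assumptions for illustration, not the actual construction code:

```python
# Hedged sketch of the data-construction filter: keep an example only if its
# history covers at least five preference dimensions, every signal is
# non-neutral, and no dimension ever flips direction across the history.
from typing import Dict, List

MIN_DIMENSIONS = 5

def is_valid_example(history_signals: List[Dict[str, int]]) -> bool:
    """history_signals: one dict per interaction, mapping a preference
    dimension name to a direction: +1 (prefers), -1 (disprefers), 0 (neutral)."""
    directions: Dict[str, set] = {}
    for signal in history_signals:
        for dim, d in signal.items():
            if d == 0:  # a neutral signal disqualifies the whole example
                return False
            directions.setdefault(dim, set()).add(d)
    # every dimension must point in one direction across the whole history
    if any(len(ds) > 1 for ds in directions.values()):
        return False
    return len(directions) >= MIN_DIMENSIONS
```

Examples whose histories contain neutral signals, direction conflicts, or too few dimensions are discarded under this rule.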
We constructed 10,000 data entries containing only pairwise comparative feedback as interaction history, with 7,000 used for training and 3,000 for testing. Additionally, we constructed 3,000 entries containing only user-generated content as interaction history for generalization validation. Training is conducted on 8 NVIDIA A100 GPUs using the Adam optimizer [30], with DeepSpeed ZeRO-3 [53] and FlashAttention-2 [11] for optimization. We employ the following hyperparameter configuration: learning rate of 1e-6, 50 warmup steps, 4 training epochs, and maximum prompt/generation lengths of 8,192/2,048 tokens. During RL, we set the mini-batch size to 128 for each step.

A.2 Baseline details

We compare our approach with various baseline methods and models:

• Directly given preference descriptions: (1) Null: no preference description is provided; (2) E: using behavioral signals directly as preference descriptions without inference; and (3) Golden Preference: ground-truth preference descriptions provided by the benchmark. Note that golden preference descriptions, while semantically accurate, may not necessarily lead to optimal downstream personalization performance due to potential gaps in model compatibility.

• Previous specialized methods for inductive reasoning and personalization: (1) LMInductReason [50] employs iterative hypothesis refinement to enhance LLMs' inductive reasoning capabilities; (2) VPL [49] introduces latent variables to model individual preferences; and (3) PBA [38] maps behavioral examples to structured preference scores along predefined dimensions, then converts them to natural language descriptions.

• Preference descriptions generated by state-of-the-art LLMs: The LLMs range from small-sized models, including Qwen2.5-7B-Instruct [61] and DS-R1-Distill-Qwen-7B [12], to large-sized models, including QwQ-32B [62], Qwen3-32B [71], GPT-4 [1], and DeepSeek-R1-671B [12].
These models cover both concise reasoning and extended reasoning patterns. Furthermore, to verify the effectiveness of our approach, we also compare with AlignXplore-7B w/o RL and AlignXplore-7B w/o Cold-start, which use only cold-start training and only RL for preference inference, respectively.

For VPL [49], we train one epoch on Qwen2.5-7B-Instruct using D_rl. Note that this method employs its own specialized downstream model for preference-guided judgment. For the other baselines, we generate roles or preferences using the corresponding models and input them into Qwen2.5-7B-Instruct for evaluation. LMInductReason [50] follows the original paper's implementation, with content generation replaced by Qwen2.5-7B-Instruct. After iteratively generating rules, the final rule is provided to Qwen2.5-7B-Instruct to generate preference selections. PBA [38] uses the method from the original paper to extract consistent preferences from the interaction history of each benchmark.

A.3 Length evolution

We present the changes in generation length during reinforcement learning for AlignXplore-7B (R_jud) and AlignXplore-7B (R_gen) in Figure 4. As training progresses, the model's average generation length continuously decreases. Our analysis suggests that after cold-start training, although the model is guided to analyze the appropriate preference dimensions, it tends to repetitively reproduce content from the behavioral signals, with low confidence in its analysis and many redundant, fluctuating dimensional interpretations. After reinforcement learning, the model's analysis direction becomes clearer: when interpreting behavioral signals, the model mentions only the key terms that reflect preferences, enabling it to quickly analyze and summarize user preferences. This aligns with the analysis presented in §4.5.
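The length statistics plotted in Figure 4 amount to averaging the token length of sampled generations at each RL step. A minimal sketch of that bookkeeping, assuming a simple (step, token_ids) rollout log format rather than our actual logging code:

```python
# Average token length of sampled generations at each RL training step,
# as plotted in Figure 4. The (step, token_ids) record format is assumed.
from collections import defaultdict
from statistics import mean
from typing import Dict, List, Tuple

def length_curve(rollouts: List[Tuple[int, List[int]]]) -> Dict[int, float]:
    """rollouts: (training_step, generated_token_ids) records."""
    by_step: Dict[int, List[int]] = defaultdict(list)
    for step, token_ids in rollouts:
        by_step[step].append(len(token_ids))
    # one mean length per step, in step order
    return {step: mean(lengths) for step, lengths in sorted(by_step.items())}
```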
A.4 Case study

[Figure 4: Curves of generation length for AlignXplore-7B with different reward functions during RL training. Panels: AlignXplore-7B (R_jud) and AlignXplore-7B (R_gen), each plotting generation length against training steps.]

DS-R1-Distill-Qwen-7B tends to be general and one-sided when analyzing preferences from behavioral signals, which may lead to the omission of important points during the analysis. After cold-start training, AlignXplore-7B w/o RL provides a more comprehensive and systematic analysis of the preference dimensions, but expressions indicating uncertainty, such as "?" and "Not clear yet," appear frequently, along with extensive repetition of content from the behavioral signals, such as "User describes facing harassment by a host due to his identity." After reinforcement learning, these are replaced by more confident statements and clearer analyses, indicating that RL significantly helps make inductive reasoning more precise and focused. For non-extended-reasoning models (e.g., Qwen2.5-7B-Instruct), the preference descriptions are provided directly. However, due to the lack of a reasoning process, some unreasonable preference descriptions emerge. During analysis, the model should focus on the user's responses, or the user's tendencies toward different responses, rather than on the content of the questions. However, many of the analyses provided by Qwen2.5-7B-Instruct are based on the content of the questions, such as "Interest in Personal Development and Self-Improvement."

Prompt for Case Study

A conversation between User and Assistant. The User asks a question, and the Assistant solves it. The Assistant first thinks about the reasoning process in the mind and then provides
the User with the answer. The reasoning process is enclosed within <think> </think> and the answer is enclosed within <answer> </answer> tags, respectively, i.e., <think> reasoning process here </think> <answer> answer here </answer>.

User: You must put your answer inside <answer> </answer> tags, i.e., <answer> answer here </answer>. This is the problem: Generate the user's preference based on their historical behavior.

**This person has chosen or rejected comments on some posts:**

1. Post: Sorry for format on mobile etc. My girlfriend[22] and I[22] decided to go away somewhat last minute. It's our first trip together. We're away in France, not far from Lille. We decided to get an apartment on Airbnb, it was inexpensive and so beautiful. It was perfect. Except, and this is the very concise version, we show up and the host looks surprised to see us. I speak fluent french and my girlfriend doesn't, and he doesn't speak any English so I held the conversation even though she handled all the booking and liaising. He kept on asking if my girlfriend was the one in the picture as he was expecting the male and female in the picture. Her picture was an old one of her and her friend. I tried explaining this to him and he acts as if I'm not understanding french, my own first language, properly. Turns out that my girlfriend had arranged for a bouquet for me which was waiting for me in the main room. He couldn't wrap his head around it. He was acting somewhat civilised until I saw the flowers and hugged my girlfriend. Then he pieced it together and started acting hostile. His tone changed and he started laying down impromptu ground rules like no liquids near the sofa and no shoes on the rug and don't trash the place, return it how you got it. All while his tone got more and more hostile. It's his apartment and he probably has a key and I just feel so unsafe.
I'll get over it once I process it I'm sure but I'm literally just starting medication for depression and anxiety and being at home is so comforting for me, which is why I was looking forward to getting in after travelling for what felt like eternity. It just fucking sucks that I can't go away with my significant other without feeling unsafe and vulnerable. And I feel so bad cus my girlfriend is trying so hard to make this the best trip and she feels bad that our host is a homophobe even tho that's literally nothing to do with her. Idk if this made sense I'm tired and sleep deprived but can't bring myself to sleep. Please send some kind gay words so I don't feel like the only gay in the world. Little pick me up please. *Chosen: I live 20mins from Lille! (In Belgium near the Belgian/French border) I've heard before that people near the border/north of France seem to be more
conservative. I'm sorry this happened to you. *Rejected: Un bon gros fdp en somme. 2. Post: So, I'm a 21yo bisexual (maybe pansexual? I'm completely open to it but haven't really met or talked to someone to know), living in the Chicago area. I realized and have been out as bisexual since I was 17. I've slept with women, and I hate to admit it because of the 5 I've been with, only 2 were legitimately into girls, the rest was just drunken young adult barsexual experimenting. This is already sounding great right? *sarcasm* Well, I've dated my fair share of men and quite honestly I'm bored. I have yet to find a girl that there is a mutual interest. 2 or 3 girls have liked me, but I honestly just wasn't into them in a romantic way, and that was all they wanted, no friendship. I really want to date a girl already! I have no clue how to flirt with girls, I can never tell when I'm being flirted with, or how to tell them I've been out as bisexual for almost 5 years, have never dated a girl but screwed a bunch. Doesn't sound too good on my end. I kind of have a crush on a bisexual friend of mine and would like to try for her, but I've tried and failed at flirting or catching her attention in more than a friendly way. I have no clue what to do unless I'm in bed with them. I have no idea how to flirt with girls. Send help. *Chosen: I wish I could offer some good advice, but I'm not so good at flirting myself. I wish you the bestest of luck. *Rejected: I flirt like a pro and I can be smoother than butter... But I have no idea how to articulate the HOW aspect of it. I really want to help, but I'm grasping for words here. *code-sloth ponders a moment, as a sloth does when stumped* Witty banter and humor are my main skills, which take a bit to learn for each gal's tastes. At the very least, be genuine and sincere. Whether you're a smooth talker or as mentally coordinated as a deer on ice, sincerity goes a long way to endearment. Thoughtful, sincere, genuine.
Start there, then you can develop your own style of flirting. Oh! And be aware of when to shut up or back off. That great one-liner she lined you up for? Don't always blurt it out. She's not reacting well to certain things (usually dirty lines)? Cool your jets and mellow out. It's a balancing act, and you'll pick up on it over time. Sometimes you won't even need to flirt actively. Talking about a mutual subject (I love video games, for example) can cause the "oh, I dig this chick" feeling that flirting does. I'm not above the whole "Can this controller fit between your boobs?" line
of discussion, but you don't have to be on your witty toes ALL the time. Would you date a good yet incompatible flirt or someone you shared a common interest with? Probably the latter. Regarding pickup lines: No. Don't use them in serious context. Jokingly yes, but don't play that card on the table first. Woah, that got a bit verbose. Sorry! 3. Post: As a 20 year old, it made me sad to see so many of you calling yourself old! Not that that's a bad thing. I don't think teenage/20s years are the peak of your life. I was having this conversation with my ex girlfriend (yeah...I know) the other day and she said this is a really shitty confusing time and IA. and besides I have so many health issues, I'm looking forward to having surgery and stabilising and having more of a grip on my life/mental stability in my 30's and 40's and all the years after that. Anyway this thread is kind of OT but to all the 30+ ladies on here, you're not old and even when you are you can still be a badass. I know 70 year olds that are kicking ass and when you think about it they were born in the 1940's, and healthcare sucked then. Saw a thread here asking for members over 30... *Chosen: Thanks, thats nice of you to say :) *Rejected: It's cool. I'm immortal anyway. Which reminds me, I have this sword fight I need to get to... *Queen Plays* 4. Post: My boss, after ghosting me for two weeks, fired me over text this morning. She was fine when I came out to her, but after discovering that I'd actually take medication to change my body, she's hated me. I worked with kids so she was always afraid I'd corrupt their minds. As far as I know, parents didn't care or know. They assumed me to be female and their kids assumed me to be male but otherwise it was all fine. I guess I'll stay with my other job and wait for them to start paying me (still a volunteer until the fall when a job opens up). At least I'm out there and everyone cool with it (behavioral health center).
Should I just stay closeted in the future? I guess I will until I pass and then I'd be stealth? Fired for "inability to keep up with changes" *Chosen: When are we going to get laws protecting LGBT people from getting fired? How come we didn't get it yet? I'm so sorry this happened to you. *Rejected: If your employer has not been documenting these "failures," then they are not in a good position legally. The EEOC (equal employment opportunity commission) is a federal agency that investigates discrimination due to gender identity. You can and should contest your firing; check them out online today. You have two weeks from firing to contest. Please do. It's possible that nothing will come of
it, but very often, they can find other workers who will substantiate your manager's motives. You will have done the right thing by putting it into their hands. Obviously, you have to assess your own emotional needs before taking this step. Quick action is also important for unemployment, which is the right of any worker released for no fault of their own. You will have to contest the firing with the unemployment office, too. Your manager is cheating you financially. Keep your head up. If you choose, this is new work, more important than any you have done before. Your task is to try to get justice for yourself and other trans people. Just to try. That is more than enough. It's great of you to volunteer at the behavioral health center! I hope your career there will be rewarding. Assistant: <think>

Preference Generated by DS-R1-Distill-Qwen-7B

Okay, so I need to figure out the user's preference based on their historical behavior. They've provided four examples where they either chose or rejected comments on posts. Let me analyze each one step by step. First Post: The user shared a personal anecdote about traveling with a partner and facing a negative host due to language and cultural barriers. The user felt unsafe and lost depression and anxiety. They received a chosen comment that was empathetic and a rejected one that was dismissive. From this, it seems like the user appreciates genuine support and understanding, especially in sensitive situations. Second Post: The user identified as a bis (or pan) and discussed dating experiences. They felt unsuccessful and frustrated. The chosen comment was a friendly acknowledgment, while the rejected one was more detailed but still empathetic. The user values honesty and being heard, especially in personal and vulnerable contexts. Third Post: The user shared about facing termination due to their gender identity, feeling misunderstood and unhelped.
The chosen comment was a compliment, and the rejected comment was more informative but still empathetic. The user seems to thrive on positive reinforcement and feels supported when others acknowledge their strengths. Fourth Post: Another situation where the user faced professional discrimination. They were fired for being closeted and chose a comment that expressed solidarity and disappointment, while the rejected comment provided practical steps for legal action. The user values solidarity and stands with others in similar situations but also seeks practical advice. Looking at these, the user seems to prefer comments that are empathetic, supportive, and acknowledge their struggles. They appreciate genuine concern and positive reinforcement. They also seem to seek practical advice when applicable, especially in situations where they feel discriminated or unsafe. However, they don’t seem to appreciate overly dismissive or critical comments, even if they’re well-meaning. So, the user’s preference is for comments that show understanding, support, and positivity. They value empathy and practical help when needed, but they don’t like being dismissed or told to "get over it." They likely seek validation, reassurance, and encouragement in their challenging situations. </think> The user’s preferences are evident in their analysis of past interactions, where they
favor comments that demonstrate empathy, support, and positivity. They appreciate genuine concern and acknowledge the user's struggles, seeking both practical advice and affirmation in their challenging situations. They dislike overly dismissive or critical comments, even if well-intentioned, and value solidarity and encouragement. Thus, the ideal comment for the user would be one that shows understanding, offers support, and provides positive reinforcement or practical advice when applicable.

Preference Generated by AlignXplore-7B w/o RL

Okay, let's start breaking down the information provided here. The task is to generate the user's preferences based on their historical interactions, specifically the comments they've chosen versus rejected across different social media posts. There are several factors mentioned related to psychological needs and communication styles mentioned earlier, such as openness, agreeableness, autonomy, approach to conflict resolution, etc., along with specific communication nuances. First, looking at each example provided: Starting with Post #1: User describes facing harassment by a host due to his identity. His main issue here seems emotional distress stemming from societal norms affecting his identity, particularly gender-related discrimination. He feels unsafe, uses medication for mental health issues, seeks support, and wants validation. He chose a response acknowledging location near a border area (which might indicate regional context?), noting potential conservatism elsewhere, expressing sympathy, and thanking him. Rejected a vague remark (un bon gros fdp en somme), suggesting the user prefers empathetic responses rather than dismissive ones.
So possible indicators include High Openness (to experience/emotional intelligence?), Low Agreeableness since he's upset, High Need for Autonomy: he wants control over his situation (e.g., staying home, dealing with anxiety independently?), Conflict Handling – confrontational or avoidance? Not clear yet. Moving onto Post #2: User identifies as bisexual (maybe pan), struggling with attraction despite previous experiences. Struggles finding authentic romantic partners beyond casual encounters. Wants advice but feels frustrated. Chooses a reply offering sympathy and encouragement ("bestest luck"), rejects a lengthy, verbose guide filled with witty tips. That suggests preferring brevity over detail-heavy solutions, maybe valuing conciseness vs thorough explanations. Possible clues: Low openness to complex strategies, higher need for autonomy in handling situations themselves versus relying on external methods. Also, rejects overly structured approaches preferring direct empathy. Third Interaction (Post #3): User celebrates aging positively, mentions achieving maturity, health challenges, and future goals. Chose a simple thank-you response instead of playful/humor-oriented replies referencing swords/fighting. Indicates seriousness regarding aging positively, dislikes sarcasm or flippant replies. Shows emphasis on sincerity/respectful communication avoiding trivialization. Emphasizes authenticity and substance over levity. Fourth Scenario (Boss situation): Emotionally impacted by losing job over gender identity concerns. Selected compassionate question about legislation protection plus sympathy ("how come?"), whereas the rejected comment gives legal/procedural guidance including steps to contest via agencies and unemployment offices. The user chooses emotional expression over procedural advice. Suggests prioritizes emotional support and communal solidarity over practical steps.
Prioritizes feeling validated and supported rather than facing bureaucracy head-on immediately. So here autonomy comes in – user prefers staying informed but seeks immediate comfort/support rather than taking proactive steps immediately. Maybe prefers empathetic validation
over practical action plans. Now compiling these observations: Looking at Personality Traits: •Big-Five Openness: Lower probably. Because they reject verbose/detailed answers in favor of shorter/more straightforward suggestions. Unless the rejection is specific to inappropriate content... Not sure though. Alternatively, openness relates to creativity/exploration, maybe limited here. Wait, but in Post #1, the rejection of "un bon gros fdp en somme" (which roughly translates to 'f*** off') shows aversion to crude language, preferring respectful tone. Maybe mid-openness? Hmm tricky. •Agreeableness: Likely higher than average given tendency to choose comforting/empathic remarks over confrontational or sarcastic ones. However, in fourth instance, the chosen comment asks 'how come' implying empathy but also acknowledges systemic issues (laws?), indicating balance between supportive and critical perspectives? Or perhaps moderate agreeableness. •Need For Autonomy: Seems strong, they prefer independent solutions, resisting dependence on others' input (as seen in second post rejecting elaborate tips). Though in fourth scenario, they sought help from authorities (law/companies?) but still chose emotional affirmation over procedural steps. Maybe balancing autonomy with seeking expert resources selectively. •Conflict Handling Approach: From first and fourth cases, the user reacts strongly emotionally (anxiety, anger), indicating avoidance of direct confrontation unless necessary. Preferred responses offer reassurance rather than addressing root causes directly. Thus prefer indirect/empathic resolutions. •Communication Style: From examples above, the user tends toward concise messages avoiding excessive details. Rejected options included lengthy guides filled with anecdotes/examples. Hence leans towards Concise/Prefer direct communication over elaboration. •Work Environment Preferences: Uncertain yet. Third post mentions volunteering positively.
Fourth talks about employment issues. Possibly values environments allowing flexibility and autonomy, avoiding rigid structures that led to firing. Prefers places where boundaries respected ethically (maybe corporate policies aligned with LGBTQ+ rights?). •Workplace Needs: Likely values transparent workplaces adhering to anti-discrimination laws, thus advocating for legal protections. Also wants safe spaces free from hostile environments (as per first story). •Achievement Needs: In third scenario celebrates maturity and long-term objectives (ageing positively), showing appreciation for progress and stability. Achievement motivation exists but framed positively (not competitive, but attaining inner peace/stability). •Autonomy Needs: Strong. Avoid relying on others' input (rejected tips in #2), prefers making own decisions guided by empathy (e.g., trusting their instincts in dating despite uncertainties). Want control over decisions, reduce dependency on external validation/suggestions. •Law Awareness: Fourth scenario acknowledges potential lack thereof affecting workplace fairness. Suggests awareness/careful consideration regarding legislation. May value informed decisions backed by legal insights but balance with emotional responses. Putting together all threads... Finalizing categories: Big Five: •Low/Openness?: Since rejects complex advice, leans toward simpler expressions. However, openness relates to imagination/intellect, not sure. Maybe neutral/mid-range? •Moderate-High Agreeableness: Chooses supportive tones, avoids hostility. Empathetic nature evident throughout choices. Needs: •Strong Autonomy: Prefers independent judgment, resists external influences (rejection of structured guides in dating, seeking validation instead of proactive measures). •Need for Achievement: Mixed signals, but focuses on overcoming anxieties via proactive actions (getting back mentally).
•Workplace Safety: Prioritizes environments respectful of identities, advocating policies preventing discrimination. •Conflict Management: Prefer mediating through dialogue/acknowledging issues indirectly rather than combative methods. Communication
Style: Concise, avoids unnecessary elaboration. Focus on clarity and sincerity. Law: Shows awareness of systemic factors impacting employment/freedom, willing to challenge injustices but seeking respectful dialogues. Thus synthesizing all points into coherent preferences... </think> <answer> Based on the analyzed behaviors, the user exhibits the following preferences derived from their interactions: 1. Personality Traits (Big Five): •Openness: Moderately low/high. Prefers pragmatic solutions over elaborate suggestions (rejects verbose advice but values empathy). •Agreeableness: High. Chooses supportive, compassionate language and prioritizes interpersonal harmony. 2. Core Psychological Needs: •Need for Autonomy: Very pronounced. Resists reliance on external advice/influences (rejects complex dating tips, favors personal intuition). Seeks self-determined pathways amidst stress/anxiety. •Workplace Safety: Strong advocacy for inclusive workplaces free from bias/discrimination. •Emotional Security Over Pragmatism: Values emotional validation and communal solidarity (choosing heartfelt condolences) over pragmatic steps (legislation guidance). 3. Social Interaction Styles: •Directness + Empathy: Communicates concisely yet sincerely, balancing honesty with sensitivity (chooses brief affirmation vs verbose explanations/rejections). •Avoidant Conflict Resolution: Tends to sidestep contentious topics (e.g., avoids discussing workplace discrimination head-on except when validating feelings). 4. Legal Awareness & Personal Achievement: •A conscious awareness of legal frameworks influencing life decisions (aware of EEOC protections but leans on communal solidarity). •Personal fulfillment tied to overcoming vulnerabilities (mental health improvement linked to traveling away for escape). 5. Work Environment Needs: •Thrives in transparent, equitable workplaces aligned with individual values (supports LGBTQ+ rights legislation advocacy).
</answer>

Preference Generated by AlignXplore-7B

Okay, let's start breaking down these interactions to figure out the user's preferences using the given criteria. First, looking at each post response: Starting with Post 1: The user chose a comment acknowledging the situation politically ("people near the border") and showing empathy. They rejected a shorter phrase ("un bon gros fdp"). This suggests the user values contextual analysis and empathy rather than brevity. So maybe they prefer deeper insights over concise replies. Moving to Post 2: Here, they picked a supportive reply emphasizing luck and sincerity without practical advice. Rejected a verbose explanation filled with tips. This points toward preferring brevity and warmth over extensive suggestions. Also, rejecting humor ("witty banter") implies less interest in overtly playful solutions unless sincere. In Post 3: Chose gratitude instead of sarcastic remarks about immortality. Shows appreciation for genuine thanks versus edgy content. Emphasizes gratitude and positivity. Lastly, Post 4: Opted for emotional support asking for societal protections vs professional guidance steps. Indicates valuing emotional validation first before actionable advice. Rejects detailed procedural guidance, focusing on feelings. Putting together, the user likely prioritizes warmth, empathy, conciseness, and avoids overly verbose/narrative-heavy responses. Their communication leans towards direct yet compassionate messages, seeking connection through shared experiences over complex explanations. They might value authenticity and emotional support above technical details or sarcasm. </think> <answer> User Preferences Analysis Based on Historical Behavior: Key Traits Identified Across Selected Responses: 1. Emotional Support Over Detailed Advice: - Prefers empathetic affirmations (e.g., "Thanks, that's nice of you" vs lengthy guides on legal processes).
2. Contextual Humor/Social Awareness: - Chooses politically relevant observations (border regions) over blunt humor ("un bon gros fdp"). 3. Direct Empathy Without
Overexplanation: - Selects compassionate statements focusing on connection ("be sincere"), rejects verbose explanations (flirting tips, "let me help"). 4. Privacy Focus: - Avoids overly personal/critical content, opting for platonic solidarity (volunteering at health centers, "stealth" strategy). Personality Traits Alignment: - Low openness to abstract concepts (preferring straightforward empathy). - High agreeableness (prioritizing harmonious interactions). - Strong desire for authentic communication, disliking pretentious language or performative politeness. </answer>

Preference Generated by Qwen2.5-7B-Instruct

Based on the user's historical behavior on the platform, the following preferences and tendencies can be inferred: 1. Support for LGBTQ+ Community: •The user frequently engages with posts and comments related to the LGBTQ+ community, especially regarding trans issues, relationships, and personal experiences. •They often choose to offer comfort, support, and validation to others going through similar struggles, showing empathy and a supportive nature. 2. Interest in Personal Development and Self-Improvement: •The user seeks advice and guidance on personal issues such as dealing with ex-partners, coming out, and navigating family and social relationships. •They are open to receiving and giving advice, showing a willingness to engage in meaningful conversations that can help others. 3. Emotional Support: •The user often chooses comments that provide emotional support, validation, and encouragement. They appreciate kind words and genuine responses. •They tend to avoid confrontational or dismissive comments, preferring to offer understanding and empathy. 4. Sensitivity to Pronouns and Gender Issues: •The user demonstrates sensitivity to pronoun usage and gender issues, often choosing comments that reflect understanding and support for trans individuals. •They are likely to be aware of and respectful of different gender identities and pronouns.
5. Struggle with Intimacy and Relationships: •The user frequently mentions difficulties with intimacy, relationships, and sexual experiences, indicating a need for support and guidance in these areas. •They seem to be seeking validation and advice on how to navigate relationships, both romantic and platonic. 6. Interest in Sexual Health and Consent: •The user shows an interest in sexual health and consent, discussing sensitive topics such as sexual experiences and boundaries. •They value explicit and honest communication in relationships, showing a preference for clear and respectful interactions. Based on these preferences and tendencies, it can be inferred that the user values emotional support, empathy, and genuine advice in their interactions. They are likely to appreciate content that is respectful, understanding, and supportive of their struggles, especially in relation to personal relationships, gender issues, and sexual health.

B Data format and prompt

Evaluation Prompt

Determine which response the user prefers based on the user's preferences. Please output your selection below in a json format by filling in the placeholders in []: {"selection": "[Response A / Response B]"}

Prompt

It is just recently that I realized I have no idea what the job market for chemists looks like, although I am a chemistry student. Could someone shed some light on that? Also, which of those bring in the most money? Chemistry jobs?

Preference

This person's persona is: This person likes science, knowledge. An aspiring chemistry professional who values optimism and factual information, preferring to gather insights from collective experiences rather than individual anecdotes. This person
is proactive in seeking career guidance and appreciates straightforward, encouraging responses.

Response A

I just had a job interview with a private analytical company a week ago. They were small and privately owned. You could have considered it an entry level position. They weren't expecting anyone to be FULLY trained in their analytical methods, which were not very common. I found out after I DIDN'T get the job that I had to compete with two Ph.D.s and a ton of other people. I'm fresh out of undergrad. Twas my first job interview and I am glad I had the experience of getting that for the next few job interview experiences I have possible coming up, but wow, I was getting screened for that job while two other doctorate fellows were also totally into it.

Response B

I think the people in this thread are a bit pessimistic. Someone did a salary thread a few weeks ago and it didn't look bad at all. One thing a prof mentioned to me is that companies pretty much will not hire someone without lab experience as other people have mentioned.

{"selection": "Response B"}

Prompt
{prompt}

Preference
{persona}

Response A
{responseA}

Response B
{responseB}

Prompt for Generating Reasoning Chains and Preference Descriptions

Generate the user's preference based on their historical behavior. The following aspects can be referred to when analyzing user preferences. {key preference dimensions} This person has chosen or rejected comments on some posts: {implicit preference signals}

C Limitations

Due to the lack of a real LLM-user interaction test platform, we were unable to validate the model's reasoning performance in a real-world environment. Once such a testbed becomes available, we will evaluate our model's performance on it. This paper primarily focuses on the scenario of preference inference, ensuring that the historical preferences in the test set are consistent with the test pairs.
Future work could extend to scenarios where user preferences change dynamically over time, requiring the model to adjust preferences based on the user's recent behaviors during inference.

D Impact statement

This work enhances the preference inference capability of models, enabling them to better serve human users by understanding and responding to their individual preferences. However, it may involve potential risks related to user privacy and bias. By inferring personalized preferences, there is a possibility of inadvertently amplifying existing biases in the data or misinterpreting user intent. To mitigate these risks, we ensure that our approach incorporates robust fairness and transparency measures. We also prioritize user consent and implement mechanisms to ensure that user data is anonymized and securely handled. Furthermore, we encourage ongoing monitoring of the model's performance in real-world scenarios to identify and address any unintended consequences, thus ensuring that the model's deployment remains ethical and aligned with user interests.
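As a concrete illustration of the Appendix B data format, the evaluation template can be assembled and the judge's JSON verdict parsed programmatically. This is a minimal sketch, not code from the paper: the field names (`prompt`, `persona`, `responseA`, `responseB`) simply mirror the placeholders shown above, and the parser assumes the model emits the requested `{"selection": ...}` format.

```python
import re

# Template mirrors the Appendix B placeholders; doubled braces escape the
# literal JSON example in the instruction line.
EVAL_TEMPLATE = (
    "Determine which response the user prefers based on the user's preferences. "
    "Please output your selection below in a json format by filling in the "
    'placeholders in []: {{"selection": "[Response A / Response B]"}}\n\n'
    "Prompt\n{prompt}\n\n"
    "Preference\n{persona}\n\n"
    "Response A\n{responseA}\n\n"
    "Response B\n{responseB}\n"
)

def build_eval_prompt(prompt: str, persona: str, response_a: str, response_b: str) -> str:
    """Fill the evaluation template for one test pair."""
    return EVAL_TEMPLATE.format(
        prompt=prompt, persona=persona, responseA=response_a, responseB=response_b
    )

def parse_selection(model_output: str):
    """Extract the judged winner from the model's JSON verdict: 'A', 'B', or None."""
    m = re.search(r'\{\s*"selection"\s*:\s*"Response\s+([AB])"\s*\}', model_output)
    return m.group(1) if m else None
```

For example, `parse_selection('{"selection": "Response B"}')` returns `"B"`, matching the worked example above.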
arXiv:2505.18079v2 [cs.CV] 28 May 2025

Deep Video Discovery: Agentic Search with Tool Use for Long-form Video Understanding

Xiaoyi Zhang∗1, Zhaoyang Jia∗2†, Zongyu Guo1, Jiahao Li1, Bin Li1, Houqiang Li2, Yan Lu1
1Microsoft Research Asia  2University of Science and Technology of China
{xiaoyizhang, zongyuguo, jiahaoli, binli, yanlu}@microsoft.com
{jzy_ustc, lihq}@ustc.edu.cn

Abstract

Long-form video understanding presents significant challenges due to extensive temporal-spatial complexity and the difficulty of question answering under such extended contexts. While Large Language Models (LLMs) have demonstrated considerable advancements in video analysis capabilities and long context handling, they continue to exhibit limitations when processing information-dense hour-long videos. To overcome such limitations, we propose the Deep Video Discovery (DVD) agent to leverage an agentic search strategy over segmented video clips. Different from previous video agents that manually design a rigid workflow, our approach emphasizes the autonomous nature of agents. By providing a set of search-centric tools on a multi-granular video database, our DVD agent leverages the advanced reasoning capability of the LLM to plan on its current observation state, and strategically selects tools with appropriate parameters for actions in light of the gathered information. We perform comprehensive evaluation on multiple long video understanding benchmarks that demonstrates the advantage of the entire system design. Our DVD agent achieves state-of-the-art performance on the challenging LVBench dataset, reaching an accuracy of 74.2%, which substantially surpasses all prior works, and further improves to 76.0% with transcripts. The code will be released later as an MCP service.
Figure 1: Left: Illustration of our Deep Video Discovery agent, which autonomously reasons on the user query and iteratively uses tools (e.g., ClipSearch, GlobalBrowse) over a video database to obtain the final answer. Right: Performance comparison on LVBench. [figure omitted]

∗Equal contribution. ∗∗Change Log is provided at the end of the main text. †This work was done during the internship at Microsoft Research Asia as an open-source project.

Preprint. Under review.

1 Introduction

Long-form videos are ubiquitous in everyday life, spanning diverse domains such as movies, meeting recordings, sports games, and variety shows. Accurately comprehending and interpreting content within these extensive videos remains an intrinsically challenging task [8, 26, 31], demanding an ability to simultaneously integrate and reason about intricate spatiotemporal details across broad global contexts. Effective retrieval of relevant information from hour-long or even longer sequences not only necessitates attending to fine-grained local details but also simultaneously interpreting subtle semantic relations distributed throughout extended temporal intervals. Recent advancements in Large Language Models (LLMs) and Large Vision-language Models (VLMs) have notably improved capabilities in video understanding [17, 4, 28] and increased context length handling to more than one million tokens [17, 25, 33]. However, even this extended context length remains insufficient for comprehending the information density typically found in long-form videos of hour-long duration.
Empirical observations [17] also suggest a decline in the model's effective instruction-following ability and reasoning clarity as the temporal dimension and information density increase. Concurrently, recent breakthroughs [11,18] in the reasoning capabilities of LLMs have facilitated
https://arxiv.org/abs/2505.18079v2
advances in agentic systems capable of complex information gathering tasks, such as Deep Research [16,10,20] or Deep Search [2,3]. These agentic approaches demonstrate how decomposing difficult tasks into modular sub-tasks enables iterative reasoning, information searching, and content synthesis. Inspired by these successes, we view the problem of understanding extremely long videos as analogous to a multi-step complex search task, where the video is segmented into multiple shorter clips serving as manageable units of information; we name this approach Deep Video Discovery (Fig. 1, left). While existing video agent frameworks [34,7,19,30] incorporate search processes in their designs, they manually design the search process with human priors. For instance, both VideoTree [30] and VCA [34] employ tree-based search strategies that navigate from root nodes to leaf nodes. This approach alleviates the context length limitations of LMMs but is inefficient for fine-grained queries, since traversing the tree from root to leaf is costly; such queries might benefit more from direct retrieval among leaf nodes. Additionally, semantically relevant entities may not exhibit temporal proximity, potentially diminishing the efficiency of the backtracking mechanism in tree-based search methods. In contrast to existing systems that typically rely on manually defined, rigid workflows, our approach is distinctly designed around an autonomous and flexible agentic search paradigm. Instead of explicitly prescribing task workflows or search behaviors, we develop modular search tools that operate at multiple granularities: (1) Global Browse, (2) Clip Search, and (3) Frame Inspect. Global Browse enables global summarization and indexing of subjects and global contexts across the entire video. Clip Search implements efficient semantic retrieval of relevant events within segmented clips.
Finally, Frame Inspect empowers the agent to extract fine-grained details directly from pixel-level information in a specified temporal range.

Provided with this search-centric toolkit and the multi-granular video database, our agent is inherently capable of autonomous reasoning, dynamic strategy formation, and iterative decision-making to proactively discover and extract crucial evidence. By leveraging the sophisticated reasoning capabilities intrinsic to the latest LLMs, our agent does not merely use these tools independently, but adaptively combines their complementary strengths into a chain of thoughts and tool uses, effectively addressing diverse temporal-spatial and complex questions on long videos. In the end, Deep Video Discovery can autonomously reason, plan, and retrieve pertinent information for video understanding. We conduct comprehensive evaluations on long video benchmarks, demonstrating the efficiency and strong performance of our agent. In particular, on the challenging LVBench, we push forward the state-of-the-art performance by a large margin to 74.2% (as shown in Fig. 1, right), further achieving 76.0% with auxiliary transcripts. We also conduct a series of ablation studies that show the effectiveness of our tool design. In addition, we analyze the behavior patterns of different reasoning models in tool use sequences, providing insight for the development of future agents for long video understanding tasks.

2 Related Work

Long Video Understanding. Long video understanding remains a formidable challenge due to the intricate demands of temporal and spatial reasoning over extended durations and the complexity of information retrieval [31,26]. Recent efforts in VLM for long
video understanding primarily tackle the challenge of a limited number of input frames by extending the context length of models [25,6] or minimizing video redundancy to reduce the number of visual tokens [12,14,28].

Figure 2: Deep Video Discovery consists of two stages: 1) Multi-granular Video Database Construction. We extract video information at different levels to enable comprehensive understanding, efficient retrieval, and preservation of original content. 2) Agentic Search and Answer. The agent iteratively reasons on the user query and leverages the tailored toolset to gather information to answer.

Approaches such as AdaReTaKe [28] dynamically compress visual tokens by allocating adaptive compression ratios across time and model layers, thus significantly expanding the effective number of input frames. However, token compression inherently introduces uncertainty regarding information loss, and models continue to face difficulties when answering complex queries under elongated context windows. In parallel, given the sparsity of key information relative to the given query, some works [27,7,34,30,19,32] propose to explore video content with agentic systems. However, they usually manually guide the agent's search workflow with their priors [34,30] or only allow the agent to search at a single frame granularity [27], which cannot make full use of the reasoning capability of LLMs, resulting in suboptimal search efficiency and a lack of holistic, global comprehension of the long video content.

Agent and tool use.
Recent advancements in large language models (LLMs), particularly their enhanced reasoning and planning capabilities, have significantly accelerated the development of autonomous agents [35,39,38]. The ability to leverage external tools [23,22,21] further narrows the gap between general-purpose LLMs and real-world applications, enabling LLMs to acquire information, perform planning, and execute actions in complex environments. Our work extends this line of research to long video understanding, contributing to the broader investigation of solving complex video understanding tasks by integrating the advanced reasoning capabilities of LLMs with sophisticated tool use. We introduce a suite of search-centric tools that allow LLMs to autonomously gather information at varying levels of granularity. By dynamically composing these tools, the agent can construct multi-step tool-use chains to improve the ability to answer complex queries effectively.

3 Deep Video Discovery

Overview. To address long-form video understanding as an agentic search problem, we first build a multi-granular structured database from the long-form video. The database then serves the search-centric tools that operate at different granularities. Specifically, our Deep Video Discovery agent consists of three main components: the multi-granular video database D, the search-centric toolset T, and an LLM M as the agent's orchestrator. Given the user query Q, the agent reasons iteratively to choose an action A_i ∈ T ∪ {ANSWER} with parameters P, either gathering information from the video database D or deciding to answer the query by referring to the accumulated
information in this process. In the following subsections, we sequentially introduce Multi-granular Video Database Construction and Agentic Search and Answer with Tool Use.

3.1 Multi-granular Video Database Construction

Given an ultra-long input video V, our goal is to transform it into a database that supports fast, efficient retrieval while also providing the original video pixels for detailed information when necessary. Hence, we design it in a multi-granular style that can provide different levels of video information for the corresponding search tools. Specifically, we first segment the video into clips as the basic information units, then build the database to include global summarized information covering the whole video, a clip-based caption corpus, and indexed frames from each clip. Fig. 2 (left) provides an overview. We introduce these components sequentially.

Temporal segmentation. We start by uniformly partitioning the input video V into a temporal sequence of non-overlapping short clips {v_i}_{i=1}^N, where the total number of segments is N = ⌈len(V)/t⌉. Empirically, we set t = 5 seconds to provide an adequate balance between computing cost and semantic and action completeness. All video clips are then decoded into frames {f_i}_{i=1}^N at 2 frames per second for further processing.

Multi-granular information extraction. Our multi-granular video information is designed at three levels: global video level, clip level, and frame level. Specifically, at the global level we summarize the video content into a compact, subject-centric representation. At the clip level, we leverage textual captions to facilitate efficient information retrieval, while at the frame level we preserve the original decoded frames indexed by their corresponding clips, enabling precise reference and detailed analysis when required.
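The segmentation step above can be sketched as follows. This is a minimal illustration, not the paper's implementation: actual frame decoding (e.g., via a video decoder) is omitted, and `segment_video` is an illustrative name that records only clip boundaries and frame timestamps.

```python
import math

def segment_video(video_len_s: float, t: float = 5.0, fps: float = 2.0):
    """Partition a video of length `video_len_s` seconds into non-overlapping
    t-second clips (N = ceil(len(V) / t)) and record the timestamps of the
    frames that would be decoded at `fps` within each clip."""
    n_clips = math.ceil(video_len_s / t)
    clips = []
    for i in range(n_clips):
        start, end = i * t, min((i + 1) * t, video_len_s)
        # timestamps of frames decoded at `fps` within this clip
        frame_ts = [start + k / fps for k in range(int((end - start) * fps))]
        clips.append({"index": i, "range": (start, end), "frame_ts": frame_ts})
    return clips

# A one-hour video yields N = ceil(3600 / 5) = 720 clips of 5 s each,
# with 10 frame timestamps per clip at 2 fps.
clips = segment_video(3600.0)
```

With t = 5 s and 2 fps, each clip carries ten frames, which bounds the per-clip VLM captioning cost while keeping actions mostly intact within a clip.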
To derive the subject-centric global representation while minimizing redundancy in caption generation, we maintain a progressive, structured subject registry S throughout the clip captioning process. Specifically, given a video clip v_i and its decoded frames f_i, we prompt a large VLM to generate the caption c_i and evolve the registry whenever new subjects appear. The process is denoted as S_i, c_i = VLM(f_i, S_{i-1}), where S_0 is initialized as empty; at the conclusion of the captioning process, the final subject registry is S = S_N. Each subject within the registry is represented by a comprehensive set of attributes, including name, physical appearance, identity descriptors, associated actions, and corresponding temporal spans in the video. The obtained caption c_i is subsequently embedded into a dense semantic vector e_i ∈ R^d using a language embedding model, facilitating fast retrieval in downstream applications. Despite careful design choices, the perceptual compression inherent in caption generation inevitably entails some information loss. To mitigate this when necessary, we explicitly retain the decoded frames f_i alongside their corresponding textual captions and embeddings.

Outcome. The finalized database therefore encapsulates the decoded frames, captions, and corresponding embedding triples, forming a structured database D = {S, {f_i, c_i, e_i}_{i=1}^N}. This offline construction procedure transforms a lengthy raw video into a structured set of textually searchable embeddings with associated clips, while simultaneously preserving the complete visual content at pixel resolution. The resulting database becomes the basis for adaptive tool usage, enabling global information browsing, efficient semantic retrieval at the video-clip scale, and comprehensive grounding of generated outputs back to their
source frames.

3.2 Agentic Search and Answer with Tool Use

With the built multi-granular video database, we design a set of search-centric tools that enable global information understanding, efficient clip retrieval by semantic query, and detailed exploration of the original video content. By equipping a reasoning large language model with this toolset, we build our DVD agent, which can address complex user queries on long videos through autonomous planning and strategic combinations of search tools, as shown in Fig. 2 (right). We refer to this stage as Agentic Search and Answer with Tool Use (ASA). We introduce this stage through two subsections: Search-centric Tool Preparation and Agent Design.

3.2.1 Search-centric Tool Preparation

Leveraging the established video database, we have developed a suite of tools designed to efficiently gather information from video data at varying levels of granularity. Specifically, we divide long videos into three distinct hierarchical levels and introduce corresponding specialized tools: (1) Global Browse, (2) Clip Search, and (3) Frame Inspect. Given the significant computational cost associated with processing lengthy videos using VLMs, our tool design carefully balances efficiency and performance. Central to our approach is an agentic search paradigm, wherein the agent decomposes the user query and strategically chains up tools with synthesized parameters, enabling iterative reasoning and information collection to resolve the task. Through the effective integration and coordinated use of these tools, the agent progressively enhances its understanding of user intent and precisely locates relevant information within extensive video content. We introduce the three tools sequentially in the following paragraphs.

Tool: Global Browse. The Global Browse tool takes the video database and the original user query as input, and returns global summaries capturing high-level contextual information.
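The subject-centric summaries returned here are produced by the progressive registry update of Section 3.1, S_i, c_i = VLM(f_i, S_{i-1}). A minimal sketch follows, where `vlm_caption` is a hypothetical stand-in for the VLM call that returns a caption plus any newly observed subjects; it is not the paper's actual interface.

```python
def build_subject_registry(clips, vlm_caption):
    """Caption clips sequentially while evolving a subject registry:
    S_i, c_i = VLM(f_i, S_{i-1}), with S_0 initialized as empty."""
    registry = {}   # S: subject name -> attributes (appearance, actions, spans)
    captions = []   # the caption corpus {c_i}
    for clip in clips:
        caption, subjects = vlm_caption(clip, registry)
        for name, attrs in subjects.items():
            # create the subject on first sight, then accumulate attributes
            entry = registry.setdefault(
                name,
                {"appearance": attrs.get("appearance"), "actions": [], "spans": []})
            entry["actions"].extend(attrs.get("actions", []))
            entry["spans"].append(clip["range"])
        captions.append(caption)
    return registry, captions   # final registry S = S_N and all captions
```

Passing the evolving registry back into each VLM call is what lets captions reuse established subject identities instead of re-describing them, which is the redundancy reduction the paper describes.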
We construct two distinct types of global information: subject-centric and event-centric summaries. The subject-centric summarization is pre-constructed when building the multi-granular video database, as described in Section 3.1, since it is query-irrelevant. For event-centric summarization, we uniformly sample frames across the entire video and feed these sampled frames into the VLM, instructing it to describe noteworthy events explicitly related to the original user query. Upon invocation by the agent, the Global Browse tool efficiently retrieves and returns these global representations, providing the agent with immediate access to high-level global context.

Tool: Clip Search. Clip Search provides mid-level-granularity retrieval, enabling fast and efficient exploration of video content via caption embeddings. Given a query Q̂ synthesized from the agent's current internal reasoning context, this module retrieves a ranked list of the top-k relevant video clips along with their captions. Specifically, the tool computes the cosine similarity between the embedding of the provided query and the pre-computed embeddings of all video clip captions, returning the clips corresponding to the highest-ranked caption matches. Each retrieved observation contains both the corresponding caption and the time range of the associated video clip. To achieve an accurate and detailed understanding, the agent can iteratively invoke this tool, progressively refining temporal constraints or reformulating its queries based on newly acquired contextual knowledge. This iterative chain-of-query approach effectively guides the agent toward precise temporal segments
relevant to the original high-level query.

Tool: Frame Inspect. Frame Inspect receives a temporal range [t_s, t_e] within the video and a sub-query freely defined by the agent as input, returning an open-format visual question answering (VQA) response. The agent can invoke this tool whenever explicit frame-level details, such as subtle attributes, object counting, or fine-grained spatial relationships, are required but not clearly depicted in captions or global summaries. The open-ended query format allows significant freedom for the agent to leverage its reasoning capability, enabling highly adaptable visual inspection. Specifically, the Frame Inspect tool loads raw frames from the requested interval and prompts a VLM with these frames and the agent-synthesized query. To ensure computational efficiency, we limit processing to a maximum of 50 frames, sampling uniformly when the interval exceeds this limit. The resulting response thus equips the agent with accurate, visually grounded evidence essential for detailed reasoning tasks.

3.2.2 Agentic Design

To maximally leverage the reasoning and planning capacity intrinsic to modern LLMs, we intentionally abstain from manually prescribing explicit search workflows or tool utilization patterns. Instead, we enable the agent to reason, plan, and take actions through a streamlined iterative observe-reason-act loop, similar to ReAct [35]. For a given query, the agent reasons about its current observation state, strategically selects search tools, formulates appropriate parameters for actions, and dynamically refines its internal reasoning in light of the gathered evidence. Within ASA, the LLM acts as a sophisticated cognitive driver, taking actions at each iteration based on cumulative knowledge and reasoned evidence, thereby reinforcing its pivotal role in adaptively navigating the discovery process.
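The ranking inside Clip Search reduces to cosine similarity between a query embedding and the precomputed caption embeddings e_i. A minimal sketch, with plain Python lists standing in for the embedding vectors; the function names are illustrative, not the paper's API:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def clip_search(query_emb, database, k=16):
    """Return the top-k clips ranked by cosine similarity between the query
    embedding and each clip's precomputed caption embedding; each observation
    carries the caption and the clip's time range."""
    scored = sorted(database,
                    key=lambda item: cosine(query_emb, item["embedding"]),
                    reverse=True)
    return [{"caption": it["caption"], "range": it["range"]} for it in scored[:k]]
```

Because only the caption embeddings are compared, this call is cheap relative to any VLM invocation, which is why the agent can afford to reformulate and reissue it iteratively.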
Specifically, as illustrated in Algorithm 1, given an initial user query Q, a predefined action space A = T ∪ {ANSWER}, and a maximum allowable step count N, our agent performs iterative reasoning to strategically navigate the available actions. The agent leverages an LLM M to reason upon the current dialogue history, plan its immediate action, interact with the toolset T = {GLOBALBROWSE, CLIPSEARCH, FRAMEINSPECT}, and collect observations O_i. More concretely, at each step i, the agent maintains a historical context H_i, reflects to generate a reasoning step R_i, selects an action A_i ∈ T ∪ {ANSWER} accompanied by relevant parameters P_i, and receives the subsequent observation outcome O_i from the environment. These components (reasoning, action, and obtained outcomes) are successively appended to the interaction history H_i, enriching the context for subsequent iterations of inference.

Table 1: Action space overview of our DVD. The first three actions come from our toolset and the final ANSWER action serves as the stop criterion.

Action        | Parameters
GLOBALBROWSE  | video database D; user query Q
CLIPSEARCH    | video database D; agent-synthesized query Q̂; returns top-k captions
FRAMEINSPECT  | video database D; agent-synthesized query Q̂; temporal range [t_s, t_e]
ANSWER        | the answer to the user query

Algorithm 1: Agentic Search and Answer.
Input: Initial query Q, max step N, LLM M, toolset T, action space A = T ∪ {ANSWER}
Output: Answer to Q
Initialize history H_0 ← {Q, A}
for i ← 1 to N do
    R_i ← M.reason(H_{i-1})
    A_i, P_i ← M.call(R_i, H_{i-1}) where A_i ∈ A
    if A_i = ANSWER then
        break
    end
    O_i ← A_i(P_i)
    H_i ← H_{i-1} ∪ {(R_i, A_i, O_i)}
    if i = N then
        P_i ← M.answer(H_i)
    end
end
return ANSWER(P_i)
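The observe-reason-act loop of Algorithm 1 can be sketched as follows. Here `reason`, `call`, and `answer` are hypothetical interfaces for the LLM M (not the paper's actual API), and tools are plain callables keyed by action name:

```python
def agentic_search(query, tools, llm, max_steps=15):
    """Iterate: reason over the history, pick an action A_i in T ∪ {ANSWER}
    with parameters P_i, collect the observation O_i, and append
    (R_i, A_i, O_i) to the history until ANSWER or the step limit N."""
    history = [("QUERY", query)]
    for _ in range(max_steps):
        reasoning = llm.reason(history)
        action, params = llm.call(reasoning, history)
        if action == "ANSWER":
            return params                      # params hold the final answer
        observation = tools[action](**params)  # O_i = A_i(P_i)
        history.append((reasoning, action, observation))
    # step limit reached: force a final answer from the accumulated history
    return llm.answer(history)
```

The loop prescribes no tool order: which tool runs next, and with what parameters, is decided entirely by the LLM's reasoning over the accumulated history, which is the autonomy the paper contrasts with manually designed workflows.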
The iterative process terminates either when the agent explicitly selects the ANSWER action, or upon reaching the step limit N, at which point the agent is prompted to directly generate a final answer prediction. The agent then outputs the final answer to the original user query. By positioning the LLM's sophisticated reasoning at the core of this iterative loop, this approach endows the agent with an inherently autonomous, evidence-guided, and flexible action-taking mechanism. This autonomous and iterative paradigm fosters a strategic and context-sensitive inquiry cycle, enabling the agent to effectively leverage the available tools to iteratively decompose the original query into progressively refined sub-queries, updating and improving the query representation as it receives new observations. Through iterative reasoning and interaction cycles, guided by deeper and increasingly comprehensive observations collected from prior tool usage, the agent systematically enhances its understanding and interpretation of the task context, ultimately leading to more accurate and informed answers to the given question.

4 Experiments

4.1 Benchmarks

We assess the long-form video understanding capabilities of Deep Video Discovery using several established long video benchmarks. Our primary evaluation benchmark, LVBench [26], includes 1,549 multiple-choice questions across 103 hour-long videos. It stands as one of the most comprehensive and challenging benchmarks for extreme long-form video understanding. LongVideoBench [31] features 6,678 questions from 3,763 videos, ranging in duration from a few seconds to an hour. We emphasize the longest subset with durations in (900s, 3600s] (denoted as the Long subset), comprising 564 questions from 188 videos.
Video MME [8] is segmented by video duration; we concentrate on the Long subset without subtitles to isolate long-video comprehension, covering 300 videos of 30 to 60 minutes with 900 questions. Finally, EgoSchema [15] serves as a diagnostic benchmark for long-video understanding, where we evaluate on its validation split of 500 videos with 500 questions.

4.2 Implementation Details

Baselines. We compare Deep Video Discovery with a range of long-video understanding systems, including both VLM-based [24,1,18,9,36,29,37,4,13,28] and agent-based approaches [30,7,34,19]. Most baseline results are taken from official leaderboards or published reports, except for the recently released OpenAI o3 [18], which has not yet been evaluated on these benchmarks. Following [19], we uniformly sample 256 frames per video to evaluate OpenAI o3. Deep Video Discovery flexibly integrates different models depending on the needs of each component. For the VLM in video database construction, we use GPT-4.1 [17] to produce high-quality captions on LVBench, and GPT-4.1-mini for other benchmarks to reduce cost. During agentic search and

Table 2: Comparison on LVBench.

Methods               | ER   | EU   | KIR  | TG   | Rea  | Sum  | Overall
Commercial VLMs
Gemini-1.5-Pro [24]   | 32.1 | 30.9 | 39.3 | 31.8 | 27.0 | 32.8 | 33.1
Gemini-2.0-Flash [24] | 47.4 | 48.5 | 56.8 | 39.3 | 44.4 | 41.4 | 48.6
GLM-4V-Plus [9]       | 46.2 | 47.8 | 54.1 | 42.7 | 46.5 | 37.9 | 48.7
GPT-4o [1]            | 48.9 | 49.5 | 48.1 | 40.9 | 50.3 | 50.0 | 48.9
OpenAI o3 [18]        | 57.6 | 56.4 | 62.9 | 46.8 | 50.8 | 67.2 | 57.1
Open-Source VLMs
InternVL2.5-78B [29]  | 43.8 | 42.0 | 42.1 | 36.8 | 51.0 | 37.9 | 43.6
VideoLLaMA3-7B [37]   | 45.8 | 42.4 | 47.8 | 35.9 | 45.8 | 36.2 | 45.3
Qwen2.5-VL-72B [4]    | -    | -    | -    | -    | -    | -    | 47.7
VideoChat-Flash [13]        | 51.1 | 46.0 | 49.0 | 38.9 | 48.5 | 34.5 | 48.2
AdaReTaKe [28]              | 53.0 | 50.7 | 62.2 | 45.5 | 54.7 | 37.9 | 53.3
Video Agents and Others
VideoTree [30]              | 30.3 | 25.1 | 26.5 | 27.7 | 31.9 | 25.5 | 28.8
VideoAgent [27]             | 28.0 | 30.3 | 28.0 | 29.3 | 28.0 | 36.4 | 29.3
VCA [34]                    | 43.7 | 40.7 | 37.8 | 38.0 | 46.2 | 27.3 | 41.3
MR. Video [19]              | 59.8 | 57.4 | 71.4 | 58.8 | 57.7 | 50.0 | 60.8
Deep Video Discovery (Ours) | 73.4 | 73.3 | 80.4 | 72.3 | 70.7 | 74.1 | 74.2
+ Auxiliary transcripts     | 75.5 | 77.1 | 79.0 | 72.7 | 68.7 | 84.5 | 76.0

Table 3: Comparison on long video benchmarks.

Methods               | LVBench Overall | LongVideoBench (Val) Overall | LongVideoBench (Val) Long | Video MME Long (w/o sub) | EgoSchema Val
Commercial VLMs
Gemini-1.5-Pro [24]   | 33.1 | 64.0 | 58.6 | 67.4 | -
Gemini-2.0-Flash [24] | 48.3 | -    | 45.7 | 63.0 | 71.2
GPT-4o [1]            | 48.9 | 66.7 | 60.9 | 65.3 | 70.4
OpenAI o3 [18]        | 57.1 | 67.5 | 60.6 | 64.7 | 63.2
Open-Source VLMs
mPLUG-Owl3 [36]       | 43.5 | 59.8 | -    | 50.1 | -
InternVL2.5-78B [29]  | 43.6 | 63.6 | -    | 62.6 | -
Qwen2.5-VL-72B [4]    | 47.7 | 60.7 | -    | 63.9 | -
AdaReTaKe [28]        | 53.3 | 67.0 | -    | 65.0 | -
Video Agents and Others
VideoTree [30]        | 28.8 | -    | -    | -    | 67.0
VideoAgent [27]       | 29.3 | -    | -    | -    | 63.2
VCA [34]              | 41.3 | -    | -    | -    | 73.6
MR. Video [19]        | 60.8 | -    | 61.6 | 61.8 | 73.0
Deep Video Discovery (Ours) | 74.2 | 71.6 | 68.6 | 67.3 | 76.6

answering, we employ OpenAI o3 as the LLM M for its strong reasoning ability, including in the Frame Inspect module for fine-grained VQA. All frames are resized to 720p to maintain visual detail. In Clip Search, we set the default top-k value to 16 while leaving the LLM the flexibility to change it. The maximum reasoning step count is set to N = 15. To explore the upper bound of understanding ability, we additionally evaluate LVBench using auxiliary transcripts. Audio is transcribed with WhisperX [5], and transcripts are used to guide video segmentation and enrich captions. This audio-visual fusion enhances understanding of long, complex content, leading to stronger results.

API Content filtering. We access the LLM API via the Azure OpenAI Service.
We observe that the safety content filtering mechanism of the service misjudges a small portion of the benchmark data as offensive and blocks the requests, which reduces the performance of both the OpenAI o3 baseline and our DVD agent. We provide more details and mitigation strategies in our supplementary material.

Table 4: Ablation on the models used. M_database is used for captioning in database construction, M_reasoning for reasoning in ASA, and M_tool for Frame Inspect.

M_database | M_reasoning | M_tool   | LVBench w/ transcripts
4.1        | o3          | 4.1-mini | 72.3
4.1        | o4-mini     | o3       | 70.2
4.1        | 4o          | o3       | 62.3
4.1-mini   | o3          | o3       | 71.9
4.1        | o3          | o3       | 76.0

Table 5: Ablation on the search-centric tools T. Note that the anchor uses 4.1-mini for M_database, and o3 for both M_reasoning and M_tool.

Global Browse | Clip Search | Frame Inspect | LVBench w/ transcripts
              | ✓           | ✓             | 69.0
✓             |             | ✓             | 59.6
✓             | ✓           |               | 63.5
✓             | ✓           | ✓             | 71.9

4.3 Main Results

Table 2 presents the comparison results on LVBench. DVD significantly outperforms all baselines, surpassing the previous SOTA, MR. Video, by 13.4%. Compared to the prior leading video agent VCA, our method achieves a remarkable 32.9% gain. Against our
base VLM, OpenAI o3, our full system delivers a substantial 17.1% gain, highlighting the importance of agentic reasoning. Incorporating transcript information provides an additional 1.8% boost. These results highlight the effectiveness of our search-centric agentic reasoning framework in handling ultra-long video understanding tasks.

Table 3 provides a comprehensive evaluation across several long-video benchmarks. On LongVideoBench, DVD outperforms the previous SOTA by 4.1% overall and by 7.0% on the longest-duration subset. On the Video MME Long subset, it beats the best open-source VLM, AdaReTaKe, by 2.3%, and MR. Video by 5.5%, approaching the performance of Gemini-1.5-Pro. On EgoSchema, our method exceeds the previous best by 3.0%. Notably, it exceeds the reported human-level accuracy of ∼76% on this benchmark. Across all datasets, our system consistently outperforms the base VLM OpenAI o3, confirming the effectiveness and generalizability of our agentic reasoning framework.

4.4 Ablation Study

We evaluate the impact of different model choices across system components. By default, GPT-4.1 is used for captioning and subject extraction during Multi-granular Video Database Construction, and OpenAI o3 serves as the reasoning model in the Agentic Search and Answer with Tool Use process, while the Frame Inspect tool also leverages OpenAI o3 to query fine-grained details from the frame pixels. We denote these three models as M_database, M_reasoning, and M_tool in Table 4. Replacing GPT-4.1 with GPT-4.1-mini for database construction or the Frame Inspect tool results in moderate drops of 4.1% and 3.7%, respectively, indicating relatively minor impact. For the reasoning model in agentic search, switching to OpenAI o4-mini [18] leads to a 5.8% drop, while GPT-4o causes a substantial 13.7% decline. This highlights the reasoning model as the most critical component of our agentic system, since the system is designed around, and to make full use of, the reasoning capability of the LLM.
The lack of reasoning ability leads to the collapse of agent behavior, as analyzed further in the subsequent subsection.

We also assess the contribution of each tool in the agentic search and answer phase (Table 5). Removing Global Browse, which is responsible for global summarization and long-range event linking, leads to a 2.9% drop. Disabling Frame Inspect, which provides fine-grained VQA, results in an 8.4% decline, highlighting its role in fine-grained understanding. Removing Clip Search causes the largest drop of 12.3%, as it removes the search capability needed to iteratively refine reasoning. These results underscore the importance of tool integration in our search-centric framework.

4.5 Analysis on Agentic Reasoning Behavior

The reasoning model is the most critical component in DVD. During the observe-reason-act loop, the agent autonomously integrates the current context and flexibly decides the next tool to invoke. To better understand this, we analyze the tool-calling behavior during the agentic search and answer phase and categorize it into five types (see Fig. 3). Global Browse Only means the agent answers immediately after a single Global Browse call, reflecting strong confidence in the global context. Though rare, this behavior reaches high accuracy.

[Figure 3 panel titles: OpenAI o3 Steps/Score = 7.6/76.0; OpenAI o4-mini Steps/Score = 5.8/70.2; GPT-4o Steps/Score = 4.6/62.3]
[Figure 3 per-type data, Ratio and Steps/Score. o3: SA 53.3% (5.2/84), GBO 2.8% (3.0/95), IS 6.1% (8.0/71), FIT 10.4% (10.7/63), CST 27.4% (11.6/58). o4-mini: SA 71.9% (4.8/75), GBO 2.9% (3.0/91), IS 5.8% (7.7/56), FIT 3.4% (9.6/45), CST 16.0% (9.4/54). GPT-4o: SA 91.4% (4.6/56), GBO 5.5% (3.0/76), IS 0.7% (6.3/36), FIT 1.6% (7.6/36), CST 0.7% (7.2/45). Abbreviations: SA Simple Action, IS Iterative Search, FIT Frame Inspect Trap, CST Clip Search Trap, GBO Global Browse Only.]

Figure 3: Analysis of the behavior of Deep Video Discovery using different reasoning models. We categorize tool-calling behavior into five types. For each type, we report its proportion (Ratio, sector angles), average reasoning steps (Steps, sector radius), and score (Score, dashed lines). A clear correlation emerges among behavior patterns, reasoning depth, and score (see Section 4.5 for details).

Simple Action involves at most two consecutive Clip Search and two consecutive Frame Inspect calls, following a straightforward search-query-answer logic. This is the dominant strategy, covering over half the queries with strong accuracy. Iterative Search means the agent iteratively alternates between Clip Search and Frame Inspect to search for new contexts, indicating difficulty in finding sufficient information in the early steps. It shows longer reasoning chains (e.g., 8.0 iterations vs. 5.2 for Simple Action with o3) and slightly lower accuracy. Frame Inspect Trap means the agent invokes more than three consecutive Frame Inspect calls without concluding, becoming stuck in fine-grained analysis. This leads to long reasoning and low accuracy. Clip Search Trap means the agent repeatedly issues more than three consecutive Clip Search calls without reaching a conclusion, e.g., when the key information is missing from the database, causing the agent to loop without progress. This is frequent for o3 and accounts for most of its failures.
From these results, we draw two key insights, which we believe will be valuable for the development of future autonomous video agent systems.

Insight 1: Reasoning Steps vs. Accuracy. Within the same model, longer reasoning chains often imply uncertainty and lower accuracy. However, across models, better performance is typically associated with more thorough and longer reasoning.

Insight 2: Overconfidence and Behavioral Collapse. GPT-4o underperforms significantly compared to o3 and o4-mini. Its behavior collapses to Simple Action in 91.4% of queries, and it rarely explores alternative strategies. With an average of just 4.6 reasoning steps, it tends to conclude prematurely. This suggests overconfidence and limited flexibility, likely explaining its lower accuracy.

5 Conclusion

We introduced the Deep Video Discovery agent for long-form video understanding, which utilizes multi-granular search tools over a constructed database for iterative search and reasoning over extensive video content. Our approach outperforms prior methods by adaptively integrating global browsing, clip search, and frame inspection, as demonstrated by state-of-the-art results on multiple benchmarks. Ablation studies confirm the effectiveness of our tool design, while analyses of reasoning-model behavior provide insight into model reasoning patterns. Overall, our framework offers a scalable and flexible solution for comprehensive analysis of long videos.

Limitations. While our agent significantly improves
long video understanding, the iterative reasoning introduces higher computational overhead. In future work, we will explore more effective database construction and searching to reduce reasoning difficulty and thereby lower computational costs.

Change Log

• v1 (2025-05-23): Initial submission.
• v2 (2025-05-28): Fixed the evaluation code to correctly account for answers enclosed in parentheses, resulting in consistently improved reported accuracy.

References

[1] J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. Anadkat, et al. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
[2] Jina AI. DeepSearch - Jina AI. https://jina.ai/deepsearch/, 2025.
[3] xAI. Grok 3 Beta — The Age of Reasoning Agents. https://x.ai/news/grok-3, 2025.
[4] S. Bai, K. Chen, X. Liu, J. Wang, W. Ge, S. Song, K. Dang, P. Wang, S. Wang, J. Tang, et al. Qwen2.5-VL technical report. arXiv preprint arXiv:2502.13923, 2025.
[5] M. Bain, J. Huh, T. Han, and A. Zisserman. WhisperX: Time-accurate speech transcription of long-form audio. INTERSPEECH 2023, 2023.
[6] Y. Chen, F. Xue, D. Li, Q. Hu, L. Zhu, X. Li, Y. Fang, H. Tang, S. Yang, Z. Liu, et al. LongVILA: Scaling long-context visual language models for long videos. arXiv preprint arXiv:2408.10188, 2024.
[7] Y. Fan, X. Ma, R. Wu, Y. Du, J. Li, Z. Gao, and Q. Li. VideoAgent: A memory-augmented multimodal agent for video understanding. In ECCV, 2024.
[8] C. Fu, Y. Dai, Y. Luo, L. Li, S. Ren, R. Zhang, Z. Wang, C. Zhou, Y. Shen, M. Zhang, et al. Video-MME: The first-ever comprehensive evaluation benchmark of multi-modal LLMs in video analysis. arXiv preprint arXiv:2405.21075, 2024.
[9] T. GLM, A. Zeng, B. Xu, B. Wang, C. Zhang, D. Yin, D. Zhang, D. Rojas, G. Feng, H. Zhao, et al. ChatGLM: A family of large language models from GLM-130B to GLM-4 all tools. arXiv preprint arXiv:2406.12793, 2024.
[10] Google. Gemini Deep Research - your personal research assistant.
https://gemini.google/ overview/deep-research , 2025. [11] D. Guo, D. Yang, H. Zhang, J. Song, R. Zhang, R. Xu, Q. Zhu, S. Ma, P. Wang, X. Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948 , 2025. [12] Y . Han, Q. Guo, L. Pan, L. Liu, Y . Guan, and M. Yang. Dynfocus: Dynamic cooperative network empowers llms with video understanding. arXiv preprint arXiv:2411.12355 , 2024. [13] X. Li, Y . Wang, J. Yu, X. Zeng, Y . Zhu, H. Huang, J. Gao, K. Li, Y . He, C. Wang, Y . Qiao, Y . Wang, and L. Wang. VideoChat-Flash: Hierarchical compression for long-context video modeling. arXiv preprint arXiv:2501.00574 , 2024. [14] X. Liu, Y . Shu, Z. Liu, A. Li, Y . Tian, and B. Zhao. Video-xl-pro: Reconstructive token compression for extremely long video understanding. arXiv preprint arXiv:2503.18478 , 2025. [15] K. Mangalam, R. Akshulakov, and J. Malik. EgoSchema: A diagnostic benchmark for very long-form video language understanding. In NeurIPS , 2023. [16] OpenAI. Introducing deep research. https://openai.com/index/ introducing-deep-research/ , 2025. [17] OpenAI. Introducing GPT-4.1 in the API. https://openai.com/index/gpt-4-1/ , 2025. Accessed: 2025-04-14. [18]
https://arxiv.org/abs/2505.18079v2
OpenAI. Introducing OpenAI o3 and o4-mini. https://openai.com/index/ introducing-o3-and-o4-mini/ , 2025. Accessed: 2025-05-15. 10 [19] Z. Pang and Y .-X. Wang. Mr. video:" mapreduce" is the principle for long video understanding. arXiv preprint arXiv:2504.16082 , 2025. [20] Perplexity. Introducing Perplexity Deep Research. https://www.perplexity.ai/hub/ blog/introducing-perplexity-deep-research , 2025. [21] Y . Qin, S. Hu, Y . Lin, W. Chen, N. Ding, G. Cui, Z. Zeng, X. Zhou, Y . Huang, C. Xiao, et al. Tool learning with foundation models. ACM Computing Surveys , 57(4):1–40, 2024. [22] C. Qu, S. Dai, X. Wei, H. Cai, S. Wang, D. Yin, J. Xu, and J.-R. Wen. Tool learning with large language models: A survey. Frontiers of Computer Science , 19(8):198343, 2025. [23] T. Schick, J. Dwivedi-Yu, R. Dessì, R. Raileanu, M. Lomeli, E. Hambro, L. Zettlemoyer, N. Cancedda, and T. Scialom. Toolformer: Language models can teach themselves to use tools. Advances in Neural Information Processing Systems , 36:68539–68551, 2023. [24] G. Team, R. Anil, S. Borgeaud, J.-B. Alayrac, J. Yu, R. Soricut, J. Schalkwyk, A. M. Dai, A. Hauth, K. Millican, et al. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805 , 2023. [25] G. Team, P. Georgiev, V . I. Lei, R. Burnell, L. Bai, A. Gulati, G. Tanzer, D. Vincent, Z. Pan, S. Wang, et al. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint arXiv:2403.05530 , 2024. [26] W. Wang, Z. He, W. Hong, Y . Cheng, X. Zhang, J. Qi, X. Gu, S. Huang, B. Xu, Y . Dong, et al. LVBench: An extreme long video understanding benchmark. arXiv preprint arXiv:2406.08035 , 2024. [27] X. Wang, Y . Zhang, O. Zohar, and S. Yeung-Levy. VideoAgent: Long-form video understanding with large language model as agent. In ECCV , 2024. [28] X. Wang, Q. Si, J. Wu, S. Zhu, L. Cao, and L. Nie. Adaretake: Adaptive redundancy reduction to perceive longer for video-language understanding. 
arXiv preprint arXiv:2503.12559 , 2025. [29] Y . Wang, X. Li, Z. Yan, Y . He, J. Yu, X. Zeng, C. Wang, C. Ma, H. Huang, J. Gao, et al. InternVideo2.5: Empowering video MLLMs with long and rich context modeling. arXiv preprint arXiv:2501.12386 , 2025. [30] Z. Wang, S. Yu, E. Stengel-Eskin, J. Yoon, F. Cheng, G. Bertasius, and M. Bansal. VideoTree: Adaptive tree-based video representation for LLM reasoning on long videos. arXiv preprint arXiv:2405.19209 , 2024. [31] H. Wu, D. Li, B. Chen, and J. Li. Longvideobench: A benchmark for long-context interleaved video-language understanding. In NeurIPS , 2024. [32] Y . Yan, S. Jiang, T. Cao, Y . Yang, Q. Yang, Y . Shu, Y . Yang, and L. Qiu. Empowering agentic video analytics systems with video language models. arXiv preprint arXiv:2505.00254 , 2025. [33] A. Yang, B. Yu, C. Li, D. Liu, F. Huang, H. Huang, J. Jiang, J. Tu, J. Zhang, J. Zhou, et al. Qwen2. 5-1m technical report. arXiv preprint arXiv:2501.15383 , 2025. [34] Z. Yang, D. Chen, X. Yu, M. Shen, and C. Gan. VCA: Video curious agent for long video understanding. arXiv preprint arXiv:2412.10471 , 2024. [35] S. Yao,
https://arxiv.org/abs/2505.18079v2
J. Zhao, D. Yu, N. Du, I. Shafran, K. Narasimhan, and Y . Cao. ReAct: Synergizing reasoning and acting in language models. In ICLR , 2023. [36] J. Ye, H. Xu, H. Liu, A. Hu, M. Yan, Q. Qian, J. Zhang, F. Huang, and J. Zhou. mPLUG-OWL3: Towards long image-sequence understanding in multi-modal large language models. In ICLR , 2024. [37] B. Zhang, K. Li, Z. Cheng, Z. Hu, Y . Yuan, G. Chen, S. Leng, Y . Jiang, H. Zhang, X. Li, P. Jin, W. Zhang, F. Wang, L. Bing, and D. Zhao. Videollama 3: Frontier multimodal foundation models for image and video understanding. arXiv preprint arXiv:2501.13106 , 2025. 11 [38] K. Zhang, J. Li, G. Li, X. Shi, and Z. Jin. Codeagent: Enhancing code generation with tool- integrated agent systems for real-world repo-level coding challenges. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 13643–13658, 2024. [39] Z. Zhang, X. Zhang, W. Xie, and Y . Lu. Responsible task automation: Empowering large language models as responsible task automators. arXiv preprint arXiv:2306.01242 , 2023. 12
https://arxiv.org/abs/2505.18079v2
Data Mixing Can Induce Phase Transitions in Knowledge Acquisition

Xinran Gu1,2∗ Kaifeng Lyu1∗† Jiazheng Li4 Jingzhao Zhang1,2,3‡
1Institute for Interdisciplinary Information Sciences, Tsinghua University
2Shanghai Qizhi Institute 3Shanghai AI Laboratory 4Beijing Institute of Technology
guxr24@mails.tsinghua.edu.cn, vfleaking@gmail.com
jingzhaoz@tsinghua.edu.cn, foreverlasting1202@outlook.com

Abstract

Large Language Models (LLMs) are typically trained on data mixtures: most data come from web scrapes, while a small portion is curated from high-quality sources with dense domain-specific knowledge. In this paper, we show that when training LLMs on such data mixtures, knowledge acquisition from knowledge-dense datasets—unlike training exclusively on knowledge-dense data [Allen-Zhu and Li, 2024a]—does not always follow a smooth scaling law but can exhibit phase transitions with respect to the mixing ratio and model size. Through controlled experiments on a synthetic biography dataset mixed with web-scraped data, we demonstrate that: (1) as we increase the model size to a critical value, the model suddenly transitions from memorizing very few to most of the biographies; (2) below a critical mixing ratio, the model memorizes almost nothing even with extensive training, but beyond this threshold, it rapidly memorizes more biographies. We attribute these phase transitions to a capacity allocation phenomenon: a model with bounded capacity must act like a knapsack problem solver to minimize the overall test loss, and the optimal allocation across datasets can change discontinuously as the model size or mixing ratio varies. We formalize this intuition in an information-theoretic framework and reveal that these phase transitions are predictable, with the critical mixing ratio following a power-law relationship with the model size. Our findings highlight a concrete case where a good mixing recipe for large models may not be optimal for small models, and vice versa.
1 Introduction

The pre-training data of large language models (LLMs) can be categorized into two major types. The first type consists of large-scale corpora scraped from the web [Raffel et al., 2020, Penedo et al., 2024, Li et al., 2024], often spanning billions to trillions of tokens across diverse topics and styles. Due to the scale, it is inherently hard to ensure the information density of the dataset and its relevance to downstream tasks. Hence, a second type of data, smaller-scale datasets curated from high-quality sources, is incorporated. This type of data usually contains very dense knowledge on tasks or domains with significant practical value. For example, Wikipedia and Stack Exchange cover a wide range of world knowledge. OpenWebMath [Paster et al., 2024] and StarCoder [Li et al., 2023, Kocetkov et al., 2022] provide valuable data for improving model performance on mathematics and coding tasks.

∗Equal contribution. †Work done while at the Simons Institute for the Theory of Computing, UC Berkeley. ‡Corresponding author.
Preprint. Under review. arXiv:2505.18091v1 [cs.LG] 23 May 2025

[Figure 1: Phase transition in model size. For each mixing ratio in {0.1, 0.2, 0.3, 0.4}, as model size (14M to 410M, log scale) increases, accuracy on SynBio-320k initially remains zero. Once model size surpasses some threshold, accuracy rapidly grows to over 60%.]
[Figure 2: Phase transition in mixing ratio. (a) 70M models on SynBio-320k; (b) 410M models on SynBio-1.28M. For each model size, as mixing ratio r increases, accuracy initially remains zero. Only when r exceeds some threshold does accuracy quickly improve.]

The second type of data, which we refer to as knowledge-dense data, typically accounts for only a small fraction of the entire corpus. In the pre-training data of a recently released model family, OLMo 2 [OLMo et al., 2025], over 95% of the tokens are from web data, and only less than 5% are from knowledge-dense data. The proportion of each individual knowledge-dense dataset is even smaller, e.g., only less than 0.1% of the tokens are from Wikipedia. This naturally raises a question: How much knowledge can LLMs really acquire from this small amount of knowledge-dense data?

If LLMs were exclusively trained on knowledge-dense data without any data mixing, the amount of knowledge acquired after sufficient training should scale linearly with model size. Although quantifying knowledge in natural data is non-trivial, Allen-Zhu and Li [2024a] sidestep this issue and provide strong empirical evidence for this linear scaling law through extensive pre-training experiments on synthetically generated biographies. In their setting, the amount of knowledge stored by a model is quantified by evaluating its memorization of the biographies using information-theoretic metrics. Similar linear scaling laws are also observed in memorizing Wikidata fact triples by Lu et al. [2024], and analyzed theoretically by Nichani et al. [2025]. Based on these results, one might naively expect a similar linear relationship between model size and acquired knowledge when knowledge-dense data is mixed with web data. However, in this paper, we show that the linear scaling no longer holds under data mixing.
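As a concrete, simplified picture of this setting, mixing with ratio r can be sketched as drawing each training example from the knowledge-dense set with probability r and from web text otherwise. This document-level sampler is our own illustration, not the paper's token-level implementation (described in its Appendix C.1):

```python
import random

def sample_mixture(knowledge_docs, web_docs, r, n_samples, seed=0):
    """Draw training examples so that, in expectation, a fraction r comes
    from the knowledge-dense dataset and 1 - r from web text."""
    rng = random.Random(seed)
    return [rng.choice(knowledge_docs) if rng.random() < r
            else rng.choice(web_docs)
            for _ in range(n_samples)]

# With r = 0.1, roughly 10% of sampled examples are knowledge-dense.
docs = sample_mixture(["bio"] * 4, ["web"] * 4, r=0.1, n_samples=10_000)
frac = docs.count("bio") / len(docs)
```

In expectation, `frac` is close to r; what the paper studies is how much of the knowledge carried by that small fraction a bounded-capacity model actually retains.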
We consider the setup where a knowledge-dense dataset focused on a single domain constitutes a small fraction r of the pre-training corpus—referred to as the mixing ratio—and the rest is large-scale web text (see Appendix C.1 for our implementation of data mixing). We demonstrate via a quantitative study that knowledge acquisition from the knowledge-dense data exhibits a more intricate behavior with notable phase transitions with respect to the mixing ratio and model size. More specifically, we study factual knowledge acquisition. We follow the approach of Allen-Zhu and Li [2024a] to curate a synthetic dataset of biographies, where each individual's information is embedded into natural text descriptions using diverse templates. Due to the uniform data format and content of this dataset, we can quantify how much knowledge the model has stored simply by counting the number of memorized biographies. We then mix this synthetic biography dataset with the large-scale web corpus FineWeb-Edu [Penedo et al., 2024] or the Pile [Gao et al., 2020] to create the pre-training mixture. We pre-train or continually pre-train Pythia models [Biderman et al., 2023] ranging from 14M to 6.9B parameters on these mixtures. While setting r closer to 1 will make the model learn more from the knowledge-dense data, in practice, r is typically set to a small value, either because the knowledge-dense data has a limited amount or because increasing
r may hurt the model's capabilities acquired from other domains. Therefore, the essence of our study is to understand whether models can still memorize a decent number of biographies for relatively small r. Our experiments reveal two interesting findings (Section 3):

Finding 1: Phase Transition in Model Size (Figure 1). Fixing the mixing ratio r and varying the model size M, we observe that when M is smaller than a critical model size M_thres, the number of memorized biographies can be nearly zero. Only when M > M_thres does the model suddenly memorize most biographies. Moreover, the threshold M_thres is higher for smaller r.

Finding 2: Phase Transition in Mixing Ratio (Figures 2 and 10). When varying the mixing ratio r while keeping the model size M fixed, we find that below a critical mixing ratio r_thres, the model memorizes almost nothing even after significantly longer training, during which each biography appears hundreds of times or more (Figures 3(a) and 5). But when r > r_thres, the number of memorized biographies grows rapidly with r. We further find that as we gradually decrease r, the number of steps needed to memorize a fixed number of biographies initially grows linearly with 1/r (Figure 3(b)), but soon becomes exponential and even superexponential (Figure 3(c)), making it impossible or practically infeasible for the model to memorize a non-trivial number of biographies.

Theoretical Analysis. In Section 4, we attribute the observed phase transitions to a capacity allocation phenomenon: a model with bounded capacity must act like a knapsack problem solver to minimize the overall test loss, and the optimal allocation across datasets can change discontinuously as the model size or mixing ratio varies. To formalize this intuition, we model a sufficiently trained LLM as the best model that minimizes the test loss under a fixed capacity constraint M.
We develop an information-theoretic framework and show that, when trained on a mixture of knowledge-dense and web-scraped data, the model should allocate its capacity across the two datasets based on their respective "marginal values"—that is, the reduction in test loss achieved by assigning one additional unit of capacity to that dataset. We rigorously prove that only when the mixing ratio r or the model size M is above a certain threshold does the knowledge-dense dataset become worth learning, thus leading to the observed phase transitions. Assuming that the optimal test loss on web-scraped data follows a power law in model size, we further show that these phase transitions are in fact predictable, with the critical mixing ratio following a power-law relationship with the model size. Empirically, we validate this power-law relationship on both synthetic biographies and a set of real-world knowledge extracted from Wikipedia (Section 5).

Strategies to Enhance Knowledge Acquisition Under Low Mixing Ratios (Section 6). Inspired by our theory, we propose two strategies to enhance knowledge acquisition at low mixing ratios: (1) randomly subsampling the knowledge-dense dataset; (2) rephrasing knowledge into more compact forms and augmenting the original dataset with the rephrased versions. The key idea is to increase the "marginal value" of the knowledge-dense dataset by increasing the exposure frequency of each
single fact. We validate on both synthetic and real-world Wikipedia biographies that these strategies help models memorize significantly more biographies while preserving models' general capability.

Takeaways. The key takeaways of our paper are as follows:
1. The mixing ratio should be set with care for different model sizes: mixing in knowledge-dense datasets with small mixing ratios can offer no benefit at all, especially when training small LMs.
2. Naively measuring the performance of small models on a small data domain may provide little to no predictive signal on how well larger models perform, revealing a potential limitation of using small proxy models for data curation, as also evidenced by Kang et al. [2024], Jiang et al. [2024], Ye et al. [2024], Magnusson et al. [2025].
3. Slightly improving the "marginal value" of knowledge-dense data can offer a large gain in performance, as evidenced by our proposed strategies.

2 Experimental Setup

The SynBio Dataset. We follow Allen-Zhu and Li [2024b] to create a synthetic biography dataset, with each individual characterized by five attributes: birth date, birth city, university, major, and employer. For each individual, the value of each attribute is randomly and independently sampled from a predefined domain. These (name, attribute, value) triplets are then converted into natural text using sentence templates. For instance, (Gracie Tessa Howell, birth city, St. Louis, MO) is converted into "Gracie Tessa Howell's birthplace is St. Louis, MO." Following [Allen-Zhu and Li, 2024b], every time the model encounters a biography, the five sentences are randomly shuffled, and a new sentence template is selected for each attribute from a set of five possible templates. We denote the dataset containing N biographies as SynBio-N. See Appendix C.2.1 for full details.

Evaluation. Denote a knowledge triplet (name, attribute, value) as (n, a, v) and let |v| represent the number of tokens in v.
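The SynBio rendering just described, and the exact-match check on triplets it supports, can be sketched as follows. The templates and the whitespace "tokens" here are illustrative stand-ins, not the paper's actual templates (Appendix C.2.1) or tokenizer:

```python
import random

# Stand-in templates; the paper uses five per attribute.
TEMPLATES = {
    "birth city": ["{n}'s birthplace is {v}.", "{n} was born in {v}."],
    "university": ["{n} graduated from {v}.", "{n} studied at {v}."],
}

def render_biography(name, attrs, rng):
    """Each exposure re-samples one template per attribute and shuffles
    the sentence order, as in SynBio."""
    sents = [rng.choice(TEMPLATES[a]).format(n=name, v=v)
             for a, v in attrs.items()]
    rng.shuffle(sents)
    return " ".join(sents)

def triplet_learned(generate, prompt, value_tokens):
    """Exact-match criterion: greedily decode |v| tokens and compare."""
    return generate(prompt, len(value_tokens)) == value_tokens

rng = random.Random(0)
bio = render_biography("Gracie Tessa Howell",
                       {"birth city": "St. Louis, MO"}, rng)

# A stub standing in for a trained model's greedy decoder.
memorized = {"Gracie Tessa Howell's birthplace is": ["St.", "Louis,", "MO"]}
def generate(prompt, k):
    return memorized.get(prompt, ["<unk>"] * k)[:k]

learned = triplet_learned(generate, "Gracie Tessa Howell's birthplace is",
                          ["St.", "Louis,", "MO"])
```

The per-exposure re-templating matters: it forces the model to store the underlying fact rather than a single surface string.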
For evaluation, the model is prompted with the sentence prefix containing n and a, and is tasked to generate |v| tokens via greedy decoding. A triplet is considered learned if the output exactly matches v. For example, given the triplet (Gracie Tessa Howell, birth city, St. Louis, MO), the prompt "Gracie Tessa Howell's birthplace is" is provided. We say the model has learned the fact if it generates "St. Louis, MO."

[Figure 3: Training longer barely helps for low mixing ratios; the required training steps to reach a target accuracy grow exponentially or even superexponentially with 1/r. We train 70M models on the mixture of FineWeb-Edu and SynBio-320k with r ranging from 0.2 to 0.8. (a) Train until accuracy reaches 60% or a total of 256B tokens are passed. (b) Required training steps to achieve target accuracy vs. 1/r. (c) Fitting required training steps to attain 40% accuracy against 1/r.]

We report the accuracy averaged over
all individuals, attributes, and templates in the main text and defer the detailed results to Appendix B.4.

Training Setup. Our experiments use the Pythia architecture [Biderman et al., 2023], with model sizes ranging from 14M to 6.9B. The default setup involves pre-training from scratch on a mixture of FineWeb-Edu and SynBio. Since FineWeb-Edu is large (>1T tokens) and SynBio is small (<1B tokens), our typical training runs involve the model seeing SynBio for multiple epochs but FineWeb-Edu for less than one epoch. For instance, in a 32B-token run with the mixing ratio for SynBio-320k set as 0.1, the model passes SynBio ∼100 times. We also study the continual pre-training setup in Section 6 and Appendix B.1. Full details are provided in Appendix C.

3 Phase Transitions of Knowledge Acquisition within Data Mixtures

3.1 Phase Transition in Model Size

We first investigate how knowledge acquisition within data mixtures is affected by model size at fixed mixing ratios. For each r ∈ {0.1, 0.2, 0.3, 0.4}, we train models with sizes from 14M to 410M on the mixture of FineWeb-Edu and SynBio-320k for a sufficiently long horizon of 32B tokens, which is approximately four times the optimal computation for 410M models predicted by the Chinchilla scaling law [Hoffmann et al., 2022]. As shown in Figure 1, as model size increases, accuracy on SynBio initially remains near zero. Once the model size surpasses some threshold, accuracy rapidly grows to above 60%. The transition is consistently sharp across different mixing ratios, while larger r leads to a smaller critical point.

3.2 Phase Transition in Mixing Ratio

We now study how knowledge acquisition under the data mixing scenario is affected by mixing ratios.

Performance on knowledge-dense data undergoes a phase transition as mixing ratio increases. We begin by training models of the same size with different mixing ratios r.
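The "∼100 passes" figure in the training setup above follows from simple arithmetic. In this sketch, the ~32M-token size of SynBio-320k is a back-of-envelope assumption on our part (it is the value that reproduces the reported number), not a size stated in this excerpt:

```python
def synbio_passes(total_tokens, r, dataset_tokens):
    """Epochs over the knowledge-dense dataset: the tokens allotted to it
    (total_tokens * r) divided by its size in tokens."""
    return total_tokens * r / dataset_tokens

# 32B-token run, r = 0.1, SynBio-320k assumed to be ~32M tokens.
passes = synbio_passes(total_tokens=32e9, r=0.1, dataset_tokens=32e6)
```

The same formula makes clear that halving r halves the number of SynBio epochs for a fixed training budget.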
Specifically, we train 70M models on the mixture of FineWeb-Edu and SynBio-320k, varying r from 0.1 to 0.45 (step size 0.05), and 410M models on the mixture of FineWeb-Edu and SynBio-1.28M, varying r from 0.1 to 0.4 (step size 0.1). All models are trained for a total of 32B tokens. As shown in Figure 2(a), for 70M models, as r increases from 0.1 to 0.25, accuracy on SynBio remains near zero. Only when r > 0.3 does the accuracy begin to steadily improve. In Figure 2(b), the accuracy for 410M models exhibits a similar trend: it remains near zero for r ≤ 0.3 and suddenly attains 80% when r grows to 0.4. In Figure 10, we replicate the experiments on Pythia 2.8B and 6.9B models to show that a similar phase transition in mixing ratio persists for larger models.

Training longer barely helps for low mixing ratios. Given the observed phase transition, one may raise the following counter-argument: if models are trained for a sufficiently long horizon—such that even a small mixing ratio r would eventually result in each biography being encountered hundreds or even thousands of times—then the phase transition might no longer exist. To test this counter-argument, we extend the training horizon for r = 0.2 to 512B tokens, i.e., by 16 and 4 times for the 70M and 410M models respectively. Under
this extended training, each biography appears ∼200 times for the 70M model and ∼3000 times for the 410M model. As shown in Figures 3(a) and 5, the accuracy on SynBio remains near zero even after such extensions.

[Figure 4: Ablation studies on hyperparameters. The models exhibit consistent trends in knowledge acquisition across different batch sizes, learning rate values, and schedules. All experiments train 70M models on the mixture of FineWeb-Edu and SynBio-320k. (a) Vary the batch size. (b) Vary the peak learning rate. (c) Vary the learning rate schedule; both schedules use a peak learning rate of 10^−3.]

Required training steps increase exponentially or even superexponentially with 1/r. To further refute this counter-argument, we quantify how the required training steps to reach a target accuracy, denoted as T, scales with 1/r. Specifically, we train 70M models with r ranging from 0.2 to 0.8. For each mixing ratio r, we evaluate 20 training horizons, approximately evenly spaced on a logarithmic scale with a factor of 1.2, ranging from 0 to 256B tokens. Training continues until the model reaches 60% accuracy or exhausts 256B tokens. As shown in Figures 3(a) and 3(b), as r decreases from 0.8, T initially increases linearly with 1/r for r > 0.4 and quickly deviates from the linear trend for r < 0.4.
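One way to quantify such deviations from linear growth is to fit T against 1/r in log space and compare extrapolations. A minimal sketch on synthetic data (an exact power law, not the paper's measurements):

```python
import numpy as np

def fit_power_and_exp(inv_r, T):
    """Fit T vs 1/r as a power law T = a * (1/r)^b and as an exponential
    T = a * exp(b / r); both become linear fits after a log transform."""
    logT = np.log(T)
    b_pow, loga_pow = np.polyfit(np.log(inv_r), logT, 1)
    b_exp, loga_exp = np.polyfit(inv_r, logT, 1)
    power = lambda x: np.exp(loga_pow) * x ** b_pow
    expo = lambda x: np.exp(loga_exp + b_exp * x)
    return power, expo

# Synthetic illustration: T generated from an exact power law, so the
# power-law extrapolation recovers it while the exponential fit need not.
inv_r = np.array([1.25, 2.0, 2.5, 3.33])  # 1/r for r in {0.8, 0.5, 0.4, 0.3}
T = 5.0 * inv_r ** 2.0
power, expo = fit_power_and_exp(inv_r, T)
pred = power(5.0)                          # extrapolate to r = 0.2
```

On the paper's real measurements, the observed T at small r overshoots even the exponential extrapolation, which is what justifies the "superexponential" description.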
We further fit a scaling law for the required training steps to reach 40% accuracy against 1/r, modeling T as a power-law or exponential function of 1/r. Specifically, we fit T against 1/r for r ≥ 0.3 and examine whether the extrapolation can predict T for smaller r. As shown in Figure 3(c), the actual T is more than 2.9 times the power-law prediction for r = 0.25, and more than 1.9 times for r = 0.2. Moreover, the actual T for r = 0.25 is even more than twice the exponential prediction. These significant deviations suggest exponential or even superexponential growth of T with respect to 1/r. See Appendix C.4 for the detailed fitting process.

3.3 Ablation Studies

We now conduct ablation studies to demonstrate the robustness of our findings with respect to hyperparameters. We explore r ∈ {0.2, 0.4, 0.8} and train 70M models for a total of 64B, 32B, and 16B tokens, respectively, ensuring each configuration passes SynBio the same number of times.

Consistent Trends Across Different Batch Sizes. As shown in Figure 4(a), we evaluate three batch sizes, B ∈ {256, 512, 1024}, for each r and observe consistent general trends across all batch sizes. For r = 0.4 and r = 0.8, smaller batch sizes yield slightly higher accuracies, likely due to the increased number of update steps. These experiments further distinguish between two types of frequency at which the model encounters the knowledge dataset: per-token frequency and per-step frequency. For a fixed mixing
ratio, doubling the batch size doubles the occurrences of each biography per step, while the occurrences per token remain unchanged. The results demonstrate that per-token frequency, rather than per-step frequency, determines training efficiency in knowledge acquisition.

Consistent trends across learning rate values and schedules. In Figure 4(b), we explore peak learning rates among {2.5×10^−4, 10^−3, 4×10^−3} using the WSD scheduler. We observe that the trends are consistent across these values, although the learning process slows down at the lowest value, 2.5×10^−4. In Figure 4(c), results for both cosine and WSD schedulers show similar trends.

3.4 Phase Transitions on Reasoning Tasks

In this subsection, we show that phase transitions can also arise when mixing a knowledge-dense dataset aimed at improving models' reasoning skills with web text. Such datasets are often multi-task in practice. For example, OpenWebMath [Paster et al., 2024] covers diverse math topics. We show that phase transitions can occur for each single subtask within this knowledge-dense dataset. Inspired by Ruis et al. [2024], we consider slope calculation between two points (x1, y1) and (x2, y2), and replace all the documents containing the word "slope" in OpenWebMath with our cleaner and higher-quality slope data.

[Figure 5: For 410M models trained on FineWeb-Edu + SynBio-1.28M, accuracy for r = 0.2 remains near zero even with 4x more training.]

[Figure 6: Similar phase transitions for the slope calculation subtask persist when we mix the modified OpenWebMath with FineWeb-Edu. (a) Phase transition in model size (mixing ratios 0.1 and 0.2). (b) Phase transition in r; the model size for (b) is 70M.]
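Generating a slope-calculation example can be sketched as below. The coordinate sampling follows the paper's description (values from {0, ..., 99} with x1 ≠ x2), while the question and answer templates are illustrative stand-ins:

```python
import random
from fractions import Fraction

def slope_example(rng):
    """Sample two points with distinct x-coordinates and emit a question
    plus a step-by-step answer ending in the exact slope."""
    x1, y1, x2, y2 = (rng.randrange(100) for _ in range(4))
    while x2 == x1:
        x2 = rng.randrange(100)
    s = Fraction(y2 - y1, x2 - x1)  # exact rational slope
    q = (f"What is the slope of the line through "
         f"({x1}, {y1}) and ({x2}, {y2})?")
    a = f"slope = ({y2} - {y1}) / ({x2} - {x1}) = {s}"
    return q, a, (x1, y1, x2, y2, s)

rng = random.Random(0)
q, a, (x1, y1, x2, y2, s) = slope_example(rng)
```

Because coordinates and templates are re-sampled on every exposure, the model must learn the procedure rather than memorize individual question-answer pairs.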
We then mix the modified OpenWebMath with FineWeb-Edu and train Pythia models from scratch. Similar to the setup of SynBio, every time the model sees a slope calculation example, we uniformly sample x1, y1, x2, y2 from {0, 1, ..., 99} (ensuring x1 ≠ x2), and apply randomly chosen question and step-by-step answer templates. For evaluation, we randomly generate 1k questions for slope calculation and check if the model produces the correct final answer. Results in Figure 6 show similar phase transitions as factual knowledge acquisition. See details in Appendix C.3.

4 Theoretical Analysis

[Figure 7: An illustration of the intuition behind our theory. Case 1: trained exclusively on random facts, a model with capacity M stores a maximal number of facts without exceeding its capacity, so knowledge stored is proportional to capacity. Case 2: trained on facts mixed with web data, the model allocates its capacity based on the "marginal value" of each dataset, i.e., the test-loss reduction per extra unit of capacity assigned to it; with a power-law web loss L2 = A·M^−α + C, the knowledge-dense data becomes worth learning only past a threshold.]

In this section, we take an information-theoretic
view to explain the observed phase transitions. The key challenge in developing a theory is that training LLMs can involve a lot of tricks, making it hard to identify the most important factors in inducing the phase transitions. In our paper, we consider an ideal case where the model is sufficiently trained, allowing us to focus on the key factor—model capacity—and abstract away all other complexities.

4.1 High-Level Intuition

We model a sufficiently trained language model with capacity M as an optimal bounded-capacity learner, which minimizes test loss as much as possible under the capacity constraint M. The high-level intuition can be framed as a fractional knapsack problem (see Figure 7 for an illustration). When training solely on knowledge-dense data, where each fact appears with equal probability, the optimal learner seeks to store as much knowledge as possible within its capacity. As a result, the total amount of memorized knowledge scales proportionally with the model's capacity M (Section 4.3). However, the situation changes when the knowledge-dense data is mixed with web-scraped data. In this case, the optimal learner should allocate its capacity across the two datasets based on their respective "marginal values"—that is, the reduction in test loss resulting from assigning one additional unit of capacity to a dataset. Only when r or M exceeds a certain threshold does the knowledge-dense data become worth learning.

4.2 Problem Formulation

Data distribution. The essence of language modeling is to model the distribution of the next token y for a given context x containing all previous tokens. We take a Bayesian view, assuming a latent variable θ ∈ Θ governing the distribution of (x, y), denoted as (x, y) ∼ D_θ. Conceptually, θ encodes knowledge about the world. For example, a person may be born in 1996 in one universe but 1999 in another. Or, in a different universe, popular Python libraries may feature a different set of functions.
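The capacity-allocation intuition of Section 4.1 can be made concrete with a toy computation: assume a power-law web loss (the form sketched in Figure 7) and facts with constant marginal value, then grid-search the optimal capacity split. All constants here are our own illustrative choices, not values from the paper:

```python
import numpy as np

# Illustrative constants: web loss A * M2^(-alpha); facts with exposure
# frequency P and total target entropy H_TOT (as in Theorem 4.2).
A, ALPHA, P, H_TOT = 50.0, 0.5, 0.01, 100.0

def total_loss(m1, M, r):
    """Mixture test loss when m1 units of capacity store facts and the
    remaining M - m1 go to web data."""
    return r * P * max(H_TOT - m1, 0.0) + (1 - r) * A * (M - m1) ** -ALPHA

def memorized_fraction(M, r):
    """Grid-search the optimal capacity split; return the fraction of
    fact entropy stored by the optimal bounded-capacity learner."""
    grid = np.linspace(0.0, min(M - 1.0, H_TOT), 2001)
    best = grid[np.argmin([total_loss(m1, M, r) for m1 in grid])]
    return best / H_TOT

# At fixed capacity, stored knowledge jumps as r crosses a threshold.
fracs = [memorized_fraction(M=500.0, r=r) for r in (0.01, 0.05, 0.2, 0.5)]
```

Sweeping r at fixed capacity reproduces the qualitative picture: the stored fraction stays at zero for small r, because the web data's marginal value dominates, and rises sharply once r passes a threshold.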
We assume the universe first draws θ from a prior P before we observe the data distribution D_θ.

Learning Algorithm. A learning algorithm A is a procedure that takes samples from a data distribution D of (x, y) and outputs a predictor h = A(D), which maps x to a distribution over y. The performance of h is measured by the expected cross-entropy loss L(h; D) := E_{(x,y)∼D}[−log p(y | h, x)], where p(y | h, x) denotes the predicted distribution of y given x by the predictor h, and log is in base 2 for convenience. We measure the performance of a learning algorithm A by its expected loss over all data distributions D_θ with respect to the prior P:

    L̄_P(A) := E_{θ∼P}[L(A(D_θ); D_θ)].    (1)

In practice, a predictor h can be a transformer, and A can be the pre-training algorithm.

Model Capacity and Mutual Information. We measure a model's "effective" capacity—the amount of information a model, produced by some learning algorithm A, stores about the data distribution D_θ—by the mutual information (MI) between the model and the data distribution, i.e., I(A(D_θ); D_θ). For practical learning algorithms with bounded capacity, if A always outputs a model h with at most N parameters, each represented by a b-bit floating-point number, then I(A(D_θ); D_θ) ≤ bN by information theory. Empirically, Allen-Zhu and Li [2024a] found that I(A(D_θ); D_θ) ≈ 2N holds across various training setups by controlled experiments. We model a sufficiently
https://arxiv.org/abs/2505.18091v1
trained LM with capacity M as an optimal bounded-capacity learner, which minimizes the expected loss as much as possible under the capacity constraint M:

Definition 4.1 (Optimal Bounded-Capacity Learner). For a given prior P and M > 0, the best achievable loss under the capacity constraint M is defined as

    F_P(M) := inf_A { \bar{L}_P(A) : I(A(D_θ); D_θ) ≤ M },    (2)

where the infimum is taken over all learning algorithms. An optimal M-bounded-capacity learner is a learning algorithm A such that I(A(D_θ); D_θ) ≤ M and \bar{L}_P(A) = F_P(M).

4.3 Warmup: Training Exclusively on a Mixture of Facts

We start with a simple case where the data distribution D_θ contains K random facts. Each fact is a pair (X_i, y_i), where X_i is a set of input contexts (e.g., paraphrases) and y_i is the target token. For instance, the fact "Gracie Tessa Howell was born in 1946" can have contexts like "Gracie Tessa Howell's birth year is" or "Gracie Tessa Howell came to this world in the year," all mapping to the target y = "1946". We further assume that X_1, ..., X_K are disjoint.

Let D_θ(y | x) be the next-token distribution given context x. The universe samples y_1, y_2, ..., y_K independently from fixed distributions Y_1, ..., Y_K and sets θ = (y_1, ..., y_K). The universe further sets D_θ(y | x_i) as a point mass at y_i for all x_i ∈ X_i. Other inputs x may occur in D_θ, but their target tokens are independent of θ. Define the exposure frequency of the i-th fact as the total probability that any x ∈ X_i appears in D_θ: Σ_{x′∈X_i} P_θ(x = x′). If all K facts have equal exposure frequency p (despite possibly different entropies), a bounded-capacity learner reduces expected loss linearly with capacity M, so there are no phase transitions:

Theorem 4.2. For all M ≥ 0, if all the facts have the same exposure frequency p, then

    F_P(M) = C + p · max{H_tot − M, 0},    (3)

where H_tot := Σ_{i=1}^K H(Y_i) and C := F_P(∞).

4.4 Data Mixing Induces Phase Transitions

What if we mix the random facts with data from another domain, say web text?
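As a quick numerical sanity check on the warmup result, the piecewise-linear formula of Theorem 4.2 can be evaluated directly. The constants C and p and the fact distributions below are illustrative only, not values from the paper.

```python
# Numerical check of Theorem 4.2: with equal exposure frequency p, the best
# achievable loss falls linearly in capacity M and then plateaus at C; there
# is no phase transition.  C, p, and the fact distributions are toy values.
import math

def entropy_bits(dist):
    """Shannon entropy of a discrete distribution, in bits (log base 2)."""
    return -sum(q * math.log2(q) for q in dist if q > 0)

def best_loss(M, C, p, fact_entropies):
    """F_P(M) = C + p * max(H_tot - M, 0) from Theorem 4.2."""
    H_tot = sum(fact_entropies)
    return C + p * max(H_tot - M, 0.0)

# 1000 facts, each target uniform over 4 values -> 2 bits per fact, H_tot = 2000.
ents = [entropy_bits([0.25] * 4)] * 1000
for M in (0, 1000, 2000, 3000):
    print(M, best_loss(M, C=1.0, p=1e-3, fact_entropies=ents))
```

The printed losses decrease linearly until M reaches H_tot and stay at C afterwards, exactly the "no phase transition" behavior the theorem describes.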
Consider a data distribution D_θ composed of two domains: (1) a mixture of K random facts (as in Section 4.3) and (2) another domain with a much more complex structure. Let the latent variable θ = (θ_1, θ_2), where θ_1 governs the distribution of the K random facts, D^{(1)}_{θ_1}, and θ_2 governs the data distribution of the second domain, D^{(2)}_{θ_2}. Assume the universe draws θ_1 and θ_2 independently from priors P_1 and P_2, respectively. The overall data distribution D_θ is D_θ = r·D^{(1)}_{θ_1} + (1 − r)·D^{(2)}_{θ_2}, with mixing ratio r ∈ (0, 1). Let p denote the exposure frequency of each fact in D^{(1)}_{θ_1}, and H_tot := Σ_{i=1}^K H(Y_i) be the total entropy of the target tokens in the first domain (as in Section 4.3). For simplicity, we assume the two domains contain non-overlapping information (see Definition D.5).

To measure models' performance on the first domain after training with algorithm A on the data mixture, we define \bar{L}_1(A) := E_{θ∼P_1}[L(A(D_θ); D^{(1)}_{θ_1})] as the model's expected loss on the first domain. If \bar{L}_1(A) = F_{P_1}(0), then the model learns nothing (random guessing). If \bar{L}_1(A) = F_{P_1}(∞), the model perfectly learns the facts. Theorem 4.3 shows that the learner sharply transitions between the two extremes as model size increases. This transition is characterized by two functions: M_0^−(t) := sup {M ≥ 0 : −F′_{P_2}(M) > t} and M_0^+(t) := inf
{M ≥ 0 : −F′_{P_2}(M) < t}. By the rate-distortion theorem, F_{P_2}(M) is convex, and hence −F′_{P_2}(M) is non-increasing. Thus, M_0^−(t) and M_0^+(t) mark the last and first model sizes at which −F′_{P_2}(M) exceeds or falls below t, respectively. If F′_{P_2}(M) is strictly decreasing, then M_0^−(t) = M_0^+(t).

Theorem 4.3 (Phase Transition in Model Size). For any optimal M-bounded-capacity learner A,

1. if M ≤ M_0^−(r/(1−r) · p), then \bar{L}_1(A) = F_{P_1}(0);
2. if M ≥ M_0^+(r/(1−r) · p) + H_tot, then \bar{L}_1(A) = F_{P_1}(∞).

Key Example: When Web Data Loss Follows a Power Law in Model Size. Consider the case where F_{P_2}(M) is a power-law function of M, i.e., F_{P_2}(M) = C + A · M^{−α}. Here, α ∈ (0, 1) and A is a large constant. This is a reasonable assumption, since LLM pre-training usually exhibits such power-law scaling behavior in model size [Kaplan et al., 2020, Hoffmann et al., 2022]. In this case, taking the derivative of F_{P_2}(M) gives −F′_{P_2}(M) = A · α · M^{−α−1}. Then M_0^−(t) = M_0^+(t) = (Aα/t)^{1/(α+1)}. Plugging this into Theorem 4.3, we obtain the critical value for model size:

    M_thres ∼ (1/(rp))^{1/(α+1)}.    (4)

This implies that a small r or p may cause the model to learn nothing from the knowledge-dense dataset, even if its capacity is sufficient to learn the entire dataset. Rearranging the terms in (4), we can also obtain the critical value of the mixing ratio r:

    r_thres ∼ 1/(p · M^{α+1}).    (5)

Threshold Frequency for a Single Fact. For each fact in the first domain, its overall probability of being sampled in the data mixture is rp. Again, rearranging the terms in (5), we find that for a single fact to be learned by the model, its frequency of appearance in the pre-training corpus must exceed a threshold frequency f_thres, which scales with model size following a power law:

    f_thres ∼ 1/M^{α+1}.    (6)

5 Power-Law Relationship of Threshold Frequency and Model Size

In this section, we validate the predicted power-law relationship between model size and threshold frequency on both synthetic biographies and a set of knowledge extracted from Wikipedia.
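The thresholds in equations (4)–(6) follow from equating the web data's marginal value, −F′_{P_2}(M) = Aα·M^{−α−1}, with the facts' marginal value r/(1−r)·p. A small numerical sketch, where A and α are illustrative constants rather than fitted values:

```python
# Thresholds from the power-law key example (equations (4)-(6)).  The web
# loss is F_P2(M) = C + A * M**(-alpha); a fact is learned once the web
# data's marginal value A*alpha*M**(-alpha-1) drops below r/(1-r) * p.
# A and alpha are illustrative constants, not fitted values.

def m_threshold(r, p, A, alpha):
    """Critical model size M_thres (equation (4))."""
    t = r / (1 - r) * p
    return (A * alpha / t) ** (1.0 / (alpha + 1))

def f_threshold(M, A, alpha):
    """Critical per-fact frequency f_thres ~ 1/M**(alpha+1) (equation (6))."""
    return A * alpha * M ** (-alpha - 1)

A, alpha = 100.0, 0.3
# Halving the mixing ratio r raises the model size needed to learn the facts.
print(m_threshold(r=0.01, p=1e-4, A=A, alpha=alpha))
print(m_threshold(r=0.005, p=1e-4, A=A, alpha=alpha))
# Doubling the model size lowers the threshold frequency by a factor 2**(alpha+1).
print(f_threshold(1e6, A, alpha) / f_threshold(2e6, A, alpha))
```

The last line checks the power-law exponent directly: the threshold frequency drops by exactly 2^{α+1} when the model size doubles, as equation (6) predicts.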
5.1 Experiments on Synthetic Biographies

We construct SynBio-10k-power-law, where 10k biographies are divided into 100 subsets of 100 individuals, with subset sampling probability following a power-law distribution (exponent 1.5). Within each subset, all biographies have a uniform sampling probability.

Figure 8: Validating the power-law relationship of threshold frequency and model size. (a) Threshold frequency of synthetic biographies across different model sizes (fitted slope: −1.152). (b) The scaling law for the validation loss on FineWeb-Edu with respect to model size (fitted: L_2 = 1.769 + 4.952·M^{−0.283}). (c) The threshold popularity for knowledge tested in PopQA vs. model size (Llama-2, Qwen2.5, Gemma-2). (a) & (b): Experiments on the mixture of SynBio-10k-power-law and FineWeb-Edu confirm that (1) the threshold frequency follows a power-law relationship with model size, and (2) the power-law exponent is approximately equal to the model scaling exponent plus one. (c): For the three open-source model families we examined, the threshold popularity for knowledge tested in PopQA also follows a power-law relationship with model size.

We then mix this
dataset with FineWeb-Edu using r = 0.01 and train models under this setup. To estimate the threshold frequency f_thres, we sort the subsets by sampling probability in descending order and identify the first group where model accuracy falls below a target value α_target. The frequency of biographies in this subset is used to approximate f_thres. We use α_target = 80%. As shown in Figure 8(a), log f_thres and log M exhibit a linear relationship, yielding a slope of 1.152. This value is larger than 1, as expected from our theory. Further, we ask whether this slope is indeed close to α + 1. Following the approach of Hoffmann et al. [2022], we fit a model scaling function for the FineWeb-Edu validation loss in Figure 8(b), obtaining α ≈ 0.283. This leads to a predicted exponent of 1.283, which is close to the observed value of 1.152.

5.2 Experiments on Knowledge Extracted from Wikipedia

We further evaluate models on PopQA [Mallen et al., 2023], which contains 14k QA pairs derived from Wikidata triplets, along with monthly page views for the corresponding Wikipedia articles. Since the knowledge tested in PopQA can be structured as triplets, we consider it homogeneous and expect it to exhibit similar threshold frequencies for a given model size.

Estimating the Threshold Frequency. Counting the frequency of specific knowledge in the pre-training data is challenging due to the scale [Kandpal et al., 2023]. Following Mallen et al. [2023], we use Wikipedia page views as a proxy for popularity, which is assumed to be roughly proportional to the frequency of the knowledge in web data. To estimate the threshold popularity P_thres, we identify the smallest popularity P such that the model's accuracy on knowledge with popularity above P meets the target accuracy α_target, which is set to 60% in our experiments. See Appendix C.5 for details.

Threshold frequency and model size follow a power law.
We examine base models from Llama-2 [Touvron et al., 2023], Qwen-2.5 [Qwen et al., 2024], and Gemma-2 [Team et al., 2024], which are likely trained on similar data mixtures within each family. Figure 8(c) reveals that log P_thres generally decreases linearly as log model size increases, though the slope varies across families due to differences in architecture and training data. We examine more model families in Appendix B.3.

6 Strategies to Enhance Knowledge Acquisition Under Low Mixing Ratios

Inspired by our theory, we propose two simple yet effective strategies to enhance knowledge acquisition under low mixing ratios, a common setting in practice, as a large r may harm general capabilities expected to be acquired from multiple other data sources. The key idea is to raise the frequency of each fact, thereby increasing the "marginal value" of the knowledge-dense data.

Figure 9: Our proposed strategies significantly boost knowledge acquisition under low mixing ratios while preserving models' general capability. (a) 410M, trained from scratch on FineWeb-Edu & SynBio-1.28M. (b) 410M, continually pre-trained on the Pile & WikiBio. (c) 1B, continually pre-trained on the Pile & SynBio-2.56M.

• Strategy 1: Random Subsampling: Randomly subsample the knowledge dataset.
• Strategy 2: Compact Knowledge Mixing (CKM): Rephrase the knowledge into a compact form and add the rephrased version to the original dataset while keeping the overall mixing ratio fixed.

We validate on both SynBio and a new real-world dataset, WikiBio, that these strategies significantly boost knowledge acquisition. For example, on WikiBio, subsampling and CKM improve the number of learned facts by 4 and 20 times, respectively. This is particularly surprising for subsampling, as it removes a significant proportion of the knowledge-dense data yet ends up with higher accuracy.

6.1 Real-World Knowledge Data: WikiBio

The WikiBio Dataset. To extend our study to a more realistic scenario, we curate WikiBio, a dataset containing Wikipedia biographies along with ten paraphrased versions of the first paragraph for 275k individuals, totaling 453M tokens. We ensure that the key information (name, occupation, and birth date) is mentioned within the first paragraph. This task is more challenging, as Wikipedia biographies comprise diverse texts without uniform formats, requiring the model to generalize to prompts that rarely have exact matches in the training data. See Appendix C.2.2 for full details.

Evaluation. We evaluate whether the model can recall a person's birth date as a proxy for how well it memorizes the person's information. Specifically, for a (name, occupation, birth date) triplet, we prompt the model with "The {occupation} {name} was born on" and consider the response correct if it includes the correct birth year and month in the generated text.
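This evaluation amounts to a substring check on the model's completion. A minimal sketch, where `generate` is a placeholder for any LM generation call (not the paper's code):

```python
# Sketch of the WikiBio recall check: prompt with occupation and name, then
# accept the completion if it contains the correct birth year and month.
# `generate` is a placeholder for an arbitrary LM generation function.

def is_correct(generate, name, occupation, birth_year, birth_month):
    prompt = f"The {occupation} {name} was born on"
    completion = generate(prompt)
    return str(birth_year) in completion and birth_month in completion

# Toy stand-in model that "knows" a single fact.
fake_lm = lambda prompt: " June 12, 1987, in a small town."
print(is_correct(fake_lm, "Jane Doe", "painter", 1987, "June"))   # True
print(is_correct(fake_lm, "Jane Doe", "painter", 1990, "June"))   # False
```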
The occupation is included not only to create out-of-distribution prompts but also to provide additional context.

6.2 Strategy 1: Random Subsampling

While random subsampling seems counterintuitive at first glance, it becomes reasonable once we consider how the threshold mixing ratio r_thres relates to the exposure frequency of each fact within the knowledge-dense dataset, denoted as p. For a dataset containing only S facts with uniform probability, p ∝ 1/S. We can derive from (5) that the threshold mixing ratio satisfies r_thres ∼ S/M^{α+1}. Subsampling reduces S and thus lowers the threshold mixing ratio, allowing the model to achieve much higher accuracy on the subsampled dataset. Below, we use ρ to denote the subsampling ratio.

Experimental Setup. We study both pre-training-from-scratch and continual pre-training setups. To evaluate the model's general capabilities, we use its validation loss on the web data (the Pile or FineWeb-Edu) and its zero-shot performance on five downstream tasks (see details in Appendix B.5). We compare the validation loss and average downstream performance to the model trained with r = 0 in the pre-training-from-scratch setup, or to the original Pythia model in the continual pre-training setup. A downstream performance drop of more than 2% is considered unacceptable.

Subsampling enables faster knowledge acquisition while maintaining general capability. We train 410M models from scratch on FineWeb-Edu mixed with SynBio-1.28M using r ∈ {0, 0.1, 0.2, 0.3} for a total of 32B tokens. As shown in Figures
9(a) and 11(a), increasing r degrades FineWeb-Edu validation loss and downstream accuracy, with performance becoming unacceptable at r = 0.3 (−2.09% accuracy, +0.05 loss), while SynBio accuracy remains near zero. In contrast, subsampling SynBio-1.28M to 25%, 50%, and 56.25% boosts SynBio accuracy to 23.53%, 37.46%, and 39.81%, respectively, while maintaining downstream performance within the acceptable range. Note that further increasing ρ to 62.5% makes the frequency of each biography too low, causing SynBio accuracy to drop back to near zero. See more details in Appendix C.6, Tables 1(b) and 2(a).

Consistent Results for Continual Pre-training. We continually pre-train the 410M and 1B Pythia models from their 100k-step checkpoints by mixing the Pile with WikiBio and SynBio-2.56M, respectively. The 410M models are trained for 32B tokens and the 1B models for 64B. Due to the distribution shift, the Pile validation loss may increase during training because of catastrophic forgetting [Ibrahim et al., 2024]. To preserve models' general capabilities, we apply early stopping when the Pile validation loss increases by 0.05 (410M model) or 0.03 (1B model), each corresponding to a ∼2% drop in downstream performance. As shown in Figures 9(b) and 11(b), without subsampling, r = 0.1 or 0.15 results in slow learning of WikiBio, while r = 0.2 triggers early stopping after 20B tokens, resulting in poor WikiBio performance. By contrast, subsampling WikiBio to 25% or 50% significantly accelerates knowledge acquisition and keeps the Pile validation loss acceptable. For example, for r = 0.1, setting ρ to 50% improves the number of learned facts by 4 times. Similar trends hold for 1B models: subsampling SynBio to 50% at r = 0.2 outperforms both r = 0.2 and early-stopped r = 0.4 without subsampling by ∼30%. See more details in Appendix C.6, Tables 1(c), 2(b) and 3.
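The subsampling effect can be reduced to a toy calculation. With S facts at uniform probability, p ∝ 1/S, so equation (5) gives r_thres ∼ S/M^{α+1}: keeping a fraction ρ of the facts lowers the threshold mixing ratio, at the cost of capping recall at ρS facts. Everything below, including the proportionality constant k, is illustrative rather than fitted to our experiments.

```python
# Toy model of the subsampling trade-off (Section 6.2).  With a uniform
# knowledge dataset of S facts, p ~ 1/S, so r_thres ~ S / M**(alpha + 1).
# In this all-or-nothing caricature, the model learns the kept facts only if
# the mixing ratio r clears the threshold.  k and all numbers are illustrative.

def facts_learned(S, rho, r, M, alpha, k):
    kept = rho * S                          # facts remaining after subsampling
    r_thres = k * kept / M ** (alpha + 1)   # threshold mixing ratio
    return kept if r >= r_thres else 0.0

S, M, alpha, r = 1_280_000, 4.1e8, 0.283, 0.2
for rho in (1.0, 0.5, 0.25):
    print(rho, facts_learned(S, rho, r, M, alpha, k=2e4))
# Keeping all facts clears nothing (r is below threshold); keeping half clears
# the threshold, so the model ends up knowing more despite seeing fewer facts.
```

The caricature reproduces the qualitative finding: shrinking the dataset can increase the number of facts learned, up to the point where each fact's frequency becomes too low.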
6.3 Strategy 2: Compact Knowledge Mixing (CKM)

The second strategy rephrases knowledge into compact forms (e.g., tuples) and adds them to the original dataset. Since the frequency of occurrence f for each fact is inversely proportional to the average token count needed to represent each fact, adding compact representations can push f above the threshold f_thres by reducing the average token count. For WikiBio, we compress the key information (name, birth date, and occupation) into a tuple format "Bio: N {name} B {birth date} O {occupation}", and add these tuples until their token count reaches τ times the total token count of the original dataset. We call τ the CKM ratio.

Experimental Setup. We apply CKM to WikiBio with the same continual pre-training setup as in Section 6.2. Each time the model encounters a tuple-form data point, the order of birth date and occupation is randomly flipped. We apply early stopping when the Pile validation loss increases by 0.05.

CKM significantly improves knowledge acquisition efficiency while preserving general capability. We explore CKM ratios τ ∈ {0.1, 0.3, 0.6} with r = 0.1. As shown in Figures 9(b) and 11(c), CKM preserves the general capability and consistently boosts knowledge acquisition. Notably, performance on WikiBio improves by 4 times when τ is only 0.1. Increasing τ to 0.3 further boosts the number of learned facts by 20 times. See downstream performance in Table 4.

7 Discussions and Future Directions

Extensions to reasoning tasks. This paper identifies phase transitions in knowledge acquisition under data mixing and provides information-theoretic explanations.
While we mainly focus on factual knowledge acquisition, we also conduct preliminary experiments on a simple reasoning task. We emphasize that memorization is also important for reasoning: without basic knowledge, models cannot reason effectively, as also noted by Ruis et al. [2024] and Xie et al. [2024]. For example, solving math problems requires memorizing theorems, definitions, and techniques. We leave extensions to more complex reasoning tasks for future work.

Connection to real-world data. Following Allen-Zhu and Li [2024a], we use SynBio as a proxy for knowledge-dense data due to its clean and uniform format, which enables controlled experiments and easy measurement of knowledge acquisition. In contrast, real-world datasets are often messier and more heterogeneous; for example, Wikipedia includes both simple facts (e.g., biographies) and more complex content (e.g., scientific theories). These types of knowledge vary in learning difficulty and may exhibit different threshold frequencies. As a result, phase transitions may not be as apparent when mixing a heterogeneous dataset with web text. Nevertheless, our experiments on PopQA and WikiBio confirm that the theoretical insights still hold in real-world settings.

References

Zeyuan Allen-Zhu and Yuanzhi Li. Physics of language models: Part 3.1, knowledge storage and extraction. arXiv preprint arXiv:2309.14316, 2023.

Zeyuan Allen-Zhu and Yuanzhi Li. Physics of language models: Part 3.3, knowledge capacity scaling laws. arXiv preprint arXiv:2404.05405, 2024a.

Zeyuan Allen-Zhu and Yuanzhi Li. Physics of language models: Part 3.2, knowledge manipulation, 2024b.

Alex Andonian, Quentin Anthony, Stella Biderman, Sid Black, Preetham Gali, Leo Gao, Eric Hallahan, Josh Levy-Kramer, Connor Leahy, Lucas Nestler, Kip Parker, Michael Pieler, Jason Phang, Shivanshu Purohit, Hailey Schoelkopf, Dashiell Stander, Tri Songz, Curt Tigges, Benjamin Thérien, Phil Wang, and Samuel Weinbach.
GPT-NeoX: Large Scale Autoregressive Language Modeling in PyTorch, September 2023. URL https://www.github.com/eleutherai/gpt-neox.

Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, et al. Pythia: A suite for analyzing large language models across training and scaling. In International Conference on Machine Learning, pages 2397–2430. PMLR, 2023.

Stella Biderman, USVSN Prashanth, Lintang Sutawika, Hailey Schoelkopf, Quentin Anthony, Shivanshu Purohit, and Edward Raff. Emergent and predictable memorization in large language models. Advances in Neural Information Processing Systems, 36, 2024.

Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. PIQA: Reasoning about physical commonsense in natural language. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7432–7439, 2020.

Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan Zhang. Quantifying memorization across neural language models. In The Eleventh International Conference on Learning Representations, 2023.

Hoyeon Chang, Jinho Park, Seonghyeon Ye, Sohee Yang, Youngkyung Seo, Du-Seong Chang, and Minjoon Seo. How do large language models acquire factual knowledge during pretraining? In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.

Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? Try ARC, the AI2 Reasoning Challenge. arXiv preprint arXiv:1803.05457, 2018.

Jeff Da, Ronan Le Bras, Ximing Lu, Yejin Choi, and
Antoine Bosselut. Analyzing commonsense emergence in few-shot knowledge models. In 3rd Conference on Automated Knowledge Base Construction, 2021.

Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.

Vitaly Feldman. Does learning require memorization? A short tale about a long tail. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, pages 954–959, 2020.

Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. The Pile: An 800GB dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027, 2020.

Leo Gao, Jonathan Tow, Baber Abbasi, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Alain Le Noac'h, Haonan Li, Kyle McDonell, Niklas Muennighoff, Chris Ociepa, Jason Phang, Laria Reynolds, Hailey Schoelkopf, Aviya Skowron, Lintang Sutawika, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot language model evaluation, July 2024.

Ce Ge, Zhijian Ma, Daoyuan Chen, Yaliang Li, and Bolin Ding. Data mixing made efficient: A bivariate scaling law for language model pretraining. arXiv preprint arXiv:2405.14908, 2024.

Gaurav Rohit Ghosal, Tatsunori Hashimoto, and Aditi Raghunathan. Understanding finetuning for factual knowledge extraction. In Proceedings of the 41st International Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Research, pages 15540–15558. PMLR, 21–27 Jul 2024.
Dirk Groeneveld, Iz Beltagy, Evan Walsh, Akshita Bhagia, Rodney Kinney, Oyvind Tafjord, Ananya Jha, Hamish Ivison, Ian Magnusson, Yizhong Wang, Shane Arora, David Atkinson, Russell Authur, Khyathi Chandu, Arman Cohan, Jennifer Dumas, Yanai Elazar, Yuling Gu, Jack Hessel, Tushar Khot, William Merrill, Jacob Morrison, Niklas Muennighoff, Aakanksha Naik, Crystal Nam, Matthew Peters, Valentina Pyatkin, Abhilasha Ravichander, Dustin Schwenk, Saurabh Shah, William Smith, Emma Strubell, Nishant Subramani, Mitchell Wortsman, Pradeep Dasigi, Nathan Lambert, Kyle Richardson, Luke Zettlemoyer, Jesse Dodge, Kyle Lo, Luca Soldaini, Noah Smith, and Hannaneh Hajishirzi. OLMo: Accelerating the science of language models. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Bangkok, Thailand, August 2024. Association for Computational Linguistics.

Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Thomas Hennigan, Eric Noland, Katherine Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karén Simonyan, Erich Elsen, Oriol Vinyals, Jack Rae, and Laurent Sifre. An empirical analysis of compute-optimal large language model training. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, pages 30016–30030. Curran Associates, Inc., 2022.

Jing Huang, Diyi Yang, and Christopher Potts. Demystifying verbatim memorization in large language models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 10711–10732, 2024.

Adam Ibrahim, Benjamin Thérien, Kshitij Gupta, Mats L Richter, Quentin Anthony, Timothée Lesort,
Eugene Belilovsky, and Irina Rish. Simple and scalable strategies to continually pre-train large language models. arXiv preprint arXiv:2403.08763, 2024.

Yiding Jiang, Allan Zhou, Zhili Feng, Sadhika Malladi, and J Zico Kolter. Adaptive data optimization: Dynamic sample selection with scaling laws. arXiv preprint arXiv:2410.11820, 2024.

Nikhil Kandpal, Haikang Deng, Adam Roberts, Eric Wallace, and Colin Raffel. Large language models struggle to learn long-tail knowledge. In International Conference on Machine Learning, pages 15696–15707. PMLR, 2023.

Feiyang Kang, Yifan Sun, Bingbing Wen, Si Chen, Dawn Song, Rafid Mahmood, and Ruoxi Jia. AutoScale: Automatic prediction of compute-optimal data composition for training LLMs. arXiv preprint arXiv:2407.20177, 2024.

Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020.

Denis Kocetkov, Raymond Li, Loubna Ben Allal, Jia Li, Chenghao Mou, Carlos Muñoz Ferrandis, Yacine Jernite, Margaret Mitchell, Sean Hughes, Thomas Wolf, et al. The Stack: 3 TB of permissively licensed source code. arXiv preprint arXiv:2211.15533, 2022.

Jeffrey Li, Alex Fang, Georgios Smyrnis, Maor Ivgi, Matt Jordan, Samir Gadre, Hritik Bansal, Etash Guha, Sedrick Keh, Kushal Arora, et al. DataComp-LM: In search of the next generation of training sets for language models. arXiv preprint arXiv:2406.11794, 2024.

R Li, LB Allal, Y Zi, N Muennighoff, D Kocetkov, C Mou, M Marone, C Akiki, J Li, J Chim, et al. StarCoder: May the source be with you! Transactions on Machine Learning Research, 2023.

Qian Liu, Xiaosen Zheng, Niklas Muennighoff, Guangtao Zeng, Longxu Dou, Tianyu Pang, Jing Jiang, and Min Lin. RegMix: Data mixture as regression for language model pre-training. arXiv preprint arXiv:2407.01492, 2024.
Xingyu Lu, Xiaonan Li, Qinyuan Cheng, Kai Ding, Xuanjing Huang, and Xipeng Qiu. Scaling laws for fact memorization of large language models. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, editors, Findings of the Association for Computational Linguistics: EMNLP 2024, pages 11263–11282, Miami, Florida, USA, November 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.findings-emnlp.658.

Ian Magnusson, Nguyen Tai, Ben Bogin, David Heineman, Jena D Hwang, Luca Soldaini, Akshita Bhagia, Jiacheng Liu, Dirk Groeneveld, Oyvind Tafjord, et al. DataDecide: How to predict best pretraining data with small experiments. arXiv preprint arXiv:2504.11393, 2025.

Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Daniel Khashabi, and Hannaneh Hajishirzi. When not to trust language models: Investigating effectiveness of parametric and non-parametric memories. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki, editors, Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9802–9822, Toronto, Canada, July 2023. Association for Computational Linguistics.

Eshaan Nichani, Jason D. Lee, and Alberto Bietti. Understanding factual recall in transformers via associative memories. In The Thirteenth International Conference on Learning Representations, 2025.

Team OLMo, Pete Walsh, Luca Soldaini, Dirk Groeneveld, Kyle Lo, Shane Arora, Akshita Bhagia, Yuling Gu, Shengyi Huang, Matt Jordan, Nathan Lambert, Dustin Schwenk, Oyvind Tafjord, Taira Anderson, David Atkinson, Faeze Brahman, Christopher Clark, Pradeep Dasigi, Nouha Dziri, Michal Guerquin, Hamish Ivison, Pang Wei Koh, Jiacheng
Liu, Saumya Malik, William Merrill, Lester James V. Miranda, Jacob Morrison, Tyler Murray, Crystal Nam, Valentina Pyatkin, Aman Rangapur, Michael Schmitz, Sam Skjonsberg, David Wadden, Christopher Wilhelm, Michael Wilson, Luke Zettlemoyer, Ali Farhadi, Noah A. Smith, and Hannaneh Hajishirzi. 2 OLMo 2 Furious, 2025.

Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. The LAMBADA dataset: Word prediction requiring a broad discourse context. arXiv preprint arXiv:1606.06031, 2016.

Keiran Paster, Marco Dos Santos, Zhangir Azerbayev, and Jimmy Ba. OpenWebMath: An open dataset of high-quality mathematical web text. In The Twelfth International Conference on Learning Representations, 2024.

Guilherme Penedo, Hynek Kydlíček, Loubna Ben Allal, Anton Lozhkov, Margaret Mitchell, Colin Raffel, Leandro von Werra, and Thomas Wolf. The FineWeb datasets: Decanting the web for the finest text data at scale. In The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2024.

Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. Language models as knowledge bases? arXiv preprint arXiv:1909.01066, 2019.

Qwen Team: An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. Qwen2.5 technical report, 2024.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu.
Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67, 2020.

Adam Roberts, Colin Raffel, and Noam Shazeer. How much knowledge can you pack into the parameters of a language model? arXiv preprint arXiv:2002.08910, 2020.

Laura Ruis, Maximilian Mozes, Juhan Bae, Siddhartha Rao Kamalakara, Dwarak Talupuru, Acyr Locatelli, Robert Kirk, Tim Rocktäschel, Edward Grefenstette, and Max Bartolo. Procedural knowledge in pretraining drives reasoning in large language models. arXiv preprint arXiv:2411.12580, 2024.

Kai Sun, Yifan Xu, Hanwen Zha, Yue Liu, and Xin Luna Dong. Head-to-tail: How knowledgeable are large language models (LLMs)? AKA will LLMs replace knowledge graphs? In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 311–325, 2024.

Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, Johan Ferret, Peter Liu, Pouya Tafti, Abe Friesen, Michelle Casbon, Sabela Ramos, Ravin Kumar, Charline Le Lan, Sammy Jerome, Anton Tsitsulin, Nino Vieillard, Piotr Stanczyk, Sertan Girgin, Nikola Momchev, Matt Hoffman, Shantanu Thakoor, Jean-Bastien Grill, Behnam Neyshabur, Olivier Bachem, Alanna Walton, Aliaksei Severyn, Alicia Parrish, Aliya Ahmad, Allen Hutchison, Alvin Abdagic, Amanda Carl, Amy Shen,
Andy Brock, Andy Coenen, Anthony Laforge, Antonia Paterson, Ben Bastian, Bilal Piot, Bo Wu, Brandon Royal, Charlie Chen, Chintu Kumar, Chris Perry, Chris Welty, Christopher A. Choquette-Choo, Danila Sinopalnikov, David Weinberger, Dimple Vijaykumar, Dominika Rogozińska, Dustin Herbison, Elisa Bandy, Emma Wang, Eric Noland, Erica Moreira, Evan Senter, Evgenii Eltyshev, Francesco Visin, Gabriel Rasskin, Gary Wei, Glenn Cameron, Gus Martins, Hadi Hashemi, Hanna Klimczak-Plucińska, Harleen Batra, Harsh Dhand, Ivan Nardini, Jacinda Mein, Jack Zhou, James Svensson, Jeff Stanway, Jetha Chan, Jin Peng Zhou, Joana Carrasqueira, Joana Iljazi, Jocelyn Becker, Joe Fernandez, Joost van Amersfoort, Josh Gordon, Josh Lipschultz, Josh Newlan, Ju yeong Ji, Kareem Mohamed, Kartikeya Badola, Kat Black, Katie Millican, Keelin McDonell, Kelvin Nguyen, Kiranbir Sodhia, Kish Greene, Lars Lowe Sjoesund, Lauren Usui, Laurent Sifre, Lena Heuermann, Leticia Lago, Lilly McNealus, Livio Baldini Soares, Logan Kilpatrick, Lucas Dixon, Luciano Martins, Machel Reid, Manvinder Singh, Mark Iverson, Martin Görner, Mat Velloso, Mateo Wirth, Matt Davidow, Matt Miller, Matthew Rahtz, Matthew Watson, Meg Risdal, Mehran Kazemi, Michael Moynihan, Ming Zhang, Minsuk Kahng, Minwoo Park, Mofi Rahman, Mohit Khatwani, Natalie Dao, Nenshad Bardoliwalla, Nesh Devanathan, Neta Dumai, Nilay Chauhan, Oscar Wahltinez, Pankil Botarda, Parker Barnes, Paul Barham, Paul Michel, Pengchong Jin, Petko Georgiev, Phil Culliton, Pradeep Kuppala, Ramona Comanescu, Ramona Merhej, Reena Jana, Reza Ardeshir Rokni, Rishabh Agarwal, Ryan Mullins, Samaneh Saadat, Sara Mc Carthy, Sarah Cogan, Sarah Perrin, Sébastien M. R.
Arnold, Sebastian Krause, Shengyang Dai, Shruti Garg, Shruti Sheth, Sue Ronstrom, Susan Chan, Timothy Jordan, Ting Yu, Tom Eccles, Tom Hennigan, Tomas Kocisky, Tulsee Doshi, Vihan Jain, Vikas Yadav, Vilobh Meshram, Vishal Dharmadhikari, Warren Barkley, Wei Wei, Wenming Ye, Woohyun Han, Woosuk Kwon, Xiang Xu, Zhe Shen, Zhitao Gong, Zichuan Wei, Victor Cotruta, Phoebe Kirk, Anand Rao, Minh Giang, Ludovic Peran, Tris Warkentin, Eli Collins, Joelle Barral, Zoubin Ghahramani, Raia Hadsell, D. Sculley, Jeanine Banks, Anca Dragan, Slav Petrov, Oriol Vinyals, Jeff Dean, Demis Hassabis, Koray Kavukcuoglu, Clement Farabet, Elena Buchatskaya, Sebastian Borgeaud, Noah Fiedel, Armand Joulin, Kathleen Kenealy, Robert Dadashi, and Alek Andreev. Gemma 2: Improving open language models at a practical size, 2024. Kushal Tirumala, Aram Markosyan, Luke Zettlemoyer, and Armen Aghajanyan. Memorization without overfitting: Analyzing the training dynamics of large language models. Advances in Neural Information Processing Systems , 35:38274–38290, 2022. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288 , 2023. Johannes Welbl, Nelson F Liu, and Matt Gardner. Crowdsourcing multiple choice science questions. arXiv preprint arXiv:1707.06209 , 2017. 15 Chulin Xie, Yangsibo Huang, Chiyuan Zhang, Da Yu, Xinyun Chen, Bill Yuchen Lin, Bo Li, Badih Ghazi, and Ravi Kumar. On memorization of large language models in logical reasoning. arXiv preprint arXiv:2410.23123 , 2024. Jiasheng Ye, Peiju Liu, Tianxiang Sun, Yunhua Zhou, Jun Zhan, and Xipeng Qiu. Data mixing laws: Optimizing data mixtures by predicting language modeling performance. arXiv preprint arXiv:2403.16952 , 2024. Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your
sentence? arXiv preprint arXiv:1905.07830, 2019.
Qihao Zhu, Daya Guo, Zhihong Shao, Dejian Yang, Peiyi Wang, Runxin Xu, Y. Wu, Yukun Li, Huazuo Gao, Shirong Ma, et al. Deepseek-coder-v2: Breaking the barrier of closed-source models in code intelligence. arXiv preprint arXiv:2406.11931, 2024.

Contents

1 Introduction  1
2 Experimental Setup  3
3 Phase Transitions of Knowledge Acquisition within Data Mixtures  4
3.1 Phase Transition in Model Size  4
3.2 Phase Transition in Mixing Ratio  4
3.3 Ablation Studies  5
3.4 Phase Transitions on Reasoning Tasks  5
4 Theoretical Analysis  6
4.1 High-Level Intuition  6
4.2 Problem Formulation  7
4.3 Warmup: Training Exclusively on Mixture of Facts  7
4.4 Data Mixing Induces Phase Transitions  8
5 Power-Law Relationship of Threshold Frequency and Model Size  8
5.1 Experiments on Synthetic Biographies  8
5.2 Experiments on Knowledge Extracted from Wikipedia  9
6 Strategies to Enhance Knowledge Acquisition Under Low Mixing Ratios  9
6.1 Real-World Knowledge Data: WikiBio  10
6.2 Strategy 1: Random Subsampling  10
6.3 Strategy 2: Compact Knowledge Mixing (CKM)  11
7 Discussions and Future Directions  11
A Related Works  19
B Additional Experimental Results  20
B.1 Phase Transition in Mixing Ratio for Larger Models  20
B.2 Additional Plots for Mitigation Strategies  20
B.3 Additional Results for Validating the Power-Law Relationship of Threshold Frequency and Model Size  20
B.4 Detailed Performance on SynBio  21
B.5 Detailed Downstream Performance  21
C Experimental Details  23
C.1 General Setup  23
C.2 Details of Dataset Construction  24
C.2.1 Constructing the SynBio Dataset  24
C.2.2 Constructing the WikiBio Dataset  25
C.3 Constructing the SlopeQA Dataset  27
C.4 Details of the Fitting Process  29
C.5 Details of Estimating the Threshold Popularity  29
C.6 Experimental Details for Strategies to Enhance Knowledge Acquisition Under Low Mixing Ratios  31
D Proofs of Theoretical Results  31
D.1 Convexity of the Best Achievable Loss  32
D.2 Proofs for the Warmup Case  32
D.3 Proofs for the Data Mixing Case  33

A Related Works

Knowledge Capacity Scaling Law. LLMs are typically trained on vast amounts of data rich in knowledge, and extensive studies have investigated how much knowledge LLMs can acquire from their training data. Pioneering studies [Petroni et al., 2019, Roberts et al., 2020, Da et al., 2021] demonstrate that LLMs can capture a substantial amount of knowledge, suggesting their potential as knowledge bases. To quantify the relationship between model size and knowledge storage, Allen-Zhu and Li [2024a] and Lu et al. [2024] discover a linear relationship between models' knowledge capacity and their parameter count by training LLMs on data containing only fixed-format knowledge for sufficiently long horizons. Later, Nichani et al. [2025] formally proved this linear relationship. In contrast, this paper examines the data mixing scenario and demonstrates that this linear scaling can be disrupted when the knowledge-dense dataset is mixed with vast amounts of web-scraped data. Another important factor is how frequently each piece of knowledge occurs in the training data.

Impact of Frequency on Knowledge Acquisition. This paper identifies phase transitions in knowledge acquisition within data mixtures with respect to model size and mixing ratio. Some relevant observations can be found in previous papers, but we take a more direct and systematic approach. Kandpal et al. [2023], Mallen et al. [2023], and Sun et al. [2024] find that LLMs can perform poorly on low-frequency knowledge. Ghosal et al. [2024] show that the frequency of knowledge in the pre-training data determines how well the model encodes it, which in turn influences its extractability after QA fine-tuning. Taking a more microscopic view, Chang et al. [2024] insert a few pieces of new knowledge during training and track their loss.
By fitting a forgetting curve, they conjecture that the model may fail to learn a piece of knowledge if its frequency falls below some threshold.

Memorization and Forgetting. Our findings also relate to prior observations on the memorization and forgetting behaviors of LLMs, but we explicitly characterize phase transitions in the context of data mixing. Carlini et al. [2023] show that memorization of training data follows a log-linear relationship with model size, the number of repetitions, and prompt length. Biderman et al. [2024] take a data-point-level perspective and demonstrate that it is difficult to predict whether a given data point will be memorized using a smaller or partially trained model. By injecting a few new sequences into the training data, Huang et al. [2024] find that a sequence must be repeated a non-trivial number of times to be memorized. By examining training dynamics, Tirumala et al. [2022] observe that memorization can occur before overfitting and that larger models memorize faster while forgetting more slowly. From a theoretical perspective, Feldman [2020] proves that memorization of training labels is necessary to achieve near-optimal generalization error for long-tailed data
distributions.

Scaling Laws for Data Mixing. LLM performance is significantly influenced by the mixing proportions of the training data from different domains. Our paper is related to a line of studies that optimize the mixing proportions by modeling LLM performance as a function of those proportions [Liu et al., 2024, Kang et al., 2024, Ye et al., 2024, Ge et al., 2024]. However, their datasets can be highly heterogeneous even within a single domain (e.g., OpenWebText, Pile-CC), whereas we focus on mixing a uniform, knowledge-dense dataset into web-scraped data.

B Additional Experimental Results

B.1 Phase Transition in Mixing Ratio for Larger Models

[Figure 10 plots: accuracy on SynBio vs. mixing ratio r for (a) Pythia-2.8B on SynBio-20k and (b) Pythia-6.9B on SynBio-10k.]
Figure 10: Phase transition in mixing ratio persists for larger models. We train Pythia-2.8B and Pythia-6.9B with 2B and 1B total training tokens, respectively. To ensure sufficient exposure to SynBio within these training horizons, we use smaller SynBio datasets (SynBio-20k for the 2.8B model and SynBio-10k for the 6.9B model) mixed with FineWeb-Edu.

B.2 Additional Plots for Mitigation Strategies

[Figure 11 plots: (a) knowledge accuracy vs. downstream average accuracy for a 410M model trained from scratch on the mixture of FineWeb-Edu and SynBio-1.28M, with and without subsampling (τ ∈ {25%, 50%, 56.25%, 62.5%}); (b) training trajectory (number of facts learned and Pile validation loss) when applying subsampling to WikiBio; (c) training trajectory when applying CKM to WikiBio.]
Figure 11: Additional plots for mitigation strategies.

B.3 Additional Results for Validating the Power-Law Relationship of Threshold Frequency and Model Size

[Figure 12 plot: non-embedding parameter count vs. threshold popularity on log-log axes for the Llama-2, Qwen2.5, Llama-3, Gemma-2, and OLMo families, with the fitted power law, its 95% prediction interval, and predicted sizes for GPT-3.5-Turbo (61B), GPT-4 (514B), GPT-4o (226B), and GPT-4o-mini (24B).]

In Figure 12, we relax the constraint of training on the same data mixture and investigate the overall trend between model size and Pthres. We add the Llama-3 [Dubey et al., 2024] family and evaluate both base and instruction-tuned models for all families, totaling 30 models. Interestingly, log model size and log Pthres also exhibit a linear relationship, with most models falling within the 95% prediction interval. We further use models from the OLMo [Groeneveld et al., 2024] family as a validation set, where predictions of the fitted power law closely match the ground truth.

Potential Application: Inferring the Size of Proprietary Models. The identified power-law relationship offers a potential method for estimating the size of proprietary models, such as the GPT series. As a preliminary attempt, we estimate the threshold popularity for GPT-3.5-Turbo, GPT-4, GPT-4o, and GPT-4o-mini. Applying the fitted power law yields size predictions of 61B, 514B, 226B, and 24B, respectively. The corresponding 95% confidence intervals are 12–314B, 80–3315B, 39–1313B, and 5–118B.

B.4 Detailed Performance on SynBio

In Table 1(a), we detail the accuracy of each attribute for 70M models trained on the mixture of FineWeb-Edu and SynBio-320k with r ∈ {0.2, 0.4, 0.8}, trained for 64B, 32B, and 16B tokens, respectively. We notice that the accuracy for birth date is lower than for the other attributes. This can be attributed to the difficulty of precisely recalling the combined day, month, and year, which together form a much larger domain than the other attributes. To maintain clarity and conciseness, we omit the detailed performance in the other 70M experiments, as this pattern persists across them. Furthermore, we present the detailed performance of 410M models on SynBio-1.28M, corresponding to Figure 9(a), in Table 1(b), and of 1B models on SynBio-2.56M, corresponding to Figure 9(c), in Table 1(c).

Table 1: Detailed performance on SynBio. We report the accuracy (%) for each attribute, averaged over five templates.

(a) 70M model, pre-trained from scratch on the mixture of FineWeb-Edu and SynBio-320k.

r             Birth date  Birth city  University  Major  Employer  Avg.
Random guess        0.00        0.50        0.33   1.00      0.38  0.44
0.2                 0.00        0.63        0.43   1.12      0.38  0.51
0.4                16.96       45.67       41.03  50.78     43.93  39.68
0.8                79.76       88.64       88.55  90.10     88.30  87.07

(b) 410M model, pre-trained from scratch on the mixture of FineWeb-Edu and SynBio-1.28M.

N       ρ (%)   r     Birth date  Birth city  University  Major  Employer  Avg.
Random guess                0.00        0.50        0.33   1.00      0.38  0.44
-       -       0.00        0.00        0.00        0.00   0.00      0.00  0.00
1.28M   100     0.1         0.00        0.42        0.33   1.01      0.21  0.39
1.28M   100     0.2         0.00        0.45        0.34   1.09      0.22  0.42
1.28M   100     0.3         0.00        0.49        0.35   1.14      0.25  0.45
320k    25      0.2        22.34       23.98       23.64  24.03     23.65  23.53
640k    50      0.2        27.97       39.66       38.51  41.50     39.68  37.46
720k    56.25   0.2        28.02       42.94       42.15  44.07     41.88  39.81
800k    62.5    0.2         0.01        1.16        0.85   3.19      0.89  1.22

(c) 1B model, continually pre-trained on the mixture of the Pile and SynBio-2.56M. Note that r = 0.4 is early-stopped because its Pile validation loss increases beyond the acceptable range.

N       ρ (%)   r     Training tokens (B)  Birth date  Birth city  University  Major  Employer  Avg.
Random guess                                     0.00        0.50        0.33   1.00      0.38  0.44
Pythia-1B-100k-ckpt                              0.00        0.00        0.00   0.00      0.00  0.00
2.56M   100     0.2   64                         0.01        0.46        0.33   0.98      0.21  0.39
2.56M   100     0.4   24                         0.05
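As a quick consistency check, each "Avg." entry in Table 1(a) should equal the mean of the five per-attribute accuracies. A minimal sketch, using the numbers copied from the table (the 0.01 tolerance is our choice, to absorb the two-decimal rounding of the table entries):

```python
# Recompute the "Avg." column of Table 1(a) from the per-attribute accuracies.
# Values are copied from the table; the tolerance accounts for rounding.
rows = {
    # label: ([birth date, birth city, university, major, employer], reported avg)
    "random": ([0.00, 0.50, 0.33, 1.00, 0.38], 0.44),
    "r=0.2":  ([0.00, 0.63, 0.43, 1.12, 0.38], 0.51),
    "r=0.4":  ([16.96, 45.67, 41.03, 50.78, 43.93], 39.68),
    "r=0.8":  ([79.76, 88.64, 88.55, 90.10, 88.30], 87.07),
}
for label, (attrs, reported) in rows.items():
    mean = sum(attrs) / len(attrs)
    assert abs(mean - reported) < 0.01, (label, mean, reported)
    print(f"{label}: recomputed avg {mean:.2f}, reported {reported:.2f}")
```

All four rows agree with the reported averages to within rounding, which also confirms the table was reconstructed faithfully.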
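The fitting procedure behind Figure 12 amounts to a linear least-squares fit in log-log space, inverted to read off a model size from a measured threshold popularity. A minimal sketch; the (N, Pthres) pairs below are illustrative placeholders, not the paper's measured values:

```python
import numpy as np

# Power-law fit between non-embedding parameter count N and threshold
# popularity P_thres: linear regression in log-log space, then inversion.
# The data points are illustrative placeholders, NOT the paper's measurements.
N = np.array([1e9, 3e9, 9e9, 27e9, 70e9])          # model sizes (params)
P = np.array([3.0e4, 1.1e4, 4.0e3, 1.5e3, 6.0e2])  # threshold popularities

# Fit log N = a * log P + b; np.polyfit returns [slope, intercept].
a, b = np.polyfit(np.log(P), np.log(N), deg=1)

def predict_size(p_thres):
    """Predicted parameter count for a model with threshold popularity p_thres."""
    return float(np.exp(a * np.log(p_thres) + b))

print(f"fitted exponent a = {a:.2f}")  # negative: larger models learn rarer facts
print(f"predicted size at P_thres=2e3: {predict_size(2e3) / 1e9:.1f}B")
```

The paper additionally reports 95% intervals around such predictions; with a least-squares fit those follow from the residual variance of the regression in log space.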