sources for future research in multimodal learning and temporal reasoning, enabling broader advances across a range of video understanding applications.

https://arxiv.org/abs/2505.18110v1

References

[1] Satanjeev Banerjee and Alon Lavie. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72, 2005.
[2] Sihan Chen, Xingjian He, Longteng Guo, Xinxin Zhu, Weining Wang, Jinhui Tang, and Jing Liu. VALOR: Vision-audio-language omni-perception pretraining model and dataset. arXiv preprint arXiv:2304.08345, 2023.
[3] Sihan Chen, Handong Li, Qunbo Wang, Zijia Zhao, Mingzhen Sun, Xinxin Zhu, and Jing Liu. VAST: A vision-audio-subtitle-text omni-modality foundation model and dataset. Advances in Neural Information Processing Systems, 36, 2024.
[4] Sanyuan Chen, Yu Wu, Chengyi Wang, Shujie Liu, Daniel Tompkins, Zhuo Chen, and Furu Wei. BEATs: Audio pre-training with acoustic tokenizers. 2022.
[5] Zesen Cheng, Sicong Leng, Hang Zhang, Yifei Xin, Xin Li, Guanzheng Chen, Yongxin Zhu, Wenqi Zhang, Ziyang Luo, Deli Zhao, et al. VideoLLaMA 2: Advancing spatial-temporal modeling and audio understanding in video-LLMs. arXiv preprint arXiv:2406.07476, 2024.
[6] Konstantinos Drossos, Samuel Lipping, and Tuomas Virtanen. Clotho: An audio captioning dataset. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 736–740. IEEE, 2020.
[7] Chaoyou Fu, Yuhan Dai, Yongdong Luo, Lei Li, Shuhuai Ren, Renrui Zhang, Zihan Wang, Chenyu Zhou, Yunhang Shen, Mengdan Zhang, et al. Video-MME: The first-ever comprehensive evaluation benchmark of multi-modal LLMs in video analysis. arXiv preprint arXiv:2405.21075, 2024.
[8] Jiyang Gao, Chen Sun, Zhenheng Yang, and Ram Nevatia. TALL: Temporal activity localization via language query. In Proceedings of the IEEE International Conference on Computer Vision, pages 5267–5275, 2017.
[9] Tiantian Geng, Teng Wang, Jinming Duan, Runmin Cong, and Feng Zheng. Dense-localizing audio-visual events in untrimmed videos: A large-scale benchmark and baseline. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22942–22951, 2023.
[10] Tiantian Geng, Jinrui Zhang, Qingni Wang, Teng Wang, Jinming Duan, and Feng Zheng. LongVALE: Vision-audio-language-event benchmark towards time-aware omni-modal perception of long videos. arXiv preprint arXiv:2411.19772, 2024.
[11] Yongxin Guo, Jingyu Liu, Mingda Li, Xiaoying Tang, Qingbin Liu, and Xi Chen. TRACE: Temporal grounding video LLM via causal event modeling, 2024.
[12] Yuchen Guo, Linchao Liu, Xin Li, and Ping Luo. VTG-LLM: Efficient temporal grounding in long videos with compressed visual cues. arXiv preprint arXiv:2401.07684, 2024.
[13] Fabian Caba Heilbron and Juan Carlos Niebles. ActivityNet captions: A dense-captioning dataset for evaluating understanding of complex video activities. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5246–5255, 2015.
[14] Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. Localizing moments in video with natural language. In Proceedings of the IEEE International Conference on Computer Vision, pages 5803–5812, 2017.
[15] Junjie Huang, Ming Wu, Linchao Li, Yi Zhu, Yu Chen, Siwei Yan, Yi Liu, and Ping Luo. VTimeLLM: Compression of time into a latent embedding for efficient video-language modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16694–16704, 2023.
[16] Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. OpenAI o1 system card. arXiv preprint arXiv:2412.16720, 2024.
[17] Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. Mistral 7B, 2023.
[18] Ranjay Krishna, Kenji Hata, Frederic Ren, Li Fei-Fei, and Juan Carlos Niebles. Dense-captioning events in videos. In Proceedings of the IEEE International Conference on Computer Vision, pages 706–715, 2017.
[19] Jiabo Lei, Licheng Yu, Mohit Bansal, and Tamara L Berg. TallyQA: Answering complex counting questions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14168–14178, 2021.
[20] Chin-Yew Lin and Franz Josef Och. Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), pages 605–612, 2004.
[21] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning, 2023.
[22] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, 2002.
[23] Yiyuan Qian, Linchao Liu, Xin Li, and Ping Luo. Momentor: Advancing video understanding with temporal reasoning in large language models. arXiv preprint arXiv:2401.03923, 2024.
[24] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021.
[25] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021.
[26] Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. Robust speech recognition via large-scale weak supervision, 2022.
[27] Junjie Ren, Can Li, Ming Zhao, Jinhui Liu, Junchi Yang, and Jian Wang. TimeChat: A time-aware large language model for video question answering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16726–16736, 2023.
[28] Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. VideoCLIP: Contrastive pre-training for zero-shot video-text understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16632–16642, 2022.
[29] Cassia Valentini-Botinhao et al. Noisy speech database for training speech
enhancement algorithms and TTS models. University of Edinburgh, School of Informatics, Centre for Speech Technology Research (CSTR), 2017.
[30] Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. CIDEr: Consensus-based image description evaluation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4566–4575, 2015.
[31] Hao Wang, Linchao Liu, Xin Li, and Ping Luo. HawkEye: Visually explainable reasoning in video question answering. arXiv preprint arXiv:2401.04705, 2024.
[32] Yi Wang, Kunchang Li, Xinhao Li, Jiashuo Yu, Yinan He, Guo Chen, Baoqi Pei, Rongkun Zheng, Jilan Xu, Zun Wang, et al. InternVideo2: Scaling video foundation models for multimodal video understanding. arXiv preprint arXiv:2403.15377, 2024.
[33] Yongliang Wu, Xinting Hu, Yuyang Sun, Yizhou Zhou, Wenbo Zhu, Fengyun Rao, Bernt Schiele, and Xu Yang. Number it: Temporal grounding videos like flipping manga. arXiv preprint arXiv:2411.10332, 2024.
[34] Jin Xu, Zhifang Guo, Jinzheng He, Hangrui Hu, Ting He, Shuai Bai, Keqin Chen, Jialin Wang, Yang Fan, Kai Dang, Bin Zhang, Xiong Wang, Yunfei Chu, and Junyang Lin. Qwen2.5-Omni technical report. arXiv preprint arXiv:2503.20215, 2025.
[35] An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.
[36] Hang Zhang, Xin Li, and Lidong Bing. Video-LLaMA: An instruction-tuned audio-visual language model for video understanding. arXiv preprint arXiv:2306.02858, 2023.
[37] Yuanhan Zhang, Jinming Wu, Wei Li, Bo Li, Zejun Ma, Ziwei Liu, and Chunyuan Li. Video instruction tuning with synthetic data, 2024.
[38] Bolei Zhou, Yu Guo, Meng Zhang, Xiongwei Wang, Siyuan Pu, Yuan Wu, Yan Zhang, Yan Wang, and Li Li. Recent advances in video understanding. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023.

Contents of Appendix

A Implementation Details
  A.1 Training Recipe
  A.2 Datasets
  A.3 Detailed training settings
B Data Format
  B.1 Training Generator and Judger
  B.2 Training data format
C Additional Experiments
  C.1 Ablation study on Segment Captioning
  C.2 Omni-modal dataset & General understanding dataset
  C.3 Pseudo-code of Query-Based Connector
D Case studies on different scenarios

A Implementation Details

A.1 Training Recipe

Our training process follows a structured three-stage approach (Feature Alignment, Connector Generalization, and Instruction Tuning) to progressively equip the model with strong multimodal and temporal reasoning capabilities. The specific components trained at each stage are illustrated in Figure 4.

Feature Alignment. In the first stage, only the Query-Based Connector and LM Head are set as trainable, while all other components remain frozen. This phase employs single-modality inputs, enabling the model to gain an initial understanding of each of the three modalities individually. It also helps the model learn to assign weights effectively in the absence of multimodal context, promoting a focused grasp of modality-specific features without interference from other components.

Connector Generalization. During the second stage, we incorporate mixed-modality data and allow training for the Query-Based Connector, Time Encoder, Time Head, and LM Head, while keeping the LLM backbone fixed. This phase equips the connector to handle weight allocation across multiple modalities, thereby enhancing its generalization beyond isolated modalities. Simultaneously, training the Time Encoder and Time Head introduces the model to temporal structure, laying the groundwork for capturing inter-modality dynamics over time.

Instruction Tuning.
In the final stage, we freeze the Query-Based Connector and train the remaining components (the Time Encoder, Time Head, LM Head, and the LLM backbone) using mixed-modality inputs. By keeping the connector fixed, we retain the modality alignment learned in previous stages. This step concentrates on refining temporal reasoning and language understanding, strengthening the LLM's ability to interpret and process multimodal, temporally sensitive queries across diverse scenarios.

A.2 Datasets

In alignment with the objectives of the three-stage training process outlined above, we employ different datasets and data volumes at each stage. The overarching goal is to enhance the model's capacity for temporal video understanding while retaining robust general video understanding abilities. An overview of the datasets used in each stage is provided in Table 6.

Table 6: Datasets and sample sizes used across the three training stages.

Stage   | Datasets                                                               | Total Quantity
Stage 1 | Clotho [6], LLaVA-LCS558K [21], Valentini-Botinhao Speech Dataset [29] | 600K
Stage 2 | TriSense-2M (880K), LLaVA-Video-178K (120K) [37]                       | 1M
Stage 3 | TriSense-2M (1.12M), LLaVA-Video-178K (380K) [37]                      | 1.5M
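Under this schedule, each stage toggles a different subset of modules. A minimal sketch of such a freeze/unfreeze schedule follows; the module names are illustrative placeholders, not the actual TriSense implementation:

```python
# Illustrative sketch of the three-stage freeze/unfreeze schedule in A.1.
# Module names are hypothetical placeholders, not the actual TriSense code.

class Param:
    """Stand-in for a framework parameter with a requires_grad flag."""
    def __init__(self):
        self.requires_grad = False

# Trainable modules per stage, following the description above:
# encoders stay frozen throughout; the connector is frozen again in Stage 3.
STAGE_TRAINABLE = {
    1: {"connector", "lm_head"},
    2: {"connector", "time_encoder", "time_head", "lm_head"},
    3: {"time_encoder", "time_head", "lm_head", "llm"},
}

def apply_stage(modules, stage):
    """Freeze everything, then unfreeze only this stage's modules."""
    trainable = STAGE_TRAINABLE[stage]
    for name, params in modules.items():
        for p in params:
            p.requires_grad = name in trainable

model = {name: [Param(), Param()]
         for name in ("vision_enc", "audio_enc", "speech_enc", "connector",
                      "time_encoder", "time_head", "lm_head", "llm")}
apply_stage(model, 3)  # Stage 3: connector frozen, LLM backbone unfrozen
```

In a real training framework the same idea applies directly to each module's parameters, with the optimizer built only over the parameters left trainable.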
Stage 1. For the initial stage, we use a combination of Clotho [6], LLaVA-LCS558K [21], and the Valentini-Botinhao Speech Dataset [29] as the training data:

• Clotho is an audio captioning dataset containing 4,981 audio clips, each paired with five unique captions, totaling 24,905 annotations. The audio clips range from 15 to 30 seconds, and each caption consists of 8 to 20 words.

• LLaVA-LCS558K is a concept-balanced multimodal dataset comprising 558,000 image-text pairs, annotated using BLIP-generated captions. It is designed to support feature alignment during the pretraining of vision-language models.

• Valentini-Botinhao Speech Dataset is a parallel corpus of clean and noisy speech recordings. It is widely used in training and evaluating speech enhancement and text-to-speech (TTS) systems, featuring 48 kHz audio from multiple speakers under various noise conditions.

Stages 2 and 3. For these stages, we adopt our newly proposed TriSense-2M dataset, applying a 9:1 training-to-testing split. This results in 1.9 million training samples and 0.1 million test samples. The training data is further partitioned into approximately 880K samples for Stage 2 and 1.12M samples for Stage 3. To ensure the model also retains general video understanding capabilities, we supplement the training data with a portion of LLaVA-Video-178K [37], which includes video captioning, open-ended QA, and multiple-choice QA tasks. This mixed-task dataset helps the model develop broader understanding skills beyond temporal reasoning.

To avoid excessive evaluation time, we extract 11,415 challenging samples from the 0.1M test set using two filtering criteria:

1) The majority of events should occur in the middle portion of the video rather than at the beginning.
2) Captions must contain at least 20 words.

Evaluation is conducted on a single A100 SXM4 80GB GPU with a batch size of 1, requiring approximately 8–10 hours to complete.
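The two filtering criteria can be sketched as a simple predicate. The 0.25–0.75 bounds for the "middle portion" and the argument layout are assumptions for illustration; the text does not give the exact definition:

```python
# Illustrative sketch of the two test-set filtering criteria in A.2.
# The 0.25-0.75 "middle portion" bounds are an assumption, not the
# paper's exact rule.

def is_challenging(caption, event_start, event_end, video_duration,
                   middle=(0.25, 0.75), min_words=20):
    """Keep samples whose event midpoint falls in the middle of the video
    and whose caption contains at least `min_words` words."""
    midpoint = (event_start + event_end) / 2 / video_duration
    in_middle = middle[0] <= midpoint <= middle[1]
    long_enough = len(caption.split()) >= min_words
    return in_middle and long_enough

caption = " ".join(["word"] * 21)                   # a 21-word caption
ok = is_challenging(caption, 300.0, 330.0, 700.0)   # event sits mid-video
bad = is_challenging(caption, 0.0, 10.0, 700.0)     # event at the very start
```

Applying such a predicate over the 0.1M held-out samples yields the challenging subset used for evaluation.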
All models in the comparison are evaluated on this same test subset, using their officially recommended hyperparameters (e.g., number of frames, temperature, top-p, etc.).

A.3 Detailed training settings

Our multimodal framework incorporates dedicated encoders for each modality. For the visual modality, we adopt openai/clip-vit-large-patch14-336 [25]; for the audio and speech modalities, we employ BEATs_iter3+ (AS2M) (cpt2) [4] and Whisper-large-V3 [26], respectively. As the large language model (LLM) backbone, we select Mistral-7B [17], initialized from TRACE [11], rather than other LLM backbones. This choice is motivated by TRACE's prior training on large-scale temporal understanding data, which equips it with stronger temporal reasoning capabilities. The maximum context length is configured to 4096 tokens.

Table 7: Training configurations and hyperparameters by stage.

Settings          | Stage 1                    | Stage 2 & Stage 3
Computation       | 4 × A100 SXM4-80GB         | 16 × A100 SXM4-80GB
Vision Encoder    | clip-vit-large-patch14-336 | clip-vit-large-patch14-336
Audio Encoder     | BEATs_iter3+ (AS2M) (cpt2) | BEATs_iter3+ (AS2M) (cpt2)
Speech Encoder    | Whisper-large-V3           | Whisper-large-V3
DeepSpeed Stage   | ZeRO-2 Offload             | ZeRO-2 Offload
LLM Backbone      | Mistral-7B-v0.2            | Mistral-7B-v0.2
Batch Size        | 512                        | 128 & 256
Num Frames        | 1                          | 64
Frame Sample      | Uniform                    | Uniform
Train Epochs      | 2                          | 1
Learning Rate     | 1e-3                       | 5e-6
LR Scheduler      | Cosine                     | Cosine
Model Max Length  | 4096                       | 4096
Training Duration | 10 hours                   | 3.5 days & 7 days

During training, videos are resampled at 1 frame per second (fps) to improve efficiency; this
step is omitted during inference to retain full fidelity. This resampling reduces input redundancy and accelerates training. In Stage 1, we train the model with a batch size of 512 and single-frame input, completing within 10 hours on 4 × A100 SXM4-80GB GPUs. Stages 2 and 3 are conducted on 16 × A100 SXM4-80GB GPUs with batch sizes of 128 and 256, respectively. Stage 2 requires approximately 3.5 days to complete, and Stage 3 requires 7 days. We use DeepSpeed ZeRO-2 because the BEATs model does not function properly under ZeRO-3 settings. Further details on datasets and hyperparameters are provided in Table 7.

B Data Format

B.1 Training Generator and Judger

This section describes the data generation and manual filtering process used to prepare training data for both the Generator and the Judger. To ensure efficiency and quality in omni-modal caption generation, we first utilize GPT-o1 [16] to produce high-quality annotation samples for Supervised Fine-Tuning (SFT). The specific prompts used for this process are shown in Figure 5. These prompts serve a dual purpose: they are used to generate training data via GPT and also function as system prompts during the SFT of both the Generator and the Judger.

To further enhance caption quality, we implement a two-stage scoring mechanism. After captions are generated, GPT conducts a self-evaluation. Then, a separate GPT instance provides an additional evaluation to filter out low-quality samples. Following this automated scoring, we conduct manual sampling to verify consistency and ensure a high quality standard is met. Data is generated in batches of 1,000 samples. From each batch, we randomly select 500 samples for manual review to evaluate the generated content and the reliability of GPT's scoring. If over 80% of the reviewed samples meet our quality criteria, the batch is retained; otherwise, it is discarded. Ultimately, we curate 10,000 training samples for the Generator and 3,000 samples for the Judger.
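The batch-level retention rule above can be sketched as follows; `passes_review` is a hypothetical stand-in for the human reviewer's judgment:

```python
import random

# Sketch of the batch retention rule in B.1: for each batch of 1,000
# generated samples, review 500 at random and keep the batch only if
# over 80% pass. `passes_review` is a hypothetical stand-in for the
# human reviewer's decision.

def accept_batch(batch, passes_review, n_review=500, threshold=0.8, seed=0):
    rng = random.Random(seed)
    reviewed = rng.sample(batch, min(n_review, len(batch)))
    pass_rate = sum(passes_review(s) for s in reviewed) / len(reviewed)
    return pass_rate > threshold

# Toy batches: a ~90%-good batch is retained, a ~50%-good batch is discarded.
good_batch = [{"ok": i % 10 != 0} for i in range(1000)]
poor_batch = [{"ok": i % 2 == 0} for i in range(1000)]
kept = accept_batch(good_batch, lambda s: s["ok"])
dropped = accept_batch(poor_batch, lambda s: s["ok"])
```

Reviewing a fixed-size random subset rather than the full batch keeps the manual cost bounded while the 80% threshold still rejects systematically poor batches.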
Both models are trained for 3 epochs to establish effective captioning and judging capabilities.

You are a helpful assistant designed to output CAPTIONS and JSON. Given three distinct captions describing audio, visual, and speech scenarios, please generate omni-modal captions for the following combinations: Audio-Visual-Speech (AVS), Visual-Speech (VS), and Audio-Visual (AV).

Input Examples:
Audio: {audio_caption}
Visual: {visual_caption}
Speech: {speech_caption}

You should generate captions according to the following rules:
1. The generated captions must preserve all essential information from the original captions.
2. Do not introduce any content that is not present in the original captions.
3. Do not copy verbatim from the original captions; instead, paraphrase the key information and incorporate it into the new caption.
4. Each generated caption must not exceed 200 words.

When you generate captions, assign a score for yourself from 1 to 5. If you think the quality is very poor, assign a score of 1. If the generated captions satisfy approximately 80% of the above criteria, assign a score of 3. If they fully satisfy all criteria (100%), assign a score of 5. Please think carefully and provide your answer in JSON format as follows: { "AVS": "", "AV": "", "VS": "", "Score":
""}. Note that only one caption should be provided for each of AVS, AV, and VS. You MUST respond in JSON format.You are a helpful assistant designed to evaluate caption quality. Given three original captions and three generated omni-modal captions, please assess the quality of the generated captions based on the following criteria: 1. The generated captions must include all key information from the original captions. 2. No new content may be introduced beyond what is present in the original captions. 3. The generated captions must not copy text directly from the original captions; instead, they should paraphrase and incorporate the essential information. 4. Each generated caption must not exceed 200 words. Input Examples: Audio: {audio_caption} Visual: {visual_caption} Speech: {speech_caption} AVS: {audio-visual-speech_caption} VS: {visual-speech_caption} AV: {audio-visual_caption} Assign a score from 1 to 5. If the quality is very poor, assign a score of 1. If the generated captions satisfy approximately 80% of the above criteria, assign a score of 3. If they fully satisfy all criteria (100%), assign a score of 5. Please analyze carefully and provide your evaluation in the following JSON format: { "AVS": "", "AV": "", "VS": "", "Score": "" }Note that you should provide only one score and one caption for each of AVS, AV, and VS. You MUST respond in JSON format. Figure 5: Prompts used for training the Generator and Judger. The left prompt guides GPT in generating omni-modal captions for the Generator using audio, visual, and speech inputs. The right prompt is used to train the Judger by instructing GPT to assess the quality of generated captions based on coverage, accuracy, and paraphrasing. During data creation, samples are randomly selected and manually filtered to ensure high-quality training data. 14 B.2 Training data format This section outlines the data format used for training TriSense. 
We adopt a ShareGPT-style format, where each training sample consists of 8 conversation rounds, each corresponding to a different modality combination. The tasks and settings within these rounds are randomized; for instance, one round might involve a VS-SC task, while the next could be an AVS-SC or V-MR task. Following the approach in TRACE, we use special tokens such as ⟨sync⟩ and ⟨time⟩ to signal the model to switch between different prediction heads. An example of the data structure is illustrated in Figure 6.

{
  "video": "zEqicUE2R0I.mp4",
  "conversations": [
    {
      "from": "human",
      "value": "<video>\nFrom 1070.8 to 1081.9, please detail the visual-sound event, highlighting any essential visuals and sounds."
    },
    {
      "from": "gpt",
      "value": "<sync><time><time><time><time><time><time><time><time><time><time><time><time><time><time>A man is passionately speaking on stage while the audience laughs and talks in the theater. The crowd is engaged and responsive."
    },
    {
      "from": "human",
      "value": "Could you provide a summary of the video between [1070.8, 1081.9], focusing on visual, audio, and speech elements?"
    },
    {
      "from": "gpt",
      "value": "<sync><time><time><time><time><time><time><time><time><time><time><time><time><time><time>A person is performing on stage, and the audience is laughing and talking in a theater, but the speech subtitle mentions that the person has to hide because they look like a rat from the movie Ratatouille."
    },
    {
      "from": "human",
      "value": "<sync><time><time><time><time><time><time><time><time><time><time><time><time><time><time>Please find the precise time in seconds when the content described
as 'a man is singing on stage' takes place in the video, and report this timestamp."
    },
    {
      "from": "gpt",
      "value": "<sync><time><time><time><time><time><time><time><time><time><time><time><time><time><time>a man is singing on stage"
    }
    <Remaining conversations......>
  ],
  "times": [[1070.8, 1081.9], [1070.8, 1081.9], [1070.8, 1081.9], <remaining times......>]
}

Dataset format example (ShareGPT style).

Figure 6: Example of ShareGPT-style annotation format used during training. Each sample includes multi-turn conversations over video segments with synchronized modality cues. Only the first three rounds are shown due to space limitations.

C Additional Experiments

C.1 Ablation study on Segment Captioning

We present ablation results for the Segment Captioning task in Table 8. In the Training Stages section, we observe that even though Stage 1 does not involve temporal modeling, the model still demonstrates some effectiveness on segment captioning. This is attributed to the modality-text alignment enabled by paired training, which facilitates basic understanding across modalities. After Stage 2, where the training incorporates the Query-Based Connector, the temporal module, and the LM head (with the LLM backbone still frozen), the model reaches about 70–80% of the full model's performance, indicating the benefit of temporal component training.

In the Connector ablation, removing adaptive weighting leads to degraded performance. The Addition variant and the first three configurations of the Fixed Weights setup show weak results, with Addition performing the worst. However, similar to findings in the Moment Retrieval task, assigning a fixed weight of 1 to visual tokens in visual-only settings produces a minor performance boost.

In the Frame Number analysis, increasing the number of input frames yields moderate gains in most cases, echoing the trend seen in Section 5.2. However, this improvement is relatively limited.
A potential explanation lies in the inherent differences between Segment Captioning and Moment Retrieval. While both require temporal reasoning, Segment Captioning relies more on spatial information, and increasing frames may introduce spatial redundancy that clashes with the model's 64-frame training setup. Conversely, Moment Retrieval benefits more from longer temporal context, where additional frames enrich sequential understanding, resulting in more substantial performance improvements.

Table 8: Ablation results on Segment Captioning. We analyze the effects of training stages, connector design, and input frame count across four modality settings (AVS, VS, AV, V). Metrics include BLEU-4 (B), METEOR (M), ROUGE-L (R), and CIDEr (C), reported in that order for each setting.

Model         | Frames | AVS-SC (B/M/R/C)   | VS-SC (B/M/R/C)    | AV-SC (B/M/R/C)    | V-SC (B/M/R/C)
Training Stages
Stage1 Only   | 64     | 0.0/1.5/4.3/0.1    | 0.0/1.4/4.1/0.1    | 0.0/1.3/4.1/0.1    | 0.0/1.3/4.0/0.1
Stage1+2      | 64     | 2.1/9.3/19.8/6.6   | 1.8/8.6/20.2/6.7   | 3.0/11.1/21.5/8.2  | 5.3/6.2/11.8/13.8
Connector
Addition      | 64     | 1.6/9.9/18.0/5.8   | 1.8/9.1/19.3/5.8   | 3.8/11.2/22.0/11.5 | 5.7/11.1/28.5/21.9
Fixed Weights | 64     | 3.1/9.8/19.4/6.6   | 2.4/9.3/20.7/7.9   | 4.3/11.8/26.1/15.3 | 7.4/12.7/30.6/36.7
Frame Number
TriSense (7B) | 32     | 3.2/9.7/19.9/7.7   | 2.1/9.3/19.5/7.9   | 3.4/11.1/22.5/9.6  | 6.3/11.7/29.8/29.2
TriSense (7B) | 64     | 3.4/10.1/20.1/8.3  | 3.0/10.0/22.2/11.8 | 5.3/12.2/26.3/15.4 | 7.3/12.6/30.7/36.3
TriSense (7B) | 128    | 3.4/10.2/20.2/8.5  | 3.1/9.9/22.8/11.5  | 5.4/12.3/26.7/15.4 | 7.3/12.8/30.8/36.1

C.2 Omni-modal dataset & General understanding dataset

We conduct additional experiments on the public omni-modal benchmark LongVALE [10]. LongVALE is designed for event understanding across vision, audio, and language modalities, comprising 105,000 omni-modal events with precise temporal annotations and relation-aware captions, collected from 8,400 high-quality long-form videos. The Omni-VTG and Omni-SC task names come from the official LongVALE report; they correspond to AVS-MR and AVS-SC in our paper. As summarized in Table 9, our zero-shot performance on the Moment Retrieval task is comparable to LongVALE's, even though their model is trained on the same dataset. Although there is a larger gap on the Segment Captioning task, we believe this is due to differences in captioning style between our SC training data and LongVALE's SC data. Such differences in caption patterns can lead to noticeable drops across all four evaluation metrics.

Table 9: Performance on the public omni-modal benchmark LongVALE [10]. "*" indicates the model is trained on the LongVALE dataset. ZS and FT denote zero-shot and fine-tuned, respectively. The best and second-best results are highlighted in bold and underlined, respectively.
Model             | Omni-VTG (AVS-MR): R@0.3 / R@0.5 / R@0.7 / mIoU | Omni-SC (AVS-SC): B / M / R / C
VideoChat (ZS)    | 2.2 / 0.9 / 0.4 / 3.0                           | 0.5 / 9.6 / 0.0 / 8.2
VideoChatGPT (ZS) | 4.9 / 2.9 / 0.9 / 5.0                           | 0.4 / 14.0 / 0.9 / 5.9
VideoLLaMA (ZS)   | 2.5 / 1.1 / 0.3 / 1.9                           | 0.9 / 11.5 / 0.1 / 8.9
PandaGPT (ZS)     | 2.5 / 1.0 / 0.3 / 2.2                           | 0.6 / 14.9 / 0.3 / 8.9
NExT-GPT (ZS)     | 4.3 / 1.9 / 0.7 / 4.0                           | 0.4 / 10.2 / 0.0 / 8.1
TimeChat (ZS)     | 5.8 / 2.6 / 1.1 / 5.2                           | 1.2 / 16.1 / 1.6 / 10.0
VTimeLLM (ZS)     | 7.5 / 3.4 / 1.3 / 6.4                           | 1.0 / 14.5 / 1.6 / 5.5
LongVALE (FT)*    | 15.7 / 8.6 / 3.9 / 11.0                         | 5.6 / 22.4 / 20.3 / 10.9
TriSense (ZS)     | 14.8 / 9.3 / 4.7 / 11.2                         | 4.8 / 21.9 / 18.8 / 10.4

For general video understanding evaluation, we report results on Video-MME [7], a large-scale benchmark designed to assess multimodal large language models (MLLMs) in video analysis. Video-MME covers a wide range of visual domains, temporal scales, and modalities, including 900 videos (totaling 254 hours) and 2,700 human-annotated QA pairs. As shown in Table 10, our model not only demonstrates significant advantages in multimodal scenarios but also performs competitively on general understanding tasks. It is worth noting that our model uses much less general understanding data (500K) than TRACE-uni [11], which uses 0.9M, as reported in their official paper.

Table 10: Zero-shot performance on the general understanding benchmark Video-MME [7]. Our model uses only 55% of the general understanding data compared to TRACE-uni.

Model            | Video-MME (Overall Score w/o Subtitles)
VideoChat2 (7B)  | 33.7
Video-LLaVA (7B) | 39.9
VideoLLaMA2 (7B) | 46.6
TRACE (7B)       | 43.8
TRACE-uni (7B)   | 49.6
TriSense (7B)    | 48.7

C.3 Pseudo-code of Query-Based Connector

Algorithm 1 Forward Pass of Query-Based Connector
Require: A, S, V ∈ R^{B×F×T×D}, Q ∈ R^{B×L×D}  ▷ modality & query features
Ensure: Z ∈ R^{B×L×D}
 1: function FORWARD(A, S, V, Q)
 2:   for X ∈ {A, S, V} do  ▷ 2-D sinusoidal PE
 3:     X ← X + PE(X)
 4:     X ← reshape(X)  ▷ [L, B, D]
 5:   end for
 6:   Q ← reshape(Q)  ▷ [L, B, D]
 7:   A′, S′, V′ ← CrossAttn(A, Q), CrossAttn(S, Q), CrossAttn(V, Q)
 8:   A′, S′, V′ ← LayerNorm(A′), LayerNorm(S′), LayerNorm(V′)
 9:   Ŝ ← AW(A′, S′, V′)  ▷ Alg. 2
10:   Ŝ ← LayerNorm(Ŝ)
11:   Z ← 2xMLP(Ŝ) + Ŝ  ▷ two-layer MLP
12:   Z ← LayerNorm(Z)
13:   return Z
14: end function

Algorithm 2 Adaptive Weighting
Require: S_A, S_S, S_V ∈ R^{B×L×D}  ▷ query-relevant features
Ensure: Ŝ ∈ R^{B×L×D}  ▷ fused sequence
 1: function AW(S_A, S_S, S_V)  ▷ τ: temperature
 2:   m_a, m_s, m_v ← globpool(S_A, 1), globpool(S_S, 1), globpool(S_V, 1)
 3:   F ← stack(m_a, m_s, m_v)  ▷ [B, 3, D]
 4:   p ← softmax(1xMLP(F)/τ, dim=1)  ▷ [B, 3]
 5:   Ŝ ← p_1 ⊙ S_A + p_2 ⊙ S_S + p_3 ⊙ S_V
 6:   return Ŝ
 7: end function

D Case studies on different scenarios

We provide case studies of TriSense in various scenarios in Figures 7 to 12.

Case study (AVS-SC task, segment [559.5, 567.6]):
Speech: "Yes we managed to get it 3-1, that was absolutely horrific for a finish! But DeCosta get his second's of today's game, what a player this could be!"
Sound: Passionate and uplifting music.
Human: In the time frame [559.5, 567.6], please describe the audio-visual-speech event visible in the clip.
Ground Truth: A boy is playing a video game and talking about it, and a player named DeCosta scored his second goal of the game, but it was a tough finish.
LongVALE: A man, wearing a black cap and a black t-shirt, stands in a cluttered garage, enthusiastically explaining to the camera how he plans to install a new set of tires on his truck.
Qwen2.5-Omni: Well, in that part of the video, it's just a guy talking. He's not doing anything else visually. He's just speaking. What do you think about that?
TriSense (Ours): "timestamps": [[559.5, 567.6]], "captions": ["A man is playing a football video game and talking about it while someone is speaking in the background, and the speaker mentions a player named Bobby and a score of 3 to 1"] (Modality Weights: Audio=0.1854, Speech=0.1038, Vision=0.7108)
Figure 7: Case study of TriSense on AVS-SC task.
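The per-modality weights printed in these case studies are produced by the Adaptive Weighting step (Algorithm 2, Section C.3). A minimal NumPy sketch of that step follows, with the one-layer scoring MLP reduced to a fixed projection `W` for illustration; this is a sketch of the mechanism, not the trained TriSense weights:

```python
import numpy as np

# NumPy sketch of Adaptive Weighting (Algorithm 2, Section C.3).
# The 1-layer scoring MLP is stood in by a fixed projection W.

def adaptive_weighting(S_A, S_S, S_V, W, tau=1.0):
    """Fuse audio/speech/visual query-relevant features with softmax weights.

    S_A, S_S, S_V: [B, L, D] modality features; W: [D, 1] scoring projection.
    Returns the fused [B, L, D] sequence and per-modality weights p: [B, 3].
    """
    # Global pooling over the sequence dimension (line 2 of Algorithm 2).
    pooled = np.stack([S.mean(axis=1) for S in (S_A, S_S, S_V)], axis=1)  # [B, 3, D]
    logits = (pooled @ W).squeeze(-1) / tau                               # [B, 3]
    logits -= logits.max(axis=1, keepdims=True)                          # stable softmax
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    # Weighted fusion of the three modality streams (line 5).
    fused = (p[:, 0, None, None] * S_A
             + p[:, 1, None, None] * S_S
             + p[:, 2, None, None] * S_V)
    return fused, p

rng = np.random.default_rng(0)
B, L, D = 2, 4, 8
S_A, S_S, S_V = (rng.normal(size=(B, L, D)) for _ in range(3))
fused, p = adaptive_weighting(S_A, S_S, S_V, rng.normal(size=(D, 1)))
# p sums to 1 per sample; fused keeps the [B, L, D] shape.
```

The softmax over pooled modality summaries is what yields readouts like "Audio=0.1854, Speech=0.1038, Vision=0.7108" in the case studies.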
Case study (V-SC task, segment [475.2, 477.4]):
Speech: "Cut around the stone..."
Sound: no background sound/music
Human: At the range [475.2, 477.4], please summarize the video's content, focusing strictly on the visual cue.
Ground Truth: a man is slicing an avocado on a cutting board
LongVALE: A man is carefully stirring a pot of thick, reddish-brown sauce on the stove, as he explains the process of making a tomato sauce.
Qwen2.5-Omni: At that time, there's a text that says "JOIN THE NOTIFICATION SQUAD!" on the screen. It's probably an encouragement for viewers to subscribe and get notifications for the channel.
TriSense (Ours): "timestamps": [[475.2, 477.4]], "captions": ["a man is cutting a piece of avocado on a cutting board"] (Modality Weights: Audio=0.1191, Speech=0.0579, Vision=0.8231)
Figure 8: Case study of TriSense on V-SC task.

Case study (AVS-MR task, segment [216.3, 223.4]):
Speech: "You'll love that it can capture for 4K video at 24 frames a second. First still capture is 11 frames a second with the mechanical shutter. And up to 40 frames a second with the electronic shutter."
Sound: Rhythmic and dynamic music.
Query: Locate the moment where 'The man jumping
https://arxiv.org/abs/2505.18110v1
in the air, with the speaker saying that mechanical shutter is set to 11 frames per second, while passionate music is playing at the background. The speaker then says \’up to 40 frames per second, mechanical shutter and electronic shutter.\’’, return precise timestaps. Ground Truth: [216.3, 223.4] From 03 to 04 LongVALE Qwen2.5-Omni TriSense (Ours)[200, 250] "timestamps": [[215.8, 223.8]], "captions": ["The man jumping in the air, with the speaker saying that mechanical shutter is set to 11 frames per second, while passionate music is playing at the background. The speaker then says \’up to 40 frames per second, mechanical shutter and electronic shutter.\’"] Modality Weights: Audio=0.2080, Speech=0.0829, Vision=0.7091HumanFigure 9: Case study of TriSense on A VS-MR task. 0s 10s...... Speech: This one is also shipped to us, but if we knew the status of this Abino, we would not have it shipped.Sound: silent background sound 189.3s 197.9s...... 970s 980sVS-MR Query: Determine the precise time in seconds when 'A person describes a video about an Albino snake while showing a large yellow snake inside a cage. The snake is crawling around in its enclosure as people talk.' happens in the video, and report this exact moment. Ground Truth: [189.3, 197.9] From 00 to 02 LongVALE Qwen2.5-Omni TriSense (Ours)The content you're asking about appears in the video starting at 180.0 seconds and ends at 240.0 seconds. "timestamps": [[188.2, 197]], "captions": ["A person describes a video about an Albino snake while showing a large yellow snake inside a cage. The snake is crawling around in its enclosure as people talk.'"] Modality Weights: Audio=0.1035, Speech=0.1842, Vision=0.7123Human Figure 10: Case study of TriSense on VS-MR task. 19 0s 10s...... Speech: This is our Android wish phone, I want to see if these (phones) are alive. Oh!!! It started smoking... Sound: Crackling sound caused by a screwdriver smashing a phone 659.7s 666.1s...... 
970s 980sAV-MR Query: Locate the exact moment in the video that corresponds to the textual query 'A man is in a studio speaking into a microphone, discussing an electronic device while making sounds with his hand and laughing.', and provide the specific time in seconds when this content is presented." Ground Truth: [659.7, 666.1] From 00 to 01 LongVALE Qwen2.5-Omni TriSense (Ours)[0.00, 10.00] "timestamps": [[658.1, 665.8]], "captions": ["A man is in a studio speaking into a microphone, discussing the features of an electronic device while making sounds with his hand and laughing."] Modality Weights: Audio=0.2674, Speech=0.0238, Vision=0.7088Human Figure 11: Case study of TriSense on A V-MR task. 0s 10s 60s 70sGeneral Understanding What is the woman holding on her hand between [0.0, 5.0]? A. a pen B. a peice of paper C. a book D. a phone Please provide your answer by stating the letter followed by the full option. 'timestamps': [[]], 'captions': ['B.a peice of paper '] Modal Weights: Audio=0.1860, Speech=0.0485, Vision=0.7656TriSense Human TriSense'timestamps': [[]], 'captions': ['A.a pen '] Modal Weights: Audio=0.2004, Speech=0.0502, Vision=0.7495Human What is the man holding on his hand between [60.0, 65.0]? A.a pen B.a peice of paper C.a book D.a phone Please provide your answer by stating the letter followed by
Bridging Supervised Learning and Reinforcement Learning in Math Reasoning

Huayu Chen^{1,2}, Kaiwen Zheng^{1,2}, Qinsheng Zhang^2, Ganqu Cui^1, Yin Cui^2, Haotian Ye^{2,3}, Tsung-Yi Lin^2, Ming-Yu Liu^2, Jun Zhu^{1†}, Haoxiang Wang^2
^1 Tsinghua University  ^2 NVIDIA  ^3 Stanford University
https://research.nvidia.com/labs/dir/Negative-aware-Fine-Tuning

Abstract

Reinforcement Learning (RL) has played a central role in the recent surge of LLMs' math abilities by enabling self-improvement through binary verifier signals. In contrast, Supervised Learning (SL) is rarely considered for such verification-driven training, largely due to its heavy reliance on reference answers and inability to reflect on mistakes. In this work, we challenge the prevailing notion that self-improvement is exclusive to RL and propose Negative-aware Fine-Tuning (NFT) — a supervised approach that enables LLMs to reflect on their failures and improve autonomously with no external teachers. In online training, instead of throwing away self-generated negative answers, NFT constructs an implicit negative policy to model them. This implicit policy is parameterized with the same positive LLM we target to optimize on positive data, enabling direct policy optimization on all LLMs' generations. We conduct experiments on 7B and 32B models in math reasoning tasks. Results consistently show that through the additional leverage of negative feedback, NFT significantly improves over SL baselines like rejection sampling fine-tuning, matching or even surpassing leading RL algorithms like GRPO and DAPO. Furthermore, we demonstrate that NFT and GRPO are actually equivalent in strict-on-policy training, even though they originate from entirely different theoretical foundations. Our experiments and theoretical findings bridge the gap between SL and RL methods in binary-feedback learning systems.
1 Introduction

The recent surge in math reasoning abilities of Large Language Models (LLMs) is largely driven by a fundamental shift in their learning paradigm, from imitation to self-improvement [14, 57, 36, 9]. Instead of relying on reference answers supplied by human annotators or stronger models [29, 35], the new paradigm requires only a question dataset with a binary verifier to judge the correctness of self-generated answers. By reflecting on their own generation, LLMs can improve autonomously. This approach not only eliminates the need for costly data annotation but also removes competence ceilings imposed by external teachers, offering a promising path toward general intelligence [14, 34].

Reinforcement Learning (RL) appears to be a natural fit for such verification-driven training. Specific algorithms like PPO [40] and GRPO [41] are explicitly designed to maximize reward signals, which can conveniently take the form of a binary verifier outcome. In contrast, Supervised Learning (SL) is rarely considered for realizing self-improvement. A common view holds that SL is inherently designed to mimic external teachers by memorizing the positive training data, rendering it unsuitable for self-reflective learning from negative mistakes [11].

† Corresponding author.
Preprint. arXiv:2505.18116v2 [cs.LG] 28 May 2025

Figure 1: A spectrum of online algorithms for LLM fine-tuning. NFT bridges reinforcement learning and supervised learning methods through the leverage of negative feedback via supervision.

In this work, we challenge the prevailing notion that self-improvement is exclusive to RL, and demonstrate it can be similarly achieved within the supervised learning paradigm. We start with a simple SL baseline: Rejection sampling Fine-Tuning (RFT) [
https://arxiv.org/abs/2505.18116v2
62, 15]. At each iteration, an LLM generates answers to questions. A verifier helps reject all negative answers. The remaining positive ones are compiled into a dataset to fine-tune the LLM itself in a supervised manner. RFT has been shown to be effective by various works [3, 32, 59, 43, 46, 53]. However, it prevents any learning from negative feedback. LLMs are encouraged to reinforce what they already perform well, rather than reflect on their mistakes — an ability we believe critical for achieving general intelligence.

To overcome this limitation, we propose Negative-aware Fine-Tuning (NFT), an online learning algorithm that enables LLMs to learn from their negative generations (Sec. 3). Like RFT, NFT fine-tunes a positive LLM on positive answers via supervision. Crucially, instead of throwing away negative answers, NFT also constructs an implicit negative policy to model them. This implicit policy is parameterized with the same positive LLM we target to optimize on positive data, enabling direct policy optimization on all LLMs' generations (Figure 2). NFT has minimal memory overhead, as only a single model is maintained throughout training.

To understand the connection between NFT and RL approaches, we conduct an in-depth comparison between NFT and GRPO (Sec. 4). Surprisingly, we find the two methods are actually equivalent in strict-on-policy training, despite originating from entirely different theoretical frameworks (Figure 1). Notably, the "advantage normalization" characteristic of GRPO is already implicitly reflected in NFT's loss function. Their main difference arises in off-policy settings, regarding different strategies for clipping model gradients when the learned policy deviates from the old policy. These observations suggest a strong connection between SL and RL in binary-feedback learning systems.

We evaluate NFT on 7B and 32B Qwen models and report two key findings: 1.
Supervised Learning alone can significantly enhance LLMs' math reasoning with no external teachers. NFT matches or even surpasses state-of-the-art RL algorithms like GRPO [41] and DAPO [58]. 2. The performance gap between SL and RL in online training largely stems from SL's past inability to leverage negative feedback, rather than any inherent superiority of RL. Through the additional leverage of negative data, NFT substantially bridges the performance gap between SL and leading RL algorithms.

2 Background

2.1 Maximum Likelihood vs. Policy Gradient

Supervised Learning essentially aims to learn a model π_θ(a|q) to fit the data distribution π(a|q). This can be realized by employing the maximum-likelihood objective:

max_θ E_{a∼π(a|q)} [log π_θ(a|q)]  ⇔  min_θ D_KL[π(a|q) ∥ π_θ(a|q)].

In LLM fine-tuning, q usually means the question prompt, and a the answer. To perform maximum-likelihood training, we need a dataset D = {q, a ∼ π(a|q)} to draw training samples from.

Reinforcement Learning, by contrast, maximizes a pre-defined reward r(q,a) for a ∼ π_θ(a|q):

max_θ J(θ) := E_{a∼π_θ(a|q)} [r(q,a)].

Directly back-propagating through J(θ) is non-trivial, as r(·) can be an arbitrary scalar function whose gradient is unknown. Luckily, ∇_θ J(θ) can be estimated, making policy optimization feasible:

∇_θ J(θ) = E_{a∼π_θ(a|q)} ∇_θ [r(q,a) log π_θ(a|q)].  (1)

Figure 2: Illustration of the NFT algorithm. Data Collection: An LLM π generates answers to a set of math questions. Generation results are split into two sub-datasets based on their answer correctness. Policy Optimization: By constructing an implicit policy for
modeling negative data, NFT enables direct policy optimization on both positive and negative answers via maximum-likelihood training.

Eq. 1 is known as the Policy Gradient (PG) or REINFORCE algorithm [51, 44]. In sequential decision-making problems such as language reasoning, a can be interpreted as the token decision at each step t, and r(q,a) can be replaced with advantage functions A(q,a) [39, 10].

2.2 Math Reasoning RL: From Policy Gradient to GRPO

Eq. 1 requires training to be on-policy, where π_θ can only be updated a single time after data collection. To break this limitation, importance sampling can be applied [40]. Suppose the policy for collecting the RL dataset is denoted as π_old; then

∇_θ J(θ) = E_{a∼π_old(a|q)} [(π_θ(a|q) / π_old(a|q)) · r(q,a) ∇_θ log π_θ(a|q)] = E_{a∼π_old(a|q)} [r(q,a) ∇_θ R_θ(q,a)],  (2)

where R_θ(q,a) := π_θ(a|q) / π_old(a|q) is the likelihood ratio between the two policies. In math reasoning tasks, the PPO [40] and subsequent GRPO [41] algorithms further apply gradient clipping to constrain the distance between π_θ and π_old:

L_GRPO(θ) = − Σ_{q,a∼π_old} Σ_t min[R_θ^t(q,a) Â_{q,a}, clip(R_θ^t(q,a), 1−ϵ′, 1+ϵ′) Â_{q,a}],  (3)

where R_θ^t(q,a) := π_θ(a_t|q, a_{<t}) / π_old(a_t|q, a_{<t}), and Â_{q,a} is the estimated advantage value. Note that we have dropped some auxiliary loss terms, such as KL and entropy regularization, as they have been pointed out to be unnecessary by recent studies like DAPO [58].

GRPO proposes an efficient and effective way to estimate Â_{q,a}. Collect K answers a_{1:K} and their binary rewards r_{1:K} ∈ {0,1} for each question. The advantage is defined to be the normalized reward:

Â_{q,a} := (r(q,a) − mean{r_{1:K}}) / std{r_{1:K}}.  (4)

Later studies [31] suggest removing the std term from Eq. 4 and keeping mean normalization only.

3 Method

3.1 Problem Setup

Dataset. We are given a set of N math questions {q_{1:N}}, a pretrained LLM π(a|q), and a verifier for judging answer correctness.
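To make Eq. 4 concrete, the sketch below computes group-normalized advantages for one question's K binary rewards. Using the population standard deviation is an assumption on our part; it reproduces the closed-form advantages A+ = sqrt((1−r̂_q)/r̂_q) and A− = −sqrt(r̂_q/(1−r̂_q)) quoted later in Proposition 4.1:

```python
def grpo_advantages(rewards, use_std=True):
    """Group-normalized advantages (Eq. 4) for one question's K binary rewards.

    use_std=False gives the mean-only variant suggested by later studies
    (drop the std term, keep mean normalization). Assumes rewards are not all
    identical, mirroring the prompt filtering of questions with 0 < r_hat < 1.
    """
    K = len(rewards)
    mu = sum(rewards) / K
    centered = [r - mu for r in rewards]
    if not use_std:
        return centered
    std = (sum(c * c for c in centered) / K) ** 0.5  # population std
    return [c / std for c in centered]

# K = 4 answers, one correct (r_hat = 0.25): all correct answers share the same
# positive advantage, all incorrect ones the same negative advantage.
adv = grpo_advantages([1, 0, 0, 0])
assert abs(adv[0] - (0.75 / 0.25) ** 0.5) < 1e-9   # +sqrt((1 - r_hat)/r_hat)
assert abs(adv[1] + (0.25 / 0.75) ** 0.5) < 1e-9   # -sqrt(r_hat/(1 - r_hat))
```

With binary rewards the normalization collapses to exactly two values per group, which is why the advantage can be written in closed form from the correctness rate alone.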
In every iteration, we generate a dataset D = {q, a_{1:K} ∼ π, r_{1:K}}_{1:N}, where r ∈ {0,1} is the correctness label and K is the number of answers collected for each question. Dataset D ∼ π can be split into two subsets: D+ and D−. D+ contains all positive answers, and D− contains the remaining negative ones. We denote the underlying answer distribution of D+ as π+(·|q).

Learning Target. We want to optimize the old policy π into a new policy π_θ+ ≈ π+. The target π+(a|q) can be formalized using Bayes' Rule:

π+(a|q) := π(a|q, r=1) = π(a|q) p(r=1|q,a) / Σ_A π(a|q) p(r=1|q,a),  (5)

where A denotes the space of all possible answers a.

Figure 3: Left: Policy Splitting. The generation policy can be split into a positive policy and a negative policy, and re-expressed as their linear combination. Right: Policy Improvement. By iteratively optimizing towards its positive split, an LLM policy π_0 can improve continuously.

Discussion. An obvious solution for learning π+ is to fine-tune solely on correct answers (D+) and discard D− entirely (RFT) [15, 53]. However, this approach prevents the model from any learning on its negative feedback (D−). We posit that the ability to reflect on one's own failures is not merely desirable, but central to general intelligence, marking a shift from pure imitation to self-reflective learning. Though traditionally viewed as a distinctive strength of RL [11, 63], we ask: can self-reflective improvement be similarly achieved within the SL paradigm?

3.2 Direct Optimization of Language Models with Negative Answers

In
this section, we discuss how to leverage the negative data D− to directly optimize π_θ+. Though this may seem impossible at first thought, we find that the target policy π+ and the negative policy π− are tightly coupled, which makes it feasible to train π_θ+ directly from D−.

First, we formalize the definition of the negative policy π−, similar to Eq. 5:

π−(a|q) := π(a|q, r=0) = π(a|q)[1 − p(r=1|q,a)] / Σ_A π(a|q)[1 − p(r=1|q,a)].  (6)

Combining Eq. 5 and Eq. 6, we make a key observation:

r_q π+(a|q) + [1 − r_q] π−(a|q) = π(a|q),  (7)

where r_q := Σ_A π(a|q) p(r=1|q,a) = p(r=1|q) is the correctness rate of the LLM π over a question q. In practice, r_q ≈ mean{r_{1:K}} can be estimated using the K Monte Carlo rewards in dataset D.

Implicit negative policy. Eq. 7 reveals a tight coupling between π+ and π− (Figure 3). Given that we already have π as the pretrained LLM and r_q is estimable, learning π− from negative data should, in principle, shape the target policy π_θ+, in a manner analogous to SL on positive data. To realize this idea, we construct an implicit negative policy, denoted π_θ−, by re-parameterizing the target policy π_θ+ using the relationship in Eq. 7:

π_θ−(a|q) := (π(a|q) − r_q π_θ+(a|q)) / (1 − r_q).

Thus, training π_θ− on negative answers directly leads to optimizing the underlying positive LLM π_θ+ (Figure 2). We have the following guarantee:

Theorem 3.1 (Policy Optimization with Negative Answers). Consider the maximum-likelihood objective for training the implicit negative policy π_θ−:

max_θ E_{p(q) π−(a|q)} [log π_θ−(a|q)]  ⇔  min_θ −E_{(q,a)∼D−} log [(π(a|q) − r_q π_θ+(a|q)) / (1 − r_q)].  (8)

Assuming unlimited data and model capacity, the optimal solution of Eq. 8 is

∀ q, a: π_θ+(a|q) = π+(a|q).

Proof in Appendix A. Theorem 3.1 demonstrates the feasibility of policy optimization on negative data only.
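The identities in Eqs. 5-7 and the implicit re-parameterization can be verified numerically on a toy four-answer "vocabulary"; the distributions below are arbitrary stand-ins for an LLM's answer distribution, not values from the paper:

```python
# Toy check of Eqs. 5-7: pi is the generation policy over 4 possible answers,
# and the verifier marks answers 0 and 2 as correct.
pi = [0.4, 0.3, 0.2, 0.1]
correct = [1.0, 0.0, 1.0, 0.0]

r_q = sum(p * c for p, c in zip(pi, correct))                    # p(r=1|q) = 0.6
pi_pos = [p * c / r_q for p, c in zip(pi, correct)]              # Eq. 5
pi_neg = [p * (1 - c) / (1 - r_q) for p, c in zip(pi, correct)]  # Eq. 6

# Eq. 7: the generation policy is a mixture of its positive and negative splits.
mix = [r_q * pp + (1 - r_q) * pn for pp, pn in zip(pi_pos, pi_neg)]
assert all(abs(m - p) < 1e-12 for m, p in zip(mix, pi))

# Implicit negative policy: at the optimum (pi_theta+ = pi+), the
# re-parameterization (pi - r_q * pi_theta+) / (1 - r_q) recovers pi- exactly.
implicit_neg = [(p - r_q * pp) / (1 - r_q) for p, pp in zip(pi, pi_pos)]
assert all(abs(i - n) < 1e-12 for i, n in zip(implicit_neg, pi_neg))
```

Both splits are proper distributions, so pulling the implicit negative policy toward the empirical negative distribution can only pull π_θ+ toward π+, which is the content of Theorem 3.1.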
To further utilize positive data, we additionally conduct supervised training on D+, yielding the per-sample NFT loss:

L_NFT_{(a,q,r)∼D}(θ) = r · [−log (π_θ+(a|q) / π(a|q))] + (1 − r) · [−log ((1 − r_q · π_θ+(a|q)/π(a|q)) / (1 − r_q))].  (9)

Algorithm 1 Negative-aware Fine-Tuning (NFT)
1:  Input: language model π, prompt set q_{1:N}, verifier r(·).
2:  Def max_v(x, ϵ):  // straight-through max operator
3:    Return stop_gradient[max(x, ϵ) − x] + x  // clip value while keeping gradient
4:  for each iteration do
5:    for each sampled prompt q do
6:      Generate K answers a_{1:K} and verify their correctness r_{1:K}  // data collection
7:      Calculate the correctness rate r̂_q = mean{r_{1:K}} and token-level likelihoods {π(a_t|q, a_{<t})_{1:|a|}}_{1:K}
8:      D ← {q, r̂_q, a_{1:K}, r_{1:K}, π_t^{1:K}} if 0 < r̂_q < 1  // prompt filtering
9:    end for
10:   Initialize π_θ+ ← π
11:   for each mini-batch {q, a, r, r̂_q, π_t} in D do
12:     R_θ^t(q,a) = π_θ+(a_t|q, a_{<t}) / π(a_t|q, a_{<t}), ∀t  // positive likelihood ratio
13:     If r = 0:
14:       R_θ^t(q,a) = (1 − r̂_q R_θ^t(q,a)) / (1 − r̂_q)  // implicit negative likelihood ratio
15:       R_θ^t(q,a) = max_v[R_θ^t(q,a), ϵ]  // clip the negative likelihood ratio
16:     θ ← θ + λ ∇_θ Σ_t log R_θ^t(q,a) (Eq. 10)  // maximum-likelihood training
17:   end for
18:   π ← π_θ+, start the next iteration
19: end for

Note that we have subtracted a baseline term −log π(a|q) from the loss. Since this term is unrelated to θ, it affects neither the loss gradient nor the optimal solution. π(a|q) is the old data likelihood before the optimizer update. At the start of training, we have π_θ+ = π, such that L_{(a,q,r)}(θ) = 0. We name our method Negative-aware Fine-Tuning (NFT) as it enables the additional leverage of negative data for policy optimization compared with RFT. NFT is memory-efficient. In practice, we keep only a single
model copy in memory. The old policy likelihood π(a|q) can be pre-computed during data generation.

3.3 Practical Algorithm

We introduce several improvements over Eq. 9 and propose a practical objective of NFT:

L_NFT_D(θ) = − Σ_{q,a,r} ω(q) Σ_t [r log R_θ^t(q,a) + (1 − r) log max_v((1 − r̂_q R_θ^t(q,a)) / (1 − r̂_q), ϵ)],  (10)

where R_θ^t(q,a) = π_θ+(a_t|q, a_{<t}) / π(a_t|q, a_{<t}), and r̂_q = (1/K) Σ_{a|q} r(q,a).

Pseudo-code is in Algorithm 1. Below, we explain our key design choices.

Token-level loss. Eq. 9 essentially deals with sequence data, where the answer likelihood π(a|q) = Π_t π(a_t|q, a_{<t}) is heavily correlated with answer length. This introduces high variance in gradient estimation and causes numerical instabilities during training. Following existing approaches [40, 31, 58], we view each token decision as an individual unit and sum up their losses in Eq. 10.

Clipping the negative likelihood ratio. The negative loss calculation in Eq. 10 involves a logarithm whose argument must remain positive, imposing (1 − r̂_q R_θ^t)/(1 − r̂_q) > 0. When R_θ^t is unoptimized, this requirement may not be satisfied, potentially leading to training collapse. We therefore enforce a minimum value of ϵ > 0 for the negative likelihood ratio. To preserve gradient flow after clipping, we further apply straight-through gradient estimation [4, 47]. Implementation details are in Algorithm 1.

Prompt weighting. To focus training on more informative instances, we weight each prompt q by ω(q) and assign higher importance to hard questions with a low correctness rate r̂_q. An ablation study is presented in Sec. 5.4. This design also helps align NFT with RL algorithms like GRPO. We discuss the details in Sec. 4.

4 Understanding the Gap between NFT and GRPO

Despite originating from entirely different theoretical foundations, NFT and GRPO exhibit significant similarities. Notably, we find GRPO and NFT are equivalent in on-policy training.
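As a concrete, value-only sketch of the per-token term inside Eq. 10: a real implementation would use autograd and the straight-through max of Algorithm 1 so that gradients still flow through clipped ratios, while the sketch below only reproduces the loss values:

```python
import math

def nft_token_loss(ratio, r, r_hat, eps=1.0):
    """Per-token NFT loss value from Eq. 10 (prompt weight omitted).

    ratio: R_theta^t = pi_theta+(a_t | q, a_<t) / pi(a_t | q, a_<t)
    r:     1 for tokens of a positive answer, 0 for a negative one
    r_hat: estimated correctness rate of the question (0 < r_hat < 1)
    eps:   lower clip for the implicit negative likelihood ratio
    """
    if r == 1:
        return -math.log(ratio)
    neg_ratio = (1.0 - r_hat * ratio) / (1.0 - r_hat)  # implicit negative ratio
    return -math.log(max(neg_ratio, eps))              # clip keeps the log finite

# At initialization pi_theta+ = pi, so ratio = 1 and every loss term is zero.
assert nft_token_loss(1.0, 1, 0.3) == 0.0
assert nft_token_loss(1.0, 0, 0.3) == 0.0
# A negative-answer token whose likelihood has already dropped (ratio < 1)
# yields a reward-like negative loss; a rising likelihood is clipped at eps.
assert nft_token_loss(0.5, 0, 0.5) == -math.log(1.5)
assert nft_token_loss(2.0, 0, 0.5) == 0.0  # neg_ratio = 0, clipped to eps = 1
```

The value clip at ϵ = 1 zeroes the *value* of the penalty once a negative answer's likelihood rises past the old policy's; in the actual algorithm the straight-through estimator is what keeps a gradient flowing through that clipped region.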
To understand this, we calculate and compare their loss gradients:

Proposition 4.1 (Algorithm Gradient Comparison). Suppose there are r̂_q K positive answers and (1 − r̂_q) K negative ones for a given question q.

(a) GRPO Gradient: Considering only binary rewards {0,1} in Eq. 3, the GRPO loss gradient satisfies

∇_θ L_GRPO_D(θ) = − Σ {r A_q+ · I[R_θ^t(q,a) < 1 + ϵ′] + (1 − r) A_q− · I[R_θ^t(q,a) > 1 − ϵ′]} ∇_θ R_θ^t(q,a),  (11)

where A_q+ = sqrt((1 − r̂_q)/r̂_q) and A_q− = −sqrt(r̂_q/(1 − r̂_q)) are respectively the normalized advantages for positive and negative answers.

(b) NFT Gradient: Let ω(q) = sqrt((1 − r̂_q)/r̂_q); the NFT loss gradient satisfies

∇_θ L_NFT_D(θ) = − Σ {r A_q+ · (1/R_θ^t(q,a)) + (1 − r) A_q− · [max((1 − r̂_q R_θ^t(q,a))/(1 − r̂_q), ϵ)]^{−1}} ∇_θ R_θ^t(q,a).  (12)

Figure 4: Gradient weight for NFT and GRPO.

All proofs are provided in Appendix A. Comparing Eq. 11 and Eq. 12, the only difference between GRPO and NFT is their strategy for clipping model gradients when training data becomes off-policy (Figure 4). GRPO simply zeros out the gradient when the learned policy π_θ shifts far away from the old policy π, while NFT takes a "softer" decay schedule.

Surprisingly, we find GRPO and NFT to be equivalent when training is totally on-policy, despite their distinctly different derivations:

Proposition 4.2 (On-policy Gradient Equivalence). Following the setup of Proposition 4.1 and letting ϵ ≤ 1, the GRPO and NFT loss gradients are equivalent in on-policy training: R_θ^t(q,a) = 1
⟹ ∇_θ L_NFT_D(θ) = ∇_θ L_GRPO_D(θ).

Implicit group normalization. Proposition 4.1 shows that the "normalized advantage" term is implicitly present within NFT's loss function. This partially justifies the group-normalization design choice of GRPO, which was initially introduced only as an empirical technique [41]. In Appendix A, we further demonstrate that by adjusting ω(q) = 1 − r̂_q, NFT also aligns with Dr. GRPO [31]. Our findings suggest a strong connection between RL and SL frameworks in binary-reward settings.

5 Experiments

We seek to answer the following questions through our experiments.
1. How does NFT perform in comparison with existing RL algorithms such as GRPO? (Sec. 5.2)
2. How does negative data contribute to NFT's performance gain? (Sec. 5.3)
3. Which empirical design choices are important to the effectiveness of NFT? (Sec. 5.4)

5.1 Experiment Setups

Training. We perform online fine-tuning on Qwen2.5-Math-7B [57] and Qwen2.5-32B [56] to enhance their math abilities without relying on external teachers. The training dataset is the publicly available DAPO-Math-17k [58], which consists solely of math questions paired with ground-truth answers in integer form. During training, all models are fine-tuned for approximately 5,000 gradient steps with a batch size of 512. We fix the generation temperature to 1.0.

Evaluation. We evaluate models on six validation benchmarks and report their average accuracy: AIME 2024, AIME 2025, AMC 2023 [27], MATH500 [20], OlympiadBench [19], and Minerva

Figure 5: Comparison of the released NFT-7B with other zero-style math models of the Qwen series (Qwen-Math-7B, Qwen-DPO-R1-Zero, PRIME-7B-Zero, RAFT++, Eurus-2-7B-PRIME, and Oat-7B-Zero; NFT-7B-Zero attains the highest average accuracy, 50.8).

Table 1: NFT performs competitively compared with other algorithms.
We report avg@32 for AIME24, AIME25, and AMC23, and avg@1 for the others. Numbers within 1% of the max are bolded.

Model                       AIME24  MATH500  AIME25  AMC23  Olympiad  Minerva  Average
Qwen2.5-Math-7B               13.3     69.0     5.5    45.8     34.7     21.3     31.6
Preference fine-tuning
  + DPO                       29.8     79.8    13.8    83.2     48.0     39.0     48.9
Reinforcement fine-tuning
  + GRPO                      30.2     80.4    17.1    79.5     51.8     38.2     49.5
  + Dr. GRPO                  31.8     83.4    15.7    80.2     49.6     38.2     49.8
  + DAPO                      33.1     81.6    18.7    85.0     49.9     39.3     51.2
Supervised fine-tuning
  + RFT                       33.7     79.8    13.4    79.7     44.3     38.6     48.3
  + NFT                       32.0     83.2    18.3    88.5     47.3     40.8     51.7
Qwen2.5-32B                    4.1     68.6     1.0    45.0     31.1     27.9     29.6
  + DAPO                      44.1     89.2    33.4    90.9     54.1     47.5     59.9
  + RFT                       29.9     86.2    19.1    92.4     45.3     44.1     52.8
  + NFT                       37.8     88.4    31.5    93.8     55.0     48.9     59.2

Math [26]. Validation is conducted using a top-p value of 0.7. The validation temperature is 1.0 for 7B models and 0.6 for 32B models. We use math-verify [24] as the verifier for training validation, and the simpleRL verifier [64] for final evaluation.

Baseline methods. We compare against a set of online fine-tuning algorithms, including iterative DPO [38, 52, 18], GRPO [41], Dr. GRPO [31], DAPO [58], and RFT [15, 62]. DAPO and RFT are highlighted below; details for the other algorithms are in Appendix B. DAPO is a variant of GRPO that has achieved state-of-the-art AIME performance on 32B models. Our NFT implementation is
adapted from the official DAPO codebase, based on the VeRL framework [42]. NFT inherits most of DAPO's hyperparameters and design choices, including dynamic data sampling, token-level loss normalization, and no KL regularization. RFT is a simple but effective SL baseline that only fine-tunes LLMs on positive answers and throws away negative data. In our implementation, the main difference between RFT and NFT is that RFT zeros out the negative data loss and keeps a constant prompt weight ω(q) = 1 during training.

5.2 NFT Performance Evaluation

Model comparison. By applying NFT to Qwen2.5-Math-7B, we release NFT-7B-Zero (Figure 5). NFT-7B-Zero achieves competitive performance on all benchmarks compared to other zero-style 7B math models [13, 31, 53, 52]. This provides strong empirical evidence for the effectiveness of the NFT algorithm and demonstrates that SL alone can enable effective self-improvement in math tasks.

Algorithm comparison. To isolate the contribution of the algorithm itself, we benchmarked various online algorithms using identical training data, infrastructure, and general hyperparameters (Table 1). Results show that NFT matches or even surpasses state-of-the-art methods such as DAPO. Figures 6 and 11 present training curves across multiple runs. NFT exhibits convergence speed and final performance on par with DAPO, further supporting its stability.
Figure 6: Training and validation accuracy curves for 7B experiments (validation accuracy, training accuracy, AIME 2024, AIME 2025, MATH500, AMC 2023, OlympiadBench, and Minerva Math over 5,000 gradient steps). We conducted 3-4 random and independent experiments for each algorithm and report their mean ± std results.

Figure 7: Average accuracy across 6 benchmarks for 32B experiments. More curves in Appendix C.

5.3 Benefits of Negative Data

Figure 8: Entropy curves for 7B and 32B runs.

Negative feedback enhances performance and exploration. Table 1 shows that NFT consistently outperforms RFT by a clear margin, highlighting the benefit of incorporating negative feedback during training. Notably, we observe a clear divergence in training dynamics between RFT and NFT. Across both 7B and 32B model settings, RFT tends to reduce entropy over time, whereas NFT and RL methods like DAPO encourage increasing entropy (Figure 8). This behavior suggests more active exploration [58], potentially leading to the performance gap between NFT and RFT.

Negative feedback becomes increasingly important in larger models.
The performance gap between RFT and NFT widens over training in 32B experiments (Figure 11), while the trend is less obvious for 7B. Similarly, the
DeepSeek-R1 report [14] also notes that RL offers greater benefits over SFT in larger models. A potential explanation could be the increasing importance of negative data.

RFT remains a strong baseline. Although surpassed by numerous algorithms, RFT still deserves attention due to its extreme simplicity. In 32B settings (Figure 11), learning from positive data (RFT) contributes 80% of the total gain achieved by our best-performing model, while negative data accounts for only the remaining 20%. These findings echo recent studies [63, 53, 30, 66, 50], which suggest RL primarily amplifies existing capabilities in large models rather than fostering new skills. How to exploit negative feedback remains an open challenge with heavy potential.

5.4 Ingredients Behind NFT's Effectiveness

We discuss two empirical design choices that notably help NFT achieve strong performance.

Figure 9: Effect of prompt weighting (ω(q) as a function of r_q for the three choices below, and the resulting validation accuracy for NFT and RFT).

Prioritize harder questions. We find that assigning higher weights to difficult questions with a low answer correctness rate r̂_q can enhance model performance. We achieve this mainly by selecting ω(q) in Eq. 10 from three choices: (1) ω(q) = 1; (2) ω(q) = 1 − r_q, which aligns NFT with Dr. GRPO in on-policy training (Sec. 4); (3) ω(q) = sqrt((1 − r_q)/r_q), which aligns NFT with GRPO. Figure 9 visualizes the different ω(q) and their effect on NFT and RFT. We find that choices (2) and (3) perform similarly, both outperforming the constant weighting choice (1).

Figure 10: Effect of the negative-ratio clip value ϵ.

Avoid overpenalizing mistakes. The clip value ϵ of NFT (Eq. 10) sets an upper bound on the penalty weight when the likelihood ratio R_θ^t for negative answers increases.
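The two clipping behaviors (Figure 4, and the role of ϵ in Figure 10) can be made concrete with the advantage-factored gradient weights from Eqs. 11-12; `eps_clip = 0.2` and `r_hat = 0.5` below are illustrative values, not the paper's settings:

```python
def grpo_neg_weight(R, eps_clip=0.2):
    """Gradient weight on a negative-answer token in Eq. 11 (advantage factored
    out): the gradient survives only while R > 1 - eps_clip, else it is zeroed."""
    return 1.0 if R > 1.0 - eps_clip else 0.0

def nft_neg_weight(R, r_hat=0.5, eps=1.0):
    """Gradient weight on a negative-answer token in Eq. 12 (advantage factored
    out): the inverse of the clipped implicit negative likelihood ratio."""
    return 1.0 / max((1.0 - r_hat * R) / (1.0 - r_hat), eps)

# On-policy (R = 1): both weights equal 1, as in Proposition 4.2 (for eps <= 1).
assert grpo_neg_weight(1.0) == nft_neg_weight(1.0) == 1.0
# Off-policy, likelihood already reduced (R < 1): GRPO zeroes the gradient,
# while NFT decays it smoothly.
assert grpo_neg_weight(0.5) == 0.0
assert 0.0 < nft_neg_weight(0.5) < 1.0
# Rising likelihood of a wrong answer (R > 1): eps caps NFT's penalty at 1/eps.
assert nft_neg_weight(2.0) == 1.0
```

Smaller ϵ raises the 1/ϵ cap, i.e. penalizes rising likelihoods of incorrect answers more aggressively, which is exactly the knob ablated in Figure 10.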
When ϵ is small (e.g., near zero), the algorithm empirically assigns high penalties to rising likelihoods of incorrect answers (Figure 10). However, our experiments show that overly aggressive penalization with ϵ → 0 degrades overall performance. We thus adopt a default setting of ϵ = 1.0.

6 Related Works

Reinforcement Learning with Verifiable Rewards (RLVR) has advanced the frontier of LLM reasoning [14, 36, 45, 9]. Compared with previous RL practices that rely on strong reward models [49, 60, 65] to simulate human feedback [37, 10, 13], RLVR turns to a ground-truth verifier for providing reliable binary supervision [25, 41]. Moreover, unlike preference-based learning algorithms such as DPO [38, 5, 2, 16, 48, 6, 21, 54], RLVR does not require paired preference data, rendering it more flexible and memory-efficient. Despite the demonstrated effectiveness of RL algorithms in verification-driven training [28, 1, 22, 58, 12, 61, 55], recent studies suggest that supervised learning (SL) may also suffice for achieving self-improvement in LLMs [15, 53]. Our method further addresses SL's inability to incorporate negative feedback [23], bridging both the theoretical and the performance gap between the two fields, and can be easily adapted to other language paradigms such as masked LMs [17, 68, 33].

A key design of NFT involves implicit policy modeling for direct policy optimization. This design, emphasizing direct optimization via implicitly defined models, shares conceptual similarities with some existing approaches. In preference-based training, DPO [38] introduces an
implicit reward model parameterized by the policy network to allow optimizing policies directly. Recent visual modeling efforts also leverage implicit conditional or residual models parameterized by generation networks to avoid guided sampling [8, 7] or enhance quality [67].

7 Conclusion

In this work, we introduce Negative-aware Fine-Tuning (NFT), a supervised approach that enables LLMs to learn from their own negative generations. In online training, NFT substantially improves upon supervised learning baselines through the additional leverage of negative feedback, achieving performance comparable to leading RL algorithms like GRPO. Notably, we unveil a theoretical equivalence between NFT and GRPO under strict-on-policy conditions, despite their disparate theoretical foundations. These findings highlight the robust capability of supervised learning for verification-driven self-improvement and significantly bridge the conceptual and practical gap between SL and RL paradigms in binary-feedback learning systems.

Acknowledgments

We thank Wei Xiong, Zekun Hao, Yuxuan Tong, Lifan Yuan, Jiashu Xu, and Chang Zhou for the insightful discussion.

References

[1] Arash Ahmadian, Chris Cremer, Matthias Gallé, Marzieh Fadaee, Julia Kreutzer, Olivier Pietquin, Ahmet Üstün, and Sara Hooker. Back to basics: Revisiting reinforce style optimization for learning from human feedback in llms. arXiv preprint arXiv:2402.14740, 2024.
[2] Mohammad Gheshlaghi Azar, Zhaohan Daniel Guo, Bilal Piot, Remi Munos, Mark Rowland, Michal Valko, and Daniele Calandriello. A general theoretical paradigm to understand learning from human preferences. In AISTATS, 2024.
[3] Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073, 2022.
[4] Yoshua Bengio, Nicholas Léonard, and Aaron Courville.
Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
[5] Tianchi Cai, Xierui Song, Jiyan Jiang, Fei Teng, Jinjie Gu, and Guannan Zhang. Ulma: Unified language model alignment with demonstration and point-wise human preference. arXiv preprint arXiv:2312.02554, 2023.
[6] Huayu Chen, Guande He, Lifan Yuan, Ganqu Cui, Hang Su, and Jun Zhu. Noise contrastive alignment of language models with explicit rewards. In NeurIPS, 2024.
[7] Huayu Chen, Kai Jiang, Kaiwen Zheng, Jianfei Chen, Hang Su, and Jun Zhu. Visual generation without guidance. In ICML, 2025.
[8] Huayu Chen, Hang Su, Peize Sun, and Jun Zhu. Toward guidance-free ar visual generation via condition contrastive alignment. In ICLR, 2025.
[9] Yang Chen, Zhuolin Yang, Zihan Liu, Chankyu Lee, Mohammad Shoeybi, Bryan Catanzaro, and Wei Ping. Acereason-nemotron: Advancing math and code reasoning through reinforcement learning. arXiv preprint, 2025.
[10] Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. In NeurIPS, 2017.
[11] Tianzhe Chu, Yuexiang Zhai, Jihan Yang, Shengbang Tong, Saining Xie, Dale Schuurmans, Quoc V Le, Sergey Levine, and Yi Ma. Sft memorizes, rl generalizes: A comparative study of foundation model post-training. arXiv preprint arXiv:2501.17161, 2025.
[12] Xiangxiang Chu, Hailang Huang, Xiao Zhang, Fei Wei, and Yong Wang. Gpg: A simple and strong reinforcement learning baseline for model reasoning. arXiv preprint arXiv:2504.02546, 2025.
[13] Ganqu Cui, Lifan Yuan, Zefan Wang, Hanbin Wang, Wendi Li, Bingxiang He, Yuchen Fan, Tianyu Yu, Qixin
Xu, Weize Chen, et al. Process reinforcement through implicit rewards. arXiv preprint arXiv:2502.01456, 2025.
[14] DeepSeek-AI. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.
[15] Hanze Dong, Wei Xiong, Deepanshu Goyal, Yihan Zhang, Winnie Chow, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, and Tong Zhang. Raft: Reward ranked finetuning for generative foundation model alignment. TMLR, 2023.
[16] Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, and Douwe Kiela. Kto: Model alignment as prospect theoretic optimization. arXiv preprint arXiv:2402.01306, 2024.
[17] Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. Mask-predict: Parallel decoding of conditional masked language models. arXiv preprint arXiv:1904.09324, 2019.
[18] Shangmin Guo, Biao Zhang, Tianlin Liu, Tianqi Liu, Misha Khalman, Felipe Llinares, Alexandre Rame, Thomas Mesnard, Yao Zhao, Bilal Piot, et al. Direct language model alignment from online ai feedback. arXiv preprint arXiv:2402.04792, 2024.
[19] Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Leng Thai, Junhao Shen, Jinyi Hu, Xu Han, Yujie Huang, Yuxiang Zhang, et al. Olympiadbench: A challenging benchmark for promoting agi with olympiad-level bilingual multimodal scientific problems. arXiv preprint arXiv:2402.14008, 2024.
[20] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874, 2021.
[21] Jiwoo Hong, Noah Lee, and James Thorne. Orpo: Monolithic preference optimization without reference model. arXiv preprint arXiv:2403.07691, 2024.
[22] Jian Hu. Reinforce++: A simple and efficient approach for aligning large language models. arXiv preprint arXiv:2501.03262, 2025.
[23] Ermo Hua, Biqing Qi, Kaiyan Zhang, Yue Yu, Ning Ding, Xingtai Lv, Kai Tian, and Bowen Zhou.
Intuitive fine-tuning: Towards simplifying alignment into a single process. arXiv preprint arXiv:2405.11870, 2024.
[24] Hynek Kydlíček. Math-verify: Math verification library, 2024.
[25] Nathan Lambert, Jacob Morrison, Valentina Pyatkin, Shengyi Huang, Hamish Ivison, Faeze Brahman, Lester James V Miranda, Alisa Liu, Nouha Dziri, Shane Lyu, et al. Tülu 3: Pushing frontiers in open language model post-training. arXiv preprint arXiv:2411.15124, 2024.
[26] Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solving quantitative reasoning problems with language models. In NeurIPS, 2022.
[27] Jia Li, Edward Beeching, Lewis Tunstall, Ben Lipkin, Roman Soletskyi, Shengyi Huang, Kashif Rasul, Longhui Yu, Albert Q Jiang, Ziju Shen, et al. Numinamath: The largest public dataset in ai4maths with 860k pairs of competition math problems and solutions. Hugging Face repository, 2024.
[28] Ziniu Li, Tian Xu, Yushun Zhang, Zhihang Lin, Yang Yu, Ruoyu Sun, and Zhi-Quan Luo. Remax: A simple, effective, and efficient reinforcement learning method for aligning large language models. arXiv preprint arXiv:2310.10505, 2023.
[29] Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, et al. Deepseek-v3 technical report. arXiv preprint arXiv:2412.19437, 2024.
[30] Zichen Liu, Changyu Chen, Wenjun Li, Tianyu Pang, Chao Du, and Min Lin. There may not be aha moment in r1-zero-like training—a
pilot study, 2025.
[31] Zichen Liu, Changyu Chen, Wenjun Li, Penghui Qi, Tianyu Pang, Chao Du, Wee Sun Lee, and Min Lin. Understanding r1-zero-like training: A critical perspective. arXiv preprint arXiv:2503.20783, 2025.
[32] Ansong Ni, Jeevana Priya Inala, Chenglong Wang, Alex Polozov, Christopher Meek, Dragomir Radev, and Jianfeng Gao. Learning math reasoning from self-sampled correct and partially-correct solutions. In ICLR, 2023.
[33] Shen Nie, Fengqi Zhu, Chao Du, Tianyu Pang, Qian Liu, Guangtao Zeng, Min Lin, and Chongxuan Li. Scaling up masked diffusion models on text. arXiv preprint arXiv:2410.18514, 2024.
[34] NVIDIA. Cosmos-reason1: From physical common sense to embodied reasoning. arXiv preprint arXiv:2503.15558, 2025.
[35] OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
[36] OpenAI. Openai o1 system card. arXiv preprint arXiv:2412.16720, 2024.
[37] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. In NeurIPS, 2022.
[38] Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. In NeurIPS, 2023.
[39] John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In ICML, 2015.
[40] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
[41] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, YK Li, Y Wu, and Daya Guo. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024.
[42] Guangming Sheng, Chi Zhang, Zilingfeng Ye, Xibin Wu, Wang Zhang, Ru Zhang, Yanghua Peng, Haibin Lin, and Chuan Wu. Hybridflow: A flexible and efficient rlhf framework. arXiv preprint arXiv:2409.19256, 2024.
[43] Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, and Houfeng Wang. Preference ranking optimization for human alignment. arXiv preprint arXiv:2306.17492, 2023.
[44] Richard S Sutton, David McAllester, Satinder Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In NeurIPS, 1999.
[45] Kimi Team, Angang Du, Bofei Gao, Bowei Xing, Changjiu Jiang, Cheng Chen, Cheng Li, Chenjun Xiao, Chenzhuang Du, Chonghua Liao, et al. Kimi k1.5: Scaling reinforcement learning with llms. arXiv preprint arXiv:2501.12599, 2025.
[46] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
[47] Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. In NeurIPS, 2017.
[48] Chaoqi Wang, Yibo Jiang, Chenghao Yang, Han Liu, and Yuxin Chen. Beyond reverse kl: Generalizing direct preference optimization with diverse divergence constraints. arXiv preprint arXiv:2309.16240, 2023.
[49] Peiyi Wang, Lei Li, Zhihong Shao, RX Xu, Damai Dai, Yifei Li, Deli Chen, Yu Wu, and Zhifang Sui. Math-shepherd: Verify and reinforce llms step-by-step without human annotations. arXiv preprint arXiv:2312.08935, 2023.
[50] Yiping Wang, Qing
Yang, Zhiyuan Zeng, Liliang Ren, Lucas Liu, Baolin Peng, Hao Cheng, Xuehai He, Kuan Wang, Jianfeng Gao, et al. Reinforcement learning for reasoning in large language models with one training example. arXiv preprint arXiv:2504.20571, 2025.
[51] Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 1992.
[52] Wei Xiong, Hanze Dong, Chenlu Ye, Ziqi Wang, Han Zhong, Heng Ji, Nan Jiang, and Tong Zhang. Iterative preference learning from human feedback: Bridging theory and practice for rlhf under kl-constraint. arXiv preprint arXiv:2312.11456, 2023.
[53] Wei Xiong, Jiarui Yao, Yuhui Xu, Bo Pang, Lei Wang, Doyen Sahoo, Junnan Li, Nan Jiang, Tong Zhang, Caiming Xiong, et al. A minimalist approach to llm reasoning: from rejection sampling to reinforce. arXiv preprint arXiv:2504.11343, 2025.
[54] Jing Xu, Andrew Lee, Sainbayar Sukhbaatar, and Jason Weston. Some things are more cringe than others: Iterative preference optimization with the pairwise cringe loss. arXiv preprint arXiv:2312.16682, 2023.
[55] Jianhao Yan, Yafu Li, Zican Hu, Zhi Wang, Ganqu Cui, Xiaoye Qu, Yu Cheng, and Yue Zhang. Learning to reason under off-policy guidance, 2025.
[56] An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.
[57] An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, et al. Qwen2.5-math technical report: Toward mathematical expert model via self-improvement. arXiv preprint arXiv:2409.12122, 2024.
[58] Qiying Yu, Zheng Zhang, Ruofei Zhu, Yufeng Yuan, Xiaochen Zuo, Yu Yue, Tiantian Fan, Gaohong Liu, Lingjun Liu, Xin Liu, et al. Dapo: An open-source llm reinforcement learning system at scale. arXiv preprint arXiv:2503.14476, 2025.
[59] Hongyi Yuan, Zheng Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. Rrhf: Rank responses to align language models with human feedback. In NeurIPS, 2023.
[60] Lifan Yuan, Wendi Li, Huayu Chen, Ganqu Cui, Ning Ding, Kaiyan Zhang, Bowen Zhou, Zhiyuan Liu, and Hao Peng. Free process rewards without process labels. arXiv preprint arXiv:2412.01981, 2024.
[61] Yurun Yuan, Fan Chen, Zeyu Jia, Alexander Rakhlin, and Tengyang Xie. Trajectory bellman residual minimization: A simple value-based method for llm reasoning, 2025.
[62] Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Chuanqi Tan, and Chang Zhou. Scaling relationship on learning mathematical reasoning with large language models. arXiv preprint arXiv:2308.01825, 2023.
[63] Yang Yue, Zhiqi Chen, Rui Lu, Andrew Zhao, Zhaokai Wang, Shiji Song, and Gao Huang. Does reinforcement learning really incentivize reasoning capacity in llms beyond the base model? arXiv preprint arXiv:2504.13837, 2025.
[64] Weihao Zeng, Yuzhen Huang, Qian Liu, Wei Liu, Keqing He, Zejun Ma, and Junxian He. Simplerl-zoo: Investigating and taming zero reinforcement learning for open base models in the wild. arXiv preprint arXiv:2503.18892, 2025.
[65] Lunjun Zhang, Arian Hosseini, Hritik Bansal, Mehran Kazemi, Aviral Kumar, and Rishabh Agarwal. Generative verifiers: Reward modeling as next-token prediction. arXiv preprint arXiv:2408.15240, 2024.
[66] Rosie Zhao, Alexandru Meterez, Sham Kakade, Cengiz Pehlevan, Samy Jelassi, and Eran
Malach. Echo chamber: Rl post-training amplifies behaviors learned in pretraining. arXiv preprint arXiv:2504.07912, 2025.
[67] Kaiwen Zheng, Yongxin Chen, Huayu Chen, Guande He, Ming-Yu Liu, Jun Zhu, and Qinsheng Zhang. Direct discriminative optimization: Your likelihood-based visual generative model is secretly a gan discriminator. In ICML, 2025.
[68] Kaiwen Zheng, Yongxin Chen, Hanzi Mao, Ming-Yu Liu, Jun Zhu, and Qinsheng Zhang. Masked diffusion models are secretly time-agnostic masked models and exploit inaccurate categorical sampling. arXiv preprint arXiv:2409.02908, 2024.

A Proof of Theorems

Theorem A.1 (Policy Optimization with Negative Answers). Consider the maximum-likelihood objective for training the implicit negative policy $\pi^-_\theta$:

$$\max_\theta \mathbb{E}_{p(q)\pi^-(a|q)} \log \pi^-_\theta(a|q) \;\Leftrightarrow\; \min_\theta -\mathbb{E}_{(q,a)\sim\mathcal{D}^-} \log \frac{\pi(a|q) - r_q \pi^+_\theta(a|q)}{1 - r_q}.$$

Assuming unlimited data and model capacity, the optimal solution for solving Eq. 8 is

$$\forall q, a: \quad \pi^+_\theta(a|q) = \pi^+(a|q).$$

Proof. The proof is straightforward. First, we show that maximum-likelihood training leads to the optimal solution $\pi^-_{\theta^*}(a|q) = \pi^-(a|q)$:

$$\max_\theta \mathbb{E}_{p(q)\pi^-(a|q)} \log \pi^-_\theta(a|q) \;\Leftrightarrow\; \max_\theta \mathbb{E}_{p(q)\pi^-(a|q)} \left[\log \pi^-_\theta(a|q) - \log \pi^-(a|q)\right] \;\Leftrightarrow\; \min_\theta \mathbb{E}_{p(q)} D_{\mathrm{KL}}\!\left(\pi^-(a|q) \,\|\, \pi^-_\theta(a|q)\right).$$

Since $D_{\mathrm{KL}}\left(\pi^-(a|q) \,\|\, \pi^-_\theta(a|q)\right) \ge 0$, with equality if and only if $\pi^-_\theta = \pi^-$ for all $q$, we have

$$\forall q, a: \quad \pi^-_{\theta^*}(a|q) = \pi^-(a|q). \quad (13)$$

Next, we prove $\pi^+_{\theta^*} = \pi^+$. Note that during training, $\pi^-_\theta$ is only an implicit policy defined by $\pi^+_\theta$ through

$$\pi^-_\theta(a|q) := \frac{\pi(a|q) - r_q \pi^+_\theta(a|q)}{1 - r_q}.$$

On the other hand, the negative data distribution $\pi^-$ has the same relationship with $\pi^+$ by Eq. 7:

$$\pi^-(a|q) := \frac{\pi(a|q) - r_q \pi^+(a|q)}{1 - r_q}.$$

We have ensured $0 < r_q < 1$ during training; combining these observations with Eq. 13 yields

$$\forall q, a: \quad \pi^+_{\theta^*}(a|q) = \pi^+(a|q).$$

Proposition A.2 (Algorithm Gradient Comparison). Suppose there are $\hat{r}_q K$ positive answers and $(1-\hat{r}_q)K$ negative ones for a given question $q$.

(a) GRPO Gradient: Considering only binary rewards $\{0, 1\}$ in Eq. 3, the GRPO loss gradient satisfies

$$\nabla_\theta \mathcal{L}^{\mathrm{GRPO}}_{\mathcal{D}}(\theta) = -\sum \left\{ r A^+_q \cdot \mathbb{I}\!\left[R^t_\theta(q,a) < 1+\epsilon'\right] + (1-r) A^-_q \cdot \mathbb{I}\!\left[R^t_\theta(q,a) > 1-\epsilon'\right] \right\} \nabla_\theta R^t_\theta(q,a),$$

where $A^+_q = \sqrt{\frac{1-\hat{r}_q}{\hat{r}_q}}$ and $A^-_q = -\sqrt{\frac{\hat{r}_q}{1-\hat{r}_q}}$ are respectively the normalized advantages for positive and negative answers.

(b) NFT Gradient: Let $\omega(q) = \sqrt{(1-\hat{r}_q)/\hat{r}_q}$. The NFT loss gradient satisfies

$$\nabla_\theta \mathcal{L}^{\mathrm{NFT}}_{\mathcal{D}}(\theta) = -\sum \left\{ r A^+_q \cdot \frac{1}{R^t_\theta(q,a)} + (1-r) A^-_q \cdot \left[\max\!\left(\frac{1-\hat{r}_q R^t_\theta(q,a)}{1-\hat{r}_q},\, \epsilon\right)\right]^{-1} \right\} \nabla_\theta R^t_\theta(q,a).$$

Proof. (a) GRPO Gradient: We first copy down the GRPO loss definition from Eq. 3:

$$\mathcal{L}^{\mathrm{GRPO}}_{\mathcal{D}}(\theta) = -\sum_{q,a,t} \min\!\left[ R^t_\theta(q,a)\, \hat{A}_{q,a},\; \mathrm{clip}\!\left(R^t_\theta(q,a),\, 1-\epsilon',\, 1+\epsilon'\right) \hat{A}_{q,a} \right].$$

The normalized advantage can be estimated as $\hat{A}_{q,a} := \left(r(q,a) - \mathrm{mean}\{r_{1:K}\}\right) / \mathrm{std}\{r_{1:K}\}$, where

$$\mathrm{mean}\{r_{1:K}\} = \frac{1}{K}\left[\hat{r}_q K \cdot 1 + (1-\hat{r}_q)K \cdot 0\right] = \hat{r}_q$$

and

$$\mathrm{std}\{r_{1:K}\} = \sqrt{\frac{1}{K}\left[\hat{r}_q K (1-\hat{r}_q)^2 + (1-\hat{r}_q)K (0-\hat{r}_q)^2\right]} = \sqrt{\hat{r}_q(1-\hat{r}_q)}.$$

When $a$ is a positive answer, $r(q,a) = 1$, so $\hat{A}_{q,a} = A^+_q = \sqrt{(1-\hat{r}_q)/\hat{r}_q} > 0$ and

$$\mathcal{L}^{\mathrm{GRPO}}_{\mathcal{D}^+}(\theta) = -\sum_{q,a^+,t} \min\!\left(R^t_\theta(q,a^+),\, 1+\epsilon'\right) \hat{A}_{q,a^+},$$
$$\nabla_\theta \mathcal{L}^{\mathrm{GRPO}}_{\mathcal{D}^+}(\theta) = -\sum_{q,a^+,t} A^+_q \cdot \mathbb{I}\!\left[R^t_\theta(q,a^+) < 1+\epsilon'\right] \nabla_\theta R^t_\theta(q,a^+). \quad (14)$$

When $a$ is a negative answer, $r(q,a) = 0$, so $\hat{A}_{q,a} = A^-_q = -\sqrt{\hat{r}_q/(1-\hat{r}_q)} < 0$ and the min over the negative advantage becomes a max over the ratio:

$$\mathcal{L}^{\mathrm{GRPO}}_{\mathcal{D}^-}(\theta) = -\sum_{q,a^-,t} \max\!\left(R^t_\theta(q,a^-),\, 1-\epsilon'\right) \hat{A}_{q,a^-},$$
$$\nabla_\theta \mathcal{L}^{\mathrm{GRPO}}_{\mathcal{D}^-}(\theta) = -\sum_{q,a^-,t} A^-_q \cdot \mathbb{I}\!\left[R^t_\theta(q,a^-) > 1-\epsilon'\right] \nabla_\theta R^t_\theta(q,a^-). \quad (15)$$

Combining Eq. 14 and Eq. 15 yields the stated gradient.

(b) NFT Gradient: We copy down the NFT loss definition from Eq. 10:

$$\mathcal{L}^{\mathrm{NFT}}_{\mathcal{D}}(\theta) = -\sum_{q,a,t} \omega(q) \left[ r \log R^t_\theta(q,a) + (1-r) \log \max\!\left(\frac{1-\hat{r}_q R^t_\theta(q,a)}{1-\hat{r}_q},\, \epsilon\right) \right].$$

When $a$ is a positive answer, $r(q,a) = 1$:

$$\mathcal{L}^{\mathrm{NFT}}_{\mathcal{D}^+}(\theta) = -\sum_{q,a^+,t} \omega(q) \log R^t_\theta(q,a^+) = -\sum_{q,a^+,t} \sqrt{\frac{1-\hat{r}_q}{\hat{r}_q}} \log R^t_\theta(q,a^+) = -\sum_{q,a^+,t} A^+_q \log R^t_\theta(q,a^+),$$
$$\nabla_\theta \mathcal{L}^{\mathrm{NFT}}_{\mathcal{D}^+} = -\sum_{q,a^+,t} A^+_q \frac{1}{R^t_\theta(q,a^+)} \nabla_\theta R^t_\theta(q,a^+). \quad (16)$$

When $a$ is a negative answer, $r(q,a) = 0$:

$$\mathcal{L}^{\mathrm{NFT}}_{\mathcal{D}^-}(\theta) = -\sum_{q,a^-,t} \omega(q) \log \max\!\left(\frac{1-\hat{r}_q R^t_\theta(q,a^-)}{1-\hat{r}_q},\, \epsilon\right),$$
$$\nabla_\theta \mathcal{L}^{\mathrm{NFT}}_{\mathcal{D}^-} = -\sum_{q,a^-,t} \sqrt{\frac{1-\hat{r}_q}{\hat{r}_q}} \left[\max\!\left(\frac{1-\hat{r}_q R^t_\theta(q,a^-)}{1-\hat{r}_q},\, \epsilon\right)\right]^{-1} \cdot \frac{-\hat{r}_q}{1-\hat{r}_q} \cdot \nabla_\theta R^t_\theta(q,a^-)$$
$$= -\sum_{q,a^-,t} A^-_q \left[\max\!\left(\frac{1-\hat{r}_q R^t_\theta(q,a^-)}{1-\hat{r}_q},\, \epsilon\right)\right]^{-1} \nabla_\theta R^t_\theta(q,a^-). \quad (17)$$

Combining Eq. 16 and Eq. 17 yields the stated gradient.

Remark A.3 (Dr. GRPO). The main practical difference between Dr. GRPO [31] and GRPO is that Dr. GRPO removes the std normalization term when estimating group-normalized advantages. Following Proposition 4.1, we simply need to set $\omega(q) = 1-\hat{r}_q$ instead of $\omega(q) = \sqrt{(1-\hat{r}_q)/\hat{r}_q}$ to align with the Dr. GRPO loss function.

Proposition A.4 (On-policy Gradient Equivalence). Following the setup of Proposition 4.1 and letting $\epsilon \le 1$, the GRPO and NFT loss gradients are equivalent in on-policy training:

$$R^t_\theta(q,a) = 1 \;\Longrightarrow\; \nabla_\theta \mathcal{L}^{\mathrm{NFT}}_{\mathcal{D}}(\theta) = \nabla_\theta \mathcal{L}^{\mathrm{GRPO}}_{\mathcal{D}}(\theta).$$

Proof. The proof is simple. When on-policy, $R^t_\theta(q,a) = 1$. For positive answers $a^+$, the GRPO gradient (Eq. 14) and the NFT gradient (Eq. 16) both become

$$\nabla_\theta \mathcal{L}^{\mathrm{GRPO}}_{\mathcal{D}^+}(\theta) = \nabla_\theta \mathcal{L}^{\mathrm{NFT}}_{\mathcal{D}^+}(\theta) = -\sum_{q,a^+,t} A^+_q \nabla_\theta R^t_\theta(q,a^+).$$

For negative answers $a^-$, the GRPO gradient (Eq. 15) and the NFT gradient (Eq. 17) both become

$$\nabla_\theta \mathcal{L}^{\mathrm{GRPO}}_{\mathcal{D}^-}(\theta) = \nabla_\theta \mathcal{L}^{\mathrm{NFT}}_{\mathcal{D}^-}(\theta) = -\sum_{q,a^-,t} A^-_q \nabla_\theta R^t_\theta(q,a^-).$$

B Experiment Details

General training setup. All algorithms are implemented based on the official DAPO codebase within the VeRL framework. We use a learning rate of 1e-6 with a linear warm-up schedule across all experiments. At each rollout step, we generate 16 answers for each of 512 sampled questions, then split the data into 16 mini-batches and train the policy network for 16 gradient steps. Models are trained for 320 rollout steps, totaling over 5,000 gradient steps.
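The gradient coefficients derived in Proposition A.2, and their on-policy equivalence from Proposition A.4, can be checked numerically. The following is a small sanity-check sketch (function names, the sample accuracy $\hat{r}_q = 0.3$, and the clipping values are illustrative choices of ours, not part of the paper's training pipeline); each function returns the bracketed per-token coefficient that multiplies $\nabla_\theta R^t_\theta(q,a)$:

```python
import math

def grpo_coeff(r, R, r_hat, eps_pos, eps_neg):
    """Per-token GRPO gradient coefficient (Proposition A.2a), up to the
    overall -sum factor. r: binary reward; R: importance ratio R_theta^t;
    r_hat: empirical accuracy of the question."""
    A_pos = math.sqrt((1 - r_hat) / r_hat)
    A_neg = -math.sqrt(r_hat / (1 - r_hat))
    if r == 1:
        # positive answers are clipped once R exceeds 1 + eps'
        return A_pos * (1.0 if R < 1 + eps_pos else 0.0)
    # negative answers are clipped once R drops below 1 - eps'
    return A_neg * (1.0 if R > 1 - eps_neg else 0.0)

def nft_coeff(r, R, r_hat, eps):
    """Per-token NFT gradient coefficient (Proposition A.2b)."""
    A_pos = math.sqrt((1 - r_hat) / r_hat)
    A_neg = -math.sqrt(r_hat / (1 - r_hat))
    if r == 1:
        return A_pos / R
    return A_neg / max((1 - r_hat * R) / (1 - r_hat), eps)

# On-policy (R = 1) the two coefficients coincide, as Proposition A.4 states.
for r in (0, 1):
    g = grpo_coeff(r, R=1.0, r_hat=0.3, eps_pos=0.28, eps_neg=0.2)
    n = nft_coeff(r, R=1.0, r_hat=0.3, eps=1.0)
    assert abs(g - n) < 1e-12
```

Off-policy the two differ: GRPO hard-clips (the indicator drops to zero past the trust region), while NFT reweights smoothly through the `1/R` and `max(..., eps)` terms.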
Unless otherwise specified, we follow DAPO's default design choices, including dynamic data sampling, token-level loss normalization, and no KL regularization. For 7B training, we restrict context lengths to 4K and use 64 NVIDIA H100 GPUs. For 32B training, we restrict context lengths to 32K (DAPO) or 16K (others), and use 128-256 NVIDIA H100 GPUs.

DAPO. We adopt a faithful implementation of the official DAPO codebase, keeping all hyperparameters untouched.

NFT. Compared with DAPO, NFT reduces the context length to 16K for 32B training. We find this does not significantly affect performance but noticeably reduces data collection time. Another difference is that we remove the overlong reward shaping technique of DAPO to ensure a binary reward outcome. In our setting, truncated answers are treated as negative, which sufficiently discourages overlong answers. The negative ratio clipping parameter is set to $\epsilon = 1.0$, and the prompt weight is defined as $\omega(q) = 1 - r_q$.

RFT zeros out the negative data loss in the NFT implementation and keeps a constant prompt weight $\omega(q) = 1$ during training.

GRPO does not adopt the dynamic sampling technique proposed by DAPO. Rather, it simply leverages all data for training, even though the gradient for all-positive or
all-negative questions should be zeroed out automatically by the algorithm itself [58]. Other DAPO-related techniques are kept, such as setting the positive clipping parameter to a higher value $\epsilon'_+ = 0.28$. Since GRPO requires less time for sampling data, we train GRPO models for 580+ rollout steps instead of 320 steps, which takes roughly the same training time as the DAPO experiments.

Dr. GRPO is modified from our GRPO implementation. The only difference is the removal of the std normalization when computing group-normalized advantages.

Iterative DPO. Comparing DPO with other RL algorithms in a head-to-head fashion is difficult because DPO requires paired data to calculate the training objective, while we sample 16 unpaired answers for each question. To solve this problem, we take the implementation of InfoNCA [6], a variation of the DPO algorithm that can handle $K > 2$ responses per question:

$$\mathcal{L}^{\mathrm{InfoNCA}}_{(q,a_{1:K},r_{1:K})\sim\mathcal{D}}(\theta) = -\sum_{k=1}^{K} \left[ \frac{r(k)}{\sum_{i=1}^{K} r(i)} \log \frac{e^{\beta R_\theta(q,a_k)}}{\sum_{i=1}^{K} e^{\beta R_\theta(q,a_i)}} \right].$$

InfoNCA is guaranteed to reduce to the DPO algorithm in the $K = 2$ setting. We ablate $\beta \in \{0.1, 0.01, 0.02\}$ and report the best averaged validation result. We find InfoNCA training to be unstable and add an SFT regularization term to the original loss function to stabilize the training process.

Validation details. Validation is performed with a top-p value of 0.7. The temperature is set to 1.0 for 7B models and 0.6 for 32B models, with context lengths matching the training configuration. We use math-verify [24] as the verifier during training validation, and simpleRL [64] for final evaluation. The DAPO-17k benchmark consists solely of training questions whose ground-truth answers are integers and includes both a prefix and a suffix for each question. Accordingly, for validation on AIME and AMC questions, we adapt the prompt format to match the training pattern. For other benchmarks with non-integer answers, question prompts remain unmodified.
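The InfoNCA objective quoted above can be sketched in a few lines; the following is an illustrative implementation (variable names are ours, and the inputs are the implicit reward logits $R_\theta(q, a_k)$ rather than actual model outputs):

```python
import math

def infonca_loss(ratios, rewards, beta=0.1):
    """Sketch of the InfoNCA loss for one question with K unpaired answers.

    ratios  : list of K implicit reward logits R_theta(q, a_k)
    rewards : list of K scalar rewards r(k), not all zero
    beta    : inverse-temperature hyperparameter
    """
    K = len(ratios)
    # log-partition of the softmax over the K candidate answers
    log_z = math.log(sum(math.exp(beta * r) for r in ratios))
    log_probs = [beta * r - log_z for r in ratios]
    total = sum(rewards)
    # reward-weighted negative log-likelihood: r(k) / sum_i r(i)
    return -sum((rewards[k] / total) * log_probs[k] for k in range(K))
```

With $K = 2$ and binary rewards $(1, 0)$ this collapses to $-\log \sigma\!\left(\beta (R_\theta(q,a_w) - R_\theta(q,a_l))\right)$ on the winning/losing pair, consistent with the statement that InfoNCA reduces to DPO when $K = 2$.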
C Additional Experiment Results

Figure 11: Training and validation accuracy curves for 32B experiments. [Eight panels compare NFT (ours), RFT, and DAPO over roughly 5,000 gradient steps: overall validation accuracy, training accuracy, AIME 2024, AIME 2025, MATH500, AMC 2023, OlympiadBench, and Minerva Math.]
arXiv:2505.18121v1 [cs.AI] 23 May 2025

PROGRM: Build Better GUI Agents with Progress Rewards

Danyang Zhang1,2† Situo Zhang1,2† Ziyue Yang1,2 Zichen Zhu1,2 Zihan Zhao1,2 Ruisheng Cao1,2 Lu Chen1,2,3‡ Kai Yu1,2,3‡
1X-LANCE Lab, School of Computer Science, MoE Key Lab of Artificial Intelligence, SJTU AI Institute, Shanghai Jiao Tong University, Shanghai, China
2Jiangsu Key Lab of Language Computing, Suzhou, China
3Suzhou Laboratory, Suzhou, China
{zhang-dy20,situozhang}@sjtu.edu.cn

Abstract

LLM-based (Large Language Model) GUI (Graphical User Interface) agents could significantly reshape our daily lives. However, current LLM-based GUI agents suffer from a scarcity of high-quality training data owing to the difficulties of trajectory collection and reward annotation. Existing works have explored LLMs to collect trajectories for imitation learning or to offer reward signals for online RL training. However, the Outcome Reward Model (ORM) used in existing works cannot provide fine-grained feedback and can over-penalize valuable steps in finally failed trajectories. To this end, we propose the Progress Reward Model (PROGRM) to provide dense, informative intermediate rewards by predicting task completion progress for each step in online training. To handle the challenge of progress reward label annotation, we further design an efficient LCS-based (Longest Common Subsequence) self-annotation algorithm to discover the key steps in trajectories and assign progress labels accordingly. PROGRM is evaluated with extensive experiments and analyses. Actors trained with PROGRM outperform leading proprietary LLMs and ORM-trained actors, illustrating the effectiveness of PROGRM. The code for our experiments will be made publicly available upon acceptance.

1 Introduction

Automatic Graphical User Interface (GUI) agents have great potential to reshape our daily lives by automating routine operations on GUI systems like computers and smartphones.
Recently, inspired by the exceptional achievements of Large Language Models (LLMs) in common-sense and reasoning domains, researchers have explored LLMs to improve the performance of GUI agents [37, 36, 24, 35, 40, 20, 43, 2, 16, 30, 17, 31, 13, 8, 11, 28]. Using off-the-shelf LLMs to perform GUI tasks through prompt-based methods often yields unsatisfactory results, as these models lack the ability to ground instructions to low-level actions or to make the long-term decisions required for GUI tasks. While there has been a growing body of work on training LLMs to build GUI agents, such training typically relies on human-labeled tasks or trajectories. However, annotating GUI tasks is labor-intensive, demands domain-specific expertise, and is extremely difficult to scale. The scarcity of high-quality training data presents a major challenge in developing high-performance GUI agents.

†Equal contribution. ‡Corresponding authors. Preprint. Under review.

Figure 1: Comparison of policy optimization methods. (a) Imitation Learning optimizes the agent's policy using per-step expert labels. (b) ORM provides sparse rewards by updating the policy only based on the trajectory's final success or failure. (c) PROGRM predicts a progress value at each step, using the progress gain (∆p) as a dense reward signal.

To address this challenge, many studies have explored ways to automatically synthesize GUI training data [42, 22, 14]. These approaches primarily leverage LLMs to generate new task instructions and collect trajectories produced by actor LLMs. NNetnav [12] and OS-Genesis [23] conclude task instructions after
https://arxiv.org/abs/2505.18121v1
collecting trajectories, while Explorer [14] further iterates on this process. However, these methods rely on imitation learning (see Figure 1(a)) and are not well suited to online environments, where content changes dynamically and agents may encounter unseen scenarios. They lack the ability to learn from mistakes or to benefit from online exploration to improve performance. Efficient online Reinforcement Learning (RL) requires reliable reward signals to accurately evaluate arbitrary GUI-based tasks. DistRL [26] employs an additional Vision-Language Model (VLM) as an autonomous evaluator to determine task success, while WebRL [16] trains a lightweight Outcome Reward Model (ORM). However, these methods rely solely on the final success status, unnecessarily penalizing all steps within trajectories that fail to achieve the goal but potentially include valuable intermediate actions [8], as shown in Figure 1(b). Additionally, the sparse reward signals from ORM significantly reduce exploration efficiency in RL training, particularly for the long-horizon tasks typical of GUI interactions. To this end, we introduce the Progress Reward Model (PROGRM), an effective and efficient approach for generating reward signals at intermediate steps within RL training trajectories for GUI agents. Intuitively, a complex task is not accomplished in a single leap; instead, it is completed through a sequence of subtasks that progressively advance toward the final goal. As shown in Figure 1(c), at each stage, we can quantify how much of the overall task has been completed: this is the essence of progress. For example, when booking a flight, the process typically involves searching for available flights, selecting a suitable option, entering passenger details, and completing the payment.
Each of these steps brings the agent closer to the goal, and PROGRM provides informative reward signals by estimating the progress achieved at each step, rather than waiting until the entire task is finished. This dense, intermediate reward signal enables more efficient and stable RL training, especially for long-horizon tasks commonly encountered in GUI environments. However, obtaining accurate progress reward labels poses a significant challenge for training reliable PROGRM models, as gold labels are commonly not directly available in raw trajectories. This problem is conceptually similar to the difficulties of process reward annotation faced in Process Reward Model (PRM) training for reasoning tasks. Human experts or Monte-Carlo search are commonly employed in the reasoning area to label rewards for intermediate reasoning steps [10, 25, 39]. However, such methods are prohibitively expensive and time-consuming for GUI agent tasks, where heavy simulators are involved and rapid rollback or efficient state restoration is often unsupported. Therefore, to address the progress reward labeling challenge, we propose an efficient self-annotation algorithm that automatically identifies key steps within trajectories and assigns progress labels accordingly. Specifically, for a given task, we first discover the common patterns from the successful trajectories by computing their Longest Common Subsequences (LCS) and extract them as execution recipes. The recipes are then used to identify the key steps in unseen trajectories, and the progress labels can be efficiently assigned based on the identified key steps. The proposed progress labeling algorithm avoids expensive human-expert annotation and Monte-Carlo search, while it can
easily and sufficiently exploit the self-explored trajectories of agents. We evaluate the effectiveness of PROGRM on the WikiHow task set [38], a real-world Android device navigation benchmark. Experimental results demonstrate that PROGRM-trained actors outperform leading proprietary LLMs for GUI tasks, including Claude-3.7-Sonnet, achieving significant superiority in success rate. It also surpasses existing imitation learning and ORM-based online RL approaches. Furthermore, we show that PROGRM accurately captures the progress agents make during navigation, highlighting its capability of providing meaningful intermediate feedback in complex environments. The key contributions of our work are as follows:

• We propose PROGRM, a novel method that provides dense reward signals based on predicted progress toward a goal, enabling effective online RL training for GUI agents.
• We introduce an efficient LCS-based self-annotation algorithm to automatically generate progress labels for training PROGRM.
• Experimental results on a real-world GUI benchmark demonstrate the superiority of PROGRM-trained actors over state-of-the-art proprietary models as well as imitation-learning and ORM-trained agents.

2 PROGRM: Progress reward model for GUI RL

2.1 Progress reward

Progress is defined as the percentage of a task that has been completed. We introduce the progress function $\mathrm{Prog} \in [0, 1]$, which estimates the progress $p_t$ given a state $s_t$ during the execution of a task $g$:

$$p_t = \mathrm{Prog}(s_t; g). \quad (1)$$

Based on the progress function $\mathrm{Prog}$, we define the progress reward $r^{(p)}_t$ for state $s_t$ as:

$$r^{(p)}_t = \mathrm{Prog}(s_t; g) - \mathrm{Prog}(s_{t-k}; g), \quad (2)$$

where $k$ is a hyperparameter that determines the length of the progress history. The progress reward awards the agent the cumulative progress gain over the past $k$ steps.

2.2 Progress labeling

Since the true progress values are not directly observable from raw interaction trajectories, an appropriate estimation method is required.
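The progress reward of Eqs. (1)-(2) in Section 2.1 can be sketched in a few lines. The per-step progress values below are hypothetical, chosen only to illustrate the behavior:

```python
def progress_reward(progress, t, k=1):
    """Progress reward of Eq. (2): r^(p)_t = Prog(s_t; g) - Prog(s_{t-k}; g).

    progress : per-step progress estimates p_0 ... p_T, each in [0, 1]
    k        : progress-history length hyperparameter
    """
    prev = progress[t - k] if t >= k else 0.0
    return progress[t] - prev

# Hypothetical trajectory: progress rises only at the key steps.
p = [0.0, 0.25, 0.25, 0.5, 1.0]
rewards = [progress_reward(p, t) for t in range(1, len(p))]
```

On this toy trajectory the reward is zero at the "hollow" step (where progress stays flat) and positive only where the agent actually advances, which is exactly the density that a final-success ORM signal lacks.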
A naive approach is to assign a linear progress label to each step in a successful trajectory: for example, given a trajectory of length $T$, the progress at the $t$-th step is labeled as $p^*_t = t/T$. However, this approach assumes uniform progress gain throughout the trajectory, which is often not the case. The actor may take exploratory or useless actions that do not result in actual new progress towards task completion. To enable reliable progress estimation, such "hollow" steps need to be distinguished from the steps that actually make progress gains towards the final goal, which we call key steps. Besides, actors do not always fail at the exact beginning of an episode and can make partial progress even in a finally failed trajectory. In such cases, naive linear progress labeling cannot provide meaningful labels and may cause underestimation of the task progress. To address these limitations, we propose an algorithm that automatically identifies key steps from successful trajectories and generates more refined progress labels accordingly. As shown in Figure 2, the proposed progress labeling algorithm consists of three stages: (1) Longest Common Subsequence (LCS) recipe library construction, (2) key step discovery, and (3) progress label assignment.

Figure 2: Progress labeling algorithm. The proposed labeling algorithm consists of three stages: (a) LCS recipe library construction, where trajectories sharing
https://arxiv.org/abs/2505.18121v1
similar core policies are grouped and the common pattern within each group, called a recipe, is extracted by computing the group LCS; (b) Key Step Discovery, in which key steps (matched steps, highlighted in green; best viewed in color) are identified by matching each trajectory to the recipe with the highest completion ratio (see Eq. (4)); and (c) Progress Label Assignment, where progress labels for key steps are determined by their position within the recipe, and labels for non-key steps are inherited from their nearest preceding key step. Once progress values are assigned, per-step progress gains are used as rewards.

LCS recipe library construction
A natural assumption for key step discovery is that successful trajectories for the same task goal share common behavior patterns. Therefore, the common parts of successful trajectories are more likely to be key steps. From this perspective, we propose to extract the common parts of successful trajectories, store them as recipes, and use the stored recipes to discover key steps in unseen trajectories. Specifically, we start by collecting all successful trajectories for task goal g, denoted as D(g) = {T_1, T_2, ⋯, T_n}. Since there is often more than one optimal policy for completing a given task, we group the trajectories and assume that those within the same group share a common core policy. We then extract one recipe from each group. Grouping is performed according to LCS-based similarity, ensuring that the similarity between any two trajectories within a group exceeds a predefined threshold θ_L. The similarity between two trajectories is defined as:

Sim(T_i, T_j) = SoftLCS(T_i, T_j) / min{|T_i|, |T_j|},    (3)

where SoftLCS denotes the customized soft LCS length between trajectories T_i and T_j. The proposed soft LCS algorithm replaces the exact matching in the traditional LCS algorithm with a soft match function, allowing different types of actions to be matched with varying weights.
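To make the grouping and recipe extraction concrete, here is a runnable sketch under simplifying assumptions: exact action matching stands in for the soft match function, a greedy single-pass scheme stands in for the grouping procedure, and the trajectories and threshold θ_L are hypothetical:

```python
from functools import reduce

def lcs(a, b):
    """One longest common subsequence of two action sequences (classic DP).

    Exact matching stands in here for the paper's soft match function.
    """
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    # Backtrack through the DP table to recover one LCS.
    out, i, j = [], len(a), len(b)
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i, j = i - 1, j - 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

def similarity(ti, tj):
    """Eq. (3), with plain LCS length in place of SoftLCS."""
    return len(lcs(ti, tj)) / min(len(ti), len(tj))

# Hypothetical successful trajectories for one task goal (opaque action tokens).
trajs = [
    ["open", "search", "type:q", "tap_result", "bookmark"],
    ["open", "scroll", "search", "type:q", "tap_result", "bookmark"],
    ["open", "menu", "history", "tap_result", "bookmark"],
]

theta_L = 0.8  # grouping threshold
groups = []
for t in trajs:  # greedy grouping: join the first group where all pairs clear theta_L
    for g in groups:
        if all(similarity(t, u) >= theta_L for u in g):
            g.append(t)
            break
    else:
        groups.append([t])

# Each group's recipe is the LCS of its member trajectories.
recipes = [reduce(lcs, g) for g in groups]
```

In this toy example the first two trajectories clear the threshold and share a recipe, while the third (which reaches the article through a different route) forms its own group.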
This enables the algorithm to more effectively handle actions that include natural language arguments, such as text typing. The detailed definition of the soft LCS function is provided in § A. After grouping, we extract a recipe for each group by computing the group's LCS, obtaining a recipe library L(g) = {L_1, L_2, ⋯, L_m}.

Key step discovery
The second stage identifies the key steps within a given trajectory T_i for task g, using the constructed LCS recipe library L(g). For each trajectory, we first select the recipe L_j ∈ L(g) that maximizes the completion ratio, i.e., the proportion of the recipe matched with the trajectory to annotate. Denoting the LCS between T_i and L_j as l_{ij}, the completion ratio (CR) is formulated as

CR(T_i; L_j) = |l_{ij}| / |L_j|,    (4)

where |l_{ij}| denotes the length of the LCS and |L_j| is the length of the recipe. The steps in T_i that also appear in L_j are regarded as key steps, representing critical progress milestones within the trajectory.

Progress label assignment
Progress labels are then assigned separately for key steps and non-key steps. For each key step, its progress label is determined by its position within the matched recipe L_j, under the assumption that progress increases uniformly along the recipe. Specifically, if the λ-th key step in T_i corresponds to the κ-th
position in L_j, its progress label is given by p*_{k_λ} = κ/|L_j|. For non-key steps (i.e., steps between two key steps), we assign the progress label of the nearest preceding key step, i.e., for a non-key step k_λ < t < k_{λ+1}, its progress label is p*_t = p*_{k_λ}. For environments that provide milestone-style intermediate rewards, these rewards can be used directly to identify key steps. In this case, key steps correspond to those receiving milestone rewards, and progress labels can be assigned using the same approach as above.

2.3 Progress model training

To develop a practical progress model, we combine a pretrained LLM with a multilayer perceptron (MLP) and apply a sigmoid activation to ensure the output is constrained between 0 and 1. The model is trained using the binary cross-entropy (BCE) loss, which is well-suited for optimizing normalized progress value predictions. Given a training dataset of interaction steps and their corresponding progress labels, D^{(p)} = {(g_i, s_i, p*_i)}, the progress model parameterized by θ is optimized as follows:

p̂_i = Prog([g_i, ŝ_i]; θ),
L(θ) = E_{i∼D^{(p)}}[−p*_i log p̂_i − (1 − p*_i) log(1 − p̂_i)].    (5)

Since GUI representations are often lengthy, we represent the state at step t using the complete action history up to t−1 combined with the most recent screen observation:

ŝ_t = [a_1, a_2, ⋯, a_{t−1}, o_t].    (6)

This form enables the progress model to effectively capture both the task goal and the sequential context necessary for accurate progress estimation.

2.4 Online RL training

We adapt the REINFORCE++ algorithm [7] to multi-turn reinforcement learning. REINFORCE++ eliminates the need for a critic network and has demonstrated both efficiency and stability in training single-turn reasoning LLMs [29, 21], making it well-suited for online RL training of LLM-based agents. To adapt it to multi-turn training of GUI agents, we follow the token-level credit assignment approach for multi-turn language agents proposed by Wen et al.
[27] to assign different reward discounts to inter-turn and intra-turn transitions.

3 Experiments

3.1 Experimental settings

Environment
We select the WikiHow benchmark [38] to evaluate the effectiveness of PROGRM. WikiHow is one of the few GUI interaction benchmarks that provide intermediate milestone rewards, which enables us to validate our proposed LCS-based key step discovery algorithm by comparing it against environment-reward-based key step discovery. The benchmark offers a canonical set of 577 annotated tasks for real-world GUI interactions within the WikiHow app. Of these, 150 tasks are used as the test set, while the remaining 427 tasks constitute the training set. Furthermore, following Zhang et al. [38], the test set tasks are divided into three categories: (1) Cross-Page tasks, where the agent needs to follow instructions to complete a series of navigations among different pages; (2) In-Page tasks, where the agent needs to find a specific article and perform in-page operations such as bookmarking, sharing, or rating according to the instructions; and (3) QA tasks, where the agent needs to find a specific article and answer questions based on it. We follow this categorization and report results separately for the three types of tasks.

Table 1: Actor results on WikiHow task set. PROGRM_Env denotes PROGRM trained with environment-reward-based progress labels and PROGRM_LCS denotes PROGRM trained with LCS-based progress
labels. The Average Cumulative Rewards (Rwd) and Success Rates (SR, %) are displayed. Both global average results and per-category results are listed.

                     Overall         Cross-Page      In-Page         QA
Actor                Rwd    SR       Rwd    SR       Rwd    SR       Rwd    SR
GPT-4o-mini          1.60   38.00    1.58   52.54    1.63   29.41    1.59   27.50
GPT-4.1-mini         2.16   52.00    2.13   71.19    2.47   47.06    1.81   30.00
Claude-3.7-Sonnet    2.38   56.00    2.27   77.97    2.78   58.82    2.03   20.00
Qwen2.5-7B           1.89   31.33    1.71   54.23    2.08   15.69    1.91   17.50
SFT                  2.32   56.00    1.95   62.71    3.02   84.31    1.98   10.00
GUI-R1               2.33   58.00    1.93   62.71    3.04   86.27    2.02   15.00
w/ ORM               2.35   58.67    2.05   72.88    2.96   78.43    2.00   12.50
w/ PROGRM_LCS        2.37   59.33    2.14   69.49    2.94   82.35    1.99   15.00
w/ PROGRM_Env        2.39   62.00    2.12   72.88    3.04   88.24    1.95   12.50

Reward model training data
We used Qwen2.5-7B [32] and GPT-4o-mini^1 to perform rollouts and collected 7,725 trajectories. To further augment the dataset and achieve a balanced distribution of steps between successful and failed trajectories, we applied data synthesis techniques, as detailed in § B. This resulted in a final dataset of 10,438 trajectories, comprising 5,729 successful and 4,709 failed cases. For ORM training, we use the overall success or failure label of each trajectory. For PROGRM, the progress label for each step is generated using the pipeline described in § 2.2. Additional details regarding the reward model (RM) training data are provided in § C.

Implementations
We use Qwen2.5-7B as the base model for both reward models (RMs) and actors. Experiments are conducted with two variants of PROGRM, trained using either environment-reward-based progress labels or LCS-based progress labels, denoted as PROGRM_Env and PROGRM_LCS, respectively. Prior to reinforcement learning, the actor is initialized via supervised fine-tuning (SFT) for 10,000 steps using samples from the collected successful trajectories to acquire basic interaction abilities. We adapt the REINFORCE++ algorithm [7] to multi-turn reinforcement learning of LLM-based agents following Wen et al.
[27] to add token-level credit assignment. We employ a remote environment server to enable parallel deployment of WikiHow alongside the RL trainer. The progress reward history length k is set to 1 in the main experiments.

Baselines
We use the trivial Outcome Reward Model (ORM) as the main baseline for comparison. PROGRM is evaluated against ORM by assessing the performance of RL-finetuned actors (see § 3.2) as well as several direct metrics (see § 3.3). We also include GUI-R1 [28], a step-level RL method that uses action-level matching with the ground truth as its reward signal, eliminating the need for reward models or hand-crafted GUI evaluation functions. For GUI-R1 training, 10,000 steps are sampled from the collected successful trajectories. In addition, we compare PROGRM-trained agents with a series of recent state-of-the-art proprietary models, including Claude-3.7-Sonnet^2, GPT-4.1-mini^3, and GPT-4o-mini^4.

3.2 Results

The main results are presented in Table 1. Actors trained with PROGRM achieve the highest average cumulative rewards and success rates, with PROGRM_Env reaching a 62.00% success rate. This performance surpasses all the state-of-the-art proprietary models used via prompting (e.g., Claude-3.7-Sonnet at 56.00%) as well as the SFT actor trained with demonstrations (56.00%). These baseline models often

^1 https://platform.openai.com/docs/models/gpt-4o-mini
^2 https://www.anthropic.com/claude/sonnet
^3 https://platform.openai.com/docs/models/gpt-4.1-mini
^4 https://platform.openai.com/docs/models/gpt-4o-mini

Table 2: Accuracy of RM evaluations.
The numbers are percentages (%). Naive ORM shows an evidently higher false positive rate.

RM            #TP↑    #FN↓   #TN↑    #FP↓   Prec↑   Rec↑    Acc↑
ORM           54.00   4.67   19.33   22.00  71.05   92.04   73.33
PROGRM_LCS    57.33   2.00   33.33   7.33   88.66   96.63   90.67
PROGRM_Env    57.33   4.67   36.67   1.33   97.73   92.47   94.00

Table 3: Comparison of reward models (ORM, ORM_Claude, and PROGRM) in terms of key step progress prediction error, average predicted final step score, and model inference latency.

RM               Key Step Prog. Err.↓   Avg. Fin. Score   Latency (s)↓
ORM_Claude       0.638                  0.000             5.725
ORM_Claude-CoT   0.177                  0.593             8.531
ORM              0.171                  0.595             0.050
PROGRM_LCS       0.126                  0.846             0.050
PROGRM_Env       0.036                  0.842             0.050

struggle to generalize to unseen environments and tend to become stuck, repeatedly outputting useless actions such as scrolling down. PROGRM also outperforms ORM, especially on In-Page tasks. To further understand the advantages of PROGRM over ORM, we compare the direct performance of the reward models (RMs) in Table 2 by matching their predictions of success with the ground-truth judgements from the environment. The results show that PROGRM consistently achieves a stronger correlation with the ground truth across all metrics. In contrast, ORM exhibits a significantly higher false positive rate, which undermines its reliability in distinguishing between successful and unsuccessful trajectories.

3.3 Analysis and ablation study

Analysis of per-category performances
We observe that, except for the model trained with PROGRM_Env, RL-finetuned models do not achieve higher scores on In-Page tasks compared to the SFT baseline. By inspecting the actors' trajectories, we find that the new In-Page failures after finetuning with ORM and PROGRM_LCS stem from misactions, e.g., the instruction requires giving a thumbs-up to an article, but the actor gives a thumbs-down.
Such misactions can be partly attributed to misleading RM signals, as we further notice that ORM and PROGRM_LCS may assign such results a high score in some cases. This is consistent with the higher false positive rates of PROGRM_LCS and ORM in Table 2.

For QA tasks, closed-source proprietary models achieve the best performance. Interestingly, the performance of finetuned models on QA tasks is lower than that of the base model prior to SFT. Completing a QA task requires the agent to generate an appropriate answer based on a reference article; thus, proprietary models with superior natural language capabilities excel in this category. In contrast, SFT focused on GUI-specific tasks somewhat diminishes the actor's general language ability, resulting in a lower baseline for QA performance. Subsequent RL finetuning partially improves this, but does not fully restore the original performance level. We anticipate that additional RL finetuning steps may further enhance the actor's capabilities on QA tasks.

We additionally compare PROGRM with ORM and a general-purpose evaluator [15] based on Claude-3.7-Sonnet, referred to as ORM_Claude, for progress estimation. The comparison includes the progress estimation error for key steps, the average predicted score for final steps, and invocation latency. Results are presented in Table 3. We evaluate ORM_Claude both with and without Chain-of-Thought (CoT) reasoning; the prompts used are provided in § G.

Key step progress estimation
We collect actor trajectories on WikiHow and consider the steps receiving environment milestone rewards to be ground-truth key steps. Progress labels are then assigned to these key steps under the assumption that progress gains are evenly distributed among them. We then use each type of RM to estimate the progress of these key steps and calculate the mean absolute error.

Figure 3: Actor failure mode analysis.

Table 4: Ablation study on the history length k of the progress reward and on training steps.

k     Train. Steps   Rwd    SR (%)
k=1   ∼20K           2.39   62.00
k=3   ∼20K           2.30   54.00
k=1   ∼30K           2.43   67.33

The results are presented in the second column of Table 3. PROGRM achieves the lowest estimation errors among all the reward models evaluated, indicating its ability to produce more accurate progress estimates and provide finer-grained guidance during actor training. Notably, the estimation error of PROGRM_LCS is significantly higher than that of PROGRM_Env, suggesting that there are still gaps between LCS-based and environment-reward-based key step discovery. Further optimization of automatic key step discovery algorithms remains an important direction for future work. The naively trained ORM attains an estimation error comparable to that of an out-of-the-box general-purpose evaluator with CoT, showing no evident advantage. This indicates that trivial ORM training cannot endow the RM with the capability to estimate GUI task progress.

Final step score prediction
Different types of reward models are used to predict scores for the final steps of trajectories, as shown in Table 3. PROGRM_Env and PROGRM_LCS yield similar average final scores. Likewise, the average final score predictions of ORM and ORM_Claude-CoT are also comparable. The average score predicted by ORM is noticeably lower than that of PROGRM.
This is because ORM tends to assign scores close to either 0 or 1, effectively functioning as a binary indicator of trajectory success or failure. In contrast, PROGRM estimates the agent's cumulative task progress, so the scores for failed trajectories are not necessarily close to zero. It is important to note that even within a failed trajectory, there may be positive steps that contribute toward the task goal. In such cases, the coarse 0-1 scoring provided by ORM fails to appropriately reward these intermediate achievements, penalizing or ignoring the agent's partial progress. In contrast, PROGRM can assign moderate credit to these steps, encouraging more effective and efficient exploration during RL training. Additionally, we observe that ORM_Claude without CoT is unable to generate meaningful scores and is therefore unsuitable for agent evaluation or training.

Invocation latency
The invocation latencies for the different types of reward models are shown in Table 3. The lightweight, self-hosted RMs are efficient and well-suited for online training, with response times suitable for practical use. In contrast, invoking ORM_Claude incurs significantly higher latency. Even without Chain-of-Thought (CoT) reasoning, ORM_Claude requires several seconds to return a response, and the latency increases further with CoT enabled. This makes general-purpose evaluators such as ORM_Claude impractical for online training.

Actor failure mode analysis
We summarize two typical failure modes of the actors, i.e., “article not found”
and “useless repetition”. “Article not found” refers to the error where the agent fails to identify proper search keywords to reach a target article page in the WikiHow app. “Useless repetition” indicates that the agent repeats useless actions without making any actual progress toward completing the task. Statistics on these failure modes are shown in Figure 3. Compared to SFT and ORM-trained actors, the “useless repetition” failures of PROGRM-trained actors decrease most remarkably. By training with PROGRM, the actor learns to perform actions that yield the largest progress gains and to avoid useless repetitive actions that bring no new progress.

Ablation study on history length k and training steps
We conduct an ablation study on the history length k of the progress reward (see § 2.1) used in RL training. As shown in Table 4, increasing the history length significantly degrades the actor's performance, revealing that a longer history is not suitable for these GUI interaction tasks. A progress reward with history length k > 1 credits a step with the cumulative progress gain of k contiguous actions. This may be useful in cases where the per-step progress gain is small while a group of contiguous actions yields a relatively meaningful gain. Such cases usually involve particularly long episodes or an overly atomic action space, which is not the case for common GUI interaction environments. We further conduct a supplementary experiment by training the GUI agent with PROGRM_Env for more steps and report the result in Table 4. A longer training process further boosts the actor's performance, increasing the success rate from 62.00% to 67.33%. This demonstrates the potential of PROGRM for continuously enhancing actor performance.

4 Related works

Auto-evaluation for GUI agents
The strong capabilities of LLMs make it feasible to auto-evaluate GUI interactions instead of relying on hand-crafted evaluation functions.
Pan et al. [15] systematically summarize the framework of LLM/VLM-based auto-evaluators for GUI agents. This framework is widely used in a series of instruction-first [42, 22, 14] and trajectory-first [12] data augmentation works. However, the reliance on expensive, high-latency proprietary models makes this approach costly to afford and leverage in online training. In contrast, Qi et al. [16] adopt a more lightweight ORM in reinforcement learning (RL) training. Nevertheless, ORM training fails to exploit the intermediate steps in training trajectories and cannot predict accurate progress during interaction. Therefore, we present PROGRM to fully exploit all training steps and provide the actor with fine-grained guidance by predicting episode progress.

RL for LLM-based GUI agents
GUI interaction is a typical decision-making problem, and RL methods have been explored by the community. Bai et al. [2, 1] mainly exploit static trajectory datasets to conduct offline learning. Zheng et al. [41] leverage a general-purpose evaluator to provide rewards and train a Value Environment Model to avoid directly accessing an online GUI environment. WebRL [16] and DistRL [26] explore online training for GUI interaction tasks, using outcome reward models to produce rewards during training. Beyond standard trajectory-level RL, recent works by Lu et al. [11] and Xia and Luo [28] also
explore step-level RL for GUI tasks, inspired by the success of DeepSeek-R1 [5] on reasoning tasks. In this paper, we propose a new process reward model for GUI interaction tasks, PROGRM, to provide fine-grained progress rewards in RL training. Ideally, PROGRM can be combined with any trajectory-level RL method.

ORMs and PRMs in reasoning tasks
Outcome Reward Models (ORMs) and Process Reward Models (PRMs) have been widely used in reasoning tasks such as mathematical problem solving and coding, for verification-guided generation [4, 9, 33, 10], reinforcement learning [25, 18], and preference learning [34]. ORMs are generally trained on the final answers of reasoning problems, which are usually easy to obtain. Lightman et al. [10] train a PRM using human-annotated process labels, which are overly costly. Wang et al. [25] propose using Monte Carlo search to estimate the likelihood of reaching the correct answer from an intermediate state. However, PRMs trained with such labels behave more like a cumulative expected reward function (value function) than a reward function grading the current state. Yuan et al. [34] propose deriving a PRM from an ORM for preference learning. Similarly, the probability gain of reaching the correct answer during the transition between two states is measured by an ORM and used as a process reward in [18]. Unlike single-turn answer generation for reasoning tasks, GUI interaction always spans multiple turns. As a PRM for GUI agents, PROGRM measures the value of a complete interaction step rather than an incomplete state within a single generation. A new algorithm is also developed to discover key steps in trajectories and assign progress labels accordingly for PROGRM.

Progress reward
Bruce et al. [3] propose a progress reward to measure the effectiveness of agents' actions and guide agents' exploration during RL training.
The progress reward function is trained using hundreds of millions of human-play steps on NetHack [6]. In contrast, PROGRM is dedicated to GUI interaction and trained with agent-explored trajectories, reducing the workload of human annotation. Qu et al. [18] derive a progress reward from the perspective of minimizing cumulative regret and use it to improve the efficiency of math problem solving. They leverage an ORM to measure the probability of reaching the correct answer from a partial solution. In comparison, PROGRM is designed to apply to a complete GUI interaction step and is trained with fine-grained progress labels to attain more accurate progress estimation, rather than directly borrowing an ORM.

5 Conclusion

In this work, we introduce PROGRM, a novel reward model for GUI agents that provides fine-grained reward signals during online RL training by accurately estimating progress at each step. We also propose an efficient self-annotation algorithm to generate appropriate progress labels for PROGRM training. Agents trained with PROGRM outperform both proprietary LLM-based agents and those trained with conventional ORMs, demonstrating the strong effectiveness and potential of our approach.

References

[1] Hao Bai, Yifei Zhou, Li Erran Li, Sergey Levine, and Aviral Kumar. Digi-q: Learning vlm q-value functions for training device-control agents. In The Thirteenth International Conference on Learning Representations.

[2] Hao Bai, Yifei Zhou,
Jiayi Pan, Mert Cemri, Alane Suhr, Sergey Levine, and Aviral Kumar. Digirl: Training in-the-wild device-control agents with autonomous reinforcement learning. In Amir Globersons, Lester Mackey, Danielle Belgrave, Angela Fan, Ulrich Paquet, Jakub M. Tomczak, and Cheng Zhang, editors, Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, NeurIPS 2024, Vancouver, BC, Canada, December 10-15, 2024, 2024. URL http://papers.nips.cc/paper_files/paper/2024/hash/1704ddd0bb89f159dfe609b32c889995-Abstract-Conference.html.

[3] Jake Bruce, Ankit Anand, Bogdan Mazoure, and Rob Fergus. Learning about progress from experts. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. URL https://openreview.net/forum?id=sKc6fgce1zs.

[4] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. CoRR, abs/2110.14168, 2021. URL https://arxiv.org/abs/2110.14168.

[5] DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, Aixin Liu, Bing Xue, Bingxuan Wang, Bochao Wu, Bei Feng, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, Fuli Luo, Guangbo Hao, Guanting Chen, Guowei Li, H. Zhang, Han Bao, Hanwei Xu, Haocheng Wang, Honghui Ding, Huajian Xin, Huazuo Gao, Hui Qu, Hui Li, Jianzhong Guo, Jiashi Li, Jiawei Wang, Jingchang Chen, Jingyang Yuan, Junjie Qiu, Junlong Li, J. L.
Cai, Jiaqi Ni, Jian Liang, Jin Chen, Kai Dong, Kai Hu, Kaige Gao, Kang Guan, Kexin Huang, Kuai Yu, Lean Wang, Lecong Zhang, Liang Zhao, Litong Wang, Liyue Zhang, Lei Xu, Leyi Xia, Mingchuan Zhang, Minghua Zhang, Minghui Tang, Meng Li, Miaojun Wang, Mingming Li, Ning Tian, Panpan Huang, Peng Zhang, Qiancheng Wang, Qinyu Chen, Qiushi Du, Ruiqi Ge, Ruisong Zhang, Ruizhe Pan, Runji Wang, R. J. Chen, R. L. Jin, Ruyi Chen, Shanghao Lu, Shangyan Zhou, Shanhuang Chen, Shengfeng Ye, Shiyu Wang, Shuiping Yu, Shunfeng Zhou, Shuting Pan, and S. S. Li. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. CoRR, abs/2501.12948, 2025. doi: 10.48550/ARXIV.2501.12948. URL https://doi.org/10.48550/arXiv.2501.12948.

[6] Eric Hambro, Roberta Raileanu, Danielle Rothermel, Vegard Mella, Tim Rocktäschel, Heinrich Küttler, and Naila Murray. Dungeons and data: A large-scale nethack dataset. In Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022. URL http://papers.nips.cc/paper_files/paper/2022/hash/9d9258fd703057246cb341e615426e2d-Abstract-Datasets_and_Benchmarks.html.

[7] Jian Hu. Reinforce++: A simple and efficient approach for aligning large language models. arXiv preprint arXiv:2501.03262, 2025.

[8] Li-Cheng Lan, Andrew Bai, Minhao Cheng, Ruochen Wang, Cho-Jui Hsieh, and Tianyi Zhou. Exploring expert failures improves llm agent tuning. arXiv preprint arXiv:2504.13145, 2025.

[9] Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. Making language models better reasoners with step-aware verifier. In Anna
Rogers, Jordan L. Boyd-Graber, and Naoaki Okazaki, editors, Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 5315-5333. Association for Computational Linguistics, 2023. doi: 10.18653/V1/2023.ACL-LONG.291. URL https://doi.org/10.18653/v1/2023.acl-long.291.

[10] Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024. URL https://openreview.net/forum?id=v8L0pN6EOi.

[11] Zhengxi Lu, Yuxiang Chai, Yaxuan Guo, Xi Yin, Liang Liu, Hao Wang, Guanjing Xiong, and Hongsheng Li. UI-R1: enhancing action prediction of GUI agents by reinforcement learning. CoRR, abs/2503.21620, 2025. doi: 10.48550/ARXIV.2503.21620. URL https://doi.org/10.48550/arXiv.2503.21620.

[12] Shikhar Murty, Dzmitry Bahdanau, and Christopher D. Manning. Nnetscape navigator: Complex demonstrations for web agents without a demonstrator. CoRR, abs/2410.02907, 2024. doi: 10.48550/ARXIV.2410.02907. URL https://doi.org/10.48550/arXiv.2410.02907.

[13] Tianyue Ou, Frank F. Xu, Aman Madaan, Jiarui Liu, Robert Lo, Abishek Sridhar, Sudipta Sengupta, Dan Roth, Graham Neubig, and Shuyan Zhou. Synatra: Turning indirect knowledge into direct demonstrations for digital agents at scale. In Amir Globersons, Lester Mackey, Danielle Belgrave, Angela Fan, Ulrich Paquet, Jakub M. Tomczak, and Cheng Zhang, editors, Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, NeurIPS 2024, Vancouver, BC, Canada, December 10-15, 2024, 2024. URL http://papers.nips.cc/paper_files/paper/2024/hash/a6a6891cf1dfc64d664f086cf5976e93-Abstract-Conference.html.
[14] Vardaan Pahuja, Yadong Lu, Corby Rosset, Boyu Gou, Arindam Mitra, Spencer Whitehead, Yu Su, and Ahmed Awadallah. Explorer: Scaling exploration-driven web trajectory synthesis for multimodal web agents. CoRR, abs/2502.11357, 2025. doi: 10.48550/ARXIV.2502.11357. URL https://doi.org/10.48550/arXiv.2502.11357.

[15] Jiayi Pan, Yichi Zhang, Nicholas Tomlin, Yifei Zhou, Sergey Levine, and Alane Suhr. Autonomous evaluation and refinement of digital agents. CoRR, abs/2404.06474, 2024. doi: 10.48550/ARXIV.2404.06474. URL https://doi.org/10.48550/arXiv.2404.06474.

[16] Zehan Qi, Xiao Liu, Iat Long Iong, Hanyu Lai, Xueqiao Sun, Wenyi Zhao, Yu Yang, Xinyue Yang, Jiadai Sun, Shuntian Yao, Tianjie Zhang, Wei Xu, Jie Tang, and Yuxiao Dong. Webrl: Training LLM web agents via self-evolving online curriculum reinforcement learning. CoRR, abs/2411.02337, 2024. doi: 10.48550/ARXIV.2411.02337. URL https://doi.org/10.48550/arXiv.2411.02337.

[17] Yujia Qin, Yining Ye, Junjie Fang, Haoming Wang, Shihao Liang, Shizuo Tian, Junda Zhang, Jiahao Li, Yunxin Li, Shijue Huang, Wanjun Zhong, Kuanye Li, Jiale Yang, Yu Miao, Woyu Lin, Longxiang Liu, Xu Jiang, Qianli Ma, Jingyu Li, Xiaojun Xiao, Kai Cai, Chuang Li, Yaowei Zheng, Chaolin Jin, Chen Li, Xiao Zhou, Minchao Wang, Haoli Chen, Zhaojian Li, Haihua Yang, Haifeng Liu, Feng Lin, Tao Peng, Xin Liu, and Guang Shi. UI-TARS: pioneering automated GUI interaction with native agents. CoRR, abs/2501.12326, 2025. doi: 10.48550/ARXIV.2501.12326. URL https://doi.org/10.48550/arXiv.2501.12326.

[18] Yuxiao Qu, Matthew Y. R. Yang, Amrith Setlur, Lewis Tunstall, Edward Emanuel Beeching, Ruslan Salakhutdinov, and Aviral Kumar. Optimizing test-time compute via meta reinforcement fine-tuning. CoRR, abs/2503.07572, 2025. doi: 10.48550/ARXIV.2503.07572. URL https://doi.org/10.48550/arXiv.2503.07572.

[19] Nils Reimers and Iryna Gurevych. Sentence-bert: Sentence embeddings using siamese bert-networks. In
Kentaro Inui, Jing Jiang, Vincent Ng, and Xiaojun Wan, editors, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 3980-3990. Association for Computational Linguistics, 2019. doi: 10.18653/V1/D19-1410. URL https://doi.org/10.18653/v1/D19-1410.

[20] Paloma Sodhi, SRK Branavan, Yoav Artzi, and Ryan McDonald. Step: Stacked llm policies for web actions. In First Conference on Language Modeling.

[21] Huatong Song, Jinhao Jiang, Yingqian Min, Jie Chen, Zhipeng Chen, Wayne Xin Zhao, Lei Fang, and Ji-Rong Wen. R1-searcher: Incentivizing the search capability in llms via reinforcement learning. arXiv preprint arXiv:2503.05592, 2025.

[22] Hongjin Su, Ruoxi Sun, Jinsung Yoon, Pengcheng Yin, Tao Yu, and Sercan Ö. Arik. Learn-by-interact: A data-centric framework for self-adaptive agents in realistic environments. CoRR, abs/2501.10893, 2025. doi: 10.48550/ARXIV.2501.10893. URL https://doi.org/10.48550/arXiv.2501.10893.

[23] Qiushi Sun, Kanzhi Cheng, Zichen Ding, Chuanyang Jin, Yian Wang, Fangzhi Xu, Zhenyu Wu, Chengyou Jia, Liheng Chen, Zhoumianze Liu, Ben Kao, Guohao Li, Junxian He, Yu Qiao, and Zhiyong Wu. Os-genesis: Automating GUI agent trajectory construction via reverse task synthesis. CoRR, abs/2412.19723, 2024. doi: 10.48550/ARXIV.2412.19723. URL https://doi.org/10.48550/arXiv.2412.19723.

[24] Junyang Wang, Haiyang Xu, Jiabo Ye, Ming Yan, Weizhou Shen, Ji Zhang, Fei Huang, and Jitao Sang. Mobile-agent: Autonomous multi-modal mobile device agent with visual perception. CoRR, abs/2401.16158, 2024. doi: 10.48550/ARXIV.2401.16158. URL https://doi.org/10.48550/arXiv.2401.16158.

[25] Peiyi Wang, Lei Li, Zhihong Shao, Runxin Xu, Damai Dai, Yifei Li, Deli Chen, Yu Wu, and Zhifang Sui. Math-shepherd: Verify and reinforce llms step-by-step without human annotations.
In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2024, Bangkok, Thailand, August 11-16, 2024, pages 9426–9439. Association for Computational Linguistics, 2024. doi: 10.18653/v1/2024.acl-long.510. URL https://doi.org/10.18653/v1/2024.acl-long.510. [26] Taiyi Wang, Zhihao Wu, Jianheng Liu, Jianye Hao, Jun Wang, and Kun Shao. Distrl: An asynchronous distributed reinforcement learning framework for on-device control agents. CoRR, abs/2410.14803, 2024. doi: 10.48550/arXiv.2410.14803. URL https://doi.org/10.48550/arXiv.2410.14803. [27] Muning Wen, Ziyu Wan, Weinan Zhang, Jun Wang, and Ying Wen. Reinforcing language agents via policy optimization with action decomposition. arXiv preprint arXiv:2405.15821, 2024. [28] Xiaobo Xia and Run Luo. Gui-r1: A generalist r1-style vision-language action model for gui agents. arXiv preprint arXiv:2504.10458, 2025. [29] Tian Xie, Zitian Gao, Qingnan Ren, Haoming Luo, Yuqian Hong, Bryan Dai, Joey Zhou, Kai Qiu, Zhirong Wu, and Chong Luo. Logic-rl: Unleashing llm reasoning with rule-based reinforcement learning. arXiv preprint arXiv:2502.14768, 2025. [30] Yiheng Xu, Zekun Wang, Junli Wang, Dunjie Lu, Tianbao Xie, Amrita Saha, Doyen Sahoo, Tao Yu, and Caiming Xiong. Aguvis: Unified pure vision agents for autonomous GUI interaction. CoRR, abs/2412.04454, 2024. doi: 10.48550/arXiv.2412.04454. URL https://doi.org/10.48550/arXiv.2412.04454. [31] Yiheng Xu, Dunjie Lu, Zhennan Shen, Junli Wang, Zekun Wang, Yuchen Mao, Caiming Xiong, and Tao Yu. Agenttrek: Agent trajectory synthesis via guiding replay with web tutorials. In The Thirteenth International Conference on Learning Representations, ICLR 2025, Singapore,
April 24-28, 2025. OpenReview.net, 2025. URL https://openreview.net/forum?id=EEgYUccwsV. [32] An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. Qwen2.5 technical report. CoRR, abs/2412.15115, 2024. doi: 10.48550/arXiv.2412.15115. URL https://doi.org/10.48550/arXiv.2412.15115. [33] Fei Yu, Anningzhe Gao, and Benyou Wang. Outcome-supervised verifiers for planning in mathematical reasoning. CoRR, abs/2311.09724, 2023. doi: 10.48550/arXiv.2311.09724. URL https://doi.org/10.48550/arXiv.2311.09724. [34] Lifan Yuan, Wendi Li, Huayu Chen, Ganqu Cui, Ning Ding, Kai Zhang, Bowen Zhou, Zhiyuan Liu, and Hao Peng. Free process rewards without process labels. CoRR, abs/2412.01981, 2024. doi: 10.48550/arXiv.2412.01981. URL https://doi.org/10.48550/arXiv.2412.01981. [35] Chaoyun Zhang, Liqun Li, Shilin He, Xu Zhang, Bo Qiao, Si Qin, Minghua Ma, Yu Kang, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang, and Qi Zhang. UFO: A ui-focused agent for windows OS interaction. CoRR, abs/2402.07939, 2024. doi: 10.48550/arXiv.2402.07939. URL https://doi.org/10.48550/arXiv.2402.07939. [36] Chi Zhang, Zhao Yang, Jiaxuan Liu, Yanda Li, Yucheng Han, Xin Chen, Zebiao Huang, Bin Fu, and Gang Yu. Appagent: Multimodal agents as smartphone users. In Naomi Yamashita, Vanessa Evers, Koji Yatani, Sharon Xianghua Ding, Bongshin Lee, Marshini Chetty, and Phoebe O. Toups Dugas, editors, Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, CHI 2025, Yokohama, Japan, 26 April - 1 May 2025, pages 70:1–70:20. ACM, 2025.
doi: 10.1145/3706598.3713600. URL https://doi.org/10.1145/3706598.3713600. [37] Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, and Kai Yu. Large language models are semi-parametric reinforcement learning agents. In Alice Oh, Tristan Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine, editors, Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10-16, 2023, 2023. URL http://papers.nips.cc/paper_files/paper/2023/hash/f6b22ac37beb5da61efd4882082c9ecd-Abstract-Conference.html. [38] Danyang Zhang, Zhennan Shen, Rui Xie, Situo Zhang, Tianbao Xie, Zihan Zhao, Siyuan Chen, Lu Chen, Hongshen Xu, Ruisheng Cao, et al. Mobile-env: Building qualified evaluation benchmarks for llm-gui interaction. arXiv preprint arXiv:2305.08144, 2023. [39] Zhenru Zhang, Chujie Zheng, Yangzhen Wu, Beichen Zhang, Runji Lin, Bowen Yu, Dayiheng Liu, Jingren Zhou, and Junyang Lin. The lessons of developing process reward models in mathematical reasoning. arXiv preprint arXiv:2501.07301, 2025. [40] Boyuan Zheng, Boyu Gou, Jihyung Kil, Huan Sun, and Yu Su. Gpt-4v(ision) is a generalist web agent, if grounded. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024. URL https://openreview.net/forum?id=piecKJ2DlB. [41] Jiani Zheng, Lu Wang, Fangkai Yang, Chaoyun Zhang, Lingrui Mei, Wenjie Yin, Qingwei Lin, Dongmei Zhang, Saravan Rajmohan, and Qi Zhang. VEM: environment-free exploration for training GUI agent with value environment model. CoRR, abs/2502.18906, 2025. doi:
10.48550/arXiv.2502.18906. URL https://doi.org/10.48550/arXiv.2502.18906. [42] Yifei Zhou, Qianlan Yang, Kaixiang Lin, Min Bai, Xiong Zhou, Yu-Xiong Wang, Sergey Levine, and Li Erran Li. Proposer-agent-evaluator (PAE): Autonomous skill discovery for foundation model internet agents. CoRR, abs/2412.13194, 2024. doi: 10.48550/arXiv.2412.13194. URL https://doi.org/10.48550/arXiv.2412.13194. [43] Zichen Zhu, Hao Tang, Yansi Li, Dingye Liu, Hongshen Xu, Kunyao Lan, Danyang Zhang, Yixuan Jiang, Hao Zhou, Chenrun Wang, et al. Moba: Multifaceted memory-enhanced adaptive planning for efficient mobile task automation. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (System Demonstrations), pages 535–549, 2025.

A Details of soft & hard LCS algorithms

The soft Longest Common Subsequence (LCS) algorithm is proposed to better handle actions with natural language arguments, which are unsuitable for direct exact match. It is derived from the standard "hard" LCS algorithm by replacing the exact-equality test with a soft match function. To be specific, given two sequences $a = \{a_i\}_{i=1}^{m}$ and $b = \{b_j\}_{j=1}^{n}$, the LCS of $a$ and $b$, $\mathrm{LCS}(a, b)$, and its length can be solved by dynamic programming. The Bellman equation is as follows:

$$|\mathrm{LCS}(a_{1:i}, b_{1:j})| =
\begin{cases}
|\mathrm{LCS}(a_{1:i-1}, b_{1:j-1})| + 1 & a_i = b_j \\
\max\{|\mathrm{LCS}(a_{1:i-1}, b_{1:j})|,\ |\mathrm{LCS}(a_{1:i}, b_{1:j-1})|\} & a_i \neq b_j.
\end{cases} \tag{7}$$

By replacing the hard match (i.e., exact equality) with a soft match function $f$, we obtain the Bellman equation for the soft LCS algorithm:

$$\mathrm{SoftLCS}(a_{1:i}, b_{1:j}) = \max\{\mathrm{SoftLCS}(a_{1:i-1}, b_{1:j-1}) + f(a_i, b_j),\ \mathrm{SoftLCS}(a_{1:i-1}, b_{1:j}),\ \mathrm{SoftLCS}(a_{1:i}, b_{1:j-1})\}. \tag{8}$$

The soft match function $f$ is defined according to the particular action space.
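As a concrete illustration of Eqs. (7) and (8), the soft LCS length can be computed with the same dynamic program as the hard LCS, with the exact-equality test replaced by a pluggable match function. The sketch below is illustrative, not the authors' implementation; `match` stands in for the soft match function f.

```python
def soft_lcs(a, b, match):
    """Soft LCS length via dynamic programming (Eq. 8).

    `match(x, y)` returns a similarity in [0, 1]. With
    match = lambda x, y: float(x == y), the recurrence reduces to
    the standard (hard) LCS length of Eq. 7.
    """
    m, n = len(a), len(b)
    # dp[i][j] = SoftLCS(a[:i], b[:j]); row/column 0 handle empty prefixes.
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = max(
                dp[i - 1][j - 1] + match(a[i - 1], b[j - 1]),  # (soft) match
                dp[i - 1][j],                                   # skip a[i-1]
                dp[i][j - 1],                                   # skip b[j-1]
            )
    return dp[m][n]

hard = lambda x, y: float(x == y)
print(soft_lcs("ABCBDAB", "BDCABA", hard))  # 4.0, the classic LCS length
```

With a soft `match`, partial credit accumulates instead of the 0/1 increments of the hard variant, which is what makes free-form text arguments comparable.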
Given two WikiHow actions $a = (\mathrm{type}_a, \mathrm{element}_a, \mathrm{text}_a)$ and $b = (\mathrm{type}_b, \mathrm{element}_b, \mathrm{text}_b)$, $f$ is defined as

$$f(a, b) =
\begin{cases}
0 & \mathrm{type}_a \neq \mathrm{type}_b \\
\mathrm{SBERT}(\mathrm{text}_a, \mathrm{text}_b) & \mathrm{type}_a = \mathrm{type}_b \in \{\mathrm{INPUT}, \mathrm{ANSWER}\} \\
\varepsilon & \mathrm{type}_a = \mathrm{type}_b = \mathrm{NOTHING} \\
\mathbb{1}[a = b] & \text{otherwise},
\end{cases} \tag{9}$$

where SBERT denotes computing text similarity with Sentence Transformer [19] and 0.4 is used for $\varepsilon$. This soft match function assigns a soft weight to actions with free-form natural language arguments, which leads to a more fine-grained similarity. Besides, the function penalizes the match of empty NOTHING actions so that more weight is assigned to the other, actual actions and a more meaningful match can be obtained. Note that the match of NOTHING is not completely disabled, as some of these actions may be necessary waiting that should be preserved.

Another problem is how to compute the LCS for multiple sequences within a group. As a direct application of dynamic programming to the LCS computation of more than two sequences is too complex, we apply the two-sequence LCS algorithm iteratively as an approximation, i.e.,

$$\widetilde{\mathrm{LCS}}(a_1, a_2, \cdots, a_n) = a_1 \odot a_2 \odot \cdots \odot a_n. \tag{10}$$

Here we adopt the left-associative binary operator $\odot$ to denote the two-sequence LCS function for convenience of expression. The similarity threshold for trajectory grouping $\theta_L$ is 0.6 in our experiments.

B RM training data synthesis

To supplement the collected trajectories and achieve a better data balance, we perform data synthesis based on the collected agent trajectories.

Failed trajectory synthesis. Failed trajectories are synthesized in two ways: 1) combining a mismatched instruction and action trajectory, e.g., combining the instruction for task $a$ with the execution trajectory of task $b$, and 2) leveraging a random walk trajectory.
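The first strategy (pairing a task's instruction with another task's execution trajectory) can be sketched as below. This is a toy illustration under assumed field names (`instruction`, `actions`) and an assumed shifting scheme; it is not the paper's exact procedure.

```python
import random

def synthesize_failed(trajectories, seed=0):
    """Create failed examples by mismatching instructions and trajectories.

    `trajectories` is a list of dicts with hypothetical keys
    "instruction" and "actions" (at least two entries are assumed).
    Each instruction is paired with the action trajectory of a
    *different* task via a fixed cyclic shift, so no example keeps
    its own trajectory.
    """
    rng = random.Random(seed)
    shift = rng.randrange(1, len(trajectories))  # never zero: no self-pairing
    failed = []
    for i, traj in enumerate(trajectories):
        other = trajectories[(i + shift) % len(trajectories)]
        failed.append({
            "instruction": traj["instruction"],  # goal of task i
            "actions": other["actions"],         # execution of another task
            "label": "failed",
        })
    return failed
```

A cyclic shift is used here only for simplicity; random derangements or rejection sampling would serve the same purpose of guaranteeing every instruction is matched to a foreign trajectory.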
[Figure 4 bar charts omitted; panels: (a) trajectory number statistics of collected RM training data on WikiHow; (b) step number statistics of collected RM training data on WikiHow.] Figure
4: Statistics of the collected reward model (RM) training data for WikiHow. Figure 4(a) displays the number of successful and failed trajectories, with a success-to-failure ratio of approximately 1.22. Figure 4(b) presents the number of steps in successful versus failed trajectories, with a step ratio of about 0.63.

Successful trajectory synthesis. Successful trajectories are synthesized from "prototype" trajectories, i.e., given an existing successful trajectory for a particular task, a new successful trajectory can be generated by removing or adding empty or effectless action tuples. For example, in the WikiHow environment, an empty action corresponds to the action NOTHING, while effectless action tuples might include actions such as scrolling down followed by scrolling up, or going back and immediately repeating the last action. If the agent's prior exploration fails to produce any successful trajectories for a given task, a successful trajectory is manually annotated by the authors. Using this synthesis approach, the final dataset consists of 5,729 successful trajectories and 4,709 failed trajectories. The dataset statistics are presented in Figure 4.

C Details of training data for reward models

We leveraged Qwen2.5-7B and GPT-4o-mini to auto-collect a total of 10,438 trajectories, consisting of 5,729 successful and 4,709 failed trajectories. These trajectories comprise 207,102 steps in total, with 79,718 steps originating from successful trajectories and 127,384 from failed ones. The trajectories are partitioned into subsets based on their task goals, resulting in a training set of 7,175 trajectories, a validation set of 1,751 trajectories, and a test set of 1,512 trajectories. This trajectory data can be used directly for naive ORM training. For PROGRM training, we further split the trajectories into individual steps.
All steps from successful trajectories are retained, while 62.58% of the steps from failed trajectories are sampled to balance the success-to-failure step ratio at approximately 1:1. This results in a PROGRM training set of 113,270 steps, a validation set of 15,935 steps, and a test set of 30,220 steps.

D Experiment details

All experiments are conducted on a single machine equipped with 8 NVIDIA A800 GPUs with 80 GB of memory each. We use the Adam optimizer with (β1, β2) = (0.9, 0.95) for both reward model training and online RL training of agent models. DeepSpeed ZeRO is employed to optimize GPU memory usage during training. The hyperparameters for training the reward models, including ORM, PROGRM-Env, and PROGRM-LCS, are listed in Table 5. Hyperparameters for online RL agent training are provided in Table 6. Training a reward model takes roughly 3 hours, while agent RL training requires around 20 hours.

Table 5: Hyperparameters for reward model training

  Hyperparameter     Value
  Learning rate      5×10^-5
  Batch size         128
  Epochs             3
  LR scheduler type  Cosine
  Warmup ratio       0.03
  ZeRO stage         2

Table 6: Hyperparameters for agent RL training

  Hyperparameter       Value
  Learning rate        7×10^-5
  Batch size           64
  Epochs               1
  LR scheduler type    Cosine
  Warmup ratio         0.03
  KL coefficient       0.01
  Gamma                0.9
  PPO clip             0.2
  Rollout temperature  1.0
  ZeRO stage           3

E Details of WikiHow deployment

We implement a RESTful remote environment server to enable parallel deployment of WikiHow along with the RL trainer. For convenience, we use two
Docker images to host the Android Emulator™ and the replay server for the WikiHow environment [38]. Flask⁵ is used to build the main server of a remote environment. To reduce communication latency, only the prompts are transferred over the HTTP (Hypertext Transfer Protocol) flow. To control the consumed computing resources and achieve as efficient a simulation as possible, a daemon management thread is implemented to create emulator instances, monitor their running state, and clean up stale instances in time. The remote environment server is deployed on a CentOS machine with KVM (Kernel-based Virtual Machine) enabled, a CPU with 64 virtual threads, 1.97 TiB of memory, and an A800 GPU.

⁵https://flask.palletsprojects.com/en/stable/

F Case study

In this section, we give some cases to show the potential of PROGRM for predicting moderate progress scores and assigning adequate credits for interaction steps.

[Figure 5 panel; the plot (progress score from 0 to 1 after each action) is omitted:]
Task Command: (1) I'm seeking some plan to go back using the command prompt. (2) Visit the "How to Go Back Using the Command Prompt" article page. (3) Following your reading of the article, summarize the steps outlined within it. The summaries should be given in a numbered list.
Execution Trajectory: <1> (Init) <2> CLICK( ) <3> CLICK( ) <4> INPUT( , 'how to go back using the command prompt') <5> CLICK( ) <6> CLICK( ) <7> SCROLL(DOWN) <8> CLICK( ) <9> SCROLL(DOWN) <10> SCROLL(DOWN) <11> SCROLL(DOWN) <12> SCROLL(DOWN). ORM Score: 0.0004.

Figure 5: A failed trajectory with partial progress, shown with the progress scores predicted by PROGRM and the final score predicted by ORM. Each line in the bottom-left section is an action taken by the agent in the episode. The progress values predicted by PROGRM after each action is executed are illustrated in the bottom-right section. The agent achieves partial progress in the episode but does not reach the final goal, stopping at a progress score lower than 1 (100%). Init in the figure is not an actual action, but a placeholder.

Over-penalization of ORM. As stated in "Final step score prediction" of § 3.3, an ORM that predicts a single, less informative score at the episode end, indicating mere success or failure, can over-penalize the effective steps in a failed trajectory and lead to insufficient exploitation of failed trajectories. As a comparison, PROGRM measures an adequate progress score for each step
and can assign proper credits for steps even in failed trajectories. As illustrated in Figure 5, the agent completed part of the task without achieving the final goal. The trivial ORM marks the whole trajectory as failed and thus inadequately penalizes all the steps in the trajectory, although some steps do produce meaningful progress gains (actions <4> and <5> in Figure 5). In contrast, the progress curve predicted by PROGRM accurately reflects the effect of the actions. Thus, PROGRM can assign moderate credits to these valuable actions even in a failed trajectory.

Temporal variation of progress measurement over successful episodes. We further show the capacity of PROGRM for task progress estimation with two successful trajectories. Figure 6(a) demonstrates a successful trajectory with its progress-step curve. The agent completes the task progressively, and the progress score likewise increases progressively to 1 (100%). The key steps achieving sub-goals (steps <4>, <5>, and <12> in Figure 6(a)) and non-key steps are clearly distinguished through the corresponding progress gains, with higher gains corresponding to key steps and lower gains to non-key steps, revealing the capacity of PROGRM for identifying valuable actions. Figure 6(b) illustrates a successful trajectory where the agent hesitates with a GOBACK action (action <5> in Figure 6(b)) after searching but recovers later. PROGRM accurately catches the progress fluctuation and reflects it in the curve by a progress decline and a following rebound. In this way, PROGRM manages to assign a proper credit to each step in the trajectory and thus provides finer-grained guidance during actor training.

[Figure 6(a) panel; the progress plot is omitted:]
Task Command: (1) Search an article to learn how to carve stone. (2) Access the article "How to Carve Stone". (3) Give a negative comment to the article.
Execution Trajectory: <1> (Init) <2> CLICK( ) <3> CLICK( ) <4> INPUT( , 'how to carve stone') <5> CLICK( ) <6> CLICK( ) <7> CLICK( ) <8> CLICK( ) <9> CLICK( ) <10> CLICK( ) <11> CLICK( ) <12> CLICK( )
(a) A successful trajectory and the temporal variation of PROGRM-measured progress.

[Figure 6(b) panel; the progress plot is omitted:]
Task Command: (1) Find an article to learn to make a raft. (2) The target article is "How to Build Rafts". (3) Why people believe wikihow?
Execution Trajectory: <1> (Init) <2> CLICK( ) <3> CLICK( ) <4> INPUT( , 'how to build rafts') <5> GOBACK() <6> INPUT( , 'how to build rafts') <7> CLICK( ) <8> CLICK( )
(b) A successful trajectory with the agent's hesitation (GOBACK at step <5>) and the temporal variation of PROGRM-measured progress.

Figure 6: Temporal variation of progress measurement over successful episodes. Each line in the bottom-left sections is an action taken by the agent in the episode. The progress scores predicted by PROGRM after each action is executed are illustrated in the bottom-right sections.

G Prompts used in experiments

The prompts used for RMs and actors are listed in Table 7, Table 8, Table 9, Table 10, and Table 11.

Table 7: Prompt for PROGRM

System: You are an expert of mobile use, especially the app of WikiHow. This is a public and popular wiki platform you surf everyday. You know well how people can search for specific articles, check author information, find more related articles, make bookmarks, etc. on WikiHow app.
So you can accurately assess how efficient the people are finishing a particular task on this app.
Now you will be given a trajectory of other's operation, including
* instructions describing the task goal
* history actions leading to current state
* screen representation reflecting the current state
You should give a percentage which is an estimation of his progress on this task.

User:
Instructions: ${instructions}
History Actions: ${actions}
Current screen: ${screen}

Table 8: Prompt for ORM

System: You are an expert of mobile use, especially the app of WikiHow. This is a public and popular wiki platform you surf everyday. You know well how people can search for specific articles, check author information, find more related articles, make bookmarks, etc. on WikiHow app. So you can accurately assess how efficient the people are finishing a particular task on this app.
Now you will be given a trajectory of other's operation, including
* instructions describing the task goal
* history actions leading to current state
* screen representation reflecting the current state
You should give
a judgment about the success or failure of this task.

User:
Instructions: ${instructions}
History Actions: ${actions}
Current screen: ${screen}

Table 9: Prompt for ORM-Claude

System: You are an expert of mobile use, especially the app of WikiHow. This is a public and popular wiki platform you surf everyday. You know well how people can search for specific articles, check author information, find more related articles, make bookmarks, etc. on WikiHow app. So you can accurately assess how efficient the people are finishing a particular task on this app.
Now you will be given a trajectory of other's operation, including
* instructions describing the task goal
* history actions leading to current state
* screen representation reflecting the current state
You should give a percentage which is an estimation of his progress on this task.
You should directly give your answer. Do not output any needless thoughts or explanations.

User:
Instructions: ${instructions}
History Actions: ${actions}
Current screen: ${screen}

Table 10: Prompt for ORM-Claude-CoT

System: You are an expert of mobile use, especially the app of WikiHow. This is a public and popular wiki platform you surf everyday. You know well how people can search for specific articles, check author information, find more related articles, make bookmarks, etc. on WikiHow app. So you can accurately assess how efficient the people are finishing a particular task on this app.
Now you will be given a trajectory of other's operation, including
* instructions describing the task goal
* history actions leading to current state
* screen representation reflecting the current state
You should give a percentage which is an estimation of his progress on this task.
You should first generate an explicit thought and then give your answer. You should output in the following format:
<think> some thoughts </think>
<answer> 0.42 </answer>
Follow the format above strictly.
And note that the example above are just an example demonstrating the output format and takes NO ANY RELATION with the following inputs.

User:
Instructions: ${instructions}
History Actions: ${actions}
Current screen: ${screen}

Table 11: Prompt for actors

System: You are a clever mobile assistant. You are very familiar with WikiHow and can navigate its app expertly.
Now you will be given several information about the task and the screen at the current step, and you need to take an appropriate action according to the given information to finish the task in STEP steps.
The action should in format of Python function call. Available actions are:
* INPUT(element_id: int, text: str) # You can input something into text box through this action
* CLICK(element_id: int) # You can click on some clickable element through this action
* LONG_CLICK(element_id: int) # You can long click on some clickable element through this action
* SCROLL(direction: Enum) # You can scroll UP/DOWN/LEFT/RIGHT to browse long/wide pages through this action
* ANSWER(text: str) # You can generate an answer to me through this action
* GOBACK() # You can go back to the previous screen by pressing GOBACK button of mobile
* DO_NOTHING() # You can do nothing and just wait for a step
Here are
some examples of actions:
```
INPUT(3, "scooter")
CLICK(4)
SCROLL(DOWN)
GOBACK()
```
You need to first think about the reasoning process as an internal monologue and then provide the user with an action. Respond in the following format: <think> ... </think> <action> ... </action> . For example:
<think> I need to have a thinking before I take my action. </think>
<action> ANSWER("I can take any available action, e.g., give an answer.") </action>
Note that all the examples above are just examples demonstrating action usage and output format and takes NO ANY RELATION with the following inputs. Now, take your task.

User:
Completed instructions: ${history_instructions}
Current instruction: ${instruction}
Current Screen: ${screen}
Action History: ${action_history}

H Limitations

Although the proposed PROGRM achieves the best results in our experiments, we find that the effectiveness of the LCS-based progress label still holds a remarkable gap with that of the environment-reward-based progress label, both in the performance of the resulting actor and in the progress estimation error. There are still many aspects to polish in the current LCS-based progress labeling algorithm, such as the design of the soft match function f (see § A), garbage action (e.g., meaningless empty or scrolling actions) cleaning, etc. Current experiment results have demonstrated the promising effectiveness of PROGRM in GUI agent training. The selected environment also gives the opportunity to obtain a deeper insight into the proposed LCS-based progress labeling algorithm by comparing it with environment-reward-based progress labeling, which is not supported in other environments. However, it remains to be verified whether PROGRM works equally well in other GUI environments.
Besides, PROGRM will be most valuable when applied to environments without well-annotated RL training tasks, as it is expected to efficiently and accurately evaluate LLM-generated task goals and alleviate the scarcity of well-annotated RL training tasks.

I Broader Societal Impacts

The proposed PROGRM can be used to train more capable GUI agents, which may bring significant convenience to human users by automating a wide range of tasks in GUI systems. However, alongside these benefits, there are potential risks. More powerful GUI agents could be misused to bypass CAPTCHAs, gain unauthorized access to public internet systems, or perform other malicious activities. Additionally, as GUI agents are not yet perfectly reliable, there is a risk that unexpected or dangerous actions could be taken, potentially causing irreparable damage to data or systems. Overall, the broader societal impacts of PROGRM are primarily realized through the downstream use of the trained GUI agents, rather than from PROGRM itself. Responsible deployment and careful consideration of security and safety are therefore essential.
arXiv:2505.18122v1 [cs.CL] 23 May 2025

UNJOIN: Enhancing Multi-Table Text-to-SQL Generation via Schema Simplification

Poojah Ganesan1* Rajat Aayush Jha1* Dan Roth2 Vivek Gupta1†
1Arizona State University 2University of Pennsylvania
{pganesa4,rjha16,vgupt140}@asu.edu danroth@seas.upenn.edu

Abstract

Recent advances in large language models (LLMs) have greatly improved Text-to-SQL performance for single-table queries, but the task remains challenging in multi-table databases due to complex schemas and relational operations. Existing methods often struggle with retrieving the right tables and columns, generating accurate JOINs and UNIONs, and generalizing across diverse schemas. To address these issues, we introduce UNJOIN, a two-stage framework that decouples the retrieval of schema elements from SQL logic generation. In the first stage, we merge the column names of all tables in the database into a single-table representation by prefixing each column with its table name. This allows the model to focus purely on accurate retrieval without being distracted by the need to write complex SQL logic. In the second stage, the SQL query is generated on this simplified schema and mapped back to the original schema by reconstructing JOINs, UNIONs, and relational logic. Evaluations on the SPIDER and BIRD datasets show that UNJOIN matches or exceeds state-of-the-art baselines. UNJOIN uses only schema information, requiring no data access or fine-tuning, which makes it scalable and adaptable across databases.

1 Introduction

Relational databases form the foundation for structured data management in domains such as finance, healthcare, and education. Accessing information from these databases typically requires writing SQL queries, a skill that demands technical expertise.
Text-to-SQL, also known as semantic parsing, addresses this barrier by translating natural language questions into executable SQL commands (Androutsopoulos et al., 1995; Li and Jagadish, 2014; Li et al., 2023b), allowing non-experts to interact with complex databases.

*These authors contributed equally to this work.
†Primary supervisor of this work.

Figure 1: Baseline vs UNJOIN

Early approaches use syntax trees or query sketches to guide query generation (Xu et al., 2017; Guo et al., 2019), while more recent methods rely on sequence-to-sequence models (Colin, 2020; Scholak et al., 2021a). Recent advances leverage large language models (LLMs), either through in-context learning with powerful proprietary models or by fine-tuning smaller open-source alternatives, leading to substantial performance gains (Li et al., 2024; Pourreza and Rafiei, 2024). While recent advances in Text-to-SQL have significantly improved performance on single-table databases, relatively little attention has been given to the more challenging multi-table setting. As shown in Figure 1, multi-table SQL generation introduces additional complexities such as identifying relevant tables and columns, resolving inter-table relationships (e.g., JOINs and UNIONs), and constructing more intricate queries involving GROUP BY, aggregation, ordering, and nested subqueries. These challenges are central to building robust and generalizable Text-to-SQL systems for real-world applications. This raises a natural question: How can the progress made in single-table SQL generation be extended to handle the challenges of multi-table querying? To answer this, we propose UNJOIN, a modular two-stage framework that decouples retrieval of schema elements from complex SQL generation. The key intuition is that LLMs are highly effective at generating
SQL for single-table schemas, a setting they are more exposed to during training, while multi-table scenarios introduce structural complexity that is harder to handle directly. UNJOIN bridges this gap by reframing multi-table Text-to-SQL generation as a simplified single-table task that LLMs can solve more reliably. UNJOIN operates in two stages:

(a.) Stage 1: Schema Simplification. The multi-table schema is flattened into a single-table format by merging the column names of all tables in the database into a single-table representation, prefixing each column with its table name, without altering the underlying data.

(b.) Stage 2: Query Generation and Translation. A SQL query is first generated over this simplified schema and then translated back to align with the original schema by reconstructing necessary JOINs, UNIONs, and column relationships.

By isolating the challenges of schema element selection and SQL logic construction, UNJOIN disambiguates the reasoning process and enables more accurate, scalable, and generalizable multi-table SQL generation. Our contributions are as follows:

1. We propose UNJOIN, a novel approach for multi-table Text-to-SQL generation based on schema simplification. By decoupling table and column retrieval from complex SQL structuring, UNJOIN reduces compounding errors and improves robustness.

2. We demonstrate that our method outperforms a wide range of baselines, including (a) standard prompting, (b) in-context learning methods, (c) supervised fine-tuning approaches, (d) end-to-end table Question Answering (QA) models, and (e) recent reasoning-focused models such as DeepSeek-R1, in both open-book and closed-book settings.

3. Through detailed analysis, we show that SQL generated by UNJOIN achieves superior performance in both table retrieval and column selection. Additionally, when combined with various retrievers in open-domain table QA settings, UNJOIN consistently outperforms standard few-shot prompting baselines.
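The Stage 1 flattening described above is simple to make concrete. A minimal sketch, assuming the schema is available as a mapping from table names to column lists; the helper name `simplify_schema` and the example database name are illustrative, not from the paper's code:

```python
def simplify_schema(schema: dict, db_name: str) -> dict:
    """Flatten a multi-table schema into one table named after the
    database, prefixing each column as TableName.ColumnName so that
    similar column names from different tables stay unambiguous."""
    flat_columns = [
        f"{table}.{column}"
        for table, columns in schema.items()
        for column in columns
    ]
    return {db_name: flat_columns}

# Example: two tables collapse into a single five-column table.
schema = {
    "singer": ["id", "name", "age"],
    "song": ["singer_id", "title"],
}
print(simplify_schema(schema, "concert_db"))
# {'concert_db': ['singer.id', 'singer.name', 'singer.age',
#                 'song.singer_id', 'song.title']}
```

The step is deterministic and uses only schema-level information, in line with the paper's point that no LLM call (and hence no hallucination risk) is involved here.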
2 Methodology

Existing LLM-based approaches struggle with identifying relevant tables and columns, resolving inter-table relationships like JOINs and UNIONs, and constructing more complex queries involving GROUP BY, nested subqueries, aggregation, and ordering. This is due to schema complexity, ambiguous column references, and limited context size. Overcoming these challenges is crucial for developing robust and generalizable Text-to-SQL systems suited for real-world applications. In Figure 2, we present our proposed framework, UNJOIN, which addresses these issues systematically through Schema Simplification and Query Generation and Translation. In the following section, a detailed introduction to these steps is presented.

(1.) Schema Simplification. To simplify complex multi-table schemas, we introduce a straightforward Schema Simplification step. Here, we combine columns from all tables in the database into one simplified schema, without altering the underlying data. The goal is to create a single, flat table. We prefix each column name with its corresponding table name, which removes ambiguity between similar column names from different tables.

Consider a database with six tables, each containing ten columns. The resulting single-table representation will comprise one table, named after the database, with a total of 60 columns. Each column name is reformatted using the structure TableName.ColumnName, preserving its original context while eliminating ambiguity. The algorithm for Schema Simplification, along with an example, is shown in
Appendix 4. In the simplified format, the structure no longer depends on how the original tables were connected, so there is no concept of table joins at this stage. By removing the need to reason about table-to-table relationships, this step eliminates a layer of complexity for the LLM. The transformation relies only on schema-level information, specifically table and column names, and does not require access to row-level data. As a result, it generalizes well to databases of any size or content. This step is also fully deterministic and does not depend on LLMs, which reduces computational overhead and avoids the hallucination errors that can occur when using LLMs.

(2.) Query Generation and Translation. This involves two sub-steps, (a.) Query Generation and (b.) Query Translation, which are described in detail below.

Figure 2: Our Proposed Method. After schema conversion, the resulting single-table schema contains a total of 54 columns, obtained by combining the columns from all shown tables: order, account, district, disp, card, client, loan, and trans. Due to space constraints, the complete single-table representation is not shown.

(a.) Query Generation. In this step, we use the Simplified Schema obtained from the previous step to prompt the LLM to generate a semantically accurate intermediate (simplified) SQL query. By abstracting away relational complexities such as JOIN and UNION operations, the LLM is free to focus solely on identifying relevant tables (which are represented as columns in the simplified schema), as well as accurately handling SQL operations like aggregation and numeric computations. This reduces the task to a simpler single-table Text-to-SQL scenario, a setting in which contemporary LLMs typically perform very well.
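As a concrete illustration, the generation call over the simplified schema might look as follows. `call_llm` is a placeholder for any LLM client, and the prompt wording here is an illustrative assumption, not the paper's actual prompt (which appears in its Appendix A):

```python
def generate_intermediate_sql(question, flat_schema, call_llm):
    """Query Generation sub-step: ask the LLM for SQL over the flattened
    single-table schema. The result captures user intent but is NOT
    executable, since the underlying data is never actually merged."""
    prompt = (
        f"Single-table schema: {flat_schema}\n"
        f"Question: {question}\n"
        "Write a SQL query over this schema. Do not use JOIN or UNION."
    )
    return call_llm(prompt)

# With a stubbed client, the intermediate SQL references prefixed
# columns as if they all lived in one table:
stub = lambda _prompt: (
    "SELECT song.title FROM concert_db ORDER BY singer.age ASC LIMIT 1"
)
print(generate_intermediate_sql(
    "Which song title belongs to the youngest singer?",
    "concert_db(singer.id, singer.name, singer.age, song.singer_id, song.title)",
    stub,
))
```

The Query Translation sub-step would then rewrite such output into executable SQL over the original tables, for example by reintroducing a JOIN between singer and song.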
To ensure robust and unbiased query generation, the LLM is provided with a carefully structured prompt that includes the Simplified Schema, the user's question, detailed column descriptions, and several schema-agnostic few-shot examples. These manually crafted examples remain consistent across evaluations to prevent data leakage or schema-specific biases. Importantly, as our approach never modifies the underlying table data, the resulting intermediate SQL is not directly executable; instead, its purpose is to clearly capture user intent within a simplified schema context. The prompts are given in Appendix A.

(b.) Query Translation. The intermediate SQL query generated in the previous step references a simplified single-table schema and is therefore not directly executable. The Query Translation step transforms this intermediate SQL into fully executable SQL code, explicitly reintroducing the necessary relational operations, such as JOINs, pertaining to the original multi-table schema. This process is fully automated, requiring only the original schema, the simplified schema, the simplified SQL generated in the previous step, and the user question as input. Since this transformation is handled implicitly by the LLM's reasoning capabilities, no additional rule-based logic is required. To further reduce hallucination errors, we apply an edit-distance-based correction mechanism to ensure that the generated SQL aligns with the actual schema. This post-processing step only adjusts table and column names, ensuring that any abbreviations or modifications introduced by the LLM
(e.g., table name disp changed to disposition) are accurately mapped back to their valid schema counterparts. In particular, this step does not alter SQL logic or introduce external constraints; it only enforces schema consistency.

UNJOIN Variants. We explore two variations of our proposed method: (a) UNJOIN_SP performs schema simplification, query generation, and translation sequentially within a single (joint) prompt; (b) UNJOIN_MP separates these steps into distinct stages using a multi-prompt setup, allowing for more focused reasoning at each stage.

3 Text2SQL Approaches

We compare UNJOIN with five classes of approaches, as follows:

Standard Prompts: We evaluate against prompting strategies such as Direct Prompting with Few-Shot Chain of Thought (CoT) (Wei et al., 2023), Program of Thoughts (PoT) (Chen et al., 2023), Meta Prompting (MP) (Suzgun and Kalai, 2024), and Self-Consistency (SC) (Wang et al., 2023).

In-Context Learning (ICL): We compare UNJOIN against several recent in-context learning baselines. DIN-SQL (Pourreza and Rafiei, 2023) decomposes the task into subtasks, prompting GPT-4 separately for each. C3-SQL (Dong et al., 2023) introduces schema filtering followed by calibrated prompting with self-consistency. RSL-SQL (Cao et al., 2024) improves schema linking through bidirectional reasoning, contextual augmentation, and multi-turn self-correction. Several other recent in-context learning methods (Sheng et al., 2025; Pourreza et al., 2024; Lee et al., 2025) are not publicly available.

Supervised Fine-Tuning (SFT): Here, we compare against CodeS-7B (Li et al., 2024) and DTS-SQL (Pourreza and Rafiei, 2024).

End2End Table QA (TQA): We also evaluate UNJOIN on end-to-end QA tasks. DATER (Ye et al., 2023), TabSQLify (Nahid and Rafiei, 2024), and ReActTable (Zhang et al., 2024b) decompose the table by generating SQL, typically using sequence-to-sequence architectures or fine-tuned LLMs, and then perform QA.
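Stepping back to the Query Translation step for a moment: its edit-distance name correction (e.g., mapping a hallucinated disposition back to disp) can be sketched as follows. difflib's similarity ratio, the keyword list, and the cutoff are illustrative assumptions; the paper does not specify its distance implementation.

```python
import difflib
import re

# Keywords that must never be "corrected" toward schema names.
SQL_KEYWORDS = {
    "select", "from", "where", "join", "inner", "left", "right", "on",
    "group", "by", "order", "having", "as", "and", "or", "not", "limit",
    "union", "count", "sum", "avg", "min", "max", "asc", "desc",
}

def correct_identifiers(sql, valid_names):
    """Map each non-keyword identifier in the generated SQL to its
    closest valid schema name, leaving exact matches untouched."""
    def fix(match):
        token = match.group(0)
        if token.lower() in SQL_KEYWORDS or token in valid_names:
            return token
        close = difflib.get_close_matches(token, valid_names, n=1, cutoff=0.5)
        return close[0] if close else token
    return re.sub(r"[A-Za-z_]\w*", fix, sql)

print(correct_identifiers("SELECT type FROM disposition",
                          ["disp", "client", "type"]))
# SELECT type FROM disp
```

As the paper notes, such a post-process only adjusts names; it cannot (and should not) repair the SQL logic itself.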
We also compare against non-SQL-based QA models such as MultiTabQA (Pal et al., 2023) (a Seq2Seq model), RESDSQL (Li et al., 2023a), and two variants of QFMTS (Zhang et al., 2024a): QFMTS_1 (summarization followed by QA) and QFMTS_2 (single-table schema summarization followed by QA).

Reasoning LLMs: This category consists of models optimized for complex reasoning tasks, including DeepSeek-R1-Distill-Qwen-7B (DeepSeek-AI et al., 2025) and DeepSeek-R1-Distill-Llama-70B (DeepSeek-AI et al., 2025).

4 Experimental Evaluation

Datasets: We evaluate our approach on two widely used large-scale cross-domain Text-to-SQL datasets: Spider (Yu et al., 2018) and BIRD (Li et al., 2023b). Spider includes 200 databases, while BIRD contains 96, both spanning a diverse range of domains. In each dataset, tables are grouped by topic, with each topic corresponding to a separate database that includes an average of 5.4 tables, along with queries answerable using that schema. Since our focus is on queries that require reasoning over multiple tables, we exclude those that involve only a single table, following a filtering strategy similar to Chen et al. (2025b). After filtering, we obtain 443 queries across 81 databases for Spider, and 1095 queries across 77 databases for BIRD.

Metrics: A generated SQL query may be structurally valid and executable, yet still return an incorrect answer. To evaluate both correctness and execution, we use two key
metrics: (a.) Query Execution Accuracy (QE) measures the percentage of generated queries that execute successfully without runtime errors. It checks only whether the query runs, not whether the result is correct. (b.) Exact Match (EM) captures the percentage of generated queries that not only execute successfully but also return the correct answer. Since EM requires both successful execution and correct output, it is a stricter metric and always a subset of QE. A query may be valid and count toward QE, yet yield an incorrect result, leading to a lower EM.

LLMs: We evaluate the performance of prompting-based and Table-QA (SQL-based) baselines across three language models: GPT-4o (OpenAI et al., 2024) (gpt-4o-2024-08-06), Gemini 1.5 Flash (Team et al., 2024), and Llama 3.3 (70B)¹. Table-QA (non-SQL-based), ICL, and SFT baselines are evaluated using GPT-4o. For further analysis of table and column retrieval and of variations of our approach, we expand our evaluation to include additional models: GPT-4o Mini² (gpt-4o-mini-2024-07-18), Llama 3.1 (3B), SQLCoder (34B), Mixtral 7x8B (Jiang et al., 2024), and CodeLlama (Rozière et al., 2024). This broader evaluation provides insight into the impact of model size and architecture on schema-aware SQL generation.

Evaluation Settings: We evaluate our approach under two distinct settings: (a) Closed-book setting — the relevant database is provided in advance, and UNJOIN directly generates the final SQL query over the given multi-table schema; (b) Open-book setting — relevant tables must first be retrieved using state-of-the-art retrievers, and UNJOIN is then applied to generate the final SQL query over the retrieved tables.

¹https://github.com/meta-llama/llama-models
²https://openai.com/index/gpt-4o-system-card/

In the closed-book setting, the full database schema is available, enabling SQL generation without retrieval.
In contrast, the open-book setting requires a retrieval step, where tables are selected based on both table-query similarity and table-table (i.e., joinability) similarity. This setting is more challenging because (i) the retrieval process may miss relevant tables (i.e., recall is less than 100%), and (ii) the retrieved tables may come from different databases, leading to invalid or spurious joins.

4.1 Results: Closed-Book Settings

Standard Prompts vs UNJOIN: As shown in Table 1, prompt-based baselines achieve high QE (e.g., CoT with 98.87 QE on SPIDER) but fall short in EM (75.28 EM), reflecting errors in table and column selection. In contrast, UNJOIN (UNJOIN_MP) maintains a strong QE (99.90) while significantly improving EM (77.13), demonstrating better schema understanding. A similar trend appears on BIRD (see Appendix 10), where UNJOIN_MP surpasses the best baseline in EM (50.36 vs. 44.38).

                       GPT-4o          Gemini          Llama 70B
Method                 QE      EM      QE      EM      QE      EM
Standard Prompts
  CoT                  93.70   63.10   95.25   65.23   98.87   75.28
  PoT                  93.77   67.39   94.40   72.89   96.71   68.61
  MP                   94.14   67.76   92.91   67.39   90.88   63.86
  SC                   94.14   67.76   94.48   72.89   96.35   67.21
Table QA (SQL-based)
  DATER                19.30   07.80   20.10   08.90   17.77   06.57
  TabSQLify            93.38   64.10   90.07   69.96   93.80   68.61
  ReActTable           02.00   00.20   03.00   00.20   02.80   00.30
UNJOIN_SP              96.35   76.13   94.89   73.36   94.49   69.62
UNJOIN_MP              94.89   76.00   95.99   75.57   99.90   77.13

Table 1: QE and
EM scores on the SPIDER dataset.

ICL vs UNJOIN: The results in Table 2 show that RSL-SQL achieves the highest overall performance on both SPIDER and BIRD, particularly with strong generalization to BIRD (QE: 90.8, EM: 54.3). However, UNJOIN performs competitively on SPIDER, where UNJOIN_SP achieves an EM of 76.13, slightly surpassing RSL-SQL. On BIRD, both variants of UNJOIN outperform DIN-SQL and C3, showing robust cross-domain performance.

                        SPIDER          BIRD
ICL Text-to-SQL         QE      EM      QE      EM
DIN-SQL + GPT-4o        95.97   75.56   87.53   49.32
C3 + GPT-4o             94.50   71.42   -       -
RSL-SQL                 96.40   76.04   90.80   54.30
UNJOIN_SP (GPT-4o)      96.35   76.13   88.55   51.74
UNJOIN_MP (GPT-4o)      94.89   76.00   89.75   50.36

Table 2: ICL models vs UNJOIN.

End-to-End Table QA vs UNJOIN: From Tables 1 and 3, it can be seen that UNJOIN outperforms all table QA baselines in both QE and EM by a substantial margin. This demonstrates its efficiency and usefulness in end-to-end table QA tasks, beyond Text-to-SQL. It also scales efficiently on larger databases like BIRD, where other baselines struggle (see Appendix 10). MultiTabQA's near-zero EM (0.03) suggests possible overfitting to single-table queries in SPIDER. We did not evaluate multi-table baselines on BIRD because its large dataset size exceeds LLM input limits and fine-tuning these models on larger datasets is costly. As highlighted earlier, fine-tuned models often suffer from limited generalizability and become overly domain-specific, increasing costs while reducing their applicability to diverse scenarios. For these reasons, we restricted this experimentation to the SPIDER dataset.

Table QA (non-SQL-based) baselines    EM
MultiTabQA (Seq2Seq)                  00.03
QFMTS_1 (GPT-4o)                      25.28
QFMTS_2 (GPT-4o)                      25.55
RESDSQL                               28.87
UNJOIN_SP (GPT-4o)                    76.13
UNJOIN_MP (GPT-4o)                    76.00

Table 3: Performance comparison on the SPIDER dataset.
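As defined in Section 4, QE checks only execution while EM additionally checks the answer. A minimal sketch of how such numbers could be computed, assuming queries are executed against SQLite; `run_query` and the list-of-rows gold format are illustrative assumptions:

```python
import sqlite3

def run_query(conn, sql):
    """Execute a query; return (executed_ok, result_rows)."""
    try:
        return True, conn.execute(sql).fetchall()
    except sqlite3.Error:
        return False, None

def qe_em(outcomes, golds):
    """outcomes: list of (executed_ok, rows); golds: expected row lists.
    QE counts queries that ran; EM counts those whose rows match gold."""
    n = len(outcomes)
    qe = sum(ok for ok, _ in outcomes)
    em = sum(ok and rows == gold for (ok, rows), gold in zip(outcomes, golds))
    return 100.0 * qe / n, 100.0 * em / n

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE singer (name TEXT, age INT)")
conn.execute("INSERT INTO singer VALUES ('Ana', 21), ('Bo', 34)")
outcomes = [
    run_query(conn, "SELECT name FROM singer WHERE age < 30"),  # runs, correct
    run_query(conn, "SELECT name FROM singer WHERE age > 30"),  # runs, wrong answer
    run_query(conn, "SELECT nom FROM singer"),                  # fails: bad column
]
golds = [[("Ana",)], [("Ana",)], [("Bo",)]]
print(tuple(round(x, 2) for x in qe_em(outcomes, golds)))  # (66.67, 33.33)
```

Note that EM is a subset of QE by construction, matching the paper's description of EM as the stricter metric.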
SFT vs UNJOIN: Across both datasets, UNJOIN demonstrates strong performance (Table 4), outperforming existing SFT baselines on SPIDER and showing notable generalization on BIRD, highlighting the effectiveness of our schema simplification and decomposition strategy in improving multi-table Text-to-SQL generation.

                          SPIDER          BIRD
SFT Text-to-SQL           QE      EM      QE      EM
CodeS-7B                  94.14   74.72   89.65   51.14
DTS-SQL DeepSeek 7B       87.50   69.23   -       -
UNJOIN_SP (GPT-4o)        96.35   76.13   88.55   51.74
UNJOIN_MP (GPT-4o)        94.89   76.00   89.75   50.36

Table 4: SFT models vs UNJOIN.

Reasoning LLMs vs UNJOIN: The results in Table 5 highlight the strong performance of UNJOIN compared to reasoning model baselines in both QE and EM, demonstrating the versatility of our approach across different datasets.

                          SPIDER          BIRD
Reasoning LM              QE      EM      QE      EM
DeepSeek Qwen 7B          91.21   57.82   66.25   25.01
DeepSeek Llama 70B        95.26   61.08   84.12   44.54
UNJOIN_SP (GPT-4o)        96.35   76.13   88.55   51.74
UNJOIN_MP (GPT-4o)        94.89   76.00   89.75   50.36

Table 5: Reasoning models vs UNJOIN.

4.2 Results: Open-Book Settings

Table 6 presents the impact of UNJOIN on the EM accuracy of various retriever-based models. Across all retrievers, ranging from Contriever³ and DTR (Herzig et al., 2021) to their enhanced counterparts in the JAR (Chen et al., 2025b) framework, UNJOIN consistently yields substantial improvements in SQL generation performance. Notably, the EM scores increase by 14.90% to 19.57%, demonstrating the robustness and generalizability of our approach.

Retriever            EM (%): CoT → UNJOIN (∆)
ARM                  31.7 → 51.27 (+19.57)
Contriever           29.7 → 47.99 (+18.29)
DTR                  30.4 → 49.85 (+19.45)
JAR (DTR)            36.9 → 52.63 (+15.73)
JAR (Contriever)     36.2 → 51.10 (+14.90)

Table 6: Exact Match (EM) scores with UNJOIN in open-book settings. ∆ in parentheses shows the improvement.

End2End Table Extraction: Table 7 presents a detailed comparison of multiple retriever models evaluated on precision (P) and recall (R) metrics, with the integration of our proposed UNJOIN framework in End2End table extraction settings. The results indicate a consistent and substantial improvement in retrieval performance across all retrievers when augmented with UNJOIN.

Specifically, both the DTR and Contriever retrievers exhibit significant performance gains, improving their precision by approximately 25% and 27%, and their recall by around 22% and 26%, respectively. Even retrievers with relatively strong baseline performance, such as JAR (DTR) and JAR (Contriever), benefit from UNJOIN, achieving gains in precision and recall ranging from 4.6% to 6.7%. These results highlight the core strength of our UNJOIN method: its ability to consistently enhance SQL generation across a range of retrievers. By simplifying schemas and structuring query decomposition, UNJOIN acts as a plug-in module that improves both base retrievers (e.g., Contriever, DTR) and advanced systems like JAR and ARM (Chen et al., 2025a). This demonstrates its broad applicability and effectiveness in handling multi-table Text-to-SQL challenges.

5 Discussion

In this section, we present ablation studies and additional insights from our UNJOIN approach in the closed-book settings.

³https://huggingface.co/facebook/contriever-msmarco

How does UNJOIN benefit End2End table extraction? We analyze End2End table extraction performance, assessing how the UNJOIN approach enhances table retrieval and column extraction, along with its impact on query accuracy.
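Concretely, the table-level precision and recall used in this kind of analysis can be computed per query and macro-averaged; the set-based definition below is an assumption consistent with standard retrieval metrics (the same computation applies at the column level):

```python
def retrieval_precision_recall(predicted, gold):
    """predicted, gold: lists of sets of table names, one pair per query.
    Returns macro-averaged (precision %, recall %)."""
    p_vals, r_vals = [], []
    for pred, ref in zip(predicted, gold):
        hits = len(pred & ref)                       # correctly retrieved tables
        p_vals.append(hits / len(pred) if pred else 0.0)
        r_vals.append(hits / len(ref) if ref else 0.0)
    n = len(gold)
    return 100 * sum(p_vals) / n, 100 * sum(r_vals) / n

# Two queries: one over-predicts a table, one misses a table.
predicted = [{"singer", "song"}, {"loan"}]
gold = [{"singer"}, {"loan", "account"}]
print(retrieval_precision_recall(predicted, gold))  # (75.0, 75.0)
```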
Table 8 presents the precision and recall scores for table and column selection across different methods: standard CoT prompting and the two variants of our approach, UNJOIN_SP and UNJOIN_MP. Additionally, we introduce a new baseline, CoT-SS (CoT on Simplified Schema), which directly generates the final multi-table SQL (including JOINs, etc.) from the intermediate simplified schema using few-shot chain-of-thought prompting. This comparison highlights the effectiveness of decoupling schema reasoning from SQL generation. Furthermore, we observe that UNJOIN_SP and UNJOIN_MP consistently outperform the other baselines (CoT, CoT-SS) in end-to-end table and column extraction.

Figure 3: Retrieval accuracy performance with increasing number of tables.

How robust is UNJOIN with an increasing number of relevant tables? We analyze how table and column retrieval accuracy is affected as the number of relevant tables needed to answer the query increases. As shown in Figure 3, baseline methods exhibit a sharp decline in performance as query complexity grows, whereas UNJOIN sustains high retrieval accuracy. This demonstrates the scalability and robustness of our approach in handling complex multi-table queries. Baselines often fail on multi-operation queries as the number of tables increases, while UNJOIN's separate translation phase handles these systematically.

How does schema simplification benefit End2End table extraction? From Table 9, we see that CoT-SS suffers significant drops in QE and EM, but from Table 8, we can see that it shows strong table/column retrieval performance. This shows that schema simplification helps in retrieving correct table and
column names, improving precision and recall. For instance,

Retriever          ARM             JAR (DTR)       JAR (Contriever)   DTR             Contriever
                   P       R       P       R       P       R          P       R       P       R
CoT                32.88   74.47   65.60   66.10   64.70   65.40      59.50   59.20   55.90   55.50
UNJOIN             84.30   83.20   72.31   70.79   70.84   70.00      84.55   81.75   82.97   81.88
∆                  +51.42  +08.73  +06.71  +04.69  +06.14  +04.60     +25.05  +22.55  +27.07  +26.38

Table 7: Precision (P) and Recall (R) for all retriever configurations with UNJOIN in open-book settings. ∆ indicates the absolute improvement.

Model              CoT     CoT-SS  UNJOIN_SP  UNJOIN_MP   CoT     CoT-SS  UNJOIN_SP  UNJOIN_MP
                   Table Precision (%)                     Table Recall (%)
GPT-4o             59.30   60.90   84.10      83.00       58.90   61.50   86.70      84.10
GPT-4o Mini        64.00   80.00   96.50      96.60       63.00   80.00   97.60      98.20
Gemini 1.5 Flash   55.10   76.90   72.90      78.10       54.80   74.40   75.90      78.60
Mixtral            45.90   53.50   71.70      71.00       40.50   53.30   78.30      74.70
SQLCoder-7B        52.90   64.00   48.40      65.00       53.60   69.30   53.00      70.20
SQLCoder-34B       54.80   67.50   68.10      58.10       55.40   67.10   73.90      62.00
Llama 3.1 (3B)     55.40   65.90   52.90      49.70       56.00   65.80   62.00      48.30
Llama 3.3 (70B)    40.80   74.90   77.10      72.70       41.20   74.00   79.10      75.50
                   Column Precision (%)                    Column Recall (%)
GPT-4o             50.70   67.40   88.10      88.60       53.70   71.10   95.50      95.30
GPT-4o Mini        46.70   71.00   87.70      87.70       49.80   75.20   94.60      95.40
Gemini 1.5 Flash   53.90   90.40   88.60      89.10       55.60   86.10   96.20      93.20
Mixtral            40.00   65.80   80.10      87.40       37.40   57.00   89.50      93.00
SQLCoder-7B        50.00   63.30   43.60      75.00       54.10   70.90   54.00      83.50
SQLCoder-34B       45.20   70.70   81.20      75.40       48.80   60.00   93.20      77.00
Llama 3.1 (3B)     50.60   58.90   43.40      52.00       54.70   58.90   66.70      54.30
Llama 3.3 (70B)    38.90   84.50   86.70      87.50       42.60   89.60   96.00      96.60

Table 8: Evaluation of table/column precision and recall (in %) on the SPIDER dataset. For BIRD dataset results, please refer to Appendix D.

                             Query Execution (QE)             Exact Match (EM)
Dataset  Model             CoT-SS  UNJOIN_SP  UNJOIN_MP   CoT-SS  UNJOIN_SP  UNJOIN_MP
Spider   GPT-4o            14.59   96.35      94.89       11.68   76.13      76.00
         Gemini 1.5        81.39   94.89      95.99       59.23   73.36      75.57
         Llama 3.3 (70B)   91.91   94.49      99.90       58.05   69.62      77.13
         SQLCoder (34B)    05.12   83.75      51.74       04.03   53.05      25.36
         SQLCoder (7B)     16.48   26.00      58.80       11.83   06.67      35.46
BIRD     GPT-4o            16.86   88.55      89.75       11.24   51.74      50.36
         Gemini 1.5        67.40   78.51      81.73       38.58   44.73      48.40
         Llama 3.3 (70B)   82.61   91.97      96.28       49.40   52.10      56.35
         SQLCoder (34B)    11.24   64.75      31.75       08.03   28.55      19.70
         SQLCoder (7B)     08.53   20.01      35.34       04.70   06.00      20.70

Table 9: QE and EM on the SPIDER and BIRD datasets.

moving from CoT to CoT-SS often increases recall (e.g., Llama 70B table recall on SPIDER goes from 0.41 to 0.74), showing that a simplified schema aids in finding relevant tables/columns.

Why does CoT-SS fail across all models? Another interesting observation is that the EM performance of CoT-SS is generally poor across all models. This highlights the importance of the query translation step in our approach. Since CoT-SS directly generates the final SQL from the intermediate Simplified Schema, skipping the query translation step, it does not focus on integrating the correct compositional SQL operations, and thus omits necessary JOIN paths or introduces invalid syntax. This shows that query translation is essential
in ensuring correct table linking and reducing structural inconsistencies.

How does our approach perform across different model sizes? From Table 9, we observe that our approach UNJOIN_MP performs reasonably well across different model sizes, but it is most effective with larger LLMs due to their superior instruction-following capability. One exception to this trend is SQLCoder (34B), which, despite its size, does not perform well with UNJOIN. One reason may be that, unlike general-purpose LLMs, SQLCoder is highly specialized for SQL generation and lacks strong natural language understanding and instruction-following capabilities outside of generating SQL. To gain a deeper understanding of UNJOIN's impact across different SQLCoder model sizes and its sensitivity to End2End extraction, please refer to Appendix E.

Where does UNJOIN fail? Detailed inspection of failure cases reveals three major error sources:

(a.) Misaligned Column Names: Without the deterministic schema mapping, LLMs hallucinate or rename columns, leading to partial matches.

(b.) Ambiguity in Natural Language Queries: Queries with unclear phrasing or multiple possible meanings often result in incorrect SQL generation. For instance, in the query "What are the names and release years for all the songs of the youngest singer?", the presence of similar column names (Song Name and Name) in the database creates ambiguity, leading to errors in column selection.

(c.) Unconventional Query Phrasing: Users frequently use informal or unconventional phrasing in queries, making it challenging for the model to accurately interpret their intent. For example, in the query "For each singer name, what is the total sales for their songs?", the ordering condition needs adjustment to conform to proper SQL syntax.

6 Comparison with Related Work

Seq2Seq methods.
Early Text-to-SQL systems like Seq2SQL (Zhong et al., 2017) and SQLNet (Xu et al., 2017) used Seq2Seq architectures but struggled with multi-table queries due to limited schema understanding. Schema-aware methods like IRNet (Guo et al., 2019), RAT-SQL (Wang et al., 2021), and RESDSQL (Li et al., 2023a) improved performance by modeling schema relationships more explicitly. However, these methods often rely on manual schema linking and struggle to generalize to unseen databases.

PLM-based approaches. Advances in pretrained transformer models (PLMs) like T5 (Raffel et al., 2023) and BART (Lewis et al., 2019) significantly boosted Text-to-SQL accuracy. Transformer-based models such as PICARD-T5 (Scholak et al., 2021b), TabSQLify (Nahid and Rafiei, 2024), and BASE-SQL (Sheng et al., 2025) incorporated symbolic execution and constrained decoding to enhance multi-table reasoning. Still, their dependence on fine-tuning and pipeline complexity limits adaptability and scalability.

LLM prompting. Prompt-based LLM methods like C3-SQL (Dong et al., 2023), DAIL-SQL (Gao et al., 2023), and MCS-SQL (Lee et al., 2025) bypass fine-tuning and use stepwise reasoning and prompt ensembling to improve performance. Yet these models can be brittle, prompt-sensitive, and prone to producing invalid or incomplete SQL. Other systems tackle schema complexity via query decomposition and retrieval (e.g., DATER (Ye et al., 2023), ReActTable (Zhang et al., 2024b), CHASE-SQL (Pourreza et al., 2024)), or modular pipelines like DIN-SQL (Pourreza and Rafiei, 2023) and MAC-SQL
(Wang et al., 2025) that assign subtasks to dedicated agents. These approaches improve robustness but add orchestration complexity and are hard to generalize.

SFT End2End methods. SFT-based methods (Li et al., 2024; Yang et al., 2024; Pourreza and Rafiei, 2024; Talaei et al., 2024; Gorti et al., 2025; Sheng et al., 2025) fine-tune open-source LLMs to improve SQL generation. Hybrid approaches like CHESS (Talaei et al., 2024), XiYan-SQL (Gao et al., 2025), and MSc-SQL (Gorti et al., 2025) combine prompt-based strategies with targeted fine-tuning, often leveraging smaller open-source models. Comprehensive surveys provide an overview of this evolving landscape (Qin et al., 2022; Katsogiannis-Meimarakis and Koutrika, 2023; Shi et al., 2024). Despite these advances, challenges remain in generalization, scalability, and correctness for complex multi-table queries. Our framework, UNJOIN, addresses these gaps via schema simplification and decoupled query translation, achieving strong performance on Spider and BIRD.

7 Conclusion and Future Work

In this work, we propose UNJOIN, a two-step framework for multi-table Text-to-SQL that simplifies schemas for improved retrieval, generates an intermediate SQL query, and maps it back to the original schema. This modular approach reduces structural and retrieval errors, outperforming most baselines, including prompting-based, ICL-based, and SFT-based techniques, as well as reasoning models like the distilled DeepSeek Llama 70B. When layered over strong retrievers like ARM, JAR, Contriever, and DTR, UNJOIN significantly boosts end-to-end retrieval and SQL generation accuracy. By processing only schema information, UNJOIN ensures scalability and generalizability without pre-training or fine-tuning, advancing multi-table QA.
Future work includes extending UNJOIN to handle more complex data formats such as hierarchical, semi-structured, unstructured, and deeply nested schemas, enabling broader applicability across real-world databases and web tables.

8 Limitations

Our approach consistently outperforms existing baselines on BIRD and SPIDER while remaining scalable across various schemas and large databases. However, it has certain limitations. The effectiveness of UNJOIN depends on the LLM's ability to accurately follow instructions, which may be less reliable with smaller or less capable models. Moreover, UNJOIN currently focuses on structural correctness rather than cell-level content extraction (e.g., resolving ambiguous entity names like "M. Obama" vs. "Michelle Obama"). As a result, it is less suited for tasks requiring precise entity disambiguation or record-level analysis. UNJOIN also does not work on multimodal, unstructured, or semi-structured tables.

Ethics Statement

We affirm that our work upholds the highest ethical standards in research and publication. Ethical considerations have been carefully addressed to ensure the responsible use of computational linguistics methodologies. To support reproducibility, we provide detailed information, including publicly available code, datasets, and relevant resources, all compliant with their respective ethical guidelines. Our claims are backed by experimental results, acknowledging minor variations due to the stochastic nature of black-box LLMs, which we mitigate by using a fixed temperature. Additionally, we outline dataset splits, model configurations, and prompting strategies to ensure transparency and reproducibility. AI was utilized in both experimentation and paper writing to enhance analysis, streamline result interpretation, and improve overall presentation clarity. Automated tools assisted in structuring content, refining language, and ensuring
coherence, making the findings effectively communicated.

Acknowledgments

We gratefully acknowledge the Cognitive Computation Group at the University of Pennsylvania and the Complex Data Analysis and Reasoning Lab at Arizona State University for their resources and computational support.

References

Ion Androutsopoulos, Graeme D. Ritchie, and Peter Thanisch. 1995. Natural language interfaces to databases - an introduction. CoRR, cmp-lg/9503016.

Zhenbiao Cao, Yuanlei Zheng, Zhihao Fan, Xiaojin Zhang, Wei Chen, and Xiang Bai. 2024. RSL-SQL: Robust schema linking in text-to-SQL generation. Preprint, arXiv:2411.00073.

Peter Baile Chen, Yi Zhang, Michael Cafarella, and Dan Roth. 2025a. Can we retrieve everything all at once? ARM: An alignment-oriented LLM-based retrieval method. Preprint, arXiv:2501.18539.

Peter Baile Chen, Yi Zhang, and Dan Roth. 2025b. Is table retrieval a solved problem? Exploring join-aware multi-table retrieval. Preprint, arXiv:2404.09889.

Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. 2023. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. Preprint, arXiv:2211.12588.

Raffel Colin. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21.

DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, and 181 others. 2025. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. Preprint, arXiv:2501.12948.

Xuemei Dong, Chao Zhang, Yuhang Ge, Yuren Mao, Yunjun Gao, Lu Chen, Jinshu Lin, and Dongfang Lou. 2023. C3: Zero-shot text-to-SQL with ChatGPT. Preprint, arXiv:2307.07306.

Dawei Gao, Haibin Wang, Yaliang Li, Xiuyu Sun, Yichen Qian, Bolin Ding, and Jingren Zhou. 2023.
Text-to-SQL empowered by large language models: A benchmark evaluation. Preprint, arXiv:2308.15363.

Yingqi Gao, Yifu Liu, Xiaoxia Li, Xiaorong Shi, Yin Zhu, Yiming Wang, Shiqi Li, Wei Li, Yuntao Hong, Zhiling Luo, Jinyang Gao, Liyu Mou, and Yu Li. 2025. A preview of XiYan-SQL: A multi-generator ensemble framework for text-to-SQL. Preprint, arXiv:2411.08599.

Satya Krishna Gorti, Ilan Gofman, Zhaoyan Liu, Jiapeng Wu, Noël Vouitsis, Guangwei Yu, Jesse C. Cresswell, and Rasa Hosseinzadeh. 2025. MSC-SQL: Multi-sample critiquing small language models for text-to-SQL translation. Preprint, arXiv:2410.12916.

Jiaqi Guo, Zecheng Zhan, Yan Gao, Yan Xiao, Jian-Guang Lou, Ting Liu, and Dongmei Zhang. 2019. Towards complex text-to-SQL in cross-domain database with intermediate representation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4524–4535, Florence, Italy. Association for Computational Linguistics.

Jonathan Herzig, Thomas Müller, Syrine Krichene, and Julian Eisenschlos. 2021. Open domain question answering over tables via dense retrieval. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 512–519, Online. Association for Computational Linguistics.

Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Sandeep
Subramanian, Sophia Yang, and 7 others. 2024. Mixtral of experts. Preprint, arXiv:2401.04088.

George Katsogiannis-Meimarakis and Georgia Koutrika. 2023. A survey on deep learning approaches for text-to-SQL. The VLDB Journal, 32(4):905–936.

Dongjun Lee, Choongwon Park, Jaehyuk Kim, and Heesoo Park. 2025. MCS-SQL: Leveraging multiple prompts and multiple-choice selection for text-to-SQL generation. In Proceedings of the 31st International Conference on Computational Linguistics, pages 337–353, Abu Dhabi, UAE. Association for Computational Linguistics.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. Preprint, arXiv:1910.13461.

Fei Li and Hosagrahar V Jagadish. 2014. Constructing an interactive natural language interface for relational databases. Proceedings of the VLDB Endowment, 8(1):73–84.

Haoyang Li, Jing Zhang, Cuiping Li, and Hong Chen. 2023a. RESDSQL: Decoupling schema linking and skeleton parsing for text-to-SQL. Preprint, arXiv:2302.05965.

Haoyang Li, Jing Zhang, Hanbing Liu, Ju Fan, Xiaokang Zhang, Jun Zhu, Renjie Wei, Hongyan Pan, Cuiping Li, and Hong Chen. 2024. CodeS: Towards building open-source language models for text-to-SQL. Preprint, arXiv:2402.16347.

Jinyang Li, Binyuan Hui, Ge Qu, Jiaxi Yang, Binhua Li, Bowen Li, Bailin Wang, Bowen Qin, Rongyu Cao, Ruiying Geng, Nan Huo, Xuanhe Zhou, Chenhao Ma, Guoliang Li, Kevin C. C. Chang, Fei Huang, Reynold Cheng, and Yongbin Li. 2023b. Can LLM already serve as a database interface? A big bench for large-scale database grounded text-to-SQLs. Preprint, arXiv:2305.03111.

Md Mahadi Hasan Nahid and Davood Rafiei. 2024. TabSQLify: Enhancing reasoning capabilities of LLMs through table decomposition. Preprint, arXiv:2404.10150.

OpenAI, Aaron Hurst, Adam Lerer, Adam P.
Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, Aleksander Mądry, Alex Baker-Whitcomb, Alex Beutel, Alex Borzunov, Alex Carney, Alex Chow, Alex Kirillov, and 401 others. 2024. GPT-4o system card. Preprint, arXiv:2410.21276.

Vaishali Pal, Andrew Yates, Evangelos Kanoulas, and Maarten de Rijke. 2023. MultiTabQA: Generating tabular answers for multi-table question answering. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6322–6334, Toronto, Canada. Association for Computational Linguistics.

Mohammadreza Pourreza, Hailong Li, Ruoxi Sun, Yeounoh Chung, Shayan Talaei, Gaurav Tarlok Kakkar, Yu Gan, Amin Saberi, Fatma Ozcan, and Sercan O. Arik. 2024. CHASE-SQL: Multi-path reasoning and preference optimized candidate selection in text-to-SQL. Preprint, arXiv:2410.01943.

Mohammadreza Pourreza and Davood Rafiei. 2023. DIN-SQL: Decomposed in-context learning of text-to-SQL with self-correction. Preprint, arXiv:2304.11015.

Mohammadreza Pourreza and Davood Rafiei. 2024. DTS-SQL: Decomposed text-to-SQL with small large language models. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 8212–8220, Miami, Florida, USA. Association for Computational Linguistics.

Bowen Qin, Binyuan Hui, Lihan Wang, Min Yang, Jinyang Li, Binhua Li, Ruiying Geng, Rongyu Cao, Jian Sun, Luo Si, Fei Huang, and Yongbin Li. 2022. A survey on text-to-SQL parsing: Concepts, methods, and future directions. Preprint, arXiv:2208.13629.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2023. Exploring
the limits of transfer learning with a unified text-to-text transformer. Preprint, arXiv:1910.10683.

Baptiste Rozière, Antoine Bosselut, Peter J. Liu, Thomas Scialom, and Dani Yogatama. 2024. Code Llama: Open foundation models for code. Preprint, arXiv:2308.12950.

Torsten Scholak, Nathan Schucher, and Dzmitry Bahdanau. 2021a. PICARD: Parsing incrementally for constrained auto-regressive decoding from language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9895–9901, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Torsten Scholak, Nathan Schucher, and Dzmitry Bahdanau. 2021b. PICARD: Parsing incrementally for constrained auto-regressive decoding from language models. Preprint, arXiv:2109.05093.

Lei Sheng, Shuai-Shuai Xu, and Wei Xie. 2025. Base-SQL: A powerful open source text-to-SQL baseline approach. Preprint, arXiv:2502.10739.

Liang Shi, Zhengju Tang, Nan Zhang, Xiaotong Zhang, and Zhi Yang. 2024. A survey on employing large language models for text-to-SQL tasks. Preprint, arXiv:2407.15186.

Mirac Suzgun and Adam Tauman Kalai. 2024. Meta-prompting: Enhancing language models with task-agnostic scaffolding. Preprint, arXiv:2401.12954.

Shayan Talaei, Mohammadreza Pourreza, Yu-Chen Chang, Azalia Mirhoseini, and Amin Saberi. 2024. CHESS: Contextual harnessing for efficient SQL synthesis. Preprint, arXiv:2405.16755.

Gemini Team, Petko Georgiev, Ving Ian Lei, Ryan Burnell, Libin Bai, Anmol Gulati, Garrett Tanzer, Damien Vincent, Zhufeng Pan, Shibo Wang, Soroosh Mariooryad, Yifan Ding, Xinyang Geng, Fred Alcober, Roy Frostig, Mark Omernick, Lexi Walker, Cosmin Paduraru, Christina Sorokin, and 1118 others. 2024. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. Preprint, arXiv:2403.05530.

Bailin Wang, Richard Shin, Xiaodong Liu, Oleksandr Polozov, and Matthew Richardson. 2021.
RAT-SQL: Relation-aware schema encoding and linking for text-to-SQL parsers. Preprint, arXiv:1911.04942.

Bing Wang, Changyu Ren, Jian Yang, Xinnian Liang, Jiaqi Bai, LinZheng Chai, Zhao Yan, Qian-Wen Zhang, Di Yin, Xing Sun, and Zhoujun Li. 2025. MAC-SQL: A multi-agent collaborative framework for text-to-SQL. Preprint, arXiv:2312.11242.

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023. Self-consistency improves chain of thought reasoning in language models. Preprint, arXiv:2203.11171.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-thought prompting elicits reasoning in large language models. Preprint, arXiv:2201.11903.

Xiaojun Xu, Chang Liu, and Dawn Song. 2017. SQLNet: Generating structured queries from natural language without reinforcement learning. Preprint, arXiv:1711.04436.

Jiaxi Yang, Binyuan Hui, Min Yang, Jian Yang, Junyang Lin, and Chang Zhou. 2024. Synthesizing text-to-SQL data from weak and strong LLMs. Preprint, arXiv:2408.03256.

Yunhu Ye, Binyuan Hui, Min Yang, Binhua Li, Fei Huang, and Yongbin Li. 2023. Large language models are versatile decomposers: Decompose evidence and questions for table-based reasoning. Preprint, arXiv:2301.13808.

Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. 2018. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
, pages 3911–3921, Brussels, Belgium. Association for Computational Linguistics.

Weijia Zhang, Vaishali Pal, Jia-Hong Huang, Evangelos Kanoulas, and Maarten de Rijke. 2024a. QFMTS: Generating query-focused summaries over multi-table inputs. In Proceedings of the 27th European Conference on Artificial Intelligence (ECAI-2024), Stockholm, Sweden. IOS Press.

Yunjia Zhang, Jordan Henkel, Avrilia Floratou, Joyce Cahoon, Shaleen Deep, and Jignesh M. Patel. 2024b. ReAcTable: Enhancing ReAct for table question answering. Proceedings of the VLDB Endowment, 17(8):1981–1994.

Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2SQL: Generating structured queries from natural language using reinforcement learning. Preprint, arXiv:1709.00103.

A Appendix: LLM Prompts and Examples

A.1 Prompt for UNJOIN SP

Listing 1: SQL Query Generation Prompt for UNJOIN SP

You are an expert at semantic parsing. You have to follow two steps in order to complete your task.

Step 1: Getting the simplified query
You will be given a simplified schema which has only one table and multiple columns, and a question. Please return the SQL query with the correct format and syntax pertaining to that question. Remember to focus on getting the correct column extraction and WHERE clauses. DO NOT do any join operations. Treat this as a single table. The resulting query is the simplified query.

Example Table: bank_data
Column Name            Description
customer.customer_id   Unique identifier for each customer
customer.name          Name of the customer
customer.gender        Gender of the customer
account.account_id     Unique identifier for accounts
account.balance        Current balance of the account
loan.loan_id           Unique identifier for loans
loan.amount            Loan amount
loan.status            Status of the loan (e.g., Approved/Rejected)

Question 1: "Which customers have an account balance greater than 10,000?"
SQL Query:
SELECT customer.customer_id, customer.name, account.balance
FROM bank_data
WHERE account.balance > 10000;

Question 2: "List all customers with an approved loan."
SQL Query:
SELECT customer.customer_id, customer.name, loan.loan_id, loan.amount
FROM bank_data
WHERE loan.status = 'Approved';

Question 3: "How many male customers have a loan?"
SQL Query:
SELECT COUNT(*) AS male_customers_with_loans
FROM bank_data
WHERE customer.gender = 'M' AND loan.loan_id IS NOT NULL;

Step 2: Getting the final query
Once you get the simplified query, you will have the following:
- A question: A natural language description of the desired query result.
- A simplified schema: A virtual table with columns in the format table_name.column_name, combining data from multiple original tables.
- A simplified query: A SQL query written against the simplified schema.
- An original schema: A set of multiple related tables where the actual data resides.

Your task: Translate the simplified query into a query compatible with the original schema. Ensure the translated query aligns with the intent described in the question. Follow
these steps for translation:

1. Understand the Core Objective from the Question:
   - Identify the goal of the query (e.g., aggregate data, filter specific rows, join information across tables).
2. Map Simplified Schema Columns to the Original Schema:
   - Identify how the columns in the simplified schema correspond to tables and columns in the original schema.
3. Construct Necessary Joins:
   - If the original schema splits data across multiple tables, determine the joins needed to recreate the relationships.
4. Translate Filters and Conditions:
   - Map WHERE clauses, conditions, and filters in the simplified query to the original schema.
5. Adapt Query Logic (Aggregation, Sorting, etc.):
   - Match aggregations, grouping, or ordering logic from the simplified query to the original schema.
6. Validate the Final Query Against the Question:
   - Review the final query to ensure it satisfies the question and produces the intended result.

<examples>

Key Considerations:
- Use the Question as a Guide: Align the query logic with the intent expressed in the question.
- Simplified Schema as a Mapping Tool: Treat the simplified schema as a bridge between the question and the original schema.
- Validation: Ensure the translated query runs against the original schema and produces the intended result.

A.2 Prompt for UNJOIN MP: Step 1: Query Generation on Simplified Schema

Listing 2: SQL Query Generation Prompt: UNJOIN MP Step 1

You are an expert at semantic parsing. You will be given a schema which has only one table and multiple columns, and a question. Please return the SQL query with the correct format and syntax pertaining to that question. Remember to focus on getting the correct column extraction and WHERE clauses. DO NOT perform any join operations. Treat this as a single table.

<examples>

Instructions:
1. The query should strictly adhere to the schema provided.
2. Ensure correct SQL syntax with SELECT, FROM, WHERE, GROUP BY, and ORDER BY clauses as needed.
3. The output query must be structured, readable, and executable in a standard SQL database.

Output:

A.3 Prompt for UNJOIN MP: Step 2: Query Translation

Listing 3: SQL Query Generation Prompt: UNJOIN MP Step 2

You are an expert at semantic parsing. You will be provided:
- A question: A natural language description of the desired query result.
- A simplified schema: A virtual table with columns in the format table_name.column_name, combining data from multiple original tables.
- A simplified query: A SQL query written against the simplified schema.
- An original schema: A set of multiple related tables where the actual data resides.

Your task:
1. Generate a SQL query based on a simplified schema.
2. Translate the simplified query into a query compatible with the original schema.
3. Ensure the translated query aligns
with the intent described in the question.

### Steps for Translation:

1. **Understand the Core Objective from the Question**:
   - Identify the goal of the query (e.g., aggregate data, filter specific rows, join information across tables).
2. **Map Simplified Schema Columns to the Original Schema**:
   - Identify how the columns in the simplified schema correspond to tables and columns in the original schema.
3. **Construct Necessary Joins**:
   - If the original schema splits data across multiple tables, determine the joins needed to recreate the relationships.
4. **Translate Filters and Conditions**:
   - Map WHERE clauses, conditions, and filters in the simplified query to the original schema.
5. **Adapt Query Logic (Aggregation, Sorting, etc.)**:
   - Match aggregations, grouping, or ordering logic from the simplified query to the original schema.
6. **Validate the Final Query Against the Question**:
   - Review the final query to ensure it satisfies the question and produces the intended result.

<examples>

### Key Considerations:
- **Use the Question as a Guide**:
  - Align the query logic with the intent expressed in the question.
  - The question may highlight details (e.g., time ranges, specific groups) that must be included in the query.
- **Simplified Schema as a Mapping Tool**:
  - Treat the simplified schema as a bridge between the question and the original schema.
  - Focus on accurately mapping the simplified columns to the original schema.
- **Validation**:
  - Ensure the translated query runs against the original schema and produces the intended result.
**Output:**

B Schema Simplification Algorithm and Example

Algorithm 1 Schema Simplification
 1: Input: Set of databases D
 2: Output: Simplified schema dictionary S
 3: S ← {}                        ▷ Initialize empty dictionary
 4: for all database d in D do
 5:     S[d.name] ← [ ]           ▷ Initialize list for each database
 6:     for all table t in d.tables do
 7:         for all column name c in t.columns do
 8:             s ← t.name + '.' + c
 9:             Append s to S[d.name]
10:         end for
11:     end for
12: end for
13: return S

C Appendix: Baseline Results

BIRD Dataset                GPT-4o          Gemini          LLAMA 70B
Method                      QE      EM      QE      EM      QE      EM
Standard Prompting Baselines
CoT                         87.95   44.38   73.09   42.40   91.27   53.55
PoT                         72.49   43.49   71.76   46.67   83.09   51.47
MP                          73.19   44.54   71.50   44.78   82.89   51.92
SC                          73.79   42.86   70.95   45.19   86.22   52.06
Table QA (SQL-based) baselines
TabSQLify                   66.83   42.41   70.32   44.12   77.78   49.94
UNJOIN
UNJOIN SP                   88.55   51.74   78.51   44.73   91.97   52.10
UNJOIN MP                   89.75   50.36   81.73   48.40   96.28   56.35

Table 10: QE and EM scores on the BIRD dataset.

D Evaluation Metrics across different models

E Further Discussion

Reason: SQLCoder 34B and 7B Underperformance. SQLCoder 34B's best table recall in UNJOIN MP (60.3 on BIRD) is lower than that of the best-performing version of smaller general models like Llama 3.1 (62.0 on BIRD) (see Table 11), despite its larger size. This suggests that SQLCoder struggles not just with instruction following, but also with schema reasoning, since it is
primarily trained on SQL generation rather than structured multi-table reasoning.

End2End Tables Extraction Sensitivity. Across models, the gap between the best- and worst-performing baselines is more pronounced for column precision and recall than for table precision/recall. For instance, in SPIDER, GPT-4o's column recall (95.5 in UNJOIN SP) is significantly higher than SQLCoder-7B's (54.1 in UNJOIN SP), whereas the table recall gap is smaller. Similarly, we can observe this behavior within the same model at different sizes. For example, on the SPIDER dataset in the UNJOIN SP baseline, Llama 3.3 (70B) achieves a table recall of 74.0, whereas Llama 3.1 (3B) scores 65.9, a difference of only 8.1. However, the column recall for Llama 70B is 89.6, while Llama 3B achieves only 62, resulting in a much larger gap of 27.6. This trend indicates that smaller models struggle significantly more with precise column selection than with table selection, likely due to their weaker contextual understanding and reasoning capabilities.
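Algorithm 1 in Appendix B fully specifies the schema-flattening step, so it can be sketched directly in Python. The `Database`/`Table` classes below are illustrative stand-ins, not the paper's actual data structures:

```python
# Sketch of Algorithm 1 (Schema Simplification) from Appendix B.
# Database/Table are illustrative stand-ins for whatever schema
# representation an implementation actually uses.
from dataclasses import dataclass, field

@dataclass
class Table:
    name: str
    columns: list[str]

@dataclass
class Database:
    name: str
    tables: list[Table] = field(default_factory=list)

def simplify_schema(databases: list[Database]) -> dict[str, list[str]]:
    """Flatten each database into one virtual table whose columns are
    'table_name.column_name' strings, as in Algorithm 1."""
    simplified: dict[str, list[str]] = {}
    for db in databases:
        simplified[db.name] = []
        for table in db.tables:
            for column in table.columns:
                simplified[db.name].append(f"{table.name}.{column}")
    return simplified

bank = Database("bank", [
    Table("customer", ["customer_id", "name", "gender"]),
    Table("loan", ["loan_id", "amount", "status"]),
])
print(simplify_schema([bank])["bank"])
# ['customer.customer_id', 'customer.name', 'customer.gender',
#  'loan.loan_id', 'loan.amount', 'loan.status']
```

The flattened column list is what the Step 1 prompts present as a single table, which is why the simplified query needs no joins.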
Figure 4: Schema Simplification

                    Table Precision                         Table Recall
Model               CoT    CoT-SS  UNJOIN SP  UNJOIN MP     CoT    CoT-SS  UNJOIN SP  UNJOIN MP
GPT-4o              59.30  57.20   73.60      74.70         58.60  58.00   73.40      74.20
GPT-4o Mini         57.40  58.80   75.00      74.80         56.00  58.30   72.40      73.30
Gemini 1.5 Flash    53.00  62.90   65.30      66.20         52.40  61.00   65.30      65.70
Mixtral             36.00  62.10   57.00      66.00         34.30  60.80   60.00      67.00
SQLCoder-7B         56.00  62.60   22.10      60.60         50.70  63.10   23.30      58.60
SQLCoder-34B        51.80  51.60   70.10      60.80         47.80  47.80   71.60      60.30
Llama 3.1 (3B)      53.30  53.80   53.00      50.00         52.30  51.90   62.00      49.00
Llama 3.3 (70B)     67.10  74.30   75.30      76.80         67.20  71.20   74.80      75.00

                    Column Precision                        Column Recall
GPT-4o              68.80  68.00   86.30      86.60         68.20  65.80   85.40      85.00
GPT-4o Mini         67.20  69.40   83.10      85.00         66.00  65.10   80.00      83.00
Gemini 1.5 Flash    54.40  88.60   82.80      83.80         53.30  85.40   82.90      82.30
Mixtral             42.00  71.50   62.20      75.00         40.00  54.00   62.20      73.00
SQLCoder-7B         55.60  69.80   17.10      62.50         52.60  66.40   20.20      61.10
SQLCoder-34B        55.40  56.10   75.50      67.20         51.80  44.60   76.80      63.90
Llama 3.1 (3B)      59.40  53.70   43.40      52.00         58.00  47.00   66.70      54.30
Llama 3.3 (70B)     81.80  84.80   85.00      86.00         81.60  82.90   84.90      84.90

Table 11: Evaluation of Table/Column Precision and Recall on the BIRD dataset.
TabSTAR: A Foundation Tabular Model With Semantically Target-Aware Representations

Alan Arazi, Eilam Shapira, Roi Reichart
{alanarazi7, eilam.shapira, roireichart}@gmail.com
Technion - IIT

Abstract

While deep learning has achieved remarkable success across many domains, it has historically underperformed on tabular learning tasks, which remain dominated by gradient boosting decision trees (GBDTs). However, recent advancements are paving the way for Tabular Foundation Models, which can leverage real-world knowledge and generalize across diverse datasets, particularly when the data contains free-text. Although incorporating language model capabilities into tabular tasks has been explored, most existing methods utilize static, target-agnostic textual representations, limiting their effectiveness. We introduce TabSTAR: a Foundation Tabular Model with Semantically Target-Aware Representations. TabSTAR is designed to enable transfer learning on tabular data with textual features, with an architecture free of dataset-specific parameters. It unfreezes a pretrained text encoder and takes as input target tokens, which provide the model with the context needed to learn task-specific embeddings. TabSTAR achieves state-of-the-art performance for both medium- and large-sized datasets across known benchmarks of classification tasks with text features, and its pretraining phase exhibits scaling laws in the number of datasets, offering a pathway for further performance improvements.

1 Introduction

In recent years, deep learning has profoundly reshaped research and practice in computer vision [48, 63, 32, 16] and natural language processing [53, 5, 76, 15, 12]. This transformation was notably accelerated by the rise of foundation models [7, 84, 4], capable of cross-modal understanding and generalization from massive pretraining across heterogeneous data sources. Importantly, they enabled an end-to-end approach that outperformed previous modular alternatives [68, 1].
Moreover, deep learning models excel at transfer learning [87], generalizing from their pretraining data to new tasks. Their strength, combined with techniques like In-Context Learning (ICL) [12] and Parameter-Efficient Fine-Tuning (PEFT) [40], has enabled rapid adaptation to new tasks with only limited labeled data.

Despite this progress, deep learning has historically lagged behind gradient-boosted decision trees (GBDTs) on tabular data [11, 14, 57], in both classification and regression tasks [62, 10, 30, 52]. The heterogeneity of tabular data, which lacks the spatial locality of images or the sequential order of text, makes it more challenging for deep models to learn. Consequently, GBDTs have remained the de facto standard for tabular learning, offering strong out-of-the-box performance, computational efficiency, and built-in inductive biases (e.g., robustness to skewed feature distributions and automatic feature selection) that make them especially well-suited to heterogeneous datasets [30]. Nonetheless, GBDTs cannot be pretrained to reuse strong representations for downstream tasks. This limitation becomes critical in low-data settings like those often found in healthcare applications [50]. Crucially, they must rely on external embedding models to process unstructured data types like text and images, yielding fixed feature representations that cannot be finetuned for a specific prediction task.

Preprint. Under review. arXiv:2505.18125v1 [cs.LG] 23 May 2025

Table 1: A binary classification toy dataset for hospital patient release outcomes. Decision is the target variable. Age (numerical), Department (high-cardinality), and Report (textual) are the features.

Age  Department  Report                                                Decision
45   Cardiology  Mild chest discomfort.                                Released
62   Neurology   Complaints of headache and occasional dizziness.      Hospitalized
38   Oncology    Completed treatment cycle without adverse reactions.  Released
55   Neurology   Reports episodes of vertigo and memory lapses.        Hospitalized

The emerging field of Tabular Foundation Models (TFMs) has begun addressing these shortcomings, introducing powerful cross-dataset learning strategies [80, 45, 37]. However, the flagship model TabPFN-v2 [37] still handles text inputs no more flexibly than conventional GBDTs. This design choice is not incidental; historically, tabular benchmarks have prioritized numerical datasets without free-text features, largely for ease of modeling and evaluation. A recent study [46] of mainstream tabular dataset benchmarks [24, 22, 83, 52] found that half of these datasets are more than 20 years old, making them a poor representation of modern real-world data.

Real-world tabular datasets often include high-cardinality1 and free-text features [13], illustrated by a toy example in Table 1. In such datasets, free-text features (e.g., Report) carry rich semantic information critical for tasks like predicting whether a patient will be discharged from the hospital or require continued care. Yet, most models encode them in a target-agnostic manner, delegating to a generic embedding that fails to capture task-specific nuances for predicting Decision. Crucially, that same embedding would have been used for a different target variable (e.g., Treatment Cost). Similarly, categorical features with dozens of unique values (e.g., Department) are difficult to encode efficiently without external knowledge, making naive approaches brittle and limiting generalization. Importantly, the column names, which could guide the model toward more effective representations, are typically ignored. Addressing these limitations is crucial for developing tabular models that leverage semantic information, transfer knowledge from many datasets, and generalize across domains.
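The target-agnostic limitation can be made concrete with a small sketch. The hash-based embedder below is a deterministic stand-in for a frozen off-the-shelf text encoder, not any real model; the point is only that its output cannot depend on the prediction task:

```python
# Illustrative sketch: a frozen, target-agnostic text embedding, stubbed
# here with a deterministic hash. It returns the same vector for the
# "Report" column no matter which target variable is being predicted.
import hashlib

def frozen_embed(text: str, dim: int = 4) -> list[float]:
    """Stand-in for an off-the-shelf sentence embedder: depends only on text."""
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:dim]]

report = "Mild chest discomfort."
# Whether the task predicts 'Decision' or 'Treatment Cost', the feature
# representation is identical -- no task-specific nuance is captured.
emb_for_decision = frozen_embed(report)
emb_for_cost = frozen_embed(report)
assert emb_for_decision == emb_for_cost
```

A target-aware encoder would instead condition the representation on the task, which is the gap the next section's model is built to close.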
In this paper, we introduce TabSTAR: a novel Tabular Foundation Model with Semantically Target-Aware Representations,2 designed explicitly for end-to-end handling of purely textual features. By integrating an unfrozen text encoder at its core, TabSTAR can optimize free-text feature representations, demonstrating their clear superiority over alternative frozen embedding approaches. Additionally, it introduces a novel approach of target-aware tokens, which inject semantic information about the target variable as part of the input, allowing for efficient parameter sharing and resulting in an architecture with no dataset-specific parameters (see Figure 1). TabSTAR's training is highly efficient3 and its performance steadily improves with more pretraining data. Empirically, TabSTAR achieves state-of-the-art (SOTA) performance on classification benchmarks containing substantial textual content, demonstrating significant advances over GBDTs and leading TFMs.

2 Related Work

This section reviews prior work in five areas relevant to our approach. We begin with deep learning methods tailored for tabular data, which were applied to a single dataset. We then discuss cross-dataset transfer learning techniques that improve generalization by leveraging related datasets. Next, we cover the field of TFMs, which aim to generalize across diverse tasks and datasets through large-scale pretraining. Finally, we review recent work on applying large language models (LLMs) to tabular data and elaborate on existing AutoML [33] multimodal solutions.

Deep Learning on a Single Tabular Dataset. Several architectures have been proposed to enhance deep learning for tabular data [67, 44, 78, 81]. TabNet [3] and TabTransformer [42] introduced attention mechanisms into tabular deep learning, while FT-Transformer [25] and its improvement [26] jointly integrated numerical and
categorical features into a transformer [76]. Other novel approaches leveraged inter-example information at inference time, with SAINT [66] proposing row-level attention between examples, Non-Parametric Transformers [47] processing the entire dataset, including labels, in a single forward pass, and TabR [27] combining a k-nearest-neighbor mechanism with a traditional Multi-Layer Perceptron (MLP) architecture. Recent works such as TabM [28] and RealMLP [38] focused on refining MLPs without an attention component. Despite these innovations, single-dataset deep learning models have not yet convincingly outperformed GBDTs [62, 30, 61]. Furthermore, none of them addressed the challenge of modeling tabular datasets with rich textual features.

1 High-cardinality features are categorical columns with a large number of unique values.
2 Code is available at https://github.com/alanarazi7/TabSTAR.
3 Pretraining within 48 hours on a single A40 GPU. Finetuning with PEFT for low memory footprint.

Cross-Dataset Transfer Learning. Deep learning was proven to shine when performing transfer learning in many machine learning domains [87]. Motivated by this success, [50, 85] proved that cross-dataset learning can boost single-dataset performance, but were limited to strict requirements such as partial overlap of feature names. To address this limitation, TransTab [79] integrated semantic understanding into feature tokenization, and XTab [86] pretrained a transformer backbone with dataset-specific parameters, proving that pretraining contributes to a stronger initialization for a downstream task. Despite their small scale, these studies demonstrated cross-dataset transfer learning's potential, laying essential groundwork for the rise of TFMs.

Tabular Foundation Models. TFMs represent an emerging paradigm in tabular learning.
While the definition is still evolving, we adopt the framing proposed by [74], which identifies key desired characteristics of TFMs: large-scale pretraining with adaptability to downstream tasks, mixed-type column support, cross-domain generalization, use of textual metadata,4 and column-order invariance. TabPFN [35] is recognized as the first TFM, and its successor TabPFN-v2 [37] currently sets the SOTA in tabular learning, becoming a popular approach for TFMs [58, 20]. TabPFN-v2 was the first model to consistently outperform GBDTs on medium-sized datasets, by pretraining Bayesian Prior-Data Fitted Networks (PFNs) [55] on 130 million synthetic datasets. Using ICL at inference time, it accepts up to 10,000 examples as input and predicts without updating its weights. Nonetheless, similarly to GBDTs, TabPFN-v2 uses off-the-shelf embeddings for text features, limiting its effectiveness.

CM2 [85], CARTE [45], and TP-BERTa [80] represent a shift toward semantic tabular modeling, leveraging textual signals and external knowledge at a greater scale. Unlike prior methods, these models transfer knowledge via language representations. CM2 pretrained on over 2,000 datasets, but did not focus on free-text features and used static word embeddings without further finetuning them. CARTE encodes tables as star-shaped graphs, jointly representing features by their names and values, and applies attention over the graph to capture contextual relations. While effective for high-cardinality features, it lacks exposure to longer free-text fields during pretraining and was proven useful mainly for small datasets. TP-BERTa adapts RoBERTa [51] with intra-feature attention and a tokenization scheme that maps numerical values into discrete relative-magnitude bins, to address the weakness of language models when tokenizing numbers [72]. Although it performs well, its use of
dataset-specific output layers limits scalability and complicates multi-task learning. Consequently, they trained two separate models,5 wasting potential for better cross-dataset learning. Notably, none of these approaches finetune semantic representations during downstream task training. In our work, we demonstrate that this is critical to align textual and tabular features.

Large Language Models for Tabular Data. The remarkable success of LLMs is unprecedented [12, 56]. During the past years, several research attempts have tried to combine LLMs and tabular data. One line of work uses LLMs directly for tabular prediction, by converting tabular data into serialized text. TabLLM [34] assessed LLMs under few-shot scenarios, while Tabula-8b [23] finetuned the Llama 3-8B model extensively on tabular data. Although useful for few-shot learning, these models are computationally expensive,6 suboptimal for numerical features [72, 74], and potentially compromised on widely-used benchmarks due to prior exposure during training [8]. While current generations of LLMs weren't adopted for tabular learning, their emergent knowledge from pretraining could be crucial when textual features are present [74, 19]. Additionally, LLMs can be used in multiple aspects of tabular learning, as they seem to be promising synthetic data generators [9, 65], useful data cleaners [6], and clever feature engineers [36].

4 Contextual information such as the dataset description, column names and category names.
5 One for classification and one for regression. A joint model for both tasks performed significantly worse.
6 Llama 3-8b has orders of magnitude more parameters than TP-BERTa, which has roughly 110M parameters.
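The row-serialization idea behind this line of work (e.g., TabLLM-style prediction) can be sketched as follows; the exact template varies by paper, so the format below is an illustrative assumption:

```python
# Illustrative sketch of serializing a tabular row into text for LLM-based
# prediction, in the spirit of TabLLM-style approaches. The template is an
# assumption, not the exact format used by any cited paper.
def serialize_row(row: dict[str, object], target: str) -> str:
    """Turn a feature dict into a natural-language prompt for an LLM."""
    parts = [f"The {col} is {val}." for col, val in row.items()]
    return " ".join(parts) + f" What is the {target}?"

row = {"Age": 62, "Department": "Neurology",
       "Report": "Complaints of headache and occasional dizziness."}
print(serialize_row(row, "Decision"))
# The Age is 62. The Department is Neurology. The Report is Complaints of
# headache and occasional dizziness. What is the Decision?
```

Serialization makes the full LLM vocabulary and world knowledge available, but every numeric value passes through subword tokenization, which is one source of the numerical weakness noted above.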
Figure 1: The TabSTAR architecture illustrated with our toy dataset. The model processes numerical features, textual features, and all possible target values for classification. Multimodal AutoML Historically, textual tabular datasets have been largely overlooked in classical tabular benchmarks. However, the AutoML [33] community has made significant progress in developing multimodal solutions. In particular, AutoGluon [18] introduced the AutoML Multimodal Benchmark [60], initially focusing on text features and later evolving into AutoGluon-Multimodal [70, 69], which incorporates images as well. This powerful AutoML framework can fuse text and image foundation models with tabular models and ensemble multiple models through a meta-learning approach [21], making it one of the few systems able to refine static textual representations via joint learning. Nevertheless, this line of work should not be seen as a single model but rather as a highly optimized, production-ready system. According to the authors, it is "a collection of tricks that significantly enhance performance" [70], establishing itself as a robust approach for multimodal, multi-model tabular learning. However, this line of work remains somewhat orthogonal to the development of novel TFMs. 3 TabSTAR In this section, we introduce TabSTAR: a Tabular Foundation Model with Semantically Target-Aware Representations. Our training framework consists of two stages: (1) Pretraining, where it is pretrained over a corpus of tabular datasets7 in a multi-task regime, mixing classification with regression tasks, then (2) Finetuning, where the pretrained model is further trained with LoRA [41] on a single downstream task. TabSTAR is designed to
enable effective cross-dataset learning by applying supervised learning on the target variable in both stages. At its core, it uses an unfrozen encoder-only language model, which can invoke world knowledge acquired during the language model pretraining.8 The encoder is combined with a tabular-specific architecture tailored to structured data, mitigating the known limitations of language models in tabular settings [72, 74]. TabSTAR's architecture comprises five core modules: (1) Verbalization, mapping every feature into a textual representation composed of both the column name and value, with special treatment of numerical features to preserve full numerical precision; (2) Encoding, transforming semantic and numerical data into meaningful embeddings of the same dimension; (3) Fusion, integrating textual and numerical representations; (4) Interaction, modeling dependencies and relationships between different elements through self-attention and cross-element interactions; and (5) Prediction, where Interaction's outputs are projected into a real value for regression or a probability distribution for classification. Figure 1 illustrates the architecture, Appendix A elaborates, and Appendix B discusses the training. A key innovation of TabSTAR is the introduction of target-aware tokens, a novel approach that integrates the target variable's identity as an input to the model. Unlike existing TFMs [37, 45, 80, 82, 86, 79], which treat the target value as a mere label, TabSTAR fuses target-awareness from the very beginning. For classification tasks, each target value is verbalized and encoded like any other feature. Then, features and target tokens interact with each other, building representations that are then used for prediction. Crucially, this target-awareness allows parameter sharing between all target tokens, which can later use a shared prediction head that maps tokens to probabilities regardless of the number of classes and their identity.
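As an illustration of this mechanism, the following is a minimal sketch of a shared prediction head applied independently to each target token; the class name, weight initialization, and dimensions are hypothetical and not the authors' implementation:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class SharedPredictionHead:
    """One linear head shared by ALL target tokens of ALL datasets.

    Each of the C contextualized target-value tokens is projected
    independently to a scalar score; a softmax over the C scores yields
    class probabilities. Because the head's parameters do not depend on
    C, the same weights serve binary, multiclass, and cross-dataset
    settings alike, with no dataset-specific components.
    """

    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=dim ** -0.5, size=(dim,))
        self.b = 0.0

    def __call__(self, target_tokens):
        # target_tokens: (C, dim) embeddings produced by the Interaction module.
        scores = target_tokens @ self.w + self.b  # (C,)
        return softmax(scores)

head = SharedPredictionHead(dim=16)
probs_binary = head(np.random.default_rng(1).normal(size=(2, 16)))  # 2 classes
probs_multi = head(np.random.default_rng(2).normal(size=(5, 16)))   # 5 classes, same weights
```

Note how the same `head` object handles both a 2-class and a 5-class task, which is the property that enables parameter sharing across datasets.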
By doing so, TabSTAR eliminates the need for dataset-specific components commonly found in prior work [25, 86, 80]. TabSTAR's flexible architecture effortlessly scales9 to any dataset size, and handles any number of classes in multiclass classification tasks.
7 Ranging from metadata-rich, text-heavy datasets to numeric-only tables lacking column names.
8 Note that the language-model pretraining occurs before TabSTAR's pretraining. Unless specified differently, the term pretraining refers to TabSTAR's pretraining, which assumes the use of a pretrained language model.
9 Except when the number of features becomes very large, where memory limitations may arise.
Table 2: An illustrative verbalization of the first patient of Table 1. Each semantic feature is verbalized with its name and value. The numerical Age value 45 is standardized (mapped into z-scores, e.g., 0.27) and binned (providing a range to the verbalization, e.g., 40-50, and its quantile). The target variable Decision is mapped into its two possible elements, regardless of its original true value.

Name        Value                   Semantic                           Numerical
Age         45                      "Age: 40–50 (Quantile 50–60%)"     0.27
Department  Cardiology              "Department: Cardiology"           -
Report      Mild chest discomfort.  "Report: Mild chest discomfort."   -
Decision    Hospitalized            "Target. Decision: Hospitalized"   -
Decision    Released                "Target. Decision: Released"       -

Verbalization All the features and each of the target values are processed into a sequence of elements. Numerical features are processed into two inputs: a numerical one and a semantic one. The numerical input is standardized using z-scores, with outlier clipping at
±3 standard deviations. In addition, they are verbalized using quantile-based binning into 10 bins, a novel approach to mitigate the precision loss inherent in language models [72]. Appendix A.1 shows a precise example and §6 discusses different verbalization strategies. In contrast, semantic features are directly verbalized by concatenating the feature name and textual value, without any numerical representation. The target variable is also included as part of the input: in classification tasks, each of the C possible values is represented by an element, constant for every example, while the true value remains hidden. For regression tasks, a single element is verbalized, carrying only the target name. Table 1 shows a toy dataset of patient records and outcomes and Table 2 shows the verbalization for the first patient. Encoding We employ a pretrained e5-small-v2 [77] embedding model for semantic encoding, chosen for its strong performance on the MTEB benchmark [54] with a relatively modest parameter count. By unfreezing half of its layers, the representations are optimized for predicting the target variable, which significantly impacts TabSTAR's performance (see §6). Each verbalization element is encoded independently into a semantic representation, with attention applied between tokens within each sequence element. In parallel, we encode standardized numerical values by projecting them into the same dimension using a small MLP. For the patient in Table 2, this results in a numerical embedding for Age alongside semantic representations for each of the five verbalizations. Fusion To obtain a unified representation for each sequence element, we apply a fusion block consisting of a single encoder-only transformer layer. For each numerical feature, the block attends over its numerical and semantic embeddings, producing a fused representation.
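The numerical preprocessing described under Verbalization above (z-score standardization with ±3σ clipping, plus 10-bin quantile verbalization) can be sketched as follows; function names and the exact string format are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def standardize(values, clip=3.0):
    # z-score standardization with outlier clipping at ±3 standard deviations.
    z = (values - values.mean()) / values.std()
    return np.clip(z, -clip, clip)

def verbalize_numeric(name, value, train_values, n_bins=10):
    # Quantile-based binning: the verbalization carries the bin's value
    # range and its quantile, trading exact numeric precision for a
    # format that language models handle more reliably.
    edges = np.quantile(train_values, np.linspace(0, 1, n_bins + 1))
    b = int(np.clip(np.searchsorted(edges, value, side="right") - 1, 0, n_bins - 1))
    lo, hi = edges[b], edges[b + 1]
    return f"{name}: {lo:g}-{hi:g} (Quantile {b * 10}-{(b + 1) * 10}%)"

# Hypothetical training column of patient ages.
ages = np.array([22, 30, 38, 41, 45, 47, 52, 58, 63, 70], dtype=float)
z = standardize(ages)                       # numerical input for the MLP encoder
text = verbalize_numeric("Age", 45.0, ages)  # semantic input for the text encoder
```

The same raw value thus feeds the model twice: once as a clipped z-score and once as a bin-range string, mirroring the two Age inputs in Table 2.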
In our running example, the representation of Age now jointly captures both its semantic context (the fact that the value represents age) and its numerical value (the patient's age, 45, or 0.27 after standardization). Interaction The fused, semantically-rich and numerically-grounded representations of all elements interact via a 6-layer Transformer encoder [76]. Each input element is now a token, with feature tokens and target tokens all attending to each other. Unlike standard language models, which integrate positional encoding, the Interaction module's inputs are order-invariant, a desideratum for TFMs, as defined by [74]. The encoder produces contextualized representations for each target value. In our example, this yields dedicated embeddings for the Released and Hospitalized target values. The role of these representations is to carry information about the likelihood of each value being the true value. Prediction TabSTAR is designed for cross-dataset learning, with shared regression and classification heads used during both pretraining and finetuning. For classification, each of the C target tokens is processed independently through the same classification head, which projects them to scores. We then apply a softmax over all the possible values to yield a probability distribution. Crucially, the fact that target tokens for every class in every dataset share the same classification head allows efficient parameter sharing, flexibly supports any number of output classes, and removes any need for dataset-specific parameters. This is not only efficient during
pretraining, but also provides a better initialization for finetuning. In our example, both the Released and Hospitalized tokens go through the same classification head, which maps them from representations to logits. Applying softmax yields predicted probabilities. For regression tasks, a single target token is projected into a real value. 4 Experiments The TabSTAR Pretraining Corpus While TabSTAR could be pretrained on a massive scale, for this work we limit ourselves to a modest pretraining corpus focusing on classification, as we believe that TabSTAR's inductive biases are best suited to shine in this task. We manually curate a pretraining corpus of 350 high-quality tabular datasets (253 classification, 97 regression), in a tedious process in which we uncover numerous duplications in the most popular tabular repositories, OpenML [75] and Kaggle,10 as elaborated by [73]. Our datasets are sourced from popular benchmarks [24, 45, 22, 21, 30, 60, 52, 31], which have almost no representation of textual tabular datasets. Thus, we further expand our corpus, focusing on classification datasets with rich semantic content. See Appendix C for more details. Benchmark Tabular datasets with free-text have seen little prior research, and accordingly, benchmarks are incredibly rare. Therefore, we consider all the possible datasets from the AutoML Multimodal Benchmark [60], from the analysis of free-text and high-cardinality features by [31], and from the CARTE paper [45]. After a deduplication process we end up with 50 datasets. However, there are two important limitations: first, the benchmark is heavily biased towards regression, with 36 datasets in total. Secondly, 29 out of these 36 datasets were solely contributed by the CARTE benchmark, which focuses more heavily on high-cardinality features as it was pretrained over knowledge graphs.
While our main motivation is classification tasks with textual features, we nevertheless decide to evaluate on the full set of 50 datasets, although it is heavily biased towards regression problems and high-cardinality features, rather than classification and free-text (see Appendix D). Baselines We compare TabSTAR against a diverse set of baselines. For tree-based methods, we evaluate CatBoost [57], XGBoost (XGB) [14], and Random Forest (RF) [11] with the default configuration proposed by [25]. For CatBoost and XGBoost we consider a tuned version, where hyperparameters are optimized separately for each task using random search with 5-fold cross-validation under a 4-hour budget on 8 CPU cores. Among TFMs, we evaluate TabPFN-v2 [37] and CARTE [45]. For CARTE, we tune only the learning rate for each task, following the original paper. Since the public TabPFN-v2 model does not support text, we use their closed-source API client.11 For models lacking native support for textual features, we embed text using e5-small-v2 [77], allowing a fair comparison. For more details about the hyperparameters for each baseline, as well as the exclusion of models such as TP-BERTa due to potential leakage concerns, see Appendix E. Experimental Setup Each of the 50 datasets in the benchmark is evaluated with 10 random train-test splits (90% training, 10% testing), resulting in 500 runs per model. While 30 of the datasets have more than 10,000 examples, the evaluated TFMs have strict limitations. TabPFN-v2 employs ICL and thus receives as input
at most 10,000 examples. Although CARTE imposes no size cap,12 it suffers from inefficiency and lacks a preset configuration, requiring six learning-rate trials and totaling 6,000 slow GPU runs. Because of these important limitations, we consider two experiment conditions: (1) 10K: Each model is trained13 over at most 10,000 training examples, and (2) Unlimited: We add TabSTAR-Unlimit, CatBoost-Tuned-Unlimit, and XGBoost-Tuned-Unlimit and evaluate them on the full version of the 30 datasets,14 while retaining 10K baselines as weaker reference points. We exclude the untuned GBDTs and keep the same number of models as in the 10K condition. The TabSTAR Training To maximize the value of cross-dataset learning, instead of pretraining TabSTAR once, we create five dataset splits. Each variant is pretrained on the 350 pretraining datasets and 40 of the benchmark datasets, while the other 10 serve exclusively as its test set. Crucially, the whole collection was carefully curated to prevent any data leakage from duplicate or overly similar datasets. For finetuning, while dataset-specific hyperparameter tuning can boost performance, we believe that robust out-of-the-box defaults are essential for TFMs and their evaluation, following TabPFN-v2's approach. Therefore, we use a default hyperparameter configuration that was found robust over a disjoint set of tabular datasets, as detailed in Appendix B.2.
10 https://www.kaggle.com/datasets
11 https://github.com/PriorLabs/tabpfn-client
12 In their own paper, CARTE was evaluated only over up to 2,048 examples, without scaling guarantees.
13 While TabPFN-v2 isn't technically trained, we adopt this term for conciseness.
14 We technically cap the number of examples at 100,000 for computational efficiency.
5 Results We evaluate each model using AUROC (classification) and R² (regression) as metrics.
Following [37], we normalize scores per dataset split to the [0,1] range, using the best and worst model performance as anchors.15 The normalized scores are averaged across all runs, with 95% CIs. Performance for all models on both conditions is shown in Figure 2 (classification) and Figure 3 (regression). Besides their example limit, TabPFN-v2 cannot process 4 datasets (more than 10 target classes or more than 500,000 cells) and CARTE cannot handle 15 (a bug in their PowerTransformer implementation). Reported averages for these models are computed only over the datasets where evaluation is feasible. Appendix F expands on dataset-level performance, head-to-head comparisons and running times. Figure 2: Comparison of normalized scores with 95% CIs between TabSTAR and baseline models in classification tasks, evaluated on up to 10,000 examples (left, 14 datasets) and above 10,000 (right, 10 datasets). In classification problems, TabSTAR consistently achieves SOTA performance. This is evident both when restricting the dataset size to 10,000 examples and when using larger datasets in the unlimited condition. For the 10K condition, TabSTAR achieves a 0.809 score, performing better than TabPFN-v2 (0.783) and significantly better than GBDTs (0.756 CatBoost-Tuned, 0.744 XGB-Tuned). When analyzing head-to-head comparisons (Appendix F.2), TabSTAR outperforms TabPFN-v2 (7/11 datasets), XGB-Tuned (10/14) and CatBoost-Tuned (11/14).
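The per-split score normalization described above (best model anchored at 1, worst at 0, the rest linearly scaled) can be sketched as follows; the function name and example scores are illustrative:

```python
def normalize_scores(scores):
    """Min-max normalize raw metric scores of all models on one dataset split.

    scores: dict mapping model name -> raw metric (e.g., AUROC).
    The best model gets 1, the worst gets 0, and the rest are scaled
    linearly in between; ties across all models map to 1.0.
    """
    best, worst = max(scores.values()), min(scores.values())
    span = best - worst
    if span == 0:  # all models tied on this split
        return {m: 1.0 for m in scores}
    return {m: (s - worst) / span for m, s in scores.items()}

# Hypothetical AUROC values for one split of one dataset.
split_auroc = {"TabSTAR": 0.91, "TabPFN-v2": 0.89, "CatBoost": 0.85}
normed = normalize_scores(split_auroc)  # TabSTAR -> 1.0, CatBoost -> 0.0
```

These normalized values are then averaged over all runs to produce the aggregate scores reported in Figures 2 and 3.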
For the Unlimited condition, TabSTAR-Unlimit achieves a 0.874 score, significantly above the second-best CatBoost-Tuned-Unlimit with 0.734. Importantly, all Unlimit variants surpass the 10K ones, emphasizing the importance of scaling. Figure 3: Comparison of normalized scores with 95% CIs between TabSTAR and baseline models in regression tasks, evaluated on up to 10,000 examples (left, 36 datasets) and above 10,000 (right, 20 datasets). Although regression is not our main focus, TabSTAR achieves competitive results in the 10K condition, but clearly does not set the SOTA. Surprisingly, while TabPFN-v2 is superior, it significantly underperforms compared to GBDTs, which dominate this category. This emphasizes the need for better modeling of textual tabular learning, especially since TabPFN-v2 has shown remarkable performance on non-textual tabular datasets, and CARTE set the SOTA for small datasets. When analyzing the Unlimited variants, TabSTAR scales well and surpasses other TFMs, which cannot scale, but the gap from GBDTs remains significant. §7 discusses this limitation and suggests promising directions for future generations of TabSTAR to achieve SOTA in regression as well.
15 For a single run, the best model gets 1, the worst gets 0 and the rest are linearly scaled accordingly.
6 Analysis We analyze the factors contributing to TabSTAR's strong performance by addressing three key research questions: Q1: How important is the encoder language model unfreezing? Q2: Does the number of datasets during pretraining contribute to the downstream task performance?
and Q3: How do different verbalization methods of numerical features impact performance? To answer these questions, we pretrain several variants of TabSTAR for each analysis, limiting ourselves to a subset of the tabular datasets used for the main experiment (see §4). Specifically, each variant is pretrained over 256 datasets,16 including 30 datasets from our benchmark, and evaluated over the remaining 20 datasets (12 regression, 8 classification). This reduced setup allows leveraging transfer learning and exploiting our corpus, without the burden of training multiple folds per variant. Results are reported with the same normalized metric used in §5, scaling the performance to the [0,1] range. Appendix G.1 lists the 20 datasets used for evaluation along with per-dataset results. Figure 4: Performance as a function of the number of encoder layers unfrozen: Validation loss during TabSTAR's pretraining (left) and normalized scores with 95% CIs on the downstream tasks (right). Unfreezing even a single encoder layer significantly improves the performance of TabSTAR. Q1: The Role of the Encoder Unfreezing To investigate whether unfreezing layers of the textual encoder impacts performance, we conduct experiments where we unfreeze varying numbers of the 12 encoder layers during both TabSTAR's pretraining and finetuning stages.17 Figure 4 shows the validation loss during TabSTAR pretraining (left)
and the normalized score on the downstream tasks (right) as a function of the number of unfrozen encoder layers. Notably, unfreezing even a single encoder layer significantly outperforms using static embeddings. Further substantial improvements are observed as more layers are tuned, with the best results achieved when unfreezing 6 layers. While unfreezing 9 layers shows lower performance, it is plausible that adding more datasets to the pretraining phase would affect this finding. See Appendix G.2 for more details.

Table 3: Normalized score with 95% CIs by the number of datasets used during TabSTAR pretraining.

Pretraining Datasets  0              16             64             256
Classification        0.352 ± 0.086  0.450 ± 0.084  0.558 ± 0.086  0.786 ± 0.076
Regression            0.338 ± 0.073  0.395 ± 0.068  0.642 ± 0.066  0.811 ± 0.055

Q2: The Effect of Pretraining To evaluate the impact of pretraining on TabSTAR's downstream performance, we compare a pretrained version of TabSTAR with a version that was finetuned from scratch.18 In line with previous work [86, 82], the pretrained model performs significantly better, highlighting the critical role of transfer learning for TabSTAR's success. To further investigate the effect of the number of pretraining datasets on downstream task performance, we train two additional versions: one pretrained on 16 datasets and another on 64 datasets. As shown in Table 3, increasing the number of pretraining datasets consistently improves performance in both classification and regression tasks. Notably, the substantial gain in regression tasks suggests that TabSTAR's downstream performance in §5 could improve with more pretraining data (see Appendix G.3).
16 Except for variants of Q2, which analyze the effect of the number of datasets on pretraining.
17 For each variant, the number of unfrozen layers remains the same in both pretraining and finetuning.
18 Since LoRA underperforms on random weights, we finetune the entire non-pretrained model.
Q3: Numerical Verbalization A key challenge in integrating language models with numerical data is determining how to best represent numerical values within a linguistic framework. While some semantic tabular methods omit numerical features from the verbalization [79, 82], TP-BERTa [80] introduced Relative Magnitude Tokenization, which encodes numerical information through non-semantic special bin tokens. In contrast, TabSTAR injects semantic numerical information into the verbalization of numerical features, as illustrated in Table 2. To quantify the effect of our novel verbalization, we explore two reduced variants: (1) Name + Bin, which excludes the quantile information, and (2) Name, which omits numeric information entirely and verbalizes the feature name only. Appendix G.4 shows an illustrative example for each variant and presents the full results. As demonstrated in Table 4, our findings reveal that incorporating numerical information significantly enhances performance, highlighting the importance of balancing numerical precision with a representation format that aligns with the language model's parametric knowledge.

Table 4: Normalized score with 95% CIs by the numerical verbalization method.

Verbalization Method  Name           Name + Bin     TabSTAR
Classification        0.386 ± 0.095  0.544 ± 0.093  0.593 ± 0.097
Regression            0.386 ± 0.081  0.584 ± 0.076  0.596 ± 0.079

7 Discussion and Conclusion We introduce TabSTAR, a Tabular Foundation Model with Semantically Target-Aware Representations, which integrates textual features
through an unfrozen pretrained encoder. In addition, its novel target-aware tokens enable efficient cross-dataset generalization without dataset-specific parameters. Despite limited pretraining data and a relatively small text encoder [77], TabSTAR sets the SOTA in tabular classification with textual features, significantly surpassing GBDTs and leading TFMs. Since scaling laws in data and model size have proven themselves for LLMs [43] and TabSTAR improves with the number of pretraining datasets (see §6), future work should scale TabSTAR across both model and data dimensions. For model scaling, we envision a family of model sizes, as is common for LLMs [29, 71, 49], that will allow a trade-off between quality and costs. Data scaling might leverage self-supervised learning [59] over large-scale table corpora [17], or realistic synthetic tabular data generators [9], which have proven successful [37, 2]. At scale, it could potentially unlock few-shot learning capabilities and develop automatic feature-engineering skills [36]. Beyond scaling, TabSTAR's semantic approach has tremendous potential to explicitly incorporate world knowledge, by leveraging LLMs, which to date have had limited impact on tabular learning. As a few motivating examples, LLMs could improve TabSTAR's numerical verbalization binning approach by providing semantically informed thresholds, or provide explicit, contextual world knowledge that could be injected as a strong prior in small-data scenarios. While these directions seem like plausible research paths, they come with a risk of data leakage due to the memorization properties of LLMs [8]. Evaluating TFMs fairly while keeping benchmarks uncontaminated would be an important enabler for tabular research. As a step in this direction, we are releasing several TabSTAR variants, each with a different dataset withheld during pretraining, ensuring that for every dataset there is a TabSTAR model that has never seen it.
We urge fellow researchers to adopt this approach in their own work. While TabSTAR sets a new bar in classification, its regression results lag behind GBDTs, which outperform other TFMs as well. This gap could be narrowed through additional scaling, and also by exploring regression-via-classification techniques like [37, 2]. Furthermore, TabSTAR has not been extensively evaluated in few-shot scenarios and in purely numerical datasets.19 In addition, it demands more compute compared to GBDTs, and it may struggle with memory constraints on datasets containing hundreds of features. Despite these limitations, TabSTAR offers a promising pathway toward improving performance on tabular datasets with textual fields, common in industries with high social impact (e.g., healthcare, education), or with significant economic value (e.g., banking, manufacturing). We believe TabSTAR paves the way for a new generation of semantically enriched tabular models, and we welcome the research community's innovations built on this foundation.
19 Partly because of the computational burden of tuning baselines, and the lack of objective leaderboards [73].
Acknowledgments and Disclosure of Funding We thank Omri Feldman for brainstorming since the very beginning; Elad Hoffer and Ofir Lindenbaum for consulting and feedback; David Holzmüller and Myung Kim for supporting evaluations; and Noah Hollmann, Léo Grinsztajn, and the Prior Labs team for providing extensive access to TabPFN-v2. References [1] Dario Amodei, Sundaram Ananthanarayanan, Rishita Anubhai, et al. Deep Speech 2
: End-to-End Speech Recognition in English and Mandarin. In Proceedings of The 33rd International Conference on Machine Learning, pages 173–182. PMLR, June 2016. URL https://proceedings.mlr.press/v48/amodei16.html. ISSN: 1938-7228.
[2] Abdul Fatir Ansari, Lorenzo Stella, Ali Caner Turkmen, Xiyuan Zhang, Pedro Mercado, Huibin Shen, Oleksandr Shchur, Syama Sundar Rangapuram, Sebastian Pineda Arango, Shubham Kapoor, Jasper Zschiegner, Danielle C. Maddix, Hao Wang, Michael W. Mahoney, Kari Torkkola, Andrew Gordon Wilson, Michael Bohlke-Schneider, and Bernie Wang. Chronos: Learning the Language of Time Series. Transactions on Machine Learning Research, May 2024. ISSN 2835-8856. URL https://openreview.net/forum?id=gerNCVqqtR.
[3] Sercan Ö Arik and Tomas Pfister. TabNet: Attentive Interpretable Tabular Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8):6679–6687, May 2021. ISSN 2374-3468. doi: 10.1609/aaai.v35i8.16826. URL https://ojs.aaai.org/index.php/AAAI/article/view/16826. Number: 8.
[4] Muhammad Awais, Muzammal Naseer, Salman Khan, Rao Muhammad Anwer, Hisham Cholakkal, Mubarak Shah, Ming-Hsuan Yang, and Fahad Shahbaz Khan. Foundation Models Defining a New Era in Vision: A Survey and Outlook. IEEE Transactions on Pattern Analysis and Machine Intelligence, 47(4):2245–2264, April 2025. ISSN 1939-3539. doi: 10.1109/TPAMI.2024.3506283. URL https://ieeexplore.ieee.org/abstract/document/10834497.
[5] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural Machine Translation by Jointly Learning to Align and Translate, May 2016. URL http://arxiv.org/abs/1409.0473. arXiv:1409.0473 [cs].
[6] Tommaso Bendinelli, Artur Dox, and Christian Holz. Exploring LLM Agents for Cleaning Tabular Machine Learning Datasets. March 2025. URL https://openreview.net/forum?id=RXnQPYSoun.
[7] Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, et al. On the Opportunities and Risks of Foundation Models, August 2021.
URL https://arxiv.org/abs/2108.07258v3.
[8] Sebastian Bordt, Harsha Nori, Vanessa Rodrigues, Besmira Nushi, and Rich Caruana. Elephants Never Forget: Memorization and Learning of Tabular Data in Large Language Models. First Conference on Language Modeling, 2024.
[9] Vadim Borisov, Kathrin Sessler, Tobias Leemann, Martin Pawelczyk, and Gjergji Kasneci. Language Models are Realistic Tabular Data Generators. September 2022. URL https://openreview.net/forum?id=cEygmQNOeI.
[10] Vadim Borisov, Tobias Leemann, Kathrin Seßler, Johannes Haug, Martin Pawelczyk, and Gjergji Kasneci. Deep Neural Networks and Tabular Data: A Survey. IEEE Transactions on Neural Networks and Learning Systems, 35(6):7499–7519, June 2024. ISSN 2162-2388. doi: 10.1109/TNNLS.2022.3229161. URL https://ieeexplore.ieee.org/abstract/document/9998482.
[11] Leo Breiman. Random Forests. Machine Learning, 45(1):5–32, October 2001. ISSN 1573-0565. doi: 10.1023/A:1010933404324. URL https://doi.org/10.1023/A:1010933404324.
[12] Tom Brown, Benjamin Mann, Nick Ryder, et al. Language Models are Few-Shot Learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html.
[13] Patricio Cerda and Gaël Varoquaux. Encoding High-Cardinality String Categorical Variables. IEEE Transactions on Knowledge and Data Engineering, 34(3):1164–1176, March 2022. ISSN 1558-2191. doi: 10.1109/TKDE.2020.2992529. URL https://ieeexplore.ieee.org/abstract/document/9086128.
[14] Tianqi Chen and Carlos Guestrin. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, pages 785–794, New York, NY, USA, August 2016. Association for Computing Machinery. ISBN 978-1-4503-4232-2. doi: 10.1145/2939672.2939785. URL https://dl.acm.org/doi/10.1145/2939672.2939785.
[15] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Jill Burstein, Christy Doran, and Thamar Solorio, editors, Proceedings of
the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://aclanthology.org/N19-1423/.
[16] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, June 2021. URL http://arxiv.org/abs/2010.11929. arXiv:2010.11929 [cs].
[17] Gus Eggert, Kevin Huo, Mike Biven, and Justin Waugh. TabLib: A Dataset of 627M Tables with Context, October 2023. URL http://arxiv.org/abs/2310.07875. arXiv:2310.07875 [cs].
[18] Nick Erickson, Jonas Mueller, Alexander Shirkov, Hang Zhang, Pedro Larroy, Mu Li, and Alexander Smola. AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data, March 2020. URL http://arxiv.org/abs/2003.06505. arXiv:2003.06505 [stat].
[19] Xi Fang, Weijie Xu, Fiona Anting Tan, Ziqing Hu, Jiani Zhang, Yanjun Qi, Srinivasan H. Sengamedu, and Christos Faloutsos. Large Language Models (LLMs) on Tabular Data: Prediction, Generation, and Understanding - A Survey. Transactions on Machine Learning Research, March 2024. ISSN 2835-8856. URL https://openreview.net/forum?id=IZnrCGF9WI.
[20] Benjamin Feuer, Robin T. Schirrmeister, Valeriia Cherepanova, Chinmay Hegde, Frank Hutter, Micah Goldblum, Niv Cohen, and Colin White. TuneTables: Context Optimization for Scalable Prior-Data Fitted Networks. Advances in Neural Information Processing Systems, 37:83430–83464, December 2024. URL https://proceedings.neurips.cc/paper_files/paper/2024/hash/97dc07f1253ab33ee514f395a82fa7cc-Abstract-Conference.html.
[21] Matthias Feurer, Katharina Eggensperger, Stefan Falkner, Marius Lindauer, and Frank Hutter.
Auto-Sklearn 2.0: Hands-free AutoML via Meta-Learning. Journal of Machine Learning Research , 23(261):1–61, 2022. ISSN 1533-7928. URL http://jmlr.org/papers/v23/ 21-0992.html . [22] Sebastian Felix Fischer, Matthias Feurer, and Bernd Bischl. OpenML-CTR23 – A curated tabular regression benchmarking suite. August 2023. URL https://openreview.net/ forum?id=HebAOoMm94 . [23] Josh Gardner, Juan C. Perdomo, and Ludwig Schmidt. Large Scale Transfer Learning for Tabular Data via Language Modeling. Advances in Neural Information Processing Systems , 37:45155– 45205, December 2024. URL https://proceedings.neurips.cc/paper_files/paper/ 2024/hash/4fd5cfd2e31bebbccfa5ffa354c04bdc-Abstract-Conference.html . 11 [24] Pieter Gijsbers, Marcos L. P. Bueno, Stefan Coors, Erin LeDell, Sébastien Poirier, Janek Thomas, Bernd Bischl, and Joaquin Vanschoren. AMLB: an AutoML Benchmark. Journal of Machine Learning Research , 25(101):1–65, 2024. ISSN 1533-7928. URL http://jmlr.org/ papers/v25/22-0493.html . [25] Yury Gorishniy, Ivan Rubachev, Valentin Khrulkov, and Artem Babenko. Revisiting deep learning models for tabular data. In Proceedings of the 35th International Conference on Neural Information Processing Systems , NIPS ’21, pages 18932–18943, Red Hook, NY , USA, December 2021. Curran Associates Inc. ISBN 978-1-7138-4539-3. [26] Yury Gorishniy, Ivan Rubachev, and Artem Babenko. On embeddings for numerical features in tabular deep learning. In Proceedings of the 36th International Conference on Neural Informa- tion Processing Systems , NIPS ’22, pages 24991–25004, Red Hook, NY , USA, November 2022. Curran Associates Inc. ISBN 978-1-7138-7108-8. [27] Yury Gorishniy, Ivan Rubachev, Nikolay Kartashev, Daniil Shlenskii, Akim Kotelnikov, and Artem Babenko. TabR: Tabular Deep Learning Meets Nearest Neighbors. The Twelfth Interna- tional Conference on Learning Representations, October 2023. URL https://openreview. net/forum?id=rhgIgTSSxW . 
[28] Yury Gorishniy, Akim Kotelnikov, and Artem Babenko. TabM: Advancing Tabular Deep Learning with Parameter-Efficient Ensembling, February 2025. URL http://arxiv.org/ abs/2410.24210 . arXiv:2410.24210 [cs]. [29]
Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, and et al. The Llama 3 Herd of Models, November 2024. URL http://arxiv.org/abs/2407.21783 . arXiv:2407.21783 [cs]. [30] Leo Grinsztajn, Edouard Oyallon, and Gael Varoquaux. Why do tree-based models still outper- form deep learning on typical tabular data? Advances in Neural Information Processing Systems , 35:507–520, December 2022. URL https://proceedings.neurips.cc/paper_files/ paper/2022/hash/0378c7692da36807bdec87ab043cdadc-Abstract-Datasets_and_ Benchmarks.html . [31] Léo Grinsztajn, Edouard Oyallon, Myung Jun Kim, and Gaël Varoquaux. Vectorizing string entries for data processing on tables: when are larger language models better?, December 2023. URL http://arxiv.org/abs/2312.09634 . arXiv:2312.09634 [stat]. [32] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition, December 2015. URL http://arxiv.org/abs/1512.03385 . arXiv:1512.03385 [cs]. [33] Xin He, Kaiyong Zhao, and Xiaowen Chu. AutoML: A survey of the state-of-the-art. Knowledge-Based Systems , 212:106622, January 2021. ISSN 0950-7051. doi: 10.1016/j. knosys.2020.106622. URL https://www.sciencedirect.com/science/article/pii/ S0950705120307516 . [34] Stefan Hegselmann, Alejandro Buendia, Hunter Lang, Monica Agrawal, Xiaoyi Jiang, and David Sontag. TabLLM: Few-shot Classification of Tabular Data with Large Language Models. InProceedings of The 26th International Conference on Artificial Intelligence and Statistics , pages 5549–5581. PMLR, April 2023. URL https://proceedings.mlr.press/v206/ hegselmann23a.html . ISSN: 2640-3498. [35] Noah Hollmann, Samuel Müller, Katharina Eggensperger, and Frank Hutter. TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second. The Eleventh International Conference on Learning Representations, September 2022. URL https:// openreview.net/forum?id=cp5PvcI6w8_ . [36] Noah Hollmann, Samuel Müller, and Frank Hutter. 
Large Language Models for Auto- mated Data Science: Introducing CAAFE for Context-Aware Automated Feature Engi- neering. Advances in Neural Information Processing Systems , 36:44753–44775, Decem- ber 2023. URL https://proceedings.neurips.cc/paper_files/paper/2023/hash/ 8c2df4c35cdbee764ebb9e9d0acd5197-Abstract-Conference.html . 12 [37] Noah Hollmann, Samuel Müller, Lennart Purucker, Arjun Krishnakumar, Max Körfer, Shi Bin Hoo, Robin Tibor Schirrmeister, and Frank Hutter. Accurate predictions on small data with a tabular foundation model. Nature , 637(8045):319–326, January 2025. ISSN 1476- 4687. doi: 10.1038/s41586-024-08328-6. URL https://www.nature.com/articles/ s41586-024-08328-6 . Publisher: Nature Publishing Group. [38] David Holzmüller, Léo Grinsztajn, and Ingo Steinwart. Better by default: Strong pre-tuned MLPs and boosted trees on tabular data. Advances in Neural Information Processing Systems , 37: 26577–26658, December 2024. URL https://proceedings.neurips.cc/paper_files/ paper/2024/hash/2ee1c87245956e3eaa71aaba5f5753eb-Abstract-Conference. html . [39] Shi Bin Hoo, Samuel Müller, David Salinas, and Frank Hutter. The Tabular Foundation Model TabPFN Outperforms Specialized Time Series Forecasting Models Based on Simple Features. October 2024. URL https://openreview.net/forum?id=H02X7RO3OC#discussion . [40] Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Larous- silhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-Efficient Transfer Learning for NLP. In Proceedings of the 36th International Conference on Machine Learn- ing, pages 2790–2799. PMLR, May 2019. URL https://proceedings.mlr.press/v97/ houlsby19a.html . ISSN: 2640-3498. [41] Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-Rank Adaptation of Large Language Models. October 2021. URL https://openreview.net/forum?id=nZeVKeeFYf9 . 
[42] Xin Huang, Ashish Khetan, Milan Cvitkovic, and Zohar Karnin. TabTransformer: Tabular Data Modeling Using Contextual Embeddings, December 2020. URL http://arxiv.org/abs/ 2012.06678 . arXiv:2012.06678 [cs]. [43] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling Laws for Neural Language Models, January
2020. URL http://arxiv.org/abs/2001.08361 . arXiv:2001.08361 [cs]. [44] Liran Katzir, Gal Elidan, and Ran El-Yaniv. Net-DNF: Effective Deep Modeling of Tabular Data. October 2020. URL https://openreview.net/forum?id=73WTGs96kho . [45] Myung Jun Kim, Léo Grinsztajn, and Gaël Varoquaux. CARTE: pretraining and transfer for tabular learning. In Proceedings of the 41st International Conference on Machine Learning , volume 235 of ICML’24 , pages 23843–23866, Vienna, Austria, July 2024. JMLR.org. [46] Ravin Kohli, Matthias Feurer, Katharina Eggensperger, Bernd Bischl, and Frank Hutter. Towards Quantifying the Effect of Datasets for Benchmarking: A Look at Tabular Machine Learning. [47] Jannik Kossen, Neil Band, Clare Lyle, Aidan N Gomez, Thomas Rainforth, and Yarin Gal. Self-Attention Between Datapoints: Going Beyond Individual Input-Output Pairs in Deep Learning. In Advances in Neural Information Processing Systems , volume 34, pages 28742– 28756. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper/ 2021/hash/f1507aba9fc82ffa7cc7373c58f8a613-Abstract.html . [48] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems , volume 25. Curran Associates, Inc., 2012. URL https://papers.nips.cc/paper_files/ paper/2012/hash/c399862d3b9d6b76c8436e924a68c45b-Abstract.html . [49] Barak Lenz, Opher Lieber, Alan Arazi, and et al. Jamba: Hybrid Transformer-Mamba Language Models. October 2024. URL https://openreview.net/forum?id=JFPaD7lpBD . [50] Roman Levin, Valeriia Cherepanova, Avi Schwarzschild, Arpit Bansal, C. Bayan Bruss, Tom Goldstein, Andrew Gordon Wilson, and Micah Goldblum. Transfer Learning with Deep Tabular Models. The Eleventh International Conference on Learning Representations, September 2022. URL https://openreview.net/forum?id=b0RuGUYo8pA . 
13 [51] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A Robustly Opti- mized BERT Pretraining Approach, July 2019. URL http://arxiv.org/abs/1907.11692 . arXiv:1907.11692 [cs]. [52] Duncan McElfresh, Sujay Khandagale, Jonathan Valverde, Vishak Prasad C, Ganesh Ramakrish- nan, Micah Goldblum, and Colin White. When Do Neural Nets Outperform Boosted Trees on Tabular Data? Advances in Neural Information Processing Systems , 36:76336–76369, Decem- ber 2023. URL https://proceedings.neurips.cc/paper_files/paper/2023/hash/ f06d5ebd4ff40b40dd97e30cee632123-Abstract-Datasets_and_Benchmarks.html . [53] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient Estimation of Word Representations in Vector Space, September 2013. URL http://arxiv.org/abs/1301. 3781 . arXiv:1301.3781 [cs]. [54] Niklas Muennighoff, Nouamane Tazi, Loic Magne, and Nils Reimers. MTEB: Massive Text Embedding Benchmark. In Andreas Vlachos and Isabelle Augenstein, editors, Proceedings of the 17th Conference of the European Chapter of the Association for Computational Lin- guistics , pages 2014–2037, Dubrovnik, Croatia, May 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.eacl-main.148. URL https://aclanthology.org/ 2023.eacl-main.148/ . [55] Samuel Müller, Noah Hollmann, Sebastian Pineda Arango, Josif Grabocka, and Frank Hutter. Transformers Can Do Bayesian Inference. October 2021. URL https://openreview.net/ forum?id=KSugKcbNf9 . [56] OpenAI. GPT-4 Technical Report, March 2024. URL http://arxiv.org/abs/2303.08774 . arXiv:2303.08774 [cs]. [57] Liudmila Prokhorenkova, Gleb Gusev, Aleksandr V orobev, Anna Veronika Doro- gush, and Andrey Gulin. CatBoost: unbiased boosting with categorical features. InAdvances in Neural Information Processing Systems , volume 31. Curran Asso- ciates, Inc., 2018. 
URL https://proceedings.neurips.cc/paper/2018/hash/ 14491b756b3a51daac41c24863285549-Abstract.html . [58] Jingang Qu, David Holzmüller, Gaël Varoquaux, and Marine Le Morvan. TabICL: A Tabular Foundation Model for In-Context Learning on Large Data, February 2025. URL http:// arxiv.org/abs/2502.05564 . arXiv:2502.05564 [cs]. [59] Ivan Rubachev, Artem Alekberov, Yury Gorishniy, and Artem Babenko. Revisiting Pretraining Objectives for Tabular
Deep Learning, July 2022. URL http://arxiv.org/abs/2207. 03208 . arXiv:2207.03208 [cs]. [60] Xingjian Shi, Jonas Mueller, Nick Erickson, Mu Li, and Alex Smola. Benchmarking Multimodal AutoML for Tabular Data with Text Fields. August 2021. URL https://openreview.net/ forum?id=Q0zOIaec8HF . [61] Assaf Shmuel, Oren Glickman, and Teddy Lazebnik. A Comprehensive Benchmark of Machine and Deep Learning Across Diverse Tabular Datasets, August 2024. URL http://arxiv.org/ abs/2408.14817 . arXiv:2408.14817 [cs]. [62] Ravid Shwartz-Ziv and Amitai Armon. Tabular data: Deep learning is not all you need. Information Fusion , 81:84–90, May 2022. ISSN 1566-2535. doi: 10.1016/j.inffus.2021.11.011. URL https://www.sciencedirect.com/science/article/pii/S1566253521002360 . [63] Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition, April 2015. URL http://arxiv.org/abs/1409.1556 . arXiv:1409.1556 [cs]. [64] Leslie N. Smith and Nicholay Topin. Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates, May 2018. URL http://arxiv.org/abs/1708. 07120 . arXiv:1708.07120 [cs]. 14 [65] Aivin V . Solatorio and Olivier Dupriez. REaLTabFormer: Generating Realistic Relational and Tabular Data using Transformers, February 2023. URL http://arxiv.org/abs/2302. 02041 . arXiv:2302.02041 [cs]. [66] Gowthami Somepalli, Micah Goldblum, Avi Schwarzschild, C. Bayan Bruss, and Tom Goldstein. SAINT: Improved Neural Networks for Tabular Data via Row Attention and Contrastive Pre- Training, June 2021. URL http://arxiv.org/abs/2106.01342 . arXiv:2106.01342 [cs]. [67] Weiping Song, Chence Shi, Zhiping Xiao, Zhijian Duan, Yewen Xu, Ming Zhang, and Jian Tang. AutoInt: Automatic Feature Interaction Learning via Self-Attentive Neural Networks. InProceedings of the 28th ACM International Conference on Information and Knowledge Management , CIKM ’19, pages 1161–1170, New York, NY , USA, November 2019. Association for Computing Machinery. 
ISBN 978-1-4503-6976-3. doi: 10.1145/3357384.3357925. URL https://dl.acm.org/doi/10.1145/3357384.3357925 . [68] Ilya Sutskever, Oriol Vinyals, and Quoc V . Le. Sequence to sequence learning with neural net- works. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 2 , volume 2 of NIPS’14 , pages 3104–3112, Cambridge, MA, USA, December 2014. MIT Press. [69] Zhiqiang Tang, Haoyang Fang, Su Zhou, Taojiannan Yang, Zihan Zhong, Cuixiong Hu, Katrin Kirchhoff, and George Karypis. AutoGluon-Multimodal (AutoMM): Supercharging Multimodal AutoML with Foundation Models. AutoML Conference 2024 (ABCD Track), April 2024. URL https://openreview.net/forum?id=irStSm9waW . [70] Zhiqiang Tang, Zihan Zhong, Tong He, and Gerald Friedland. Bag of Tricks for Multimodal AutoML with Image, Text, and Tabular Data, December 2024. URL http://arxiv.org/ abs/2412.16243 . arXiv:2412.16243 [cs]. [71] Gemini Team. Gemini: A Family of Highly Capable Multimodal Models, May 2025. URL http://arxiv.org/abs/2312.11805 . arXiv:2312.11805 [cs]. [72] Avijit Thawani, Jay Pujara, Pedro A. Szekely, and Filip Ilievski. Representing Numbers in NLP: a Survey and a Vision, March 2021. URL http://arxiv.org/abs/2103.13136 . arXiv:2103.13136 [cs]. [73] Andrej Tschalzev, Lennart Purucker, Stefan Lüdtke, Frank Hutter, Christian Bartelt, and Heiner Stuckenschmidt. Unreflected Use of Tabular Data Repositories Can Undermine Research Quality, March 2025. URL http://arxiv.org/abs/2503.09159 . arXiv:2503.09159 [cs]. [74] Boris Van Breugel and Mihaela Van Der Schaar. Position: why tabular foundation models should be a research priority. In Proceedings of the 41st International Conference on Machine Learning , volume 235 of ICML’24 , pages 48976–48993, Vienna, Austria, July 2024. JMLR.org. [75] Joaquin Vanschoren, Jan N. van Rijn, Bernd Bischl, and Luis Torgo. OpenML: networked science in machine learning. SIGKDD Explor.
Newsl. , 15(2):49–60, June 2014. ISSN 1931-0145. doi: 10.1145/2641190.2641198. URL https://doi.org/10.1145/2641190.2641198 . [76] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. Attention is All you Need. In I. Guyon, U. V on Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, ed- itors, Advances in Neural Information Processing Systems , volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper_files/paper/2017/file/ 3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf . [77] Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, and Furu Wei. Text Embeddings by Weakly-Supervised Contrastive Pre-training, February 2024. URL http://arxiv.org/abs/2212.03533 . arXiv:2212.03533 [cs]. [78] Ruoxi Wang, Rakesh Shivanna, Derek Cheng, Sagar Jain, Dong Lin, Lichan Hong, and Ed Chi. DCN V2: Improved Deep & Cross Network and Practical Lessons for Web-scale Learning to Rank Systems. In Proceedings of the Web Conference 2021 , WWW ’21, pages 1785–1797, New 15 York, NY , USA, June 2021. Association for Computing Machinery. ISBN 978-1-4503-8312- 7. doi: 10.1145/3442381.3450078. URL https://dl.acm.org/doi/10.1145/3442381. 3450078 . [79] Zifeng Wang and Jimeng Sun. TransTab: Learning Transferable Tabular Transformers Across Tables. October 2022. URL https://openreview.net/forum?id=A1yGs_SWiIi . [80] Jiahuan Yan, Bo Zheng, Hongxia Xu, Yiheng Zhu, Danny Chen, Jimeng Sun, Jian Wu, and Jintai Chen. Making Pre-trained Language Models Great on Tabular Prediction. The Twelfth International Conference on Learning Representations, October 2023. URL https: //openreview.net/forum?id=anzIzGZuLi . [81] Junchen Yang, Ofir Lindenbaum, and Yuval Kluger. Locally Sparse Neural Networks for Tabular Biomedical Data. In Proceedings of the 39th International Conference on Machine Learning , pages 25123–25153. PMLR, June 2022. 
URL https://proceedings.mlr.press/v162/ yang22i.html . ISSN: 2640-3498. [82] Chao Ye, Guoshan Lu, Haobo Wang, Liyao Li, Sai Wu, Gang Chen, and Junbo Zhao. To- wards Cross-Table Masked Pretraining for Web Data Mining. May 2024. URL https: //openreview.net/forum?id=9jj7cMOXQo . [83] Han-Jia Ye, Si-Yang Liu, Hao-Run Cai, Qi-Le Zhou, and De-Chuan Zhan. A Closer Look at Deep Learning on Tabular Data. CoRR , January 2024. URL https://openreview.net/ forum?id=eu2cABIHge . [84] Ce Zhou, Qian Li, Chen Li, Jun Yu, Yixin Liu, Guangjing Wang, Kai Zhang, Cheng Ji, Qiben Yan, Lifang He, Hao Peng, Jianxin Li, Jia Wu, Ziwei Liu, Pengtao Xie, Caiming Xiong, Jian Pei, Philip S. Yu, and Lichao Sun. A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT, May 2023. URL http://arxiv.org/abs/2302.09419 . arXiv:2302.09419 [cs]. [85] Qile Zhou, Han-Jia Ye, Leye Wang, and De-Chuan Zhan. Unlocking the Transferability of Tokens in Deep Models for Tabular Data. October 2023. URL https://openreview.net/ forum?id=u2OVQ2Xvq1 . [86] Bingzhao Zhu, Xingjian Shi, Nick Erickson, Mu Li, George Karypis, and Mahsa Shoaran. XTab: Cross-table Pretraining for Tabular Transformers. In Proceedings of the 40th International Conference on Machine Learning , pages 43181–43204. PMLR, July 2023. URL https: //proceedings.mlr.press/v202/zhu23k.html . ISSN: 2640-3498. [87] Fuzhen Zhuang, Zhiyuan Qi, Keyu Duan, Dongbo Xi, Yongchun Zhu, Hengshu Zhu, Hui Xiong, and Qing He. A Comprehensive Survey on Transfer Learning. Proceedings of the IEEE , 109(1):43–76, January 2021. ISSN 1558-2256. doi: 10.1109/JPROC.2020.3004555. URL https://ieeexplore.ieee.org/abstract/document/9134370 . 16 A Architecture This appendix provides additional technical details for the architecture introduced in §3. First, we discuss
the verbalization module; next, we formally describe the architecture step by step; and finally, we present selected experiments on the TabSTAR architecture.

A.1 The Verbalization Module

TabSTAR's verbalization module standardizes heterogeneous tabular inputs by converting each column, whether predictive feature or target variable, into templated text blocks. We first describe the detection of column types, and then detail the processing steps for each type.

Feature Detection We classify each column as either numerical, referring to quantitative values, or semantic, referring to textual values, including categorical and boolean fields encoded as text. We rely on heuristics involving both the primitive data type (e.g., string, float) and human annotation (e.g., OpenML metadata). However, real-world datasets pose challenges, as numerical features are often stored as strings (e.g., "35 years", "unknown age") or may lack inherent order (e.g., country calling codes). Leveraging LLMs for contextualized data cleaning is a promising direction [6]. A special case is the handling of timestamp and date columns. Similarly to [39], we rely on skrub's DatetimeEncoder20 to detect datetime columns and decompose each of them into a set of new features. Each extracted feature then undergoes its own processing: for example, the weekday is treated as semantic, while the total seconds since the Unix epoch are treated as numerical. Integrating date features more holistically remains an open research question.

Table 5: Illustrative verbalization of a numerical feature (Age) with 10 bins. Examples outside the range and missing values are assigned to special bins.
Bin  Range           Example Value  Illustrative Verbalization
–    Lower than 18   17             Age: Lower than 18 (Quantile 0%)
1    18–23           20             Age: 18–23 (Quantile 0–10%)
2    23–27           25             Age: 23–27 (Quantile 10–20%)
3    27–31           29             Age: 27–31 (Quantile 20–30%)
4    31–35           33             Age: 31–35 (Quantile 30–40%)
5    35–40           38             Age: 35–40 (Quantile 40–50%)
6    40–45           42             Age: 40–45 (Quantile 50–60%)
7    45–51           48             Age: 45–51 (Quantile 60–70%)
8    51–58           55             Age: 51–58 (Quantile 70–80%)
9    58–67           63             Age: 58–67 (Quantile 80–90%)
10   67–87           83             Age: 67–87 (Quantile 90–100%)
–    Higher than 87  93             Age: Higher than 87 (Quantile 100%)
–    Unknown         –              Age: Unknown Value

Numerical Features Numerical features are represented by both a numerical and a semantic representation. For the numerical representation, given a value x, we compute the clipped z-score z′ = clip((x − µ)/σ, −3, 3), where µ and σ are the training-set mean and standard deviation; missing values are set to 0. For the semantic representation, we build B = 10 quantile bins over the training distribution and map the value accordingly. Table 5 shows an illustrative example for the feature Age from our running example in Table 2.

Semantic Features Semantic features are sanitized (e.g., normalizing whitespace) and verbalized using the template presented in Table 6. Missing values are mapped to "Unknown Value", just like for numerical features. If a text exceeds the model's context window (512 tokens for e5-small-v2), it is naively truncated to fit. This limitation is far more pronounced for methods that serialize the entire example into a single textual sequence [80], thereby dramatically reducing the effective context size.

20 https://skrub-data.org/
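The clipping and binning scheme above can be sketched in a few lines of plain Python. This is our own illustrative stand-in, not TabSTAR's actual code; the bin edges are taken from Table 5, and the boundary convention (values on an edge fall into the lower bin) is an assumption.

```python
def clipped_z(x, mu, sigma, lo=-3.0, hi=3.0):
    """Clipped z-score used as the numerical representation; missing -> 0."""
    if x is None:
        return 0.0
    return max(lo, min(hi, (x - mu) / sigma))

def verbalize_numeric(name, x, bin_edges):
    """Map a value to one of B quantile bins, with special out-of-range
    and missing-value bins, and render the templated string."""
    if x is None:
        return f"{name}: Unknown Value"
    if x < bin_edges[0]:
        return f"{name}: Lower than {bin_edges[0]} (Quantile 0%)"
    if x > bin_edges[-1]:
        return f"{name}: Higher than {bin_edges[-1]} (Quantile 100%)"
    B = len(bin_edges) - 1
    for b in range(B):
        if x <= bin_edges[b + 1]:
            lo_q, hi_q = 100 * b // B, 100 * (b + 1) // B
            return (f"{name}: {bin_edges[b]}\u2013{bin_edges[b + 1]} "
                    f"(Quantile {lo_q}\u2013{hi_q}%)")

edges = [18, 23, 27, 31, 35, 40, 45, 51, 58, 67, 87]  # 10 bins from Table 5
print(verbalize_numeric("Age", 25, edges))   # Age: 23–27 (Quantile 10–20%)
print(clipped_z(120, mu=45.0, sigma=15.0))   # 3.0 (clipped)
```

Note that the bin edges and µ, σ are always computed on the training split only, so the same mapping is reused verbatim at inference time.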
Target Variables The verbalization templates for the target values are prepended to every example. For classification tasks, each possible label is verbalized, while for regression we verbalize a single element consisting solely of the feature name. Employing a binning strategy to treat regression as a classification task is a future work direction, as discussed in §7. For regression tasks, target values undergo the same standardization with outlier clipping as numerical features, but serve solely as the ground truth and never appear in the input.

Table 6: Verbalization templates for semantic features and target values.

Element Type           Verbalization Template
Predictive feature     "Predictive Feature: {feature_name}" "Feature Value: {feature_value}"
Classification target  "Target Feature: {target_name}" "Feature Value: {target_value}"
Regression target      "Numerical Target Feature: {target_name}"

A.2 The Annotated TabSTAR

Table 7 describes the number of parameters per component in the TabSTAR architecture, when using e5-small-v2 [77] as the text encoder. It has approximately 47.26M parameters, most of which come from the text encoder. When unfreezing 6 layers of the text encoder, about 24.70M parameters are tuned, with the remaining 11.92M embedding parameters and 10.65M layer parameters kept frozen.

Table 7: Parameter counts for TabSTAR components

Module               # Parameters
Encoding: Semantic   33,360,000
Encoding: Numerical  296,832
Fusion               1,774,464
Interaction          10,646,784
Prediction           1,185,794

To describe the architecture more precisely, we start by defining the dataset formally. Let D = {(x_i, y_i)}_{i=1}^{n} denote a tabular dataset with n examples. Each example x_i = [x_{i1}, . . . , x_{im}] has m features. The target variable y_i is either continuous (regression) or discrete (classification), taking one of C classes. For simplicity, we describe the architecture at the example level, though all computations are actually carried out on mini-batches of size B.
The batches are always drawn from a single dataset in both pretraining and finetuning, removing any need for padding.

Verbalization We denote by t the number of target entries, where t = C for classification and t = 1 for regression. We then form a raw sequence of length e = t + m by listing the t target values, followed by the m feature entries. Each element j in this sequence is then verbalized into a semantic string s_j and a numerical value n_j, set to the clipped z-score for numerical non-missing features, and zero otherwise. The example is thus represented by parallel sequences (s, n) of length e.

Encoding Each semantic string s_j and numerical value n_j are projected into a d-dimensional vector. Semantic strings are encoded with an encoder-only language model (e5-small-v2 [77]). Each string is tokenized, passed through the model, and pooled to produce its final embedding. This process is independent between elements, i.e., attention operates at the token level within a single element. In parallel, each numerical value is fed through a two-layer MLP that first projects from 1 to 2d dimensions, applies a ReLU and dropout, and then projects back down to d. This produces matching d-dimensional embeddings for each of the e elements, ready to be fused.

Fusion To unify semantic and numerical embeddings into a single representation, we apply a single-layer Transformer Encoder21 over each element's pair of vectors. Concretely, for each element 21With 2 attention heads, a feed-forward hidden size of 4d,
dropout 0.1, and ReLU activation. we stack its d-dimensional text and numeric embeddings and feed them through the encoder layer. For every element, attention is applied between its two representations, and we average the two outputs to produce one fused d-dimensional embedding. This yields a final sequence of length e and dimension d, which serves as the tokens for the Interaction block.

Interaction The fused sequence of e tokens is processed by a standard Transformer Encoder with model dimension d = 384, L = 6 layers, 6 attention heads per layer, feed-forward size 4d, dropout 0.1, ReLU activation, and a pre-norm configuration. Unlike in language modeling, feature ordering is irrelevant, so no positional encodings are used. The encoder produces contextualized embeddings for every position, and we retain the t target embeddings for prediction.

Prediction We initialize two identical MLP heads, one for regression and one for classification. Each consists of a hidden layer of size 4d (with ReLU activation) followed by a linear projection to a single output. For each dataset, we choose the relevant head and process the t target token embeddings. For classification, we independently feed each of the t = C target tokens to the classification head to obtain a score (logit) per class. Notably, the same head is shared across classes and datasets. We apply softmax over these scores, yielding a probability distribution regardless of the number of classes. For regression, the single target token is projected to a real value. Note that the heads are shared across datasets, as regression outputs are always clipped z-scores.

A.3 Architecture Experiments

In this section we explore the effect of different design choices for TabSTAR's architecture. For each experiment, we vary only the parameter of interest, keeping everything else fixed.
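One consequence of the prediction design described in A.2 deserves emphasis: because the shared classification head maps each target-token embedding to a single scalar, the same parameters serve any number of classes. A minimal plain-Python sketch (the MLP head is replaced by an arbitrary embedding-to-scalar function; all names are ours, not TabSTAR's):

```python
import math

def softmax(logits):
    # Numerically stable softmax over a variable-length list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(target_tokens, head):
    # One shared head scores each of the C target tokens independently,
    # so the number of classes is never baked into the parameters.
    return softmax([head(tok) for tok in target_tokens])

# Toy stand-in for the classification head: any embedding -> scalar map works.
head = lambda tok: sum(tok)

probs_3 = classify([[0.1, 0.2], [0.4, 0.0], [0.3, 0.3]], head)  # 3 classes
probs_5 = classify([[0.0] * 2] * 5, head)                       # 5 classes
print(len(probs_3), len(probs_5))  # 3 5
```

The same `head` is applied to a 3-class and a 5-class task without any change, mirroring how a single shared head transfers across datasets with different label spaces.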
We follow the same pretraining regime as in Appendix B.1, except that for computational efficiency we train for only 25 epochs (instead of 50) with 128 pretraining datasets (instead of 390). We evaluate each variant relying solely on pretraining performance, as an approximation of downstream task performance. We acknowledge that our conclusions might depend on this limited scale; hence we discuss a subset of the experiments briefly to reflect the depth of our work and inspire future research.

The Fusion Block's Mechanism For the fusion block, we consider two simpler alternatives to the attention mechanism, both of which underperform: (1) Concatenation, which concatenates the semantic and numerical d-dimensional vectors into a 2d-dimensional vector and projects them back via an MLP, and (2) Multiplication, which multiplies the semantic representation directly with the numerical value22 in a straightforward, parameter-free manner, as in [79, 82].

The Number of Interaction Layers We experiment with the number of encoder layers, and observe that 3 layers yield anecdotally worse performance than 6, at a lower parameter count. Nevertheless, we prioritize the deeper network, as we believe it may be beneficial for datasets with very complex relationships. Additionally, we try a 9-layer variant, which performs significantly worse while also increasing the parameter count.

Row-Level Attention We experiment with adopting the architecture proposed by SAINT [66], which adds row-level attention to each encoder layer. Similar concepts
are also employed by models that take the whole dataset, labels included, as input [37, 47]. We run experiments with 2, 4 and 6 layers, as these are roughly equivalent in parameter count to 3, 6, and 9 layers without row-attention. We observe no substantial gain, and thus prioritize the simpler solution, as row-level attention is sensitive to the batch size and adds complexity at inference time.

B Training

In this section we elaborate on the two stages of TabSTAR's training, pretraining and finetuning, presented in §3. As in Appendix A.3, we summarize key pretraining experiments.

22 After rescaling it to be centered around 1 rather than 0, using a learned scaling factor.

B.1 Pretraining

TabSTAR is pretrained with supervised learning in a multi-task regime, jointly learning regression, binary and multiclass classification tasks. The parameters of the architecture are fully shared, without any need for dataset-specific parameters. Every example during pretraining updates all the model's weights, with the sole exception of the prediction heads, where every example uses the head matching the nature of its task (classification or regression).

Sampling For computational efficiency, each dataset is subsampled once before pretraining. At the example level, we sample up to 300,000 examples from each dataset, stratified by the target variable for classification tasks. Since we only use a fraction of each dataset in each pretraining epoch, this decision has negligible influence. In addition, we randomly sample up to 200 features per dataset. While straightforward, this decision is suboptimal, as feature importance isn't taken into consideration. As this work does not focus on wide-feature datasets, we consider this trade-off acceptable. Importantly, this setup remains available during finetuning, as the TabSTAR architecture is agnostic to the number of features.
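The one-off subsampling step above can be sketched as follows. This is a simplified illustration under our own assumptions (proportional per-class sampling for stratification, uniform random feature selection), not TabSTAR's actual code:

```python
import random
from collections import defaultdict

def subsample_dataset(rows, labels, max_rows=300_000, max_feats=200, seed=0):
    """One-off pretraining subsample: stratified row cap, random feature cap."""
    rng = random.Random(seed)
    # Row cap, stratified by the target: sample proportionally from each class.
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    frac = min(1.0, max_rows / len(rows))
    keep = []
    for idxs in by_class.values():
        k = min(len(idxs), max(1, round(frac * len(idxs))))
        keep.extend(rng.sample(idxs, k))
    keep.sort()
    # Feature cap: a uniform random subset, ignoring feature importance.
    n_feats = len(rows[0])
    cols = sorted(rng.sample(range(n_feats), min(max_feats, n_feats)))
    return [[rows[i][c] for c in cols] for i in keep], [labels[i] for i in keep]

# Toy usage: 10 rows x 5 features, two balanced classes, capped to 6 rows x 3 features.
rows = [[r * 5 + c for c in range(5)] for r in range(10)]
labels = [0] * 5 + [1] * 5
X, y = subsample_dataset(rows, labels, max_rows=6, max_feats=3)
print(len(X), len(X[0]), sorted(set(y)))  # 6 3 [0, 1]
```

Because the feature subset is drawn uniformly, informative columns can be dropped; a feature-importance-aware sampler is one direction the trade-off discussion above leaves open.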
We split each dataset into train-validation splits (95%–5%),23 without any need for test splits, and cap the validation set at a maximum of 1,000 examples used for evaluating pretraining performance.

Batching Every epoch, we randomly sample up to 2,048 examples from each dataset in mini-batches of 32, and shuffle all the batches. We conduct gradient accumulation and update the model every 4 steps to reduce the chances of a single update being dominated by a single dataset, so the global batch size is effectively 128. Appendix B.3.1 elaborates on the effect of batch size.

Metrics Our loss function is cross-entropy for classification and MSE for regression. With standardized targets, R2 ≈ 1 − MSE, although this equivalence is degraded by the clipping of targets in preprocessing. We train with mixed precision and apply gradient clipping to stabilize training without task-specific weights, with Appendix B.3.2 discussing the limitations of this approach. As metrics, we use AUROC for classification and R2 for regression, so for each task the optimal metric value is 1. We average performance across all datasets into a single metric that reflects the pretraining performance.

Training We pretrain for 50 epochs with the OneCycleLR [64] scheduler, with warmup during the first 5 epochs (10%) and cosine annealing. Early stopping is conducted after 3 epochs without improvement on the pretraining metric. The weight decay is set to 0.001,
https://arxiv.org/abs/2505.18125v1
and a maximum learning rate of lr = 5×10⁻⁵ is applied uniformly across all layers. Appendix B.3.3 discusses experiments with differential learning rates. Pretraining running time varies depending on the number of epochs and the included datasets. The full models (390 datasets) reported in §5 train for less than 48 hours on a single NVIDIA A40 GPU (48GB memory), and we believe this could be optimized much further.

B.2 Finetuning

We finetune downstream tasks using the LoRA implementation of the peft package.24 We use a rank of r = 32, and set α = 2r = 64 and dropout = 0.1. We employ the same scheduler as in the pretraining phase, with the only differences being that we set lr = 0.001 and increase the patience parameter for early stopping to 5. We apply a train-test split of 90%-10% and sample a validation set of 10%. As opposed to pretraining, all batches are drawn from the same dataset; therefore, we observe no effect from changing the mini-batch size when keeping the global batch size fixed at 128. We tune 1,597,440 out of TabSTAR's 47,263,874 parameters (3.4%).

Finetuning hyperparameters are selected by pretraining TabSTAR over 256 datasets and performing a grid search over a held-out set of 25 downstream tasks, disjoint from the 50 datasets in the benchmark evaluated in §4. The search space is presented in Table 8, and we observe that average performance is relatively robust across this space. An interesting observation is that decreasing the number of parameters by setting r = 16 mildly hurts performance, but it has memory and latency upsides, allowing a future trade-off exploration. As a final note, we argue that providing a strong default configuration for TFMs is crucial for evaluating them, but for real-world applications we still recommend finding the best hyperparameters tailored to the downstream task.

23We choose only 5% for efficiency, as we use hundreds of pretraining datasets.
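As a reminder of what the tuned parameters are, LoRA freezes each pretrained weight matrix W and learns a low-rank update ΔW = (α/r)·B·A. A minimal numpy sketch with the values above (r = 32, α = 64); the actual experiments use the peft package, and the layer dimensions here are illustrative:

```python
import numpy as np

d_in, d_out, r, alpha = 768, 768, 32, 64  # r, alpha from the text; dims illustrative
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight (never updated)
A = rng.normal(size=(r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                # trainable, zero-initialized: no change at start

def lora_forward(x):
    # y = x W^T + (alpha / r) * x A^T B^T, i.e. a rank-r additive update to W.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(4, d_in))
# With B = 0, the adapted layer equals the frozen layer exactly.
assert np.allclose(lora_forward(x), x @ W.T)
# Trainable parameters per adapted matrix: r * (d_in + d_out).
assert A.size + B.size == r * (d_in + d_out)
```

Only A and B receive gradients, which is what keeps the tuned fraction of parameters small.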
24https://github.com/huggingface/peft

Table 8: LoRA hyperparameter tuning grid search for TabSTAR's finetuning.

Hyperparameter    Search Space
LoRA rank (r)     16, 32, 64
Learning Rate     0.0005, 0.001, 0.002, 0.005, 0.01
Dropout           0, 0.1

The only experiment in this paper where we employ full finetuning instead of LoRA is the analysis of the non-pretrained variant in §6, for which we fully finetune the model on each downstream task. Compared to the pretraining configuration, we use lr = 2.5×10⁻⁵ and increase the patience to 5. These hyperparameters are lightly tuned using the same procedure as for LoRA, and we observe that fully finetuning the model achieves comparable performance, except for small datasets, where training is more prone to overfitting.

B.3 Pretraining Experiments

In this section, we briefly elaborate on some experiments performed over TabSTAR's pretraining protocol. As in Appendix A.3, we highlight only a subset of them in a high-level manner.

B.3.1 Batch Size

During pretraining, we use a mini-batch size of 32, each batch drawn from a single dataset. Since we train with gradient accumulation and a global batch size of 128, varying the batch size affects the diversity of a single model update: lower batch sizes are likely to be
exposed to more datasets. We decrease the batch size to 16 and 8 and observe an improvement, at the cost of slower training. An interesting direction for future work is moving to mixed-dataset batches, which requires a more complex implementation but might benefit from more regularized learning. Such an approach, however, goes against row-level attention methods and ICL, as discussed in Appendix A.3.

B.3.2 Loss Weights

Pretraining the model over hundreds of datasets in a multi-task regime presents a key challenge: the loss scale of each dataset can vary substantially, depending on task difficulty or task type. For example, a multiclass classification task with dozens of classes will naturally yield a higher average loss than a binary task. These dynamics can also shift during training. Our default approach naively averages the loss across all datasets, which risks over-weighting tasks for potentially arbitrary reasons. To address this, we explore two alternative weighting strategies: (1) assigning a constant weight per task type, accounting for the number of classes in classification tasks, and (2) normalizing each dataset's contribution by the best loss achieved by CatBoost [57] when fitted to that dataset. While these strategies better reflect task-specific characteristics, they hardly impact performance and introduce additional complexity. Notably, adjusting loss weights across tasks impacts metric interpretability, as each weighting scheme implicitly optimizes a different objective. We do not explore more sophisticated methods such as learning per-dataset weights, as these often require mixed-dataset batches and introduce additional learnable parameters. We believe, however, that multi-task pretraining over tabular datasets remains an open and important research question.
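The accumulation scheme from the Batching paragraph can be made concrete with a toy sketch: one model update averages gradients over 4 single-dataset mini-batches of 32, for an effective global batch of 128. The linear model and MSE loss are stand-ins, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(5)                        # toy linear model weights
ACCUM_STEPS, BATCH, lr = 4, 32, 0.1    # global batch = 4 * 32 = 128

def grad_mse(w, X, y):
    # Gradient of mean squared error for a linear model.
    return 2 * X.T @ (X @ w - y) / len(y)

# Four mini-batches, each drawn from a different (toy) dataset.
batches = [(rng.normal(size=(BATCH, 5)), rng.normal(size=BATCH))
           for _ in range(ACCUM_STEPS)]

accum = np.zeros_like(w)
for X, y in batches:
    accum += grad_mse(w, X, y)   # analogous to loss.backward() without a step
w -= lr * accum / ACCUM_STEPS    # one update, averaged across datasets
```

Because the update averages over several datasets, no single dataset dominates a step, which is the motivation stated above.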
B.3.3 Differential Learning Rate

TabSTAR's weight initialization is unbalanced: the textual encoder is a pretrained embedding model, while the rest of the architecture's parameters are randomly initialized. To counteract this imbalance, we experiment with differential learning rates for the textual encoder layers, scaling them by factors of 0.5 and 0.75. To our surprise, this hurts performance, so we stick to a uniform learning rate across all layers.

C Training Datasets

In this appendix we briefly expand on the pretraining corpus described in §4. It is composed of 350 datasets. We start from all the datasets appearing in AMLB [24], OpenML-CTR23 [22], TabZilla [52] and those presented by Grinsztajn [30]. After deduplication,25 this results in 152 datasets (94 classification, 58 regression). Interestingly, only 6 of these 152 datasets have free-text or high-cardinality features. We manually add datasets from OpenML [75] and Kaggle, as well as from the AutoML-Benchmark-Train [21] corpus, and reach a total of 350 datasets, 49 of them textual. Table 9 details the 253 classification datasets and Table 10 the 97 regression ones. We report the Dataset name, the number of examples n, the number of features m, and the number of classes C for classification. In addition, we mark datasets that belong to one of the benchmarks, and those that have text features. Importantly, the textual flag is quite permissive, as it includes features with relatively short texts or potentially low predictive power (e.g., people's names or addresses). Table 9: The 253
Classification Datasets of the Pretraining Corpus, with their nexamples, mfeatures, Cclasses, presence in a benchmark ( B) and whether they are textual ( T). Dataset n m C B T KDDCup99 4,898,422 40 20 ✓ mimic_extract_los_3 4,155,270 17 68 ✓ Online-P2P-Lending 2,875,146 16 5 sf-police-incidents 2,215,023 8 2 ✓ ✓ physionet_sepsis 1,552,210 42 2 poker-hand 1,025,009 10 10 ✓ Higgs 1,000,000 28 2 ✓ BAF_base 1,000,000 30 2 Credit_Card_Fraud_ 1,000,000 7 2 Harry-Potter-fanfiction-data 648,493 13 4 ✓ porto-seguro 595,212 57 2 ✓ covertype 581,012 54 7 ✓ A VIDa-hIL6 573,891 3 2 ✓ airlines 539,383 7 2 ✓ ✓ HolisticBias 472,991 14 4 ✓ albert 425,240 78 2 ✓ DBPedia 342,781 3 219 ✓ hcdr_main 307,511 120 2 Mental_Health_Dataset 292,364 16 5 Kuzushiji-49 270,912 784 49 spoken-arabic-digit 263,256 14 10 cdc_diabetes 253,680 21 2 skin-segmentation 245,057 3 2 LT-Vehicle-Loan-Default-Prediction 233,154 38 2 Churn_Telco_Europa 190,776 17 2 ldpa 164,860 6 11 Give-Me-Some-Credit 150,000 10 2 ✓ walking-activity 149,332 4 22 social_bias_frames 144,649 16 3 ✓ Wikipedia_Talk_Labels 140,379 12 15 ✓ Municipal-Debt-Risk-Analysis 138,509 13 2 MiniBooNE 130,064 50 2 ✓ nba-shot-logs 128,069 15 2 ✓ college_scorecard 124,699 117 2 drug-directory 120,215 16 7 ✓ TVS_Loan_Default 119,528 29 2 Continued on next page 25And the exclusion of the fifadataset, which is included in the benchmark. 22 Table 9: The 253 Classification Datasets of the Pretraining Corpus. 
Dataset n m C B T road-safety 111,762 32 2 ✓ Diabetes130US 101,766 46 3 ✓ fars 100,968 29 8 Credit_Score_Classification 100,000 26 3 ✓ numerai28.6 96,320 21 2 ✓ Run_or_walk_information 88,588 6 2 jannis 83,733 54 4 ✓ KDD98 82,318 477 2 APSFailure 76,000 169 2 ✓ kick 72,983 32 2 ✓ ✓ human-choice-prediction 71,579 20 2 ✓ Traffic_violations 70,340 20 3 ✓ Fashion-MNIST 70,000 784 10 ✓ Cardiovascular-Disease-dataset 70,000 11 2 mnist_784 70,000 719 10 connect-4 67,557 42 3 ✓ mobile_churn 66,469 63 2 helena 65,196 27 100 ✓ LICD 63,634 413 2 ✓ CIFAR_10 60,000 3,072 10 REASONER 58,497 34 2 ✓ volkert 58,310 147 10 ✓ shuttle 58,000 9 7 ✓ GTSRB-HueHist 51,839 256 43 okcupid-stem 50,789 19 3 ✓ ✓ KDDCup09-Upselling 50,000 13,419 2 ✓ KDDCup09_appetency 50,000 207 2 ✓ adult 48,842 14 2 ✓ League-of-Legends-Diamond 48,651 14 2 tamilnadu-electricity 45,781 2 20 bank-marketing 45,211 16 2 ✓ meta_stream_intervals 45,164 74 11 jungle_chess 44,819 6 3 ✓ Dynamically-Generated-Hate-Speech-Dataset 41,144 8 2 ✓ Breast-cancer-prediction 39,998 11 2 Click_prediction_small 39,948 11 2 ✓ Hotel-Reviews 38,932 3 2 ✓ electricity 38,474 8 2 ✓ nomao 34,465 118 2 ✓ Employee-Turnover-at-TECHCO 34,452 9 2 Amazon_employee_access 32,769 9 2 ✓ Credit-Risk-Dataset 32,581 11 2 Default-of-Credit-Card-Clients-Dataset 30,000 23 2 ✓ funpedia 29,819 3 3 ✓ credit_risk_china 27,522 27 5 Insurance 23,548 10 2 guillermo 20,000 4,281 2 ✓ riccardo 20,000 4,283 2 ✓ insurance_dataset 20,000 26 4 letter 20,000 16 26 game-of-thrones-script-all-seasons 16,825 5 43 ✓ NewspaperChurn 15,855 16 2 ✓ mozilla4 15,545 5 2 pol 15,000 26 11 ✓ Continued on next page 23 Table 9: The 253 Classification Datasets of the Pretraining Corpus. Dataset n m C B T eeg-eye-state 14,980 14 2 MagicTelescope 13,376 10 2 ✓
nursery 12,958 8 4 online-shoppers-intention 12,330 17 2 Disaster-Tweets 11,370 4 2 ✓ mammography 11,183 6 2 PhishingWebsites 11,055 30 2 ✓ Binary-Dataset-of-Phishing-and-Legitimate-URLs 11,000 14 2 pendigits 10,992 16 10 WBCAtt 10,298 11 5 artificial-characters 10,218 7 10 ✓ internet_usage 10,108 71 46 ✓ robert 10,000 7,200 10 ✓ dilbert 10,000 2,000 5 ✓ shrutime 10,000 10 2 JapaneseV owels 9,961 14 9 GesturePhaseSegmentationProcessed 9,873 32 5 ✓ FICO-HELOC-cleaned 9,871 23 2 ✓ IBRD_Loans_Classification 9,215 6 10 Indian_pines 9,144 220 8 SpeedDating 8,378 120 2 ✓ ✓ fabert 8,237 795 7 ✓ mushroom 8,124 21 2 isolet 7,797 617 26 eye_movements 7,608 23 2 ✓ twonorm 7,400 20 2 blastchar 7,043 19 2 musk 6,598 167 2 first-order-theorem-proving 6,118 51 6 ✓ HMEQ_Data 5,960 12 2 philippine 5,832 308 2 ✓ optdigits 5,620 62 10 BachChoralHarmony 5,586 15 68 page-blocks 5,473 10 5 wall-robot-navigation 5,456 24 4 christine 5,418 1,611 2 ✓ phoneme 5,404 5 2 ✓ Is_fraud 5,227 19 2 ✓ sylvine 5,124 20 2 ✓ Satellite 5,100 36 2 ✓ Multiclass_Classification_for_Corporate_Credit 5,000 7 10 Personal-Loan-Modeling 5,000 12 2 churn 5,000 20 2 ✓ waveform-5000 5,000 40 3 air-quality-and-pollution-assessment 5,000 9 4 Heart_Failure_Prediction 5,000 12 2 compas-two-years 4,966 11 2 ✓ wine-quality-white 4,898 11 7 ✓ wilt 4,839 5 2 ✓ spambase 4,601 57 2 StackOverflow-polarity 4,423 1 3 ✓ hiva_agnostic 4,229 1,617 2 Fraud-Detection-Updated 4,156 27 2 ada 4,147 46 2 ✓ Continued on next page 24 Table 9: The 253 Classification Datasets of the Pretraining Corpus. 
Dataset n m C B T analcatdata_supreme 4,052 7 10 hypothyroid 3,770 27 3 Bioresponse 3,751 1,776 2 ✓ Internet-Advertisements 3,279 1,558 2 ✓ led24 3,200 24 10 kr-vs-kp 3,196 36 2 ✓ splice 3,190 60 3 ✓ dna 3,186 180 3 ✓ gina 3,153 970 2 ✓ madeline 3,140 259 2 ✓ jasmine 2,984 144 2 ✓ cjs 2,796 29 6 madelon 2,600 500 2 ozone-level-8hr 2,534 72 2 ✓ segment 2,310 16 7 ✓ cardiotocography 2,126 23 10 Estimation_of_Obesity_Levels 2,111 16 7 kc1 2,109 21 2 ✓ Corporate-Credit-Rating 2,026 30 8 ✓ mfeat-factors 2,000 216 10 ✓ South_Asian_Churn_dataset 2,000 13 2 mfeat-zernike 2,000 47 10 ✓ mfeat-fourier 2,000 76 10 ✓ pbcseq 1,945 18 3 steel-plates-fault 1,941 27 7 ✓ car 1,728 6 4 ✓ GAMETES_Heterogeneity 1,600 20 2 one-hundred-plants-texture 1,599 64 100 ✓ audit-data 1,552 35 2 OV A_Breast 1,545 10,935 2 amazon-commerce-reviews 1,500 10,000 50 ✓ yeast 1,484 8 10 ✓ cmc 1,473 9 3 ✓ ibm-employee-attrition 1,470 31 2 pc4 1,458 37 2 ✓ Data_Science_Nigeria_Telecoms_Churn 1,400 14 2 hepatitis_c_virus_hcv_for_egyptian_patients 1,385 28 4 Bank-Note-Authentication-UCI 1,372 4 2 baseball 1,340 16 3 Titanic 1,309 13 2 ✓ mental-health-in-tech-survey 1,259 26 2 ✓ hill-valley 1,212 100 2 Heart-Disease-Dataset-(Comprehensive) 1,190 11 2 ✓ volcanoes-e1 1,183 3 5 Airlines-Tweets-Sentiments 1,097 1 3 ✓ MiceProtein 1,080 77 8 cnae-9 1,080 856 9 ✓ solar_flare 1,058 9 5 ✓ qsar-biodeg 1,055 41 2 ✓ SOCC 1,043 13 4 ✓ rmftsa_sleepdata 1,024 2 4 autoUniv-au1-1000 1,000 20 2 collins 1,000 19 30 credit-g 1,000 20 2 ✓ Continued on next page 25 Table 9: The 253
Classification Datasets of the Pretraining Corpus. Dataset n m C B T vowel 990 12 11 The-Estonia-Disaster-Passenger-List 989 6 2 ✓ xd6 973 9 2 tokyo1 959 42 2 tic-tac-toe 958 9 2 Tour-and-Travels-Customer-Churn-Prediction 954 6 2 acp-breast-cancer 949 1 4 ✓ oil_spill 937 48 2 anneal 898 18 5 Cervical_Cancer_Risk_Factors 858 30 5 vehicle 846 18 4 ✓ analcatdata_authorship 841 70 4 glioma_grading_clinical_and_mutation_features 839 23 2 analcatdata_dmft 797 4 6 regensburg_pediatric_appendicitis 780 55 3 QSAR_Bioconcentration_classification 779 12 3 ✓ Diabetes_Dataset 768 8 2 blood-transfusion-service-center 748 4 2 ✓ eucalyptus 736 19 5 ✓ breast-w 699 9 2 Australian 690 14 2 ✓ soybean 683 35 19 profb 672 8 2 ✓ Student_Performance 666 11 4 balance-scale 625 4 3 ✓ Loan-Predication 614 11 2 monks-problems-2 601 6 2 ✓ synthetic_control 600 60 6 ilpd 583 10 2 micro-mass 571 1,082 20 ✓ wdbc 569 30 2 arsenic-male-lung 559 4 2 cylinder-bands 540 34 2 climate-model-simulation-crashes 540 18 2 water-treatment 527 36 2 Early-Stage-Diabetes-Risk-Prediction-Dataset 520 16 2 dresses-sales 500 12 2 irish 500 5 2 arrhythmia 443 262 10 wholesale-customers 440 7 2 vote 435 16 2 cars 406 7 3 chronic-kidney-disease 400 25 2 differentiated_thyroid_cancer_recurrence 383 16 2 colic 368 26 2 ✓ breast-cancer 286 9 2 qualitative-bankruptcy 250 6 2 us-2020-presidential-election-speeches 245 5 7 ✓ audiology 192 57 8 ✓ bone_marrow_transplant_children 187 36 2 darwin 174 450 2 tae 151 5 3 EgyptianSkulls 150 4 5 lymph 148 18 3 ✓ Continued on next page 26 Table 9: The 253 Classification Datasets of the Pretraining Corpus. Dataset n m C B T arcene 100 9,920 2 ✓ Table 10: The 93 Regression Datasets of the Pretraining Corpus, with their nexamples, mfeatures, presence in a benchmark ( B) and whether they are textual ( T). 
Dataset n m B T delays_zurich_transport 5,465,575 14 ✓ New-York-Citi-Bike-Trip 4,500,000 7 USA-Airport-Dataset 3,606,803 14 ✓ New-York-Taxi-Trip 2,083,778 21 Buzzinsocialmedia_Twitter 583,250 77 ✓ nyc-taxi-green-dec-2016 581,835 18 ✓ 515K-Hotel-Reviews-Data-in-Europe 515,738 16 ✓ dionis 416,188 54 ✓ Yolanda 400,000 100 ✓ Allstate_Claims_Severity 188,318 130 ✓ Football_players_Fifa_stats 183,142 37 black_friday 166,821 9 ✓ medical_charges 163,065 3 ✓ football-manager-data 159,541 87 ✓ wave_energy 72,000 32 ✓ video_transcoding 68,784 18 ✓ dating_profile 59,946 30 ✓ diamonds 53,940 9 ✓ sarcos 48,933 21 ✓ physiochemical_protein 45,730 9 ✓ fried 40,768 10 2dplanes 40,768 10 mv 40,768 10 Perth-House-Prices 33,656 17 ✓ cps88wages 28,155 6 ✓ fps_benchmark 24,624 39 ✓ news_popularity2 24,007 4 ✓ house_16H 22,784 16 ✓ health_insurance 22,272 11 ✓ house_sales 21,613 21 ✓ superconductivity 21,263 81 ✓ california_housing 20,640 8 ✓ avocado-sales 18,249 11 Bike_Sharing_Demand 17,379 12 ✓ elevators 16,599 18 ✓ FIFA20-Players 14,999 72 ✓ miami_housing 13,932 15 ✓ naval_propulsion_plant 11,934 14 ✓ Brazilian_houses 10,692 11 ✓ German-House-Prices 10,552 24 ✓ sulfur 10,081 5 ✓ climate_change_impact 10,000 14 grid_stability 10,000 12 ✓ Credit-Card-Dataset-for-Clustering 8,949 16 topo_2_1 8,885 261 ✓ yprop_4_1 8,885 212 ✓ Continued on next page 27 Table 10: The 93 Regression Datasets of the Pretraining Corpus. Dataset n m B T seoul_bike_sharing_demand_cat 8,760 13 pumadyn32nh 8,192 32 ✓ kin8nm 8,192 8 ✓ cpu_activity 8,192 21 ✓ bank32nh
8,192 32 Pollen-Luxembourg-1992-2018 7,784 36 colleges 7,063 44 ✓ ✓ wind 6,574 14 QSAR-TID-10980 5,766 1,024 ✓ QSAR-TID-11 5,742 1,024 ✓ Myanmar-Air-Quality 5,122 10 Santander_transaction_value 4,459 4,735 ✓ SAT11-HAND-runtime-regression 4,440 114 ✓ Mercedes_Benz_Greener_Manufacturing 4,209 364 ✓ abalone 4,177 8 ✓ pollen 3,848 4 space_ga 3,107 6 ✓ scotch-whiskey-reviews-update-2020 2,247 4 ✓ quake 2,178 3 ✓ auction_verification 2,043 7 ✓ us_crime 1,994 126 ✓ airfoil_self_noise 1,503 5 ✓ house_prices 1,460 80 house_prices_nominal 1,460 79 ✓ NBA-PLAYERS–2016-2019 1,408 43 ✓ Insurance-Premium-Data 1,338 6 Moneyball 1,232 14 ✓ socmob 1,156 5 ✓ MIP-2016-regression 1,090 116 ✓ geographical_origin_of_music 1,059 116 ✓ concrete_compressive_strength 1,030 8 ✓ Household-monthly-electricity-bill 1,000 9 stock 950 9 QSAR_fish_toxicity 908 6 ✓ cars 804 17 ✓ energy_efficiency 768 8 ✓ kdd_el_nino-small 709 8 student_performance_por 649 30 ✓ strikes 625 6 sensory 576 11 ✓ meta 528 21 forest_fires 517 12 ✓ rmftsa_ladata 508 10 boston 506 13 ✓ no2 500 7 Diabetes(scikit-learn) 442 10 NBA-2k20-player-dataset 439 14 ✓ baseball-hitter 263 22 bodyfat 252 14 Lisbon-House-Prices 246 13 tecator 240 124 ✓ 28 D Benchmark Datasets This appendix elaborates on the benchmark presented in §4. We consider all datasets proposed by AutoML Multimodal Benchmark (SHI) [ 60],Vectorizing (VEC) [ 31], and CARTE-Benchmark (CRT) [45], resulting in a final set of 50 datasets. We deduplicate datasets that appear as-is in more than one benchmark. In addition, since CARTE explores the concept of multi-table learning, they introduce highly-overlapping datasets for which we remove one variant (see 4.3 and B.2 in their paper). Table 11 presents the classification datasets and Table 12 the regression ones. 
Each table includes an internal ID used for reference, the Dataset name, the number of examples n and of features m, and the number of classes C for classification.26 Finally, we also indicate the benchmark sources where each dataset appears. In addition, Table 13 presents the full benchmark with a short description per dataset, and Table 14 details the datasets removed during the deduplication process. Most of the excluded datasets are regression datasets from the CARTE-Benchmark, because of its highly overlapping nature.

Table 11: The 14 classification datasets of the benchmark, with their n examples, m features, C classes, and presence in the SHI, VEC and CRT benchmarks.

ID Dataset n m C SHI VEC CRT
C01 women_clothing_review 18,788 10 5 ✓
C02 us-accidents 7,728,394 42 4 ✓ ✓
C03 data_scientist_salary 15,841 6 6 ✓
C04 imdb_genre_prediction 800 11 2 ✓
C05 product_sentiment_machine_hack 5,091 2 4 ✓
C06 google_qa_question_type_reason 4,863 39 5 ✓
C07 michelin-guide-restaurants-2021 17,735 11 5 ✓
C08 fake_job_postings2 12,725 5 2 ✓
C09 jigsaw_unintended_bias100K 100,000 40 2 ✓
C10 yelp-reviews-dataset 10,000 5 5 ✓
C11 news_channel 20,284 17 6 ✓
C12 wine_reviews 84,123 5 30 ✓ ✓ ✓
C13 kick_starter_funding 86,502 9 2 ✓
C14 melbourne_airbnb 18,316 89 10 ✓

E Baselines

In this appendix we first discuss models excluded from the evaluation due to data leakage concerns, and then cover implementation details for the baselines used in our main experiments in §4.

E.1 Excluded Baselines

As opposed to GBDTs or single-dataset deep learning methods, evaluating pretrained tabular models introduces additional complexity. Indeed, leakage can come
in multiple forms. When LLMs are involved, there is a risk of memorization [8], and models trained on synthetic datasets [37], which try to mimic real-world distributions, can be unintentionally biased towards popular benchmarks. While these two forms of leakage are subtle and hard to detect, a more direct form must be strictly avoided: using the same dataset (or a variant of it) during pretraining and then evaluating it as a downstream task. In such a scenario, severe data leakage is inevitable, especially when running with multiple random test splits. The rest of this section explains how both TP-BERTa [80] and CM2 [82] suffer from such contamination with respect to our benchmark. As we briefly mention in §7, we advocate for improving TFM research by encouraging models that are practical to evaluate, by releasing several versions of each model, and by providing default hyperparameters.

TP-BERTa We exclude TP-BERTa from our evaluation for two key reasons. First, their implementation assumes that every example is treated as a single serialized sequence, which allows for

26We treat ranking problems with up to 10 discrete values as multiclass problems.

Table 12: The 36 regression datasets of the benchmark, with their n examples, m features, and presence in the SHI, VEC and CRT benchmarks.
ID Dataset n m SHI VEC CRT
R01 used-cars-dataset-cardekho 37,814 112 ✓
R02 second-hand-mercedes-benz 16,392 7 ✓
R03 animeplanet-recommendation 14,391 14 ✓
R04 ML/DS-Salaries 119,628 9 ✓
R05 Babies-R-Us 5,085 12 ✓
R06 employee_salaries 9,228 11 ✓ ✓
R07 spotify-tracks-dataset 114,000 18 ✓
R08 california_house_price 37,951 39 ✓
R09 fifa 19,178 28 ✓
R10 coffee-scrap-coffeereview 2,440 17 ✓
R11 BikeWale 9,003 6 ✓ ✓
R12 used-car-prices-in-pakistan 72,655 9 ✓
R13 bookprice_prediction 4,989 8 ✓
R14 ae_price_prediction 22,662 12 ✓
R15 Employee-remuneration 44,574 5 ✓ ✓
R16 filmtv-movies-dataset 41,399 17 ✓
R17 free-7-million-company-dataset 7,173,426 7 ✓ ✓
R18 museums 22,290 21 ✓
R19 vivino-wine-data 8,650 6 ✓
R20 wikiliq-dataset 12,569 12 ✓
R21 beer-profile-and-ratings 3,197 24 ✓
R22 korean-drama 1,647 9 ✓
R23 videogamesales 16,598 5 ✓
R24 zomato-bangalore-restaurants 41,665 15 ✓ ✓
R25 the-movies-dataset 45,460 20 ✓
R26 nba-draft-basketball 1,669 22 ✓
R27 Goodreads 3,967 14 ✓
R28 Rotten-Tomatoes 7,158 15 ✓
R29 saudi-arabia-used-cars-dataset 8,035 12 ✓
R30 top-ramen-ratings-2022 4,105 4 ✓ ✓
R31 Journal-Score-SJR 31,136 21 ✓
R32 chocolate-bar-ratings 1,795 8 ✓
R33 mercari_price_suggestion100K 100,000 9 ✓
R34 wine-price-on-polish-market 2,247 18 ✓
R35 clear-corpus 4,724 30 ✓ ✓
R36 jc_penney_products 10,860 5 ✓

a maximum length of 512 tokens, as elaborated in Appendix A.1. While this decision is efficient for datasets with few features and no free text, around half of the datasets in our benchmark exceed this limit, as they contain either too many features or long free-texts. Second, TP-BERTa's pretraining uses datasets that appear in our evaluation set, as listed in Table 6 of their paper [80]. Several of these overlap directly with datasets in our benchmark (§4), e.g., 1510_fifa, 1368_IMDb-Ratings and 1639_Melbourne, disqualifying them for our purposes. Furthermore, we observe a concerning overlap between their pretraining
and downstream task datasets (e.g., airlines, sf police and diabetes). We believe that this calls the validity of their evaluation into question, and that such contamination poses a serious challenge for the TFM community, one which could be substantially addressed by better tabular data repositories [73].

CM2 CM2 was pretrained over OpenTabs, a compilation of more than 2,000 datasets drawn from public tabular data repositories, including OpenML and Kaggle. While this collection is valuable, pretraining a model over these datasets compromises further evaluation of any of them. Naturally, the overlap with our benchmark here is extremely high, making it infeasible to use as a

Table 13: Benchmark Datasets Description

ID Description
C01 Women Clothing E-Commerce Reviews
C02 US Accidents between 2016 and 2023
C03 Indian Data Scientist Salary Prediction
C04 IMDB Movies Genre Prediction
C05 Product Sentiment Analysis
C06 Google QA Question Type Reason Explanation
C07 Michelin Guide Restaurants Awards
C08 Fake Job Posting Detection
C09 Online Social Media Comments Toxicity
C10 YELP Dataset Reviews
C11 News Channel Prediction
C12 Wine Reviews for Variety Prediction
C13 Kickstarter Funding Prediction
C14 Melbourne AirBnB Listings
R01 Used cars and listing prices on the Cardekho website
R02 Second-hand Mercedes Benz car prices in Italy
R03 Anime-Planet Recommendation Database 2020
R04 Salaries of ML/DS Professionals Worldwide
R05 Price prediction for baby products from the Babies R Us website
R06 Employee Salary in Montgomery County, MD
R07 Spotify Tracks Popularity
R08 California Houses 2020 Prices
R09 FIFA 2022 Players Wages
R10 Coffee Review Rating
R11 Bikes and scooters from the BikeWale website in India
R12 Used car prices in Pakistan 2021
R13 Book Price Prediction
R14 American Eagle Retailer Price Prediction
R15 Employee Remuneration and Expenses - Vancouver
R16 FilmTV movies dataset rating
R17 Company size prediction
R18 General information on US museums
R19 Vivino Spanish Wine Data
R20
WikiliQ - Alcohol dataset (May, 2022)
R21 Tasting profiles and consumer reviews for beers
R22 Korean Dramas
R23 Video Games Sales
R24 Zomato Restaurants in Bengaluru
R25 Metadata of movies released until 2017 for box-office revenues
R26 NBA Draft Basketball Player Data 1989-2021
R27 Books ratings
R28 Rotten Tomatoes Movie Ratings
R29 Saudi Arabia Used Cars Prices from the Syarah Website
R30 Ramen Ratings
R31 Academic impact of Scientific Journals
R32 Chocolate Bar expert ratings
R33 Mercari Online Marketplace Product Prices
R34 Information about wines on the Polish market
R35 Readability scores for text passages spanning various genres and time periods
R36 JC Penney Product Prices in Retailer Website

Table 14: Excluded datasets, with their benchmark origin and reason for removal: (1) Duplicate, for duplicate datasets (the retained duplicate is listed alongside), (2) Unavailable, for datasets with inconvenient or unavailable hosting outside tabular repositories, and (3) Pretraining, for two (regression) datasets mistakenly used for the pretraining.

Dataset Benchmark Reason
google_qa_answer_type SHI Duplicate google_qa_question_type
news_popularity2 SHI Pretraining
US Presidential VEC Unavailable
Journal Influence VEC Duplicate Journal-Score-SJR
Buy Buy Baby CRT Duplicate Babies-R-Us
Bikedekho CRT Duplicate BikeWale
Journal Score JCR CRT Duplicate Journal-Score-SJR
Japanese Anime CRT Duplicate animeplanet-recommendation
Mydramalist CRT Duplicate korean-drama
Prescription Drugs CRT Unavailable
Roger Ebert CRT Unavailable
US Presidential CRT Unavailable
Used Cars 24 CRT
Duplicate used-car-prices-in-pakistan
Whisky CRT Pretraining
Wine.com CRT Duplicate wine_reviews
WineEnthusiasts CRT Duplicate wine_reviews

baseline. Interestingly, their repository27 lists TP-BERTa as a method trained on a subset of OpenTabs, reinforcing that the leakage is shared between the models.

E.2 Baselines Implementation and Hyperparameters

This section outlines the implementation and hyperparameter tuning strategy used for the baselines reported in §4. While each baseline has its own model-specific preprocessing pipeline, we apply two shared preprocessing steps to both TabSTAR (as detailed in Appendix A.1) and all the baselines: (1) we preprocess dates using skrub's DatetimeEncoder, and (2) we apply a clipped z-score transformation to the target variables of regression datasets.

Textual Feature Handling CARTE natively supports textual inputs, and so does the TabPFN-v2 API client,28 although its implementation details remain undisclosed. On the other hand, GBDTs do not natively support free-text features,29 so we preprocess these features into fixed-size embeddings using skrub's TextEncoder, which internally applies a frozen e5-small-v2 encoder to each semantic column. This aligns with the encoder used in TabSTAR, enabling a fair comparison across models. There are, however, two key differences in how TabSTAR handles these embeddings. First, its embeddings are finetuned specifically for the task, contributing significantly to its strong performance, as shown in §5 and further analyzed in §G.2. Second, skrub applies dimensionality reduction to 30 dimensions, as proposed by [31]. This compressed representation performs comparably to the full embedding space, while offering improved inference efficiency.
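The shared target transformation for regression can be sketched as below. The paper does not state the clipping bound, so the ±3 standard deviations threshold here is an assumption for illustration:

```python
import numpy as np

def clipped_zscore(y, clip=3.0):
    """Standardize a regression target, then clip extreme values.
    The clip bound of 3 standard deviations is an assumed value."""
    z = (y - y.mean()) / y.std()
    return np.clip(z, -clip, clip)

y = np.concatenate([np.zeros(20), [100.0]])  # one extreme outlier
z = clipped_zscore(y)
assert z.max() == 3.0  # the outlier is clipped, so it cannot dominate the MSE loss
```

This is also the clipping referred to in Appendix B.1, which makes the R² ≈ 1 − MSE equivalence only approximate.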
E.2.1 TabPFN-v2

We run TabPFN-v2 using their API client, which supports text features.30 While the intrinsic details of their textual handling remain undocumented, it is reasonable to assume that it resembles the processing we apply to GBDTs, as their model leverages ICL and their architecture has no textual encoder.

27https://github.com/Chao-Ye/CM2
28https://github.com/PriorLabs/tabpfn-client
29CatBoost includes a built-in text module, but it underperforms compared to dense text embeddings.
30We use v2.0.8, the latest version available at the time of running the experiments.

E.2.2 CARTE

We run CARTE using its package,31 which inherently performs k-fold cross-validation. After consulting with the authors, we set k = 5 for efficiency instead of their default of 10. We perform a grid search over their recommended learning rates,32 and take the best-performing variant per dataset split.

E.2.3 CatBoost and CatBoost-Tuned

We run CatBoost using the catboost package33 with the default configuration suggested by [25], setting early_stopping_rounds = 50, od_pval = 0.001 and iterations = 2000. For the tuned version, we use the Optuna package34 with random search, with a budget of 4 hours for every run, parallelizing trials on 8 CPU cores. We use 5-fold cross-validation and select the best configuration based on the mean score, which we then use to retrain the model on the full training data. For the hyperparameter space, we follow the search suggested by [37], as detailed in Table 15.

Table 15: CatBoost-Tuned Hyperparameters Search Space

Hyperparameter                Search Space
lr                            logU(e⁻⁵, 1)
random_strength               U{1, 2, ..., 20}
l2_leaf_reg                   logU(1, 10)
bagging_temperature           U(0.0, 1.0)
leaf_estimation_iterations    U{1, 2, ..., 20}
iterations                    U{100, 101, ..., 4000}

E.2.4 XGBoost and XGBoost-Tuned

We run
XGBoost using the xgboost package35 and follow the same procedure as for CatBoost, with additional preprocessing (e.g., transforming categorical variables into one-hot encodings). For the default configuration, we follow the suggestion of [25] and use booster = "gbtree", early_stopping_rounds = 50, and n_estimators = 2000. For the tuned variant, we follow the hyperparameter search space suggested by [37], as shown in Table 16.

Table 16: XGBoost-Tuned Hyperparameters Search Space

Hyperparameter        Search Space
learning_rate         logU(e⁻⁷, 1)
max_depth             U{1, 2, ..., 10}
subsample             U(0.2, 1)
colsample_bytree      U(0.2, 1)
colsample_bylevel     U(0.2, 1)
min_child_weight      logU(e⁻¹⁶, e⁵)
alpha                 logU(e⁻¹⁶, e²)
reg_lambda            logU(e⁻¹⁶, e²)
gamma                 logU(e⁻¹⁶, e²)
n_estimators          U{100, 101, ..., 4000}

E.2.5 Random Forest

We treat Random Forest as a weak baseline, establishing a lower-bound reference for each dataset split. We run it with the sklearn package36 and use its default configuration with n_estimators = 100.

31https://github.com/soda-inria/carte
32{2.5×10⁻⁴, 5×10⁻⁴, 7.5×10⁻⁴, 2.5×10⁻³, 5×10⁻³, 7.5×10⁻³}
33https://pypi.org/project/catboost/
34https://pypi.org/project/optuna/
35https://pypi.org/project/xgboost/
36https://scikit-learn.org/

Table 17: Classification performance per dataset (up to 10K). The top performance score is bolded before rounding. We report average AUROC with 95% CIs.
ID CARTE CatB CatB-T RF TabPFN TabSTAR XGB XGB-T
C01 88.4±0.3 90.2±0.3 90.3±0.4 88.8±0.3 90.3±0.3 90.8±0.3 89.3±0.4 90.2±0.4
C02 97.3±0.5 97.4±0.4 96.3±0.5 97.9±0.5 97.2±0.3 97.6±0.3
C03 82.5±0.3 82.0±0.3 81.2±0.6 77.3±0.3 82.4±0.3 83.0±0.3 80.5±0.3 82.1±0.3
C04 84.5±1.9 85.4±1.8 82.4±3.0 88.3±1.5 83.7±2.3 80.7±2.9 85.4±2.0
C05 88.5±0.9 90.6±1.0 90.9±0.5 88.0±1.1 91.2±0.7 91.3±0.8 88.1±1.1 90.3±0.6
C06 80.9±1.5 81.3±1.3 82.6±1.1 73.6±1.2 87.7±0.5 87.0±0.6 81.5±1.0 83.7±0.9
C07 90.1±0.4 90.3±0.3 90.6±0.3 85.1±0.7 89.8±0.4 91.5±0.3 87.2±0.6 89.3±0.5
C08 90.8±1.6 93.0±0.9 93.2±1.0 90.2±1.3 91.3±1.1 93.5±1.5 91.9±1.1 94.4±0.7
C09 99.8±0.1 99.8±0.1 99.6±0.2 99.8±0.1 99.3±0.1 99.7±0.1 99.8±0.1
C10 86.7±0.2 87.0±0.2 84.6±0.2 87.6±0.4 89.1±0.2 85.4±0.3 86.8±0.2
C11 78.4±0.5 79.7±0.5 80.7±0.4 76.2±0.6 81.4±0.4 79.2±0.5 78.2±0.6 80.2±0.5
C12 96.0±0.2 97.5±0.1 97.7±0.1 95.3±0.2 98.3±0.1 96.6±0.2 97.3±0.1
C13 70.9±0.8 73.7±1.1 74.1±1.0 71.1±1.1 72.3±0.9 75.0±0.7 70.2±0.8 74.1±1.1
C14 83.5±0.3 84.0±0.5 80.1±0.3 84.0±0.3 81.4±0.4 83.2±0.4

F Extended Main Results

In this appendix we provide the main results for the experiment reported in §5. As elaborated in §4, each model is evaluated on each dataset across 10 splits. Since performance scales vary between datasets, we follow the normalization approach proposed by [37], rescaling all scores to the [0,1] range. The final reported performance for each model is the average over these normalized runs, and we compute 95% confidence intervals using the standard normal approximation: μ̂ ± 1.96·σ̂/√n.

In this section, we often use abbreviated model names for conciseness: we refer to the CatBoost variants as CatB and CatB-T, with the latter being the tuned version. Similarly, we use XGB and XGB-T for XGBoost. We keep the abbreviation of Random Forest (RF) and shorten TabPFN-v2 to simply TabPFN. For models that run on an unlimited number of training examples, we add a -U suffix, i.e. TabSTAR-U, CatB-T-U, and XGB-T-U.
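The score aggregation described above can be sketched as follows: per dataset run, min-max normalize the models' scores to [0,1], then average per model and attach a normal-approximation confidence interval. A sketch under these assumptions, not the paper's code:

```python
import numpy as np

def normalize_scores(scores):
    """scores: (n_runs, n_models) raw metrics for one dataset.
    Rescale each run so the worst model maps to 0 and the best to 1."""
    lo = scores.min(axis=1, keepdims=True)
    hi = scores.max(axis=1, keepdims=True)
    return (scores - lo) / (hi - lo)

def mean_with_ci(x):
    # 95% CI via the standard normal approximation: mu_hat +/- 1.96 * sigma_hat / sqrt(n).
    mu, half = x.mean(), 1.96 * x.std(ddof=1) / np.sqrt(len(x))
    return mu, (mu - half, mu + half)

scores = np.array([[0.80, 0.90, 0.85],    # run 1: model 1 best, model 0 worst
                   [0.70, 0.95, 0.90]])   # run 2: same ordering
norm = normalize_scores(scores)
mu, ci = mean_with_ci(norm[:, 1])
assert mu == 1.0  # a model that is best in every run gets a normalized score of 1
```

Note that the normalized score is relative to the competing models, which is why win rates (F.2) can be viewed as the two-model special case.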
F.1 Dataset-Level Performance

We report AUROC for classification and R² for regression, with 95% CIs computed over the 10 runs for each dataset. Tables 17 and 18 summarize classification performance on datasets with up to 10K and over 10K examples, respectively. Tables 19 and 20 do the same for regression tasks. For conciseness,
https://arxiv.org/abs/2505.18125v1
datasets are referred to by their ID from Appendix D. As discussed in §5, TabPFN-v2 is unable to run on 4 datasets: C12, because it is a multiclass problem with more than 10 classes, and C02, C14, and R01, because the model supports inference only up to 500,000 cells. Attempts to run the model on a subset of the examples led to significantly worse performance, so we do not report them, to allow a fair comparison. Furthermore, CARTE is unable to run on 15 of the datasets in the benchmark due to a known bug^37 in its implementation of the PowerTransformation, which struggles in the presence of features with too few unique values.

F.2 Head-to-Head Comparisons

We compare the performance of TabSTAR against each of the models in head-to-head comparisons. We report the win rate, which can be seen as a special case of the normalized metric with only two models. We exclude failed runs when comparing against CARTE and TabPFN-v2. Table 21 shows the performance of TabSTAR against all models competing with up to 10K examples, for both regression and classification, with 95% CIs over the win rate. Table 22 does the same for TabSTAR-Unlimited.

^37 https://github.com/soda-inria/carte/issues/23

Table 18: Classification performance per dataset (above 10K examples). The top score per dataset is bolded (bolding is determined before rounding). We report average AUROC with 95% CIs.
ID CatB-T CatB-T-U TabPFN TabSTAR TabSTAR-U XGB-T XGB-T-U
C01 90.3±0.4 90.5±0.4 90.3±0.3 90.8±0.3 91.2±0.3 90.2±0.4 90.4±0.3
C02 97.4±0.4 98.3±0.3 97.9±0.5 98.4±0.2 97.6±0.3 98.2±0.3
C03 81.2±0.6 81.5±0.8 82.4±0.3 83.0±0.3 83.8±0.5 82.1±0.3 82.3±0.3
C07 90.6±0.3 91.0±0.4 89.8±0.4 91.5±0.3 91.9±0.3 89.3±0.5 89.9±0.5
C08 93.2±1.0 93.1±1.2 91.3±1.1 93.5±1.5 95.1±0.9 94.4±0.7 94.4±0.9
C09 99.8±0.1 99.8±0.1 99.8±0.1 99.3±0.1 99.5±0.1 99.8±0.1 99.8±0.1
C11 80.7±0.4 81.8±0.4 81.4±0.4 79.2±0.5 81.0±0.5 80.2±0.5 81.4±0.4
C12 97.7±0.1 98.5±0.1 98.3±0.1 99.1±0.1 97.3±0.1 98.4±0.1
C13 74.1±1.0 76.7±0.8 72.3±0.9 75.0±0.7 77.9±1.0 74.1±1.1 76.9±0.9
C14 84.0±0.5 84.8±0.4 84.0±0.3 85.1±0.3 83.2±0.4 84.2±0.4

Table 19: Regression performance per dataset (up to 10K examples). The top score per dataset is bolded (bolding is determined before rounding). We report average R² with 95% CIs.

ID CARTE CatB CatB-T RF TabPFN TabSTAR XGB XGB-T
R01 100±0.0 100±0.0 100±0.0 100±0.0 100±0.0 99.9±0.1 100±0.0
R02 98.3±1.0 98.3±0.9 98.0±1.0 98.4±0.9 98.0±1.1 97.9±1.1 98.3±0.9
R03 71.8±0.5 73.8±0.4 74.1±0.3 68.4±0.6 74.9±0.4 70.9±0.8 69.1±0.5 73.7±0.5
R04 86.2±7.4 85.1±7.1 85.1±6.8 86.1±5.2 81.5±8.7 85.8±7.5 87.5±3.7
R05 93.2±0.8 89.5±1.3 90.0±1.2 86.7±1.3 92.7±0.9 93.2±1.2 87.1±1.4 90.1±1.3
R06 97.7±0.8 98.1±0.5 98.2±0.5 97.2±1.0 98.5±0.6 98.1±0.3 97.1±0.7 97.9±0.7
R07 71.6±1.3 66.9±0.8 67.8±0.8 61.5±1.2 62.2±1.3 71.1±1.8 61.9±1.5 68.8±0.8
R08 93.2±0.8 93.0±0.7 93.2±0.7 92.2±0.7 93.9±0.7 92.8±0.6 92.3±0.7 93.1±0.7
R09 89.2±0.5 89.2±0.6 89.3±0.4 88.6±0.7 89.8±0.5 88.8±0.5 88.2±0.7 89.3±0.5
R10 99.5±0.2 99.0±0.6 98.9±0.6 98.1±0.3 99.1±0.3 99.5±0.1 98.6±0.3 99.4±0.2
R11 94.2±0.9 94.0±0.9 92.5±1.3 94.5±0.9 93.9±1.1 93.3±1.1 94.2±0.8
R12 98.5±0.3 98.5±0.2 98.0±0.3 98.5±0.2 98.3±0.2 98.2±0.3 98.5±0.2
R13 52.8±2.1 57.6±2.7 58.2±2.6 51.8±3.5 53.8±2.7 53.4±3.0 49.0±3.3 57.5±2.7
R14 96.6±0.2 97.3±0.1 97.3±0.1 96.8±0.1 96.2±0.2 96.4±0.1 97.1±0.1 97.1±0.1
R15 79.5±0.9 79.9±1.1 77.3±1.2 79.4±1.0 78.9±1.5 76.8±1.2 80.3±1.0
R16 98.7±0.0 98.7±0.0 98.7±0.0 97.7±0.1 98.8±0.0 98.7±0.0 97.7±0.1 97.8±0.1
R17 95.3±2.0 94.8±2.4 95.2±2.0 94.2±2.5 95.1±2.1 94.7±2.3 94.5±2.6 95.4±1.9
R18 98.1±0.4 98.2±0.4 98.2±0.4 97.7±0.4 93.1±1.3 97.4±0.6 97.5±0.8 98.3±0.4
R19 85.5±0.8 85.8±0.9 85.3±0.9 84.0±1.1 83.8±1.1 83.9±1.0 82.4±2.0
R20 96.4±0.5 96.0±0.6 96.2±0.5 95.7±0.6 93.9±1.0 95.6±0.9 95.3±0.9 96.2±0.5
R21 92.3±1.2 92.5±1.1 92.6±1.1 91.8±1.1 93.3±1.1 92.3±1.0 91.6±1.0 92.6±1.0
R22 45.2±5.5 45.2±5.9 39.1±6.4 43.8±5.5 39.7±5.5 36.1±7.1 46.8±5.8
R23 85.0±1.8 85.4±1.2 85.5±1.1 81.8±1.5 84.8±1.6 83.0±1.6 83.3±1.2 85.1±1.2
R24 86.0±0.6 84.0±0.8 85.8±0.7 79.7±1.0 70.3±1.6 81.8±1.8 82.9±0.7 85.9±0.9
R25 94.3±0.6 95.4±0.5 95.3±0.5 94.8±0.5 86.7±1.0 94.8±0.5 94.6±0.6 95.4±0.5
R26 99.8±0.1 99.4±0.1 99.4±0.1 99.0±0.2 99.8±0.1 99.6±0.1 99.1±0.2 99.5±0.1
R27 81.9±1.6 85.2±1.3 85.3±1.4 84.8±1.3 82.2±1.6 82.0±1.6 83.2±1.2 85.5±1.2
R28 52.5±2.6 53.4±2.7 53.3±2.8 46.0±2.7 61.7±2.1 51.5±2.9 45.3±2.3 53.5±2.7
R29 94.4±0.7 94.4±0.7 93.0±1.0 95.7±0.6 94.3±0.9 93.4±0.8 94.5±0.8
R30 23.8±4.2 22.7±4.1 24.6±3.9 23.5±3.5 20.9±4.8 15.7±5.0 17.5±4.2 25.5±3.7
R31 92.1±0.3 92.1±0.4 92.0±0.4 89.9±0.5 93.2±0.3 91.7±0.4 90.3±0.5 92.0±0.4
R32 28.8±6.3 28.2±5.8 28.6±5.9 26.2±5.4 19.4±5.6 22.7±7.3 31.6±6.2
R33 46.6±1.5 47.4±1.7 47.8±1.7 40.2±1.6 44.9±1.6 46.0±1.8 40.3±2.1 47.9±1.6
R34 90.9±2.5 91.4±2.1 88.7±3.1 92.3±1.8 89.1±2.9 88.9±3.6 91.6±2.1
R35 85.9±0.6 84.4±0.5 84.5±0.5 81.2±0.4 85.2±0.7 85.7±0.8 81.6±0.8 84.4±0.3
R36 96.3±0.9 95.2±0.9 95.5±0.9 94.3±1.0 91.2±1.1 95.7±0.8 94.4±0.9 95.5±0.9

Table 20: Regression performance per dataset (above 10K examples). The top score per dataset is bolded (bolding is determined before rounding). We report average R² with 95% CIs.
ID CatB-T CatB-T-U TabPFN TabSTAR TabSTAR-U XGB-T XGB-T-U
R01 100±0.0 100±0.0 100±0.0 100±0.0 100±0.0 100±0.0
R02 98.3±0.9 98.8±0.9 98.4±0.9 98.0±1.1 98.4±1.0 98.3±0.9 98.8±0.9
R03 74.1±0.3 75.1±0.4 74.9±0.4 70.9±0.8 72.3±0.6 73.7±0.5 74.6±0.6
R04 85.1±7.1 91.3±0.8 86.1±5.2 81.5±8.7 91.0±0.7 87.5±3.7 91.3±0.6
R07 67.8±0.8 85.2±0.3 62.2±1.3 71.1±1.8 79.9±1.8 68.8±0.8 86.1±0.7
R08 93.2±0.7 93.8±0.6 93.9±0.7 92.8±0.6 93.3±0.6 93.1±0.7 93.9±0.5
R09 89.3±0.4 89.5±0.4 89.8±0.5 88.8±0.5 89.2±0.6 89.3±0.5 89.5±0.4
R12 98.5±0.2 98.9±0.1 98.5±0.2 98.3±0.2 98.4±0.2 98.5±0.2 98.9±0.2
R14 97.3±0.1 97.8±0.1 96.2±0.2 96.4±0.1 96.8±0.2 97.1±0.1 97.5±0.1
R15 79.9±1.1 86.5±0.7 79.4±1.0 78.9±1.5 83.9±1.4 80.3±1.0 87.0±0.7
R16 98.7±0.0 98.8±0.0 98.8±0.0 98.7±0.0 98.8±0.0 97.8±0.1 98.0±0.1
R17 95.2±2.0 97.6±0.6 95.1±2.1 94.7±2.3 97.6±0.6 95.4±1.9 97.7±0.6
R18 98.2±0.4 99.0±0.2 93.1±1.3 97.4±0.6 97.7±0.6 98.3±0.4 98.9±0.3
R20 96.2±0.5 96.3±0.4 93.9±1.0 95.6±0.9 95.3±1.0 96.2±0.5 96.3±0.5
R23 85.5±1.1 86.3±0.7 84.8±1.6 83.0±1.6 85.2±1.7 85.1±1.2 86.1±0.8
R24 85.8±0.7 97.1±0.3 70.3±1.6 81.8±1.8 95.5±0.7 85.9±0.9 97.3±0.3
R25 95.3±0.5 95.9±0.4 86.7±1.0 94.8±0.5 95.1±0.5 95.4±0.5 95.8±0.5
R31 92.0±0.4 93.0±0.3 93.2±0.3 91.7±0.4 92.7±0.4 92.0±0.4 92.9±0.3
R33 47.8±1.7 55.9±1.1 44.9±1.6 46.0±1.8 57.2±1.3 47.9±1.6 55.7±1.3
R36 95.5±0.9 95.5±0.9 91.2±1.1 95.7±0.8 95.5±0.5 95.5±0.9 95.5±0.9

Table 21: Win rates of TabSTAR (up to 10K) against baselines. Win rate with 95% CI.

CARTE CatB CatB-T RF TabPFN XGB XGB-T
Classification 93.3±5.2 73.6±7.3 67.9±7.8 88.6±5.3 57.3±9.3 89.3±5.1 71.4±7.5
Regression 35.7±5.9 33.6±4.9 32.2±4.8 66.9±4.9 40.9±5.2 69.4±4.8 30.3±4.8

F.3 Running Times and Compute Information

In this section we present the running times of the different models (see Table 23). We focus on the condition with up to 10,000 examples, excluding CatBoost-Tuned and XGBoost-Tuned, as they run with a 4-hour budget using 8 cores of an AMD EPYC 7742 64-Core Processor.
Compute Information TabSTAR runs on an NVIDIA RTX A40 GPU with 48GB memory. TabPFN-v2 runs directly through its API client. CatBoost, XGBoost, and Random Forest run on Intel(R) Core(TM) i9-10920X CPU @ 3.50GHz cores. CARTE uses weaker GPUs, NVIDIA GeForce GTX 1080 Ti with 11GB memory, to allow higher parallelization of its notably slow runs.

Running Times TabSTAR's average running time for a downstream task ranges from 30 seconds (C04) to 7,692 seconds (R24). We observe that these running
times are significantly lower than the 14,400 seconds (4 hours) used for CatBoost-Tuned and XGBoost-Tuned, although a single run of any tree model without 5-fold cross-validation is faster. The running times of TabPFN-v2 are considerably better. However, since it uses in-context learning (ICL), it is limited to at most 10,000 training examples, and its inference time is just as costly as its fitting time. For CARTE, while its runs use weaker hardware and limited parallelization, the reported times are for a single run. In practice, CARTE runs 6 times per dataset split because it lacks default hyperparameters, making it a poor choice at scale.

Table 22: Win rates of TabSTAR-U (above 10K) against baselines. Win rate with 95% CI.

CARTE CatB-T CatB-T-U TabPFN TabSTAR XGB-T XGB-T-U
Classification 98.6±2.8 86.0±6.8 67.0±9.3 75.7±10.1 94.0±4.7 85.0±7.0 71.0±8.9
Regression 63.5±7.5 53.0±6.9 22.0±5.8 61.1±7.0 82.0±5.3 59.5±6.8 28.5±6.3

Table 23: Average model training running time in seconds, per dataset, for up to 10K examples.
ID CARTE CatBoost RF TabPFN-v2 TabSTAR XGBoost
C01 13,341 44 19 42 302 31
C02 420 346 1,145 491
C03 7,297 38 19 61 324 27
C04 11 6 9 30 9
C05 1,227 10 8 14 68 9
C06 3,068 68 31 112 1,126 63
C07 6,559 61 34 83 685 52
C08 4,631 40 41 49 681 49
C09 20 21 134 544 31
C10 22 22 30 519 23
C11 2,428 19 12 33 252 18
C12 4,411 169 16 765 39
C13 1,774 31 27 58 929 37
C14 500 140 5,254 308
R01 9,439 139 446 1,987 94
R02 9 16 54 188 6
R03 2,174 30 103 357 553 24
R04 10 10 84 115 7
R05 10,639 28 26 62 158 15
R06 3,185 22 43 70 307 13
R07 4,637 22 109 112 352 28
R08 4,252 57 366 396 1,136 58
R09 8,239 7 22 55 198 5
R10 1,505 28 37 33 168 21
R11 11 18 36 99 8
R12 13 22 78 199 9
R13 2,299 24 54 38 639 21
R14 6,396 28 30 167 772 15
R15 25 51 66 223 22
R16 13,380 34 118 156 798 35
R17 3,247 94 478 204 270 143
R18 9,663 34 631 309 341 32
R19 11 14 66 101 6
R20 6,173 22 144 154 637 20
R21 2,215 14 33 25 177 14
R22 16 21 17 149 15
R23 14,401 15 51 58 167 10
R24 25,903 90 170 314 7,692 106
R25 2,404 50 943 261 786 54
R26 636 10 11 13 71 8
R27 2,615 31 59 40 304 22
R28 12,265 45 163 128 389 46
R29 11 10 32 138 5
R30 2,477 9 18 24 45 7
R31 5,102 41 187 240 451 39
R32 8 11 13 32 7
R33 4,512 28 100 136 539 30
R34 26 24 18 61 12
R35 4,816 30 55 48 396 28
R36 15,389 18 81 79 949 23

G Extended Analysis

In this section we expand on the analysis results discussed in
§6.

G.1 Evaluation Datasets for Analysis

All experiments are evaluated over 20 datasets from the benchmark in Appendix D. Each experiment reports performance as AUROC for classification and R² for regression. For conciseness, each table reports both regression and classification tasks, distinguishable by their ID.

G.2 The Role of Encoder Unfreezing (Q1)

Table 24 shows the results for each variant of the experiment presented in §6. It is evident that unfreezing the textual encoder yields a significant performance gain across datasets. Furthermore, while finetuning a single layer gives a significant boost, it underperforms compared to 6 unfrozen layers.

G.3 The Effect of Pretraining (Q2)

We pretrain three TabSTAR variants on nested dataset subsets of size 16, 64, and 256. The 64-dataset variant contains the original 16 plus 48 new datasets, and the 256-dataset variant builds on those 64 by adding another 192. This cumulative design minimizes variance between variants so that performance differences reflect only the effect of increasing data volume. While LoRA [41] is a very efficient technique, its performance was much worse for a randomly initialized model. Therefore, we perform full finetuning of the non-pretrained model, as explained in Appendix B.2. Table 25 shows the dataset-level results. For most datasets, improvement is observed when scaling, with the 256-dataset variant winning on almost all of them.

G.4 Numerical Verbalization (Q3)

We show the full results for the experiment in §6, with Table 26 illustrating the verbalizations in each variant. Note that we do not include an exact-value verbalization, since it would increase the number of unique text inputs and impose extra memory demands. The two variants that integrate numerical information into the verbalization dominate the experiment, although the improvement seems to be marginal for some datasets.
Interestingly, without numerical information some datasets significantly underperform, with R27 failing the task completely. Adding quantile information on top of the bin seems to have limited impact, although it wins marginally on average performance.

Table 24: Downstream performance for Q1: The Role of Encoder Unfreezing. Results for 20 datasets with 95% CIs, for a varying number of unfrozen layers. The top score per dataset is bolded (bolding is determined before rounding). We report AUROC for classification and R² for regression.

ID 0 1 3 6 9
C01 87.8±0.3 90.4±0.4 90.8±0.4 91.0±0.4 90.8±0.4
C02 94.9±1.2 97.8±0.3 97.8±0.3 98.1±0.3 97.7±0.3
C03 77.6±0.9 82.4±0.2 82.8±0.3 83.0±0.5 83.0±0.4
C05 87.0±0.7 90.2±0.9 90.8±0.6 91.5±0.9 89.4±0.8
C07 82.1±1.4 89.1±0.9 90.6±0.5 91.3±0.4 91.0±0.4
C11 77.4±0.5 78.4±0.6 78.7±0.7 78.8±0.5 78.3±0.6
C12 94.4±0.2 98.3±0.1 98.3±0.1 98.3±0.1 98.2±0.1
C13 66.5±0.9 71.7±1.1 72.9±0.7 74.1±0.7 73.0±0.8
R02 97.0±1.7 97.9±1.2 98.0±1.1 97.9±1.2 98.0±1.1
R03 67.2±0.8 69.8±0.8 70.9±0.9 71.2±0.8 70.8±0.6
R05 80.8±2.6 91.8±1.5 93.2±0.5 93.1±0.7 92.9±0.8
R09 88.1±0.7 88.7±0.7 89.0±0.5 89.0±0.6 88.9±0.8
R12 97.0±0.6 98.2±0.3 98.2±0.2 98.3±0.3 98.2±0.3
R13 37.6±3.6 45.7±2.1 51.6±3.2 51.4±2.9 52.1±1.8
R18 96.3±1.1 97.1±0.6 97.4±0.6 97.3±0.6 97.0±0.6
R23 79.3±2.4 81.6±1.6 82.8±1.6 83.2±2.2 82.1±1.9
R27 81.3±1.6 81.3±1.6 81.5±1.6 81.3±1.7 81.4±1.6
R30 14.2±4.6 15.6±5.6 19.2±3.1 19.1±4.0 17.3±4.5
R33 36.0±2.0 43.6±1.5 44.3±1.4 46.0±1.8 43.5±2.2
R34 84.3±2.8 87.4±2.9 88.4±3.4 87.9±3.5 88.7±2.7

Table 25: Downstream performance for Q2:
The Effect of Pretraining. Results for 20 datasets with 95% CIs, for a varying number of pretraining datasets. The top score per dataset is bolded (bolding is determined before rounding). We report AUROC for classification and R² for regression.

ID 0 16 64 256
C01 90.8±0.4 90.7±0.4 90.7±0.3 91.0±0.4
C02 98.0±0.4 97.4±0.8 97.8±0.3 98.1±0.3
C03 69.2±8.9 83.1±0.4 83.2±0.3 83.0±0.5
C05 90.7±0.8 90.3±1.1 90.6±0.5 91.5±0.9
C07 87.9±0.5 87.6±0.8 90.9±0.3 91.3±0.4
C11 78.2±0.6 77.4±0.9 78.0±0.4 78.8±0.5
C12 98.3±0.1 98.2±0.1 98.2±0.1 98.3±0.1
C13 74.0±0.5 73.5±0.8 73.7±1.0 74.1±0.7
R02 97.2±1.6 97.6±1.3 97.9±1.2 97.9±1.2
R03 67.3±1.9 68.5±1.0 71.2±0.7 71.2±0.8
R05 88.0±2.8 90.6±2.2 92.2±1.0 93.1±0.7
R09 88.1±1.0 88.4±0.7 88.8±0.5 89.0±0.6
R12 97.7±0.3 97.9±0.3 98.1±0.2 98.3±0.3
R13 49.4±2.7 49.1±3.6 48.0±3.0 51.4±2.9
R18 95.0±2.2 94.9±1.2 96.7±0.6 97.3±0.6
R23 82.2±1.6 81.2±2.3 81.8±2.3 83.2±2.2
R27 80.9±1.7 81.3±1.6 81.5±1.6 81.3±1.7
R30 13.1±4.1 18.5±4.9 18.5±3.9 19.1±4.0
R33 45.4±2.2 42.8±3.3 45.0±1.5 46.0±1.8
R34 83.8±4.3 86.0±3.3 88.0±3.4 87.9±3.5

Table 26: Illustrative verbalization of a numerical feature (Age) for the Q3: Numerical Verbalization experiment.
Value | Name               | Name + Bin          | TabSTAR
17    | Age: Numeric       | Age: Lower than 18  | Age: Lower than 18 (Quantile 0%)
20    | Age: Numeric       | Age: 18–23          | Age: 18–23 (Quantile 0–10%)
25    | Age: Numeric       | Age: 23–27          | Age: 23–27 (Quantile 10–20%)
29    | Age: Numeric       | Age: 27–31          | Age: 27–31 (Quantile 20–30%)
33    | Age: Numeric       | Age: 31–35          | Age: 31–35 (Quantile 30–40%)
38    | Age: Numeric       | Age: 35–40          | Age: 35–40 (Quantile 40–50%)
42    | Age: Numeric       | Age: 40–45          | Age: 40–45 (Quantile 50–60%)
48    | Age: Numeric       | Age: 45–51          | Age: 45–51 (Quantile 60–70%)
55    | Age: Numeric       | Age: 51–58          | Age: 51–58 (Quantile 70–80%)
63    | Age: Numeric       | Age: 58–67          | Age: 58–67 (Quantile 80–90%)
83    | Age: Numeric       | Age: 67–87          | Age: 67–87 (Quantile 90–100%)
93    | Age: Numeric       | Age: Higher than 87 | Age: Higher than 87 (Quantile 100%)
–     | Age: Unknown Value | Age: Unknown Value  | Age: Unknown Value

Table 27: Downstream performance for Q3: Numerical Verbalization. Results for 20 datasets with 95% CIs, for different verbalizations. The top score per dataset is bolded (bolding is determined before rounding). We report AUROC for classification and R² for regression.

ID Name Name + Bin TabSTAR
C01 91.0±0.4 91.2±0.4 91.0±0.4
C02 98.1±0.2 97.9±0.3 98.1±0.3
C03 83.3±0.4 83.3±0.3 83.0±0.5
C05 90.8±0.9 91.1±1.2 91.5±0.9
C07 91.2±0.2 91.4±0.4 91.3±0.4
C11 78.2±0.5 78.2±0.6 78.8±0.5
C12 98.2±0.1 98.4±0.1 98.3±0.1
C13 71.6±0.7 73.8±0.9 74.1±0.7
R02 98.0±1.1 98.0±1.1 97.9±1.2
R03 67.4±0.7 71.3±0.6 71.2±0.8
R05 93.1±1.1 93.2±0.8 93.1±0.7
R09 88.9±0.5 88.9±0.6 89.0±0.6
R12 98.2±0.2 98.3±0.2 98.3±0.3
R13 50.1±3.3 49.7±3.4 51.4±2.9
R18 97.1±0.6 97.1±0.7 97.3±0.6
R23 81.9±2.4 82.7±1.9 83.2±2.2
R27 16.7±6.6 82.0±1.5 81.3±1.7
R30 16.4±4.4 17.7±5.3 19.1±4.0
R33 45.6±1.1 46.2±2.2 46.0±1.8
R34 88.0±3.6 88.6±2.5 87.9±3.5
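The TabSTAR-style verbalizations in Table 26 can be produced by a short binning routine. This is an illustrative sketch, not TabSTAR's actual verbalizer: the function and variable names are ours, and the handling of values that fall exactly on a bin edge is our own assumption, since the table leaves boundaries ambiguous.

```python
import bisect

# Decile edges of the Age feature, as read off Table 26 (estimated on a training split)
AGE_EDGES = [18, 23, 27, 31, 35, 40, 45, 51, 58, 67, 87]

def verbalize(name: str, value, edges: list[float]) -> str:
    """Render a numeric value as 'name: bin (quantile range)' text."""
    if value is None:
        return f"{name}: Unknown Value"
    if value < edges[0]:
        return f"{name}: Lower than {edges[0]} (Quantile 0%)"
    if value > edges[-1]:
        return f"{name}: Higher than {edges[-1]} (Quantile 100%)"
    # Index of the bin's upper edge; an exact edge hit goes to the lower bin (assumption)
    i = max(1, bisect.bisect_left(edges, value))
    return (f"{name}: {edges[i - 1]}–{edges[i]} "
            f"(Quantile {10 * (i - 1)}–{10 * i}%)")

print(verbalize("Age", 20, AGE_EDGES))  # Age: 18–23 (Quantile 0–10%)
```

Dropping the quantile suffix recovers the Name + Bin variant, and emitting only `f"{name}: Numeric"` recovers the Name variant.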