Extension   Field     Date       RMSE     MAE      PSNR (dB)
Spectral              4/16/19    0.0149   0.0069   36.56
            B         3/20/19    0.0415   0.0188   27.64
            I                    0.0233   0.0105   32.67
            D                    0.0234   0.0110   32.61
                      4/16/19    0.0352   0.0170   29.08
            E         3/20/19    0.0242   0.0116   32.33
                      4/16/19    0.0229   0.0101   32.82
            Average              0.0252   0.0117   32.42
Spatial     A         3/20/19    0.0274   0.0175   31.25
                      4/16/19    0.0313   0.0188   30.09
            E         3/20/19    0.0371   0.0232   28.61
                      4/16/19    0.0406   0.0229   27.83
            Average              0.0341   0.0206   29.44
Temporal    A         12/05/18   0.0545   0.0283   25.28
            D         12/06/18   0.0359   0.0209   28.90
            E         12/05/18   0.0479   0.0270   26.40
            Average              0.0461   0.0254   26.86

3.3 Comparison of different super-resolution models

We made the SRCNN network the core model for high-resolution MS image reconstruction. SRCNN was the first neural-network-based super-resolution model and has long been surpassed by larger, deeper, more complex, and more accurate models. However, our study dataset is much smaller than the training data used to train modern state-of-the-art super-resolution models, and we have found that SRCNN performs very well for our task and dataset. In Table 4, we calculated the PSNR for the cross-validation image reconstruction task using the SRCNN network. Table 5 shows the size and performance of six neural networks (SRCNN, SRDB, VGGExt, ResNetExt, RCAN, and RDN) on the same dataset considered in Table 4. The networks that performed best were SRCNN and SRDB. We also tested how well these models generalize to images of different crops. Table 6 shows the results on images of wheat, corn, and miscanthus; the performance is quite good, so we expect that the spectral extension model can be applied to at least these other crops and to other locations in the northeast. SRCNN and SRDB are once again the better-performing networks, but the other networks are not far behind. RCAN is only matched by models that resort to generative adversarial network (GAN) architectures; GAN models can be tricky to train and would not be feasible to train on the modest dataset in our study.
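For reference, the PSNR values reported in these tables follow directly from the reconstruction RMSE, assuming reflectance scaled to a peak value of 1 (consistent with the RMSE-PSNR pairs in the per-image rows of the table): PSNR = -20 log10(RMSE). A minimal sketch:

```python
import math

def psnr_from_rmse(rmse: float, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB from RMSE; peak is the maximum signal value."""
    return 20.0 * math.log10(peak / rmse)

# Field B, 3/20/19: RMSE 0.0415 corresponds to the tabulated 27.64 dB
print(round(psnr_from_rmse(0.0415), 2))  # → 27.64
```

Note that the Average rows average RMSE and PSNR independently over images, so the formula holds per image but not for the averages.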
The VGG and ResNet architectures allow for deeper networks through the use of max-pooling and residual connections. However, VGG and ResNet are primarily aimed at image classification, and max-pooling can be harmful for super-resolution. We therefore constructed smaller versions of VGG and ResNet tailored to our task, with only a single application of max-pooling paired with a conjugate convolution for upsampling later in the network. The VGGExt and ResNetExt architectures can be seen in Figures A.1 and A.2, respectively. We also tried a smaller version of the DenseNet architecture with no max-pooling layers that contains only the single dense block component shown in Figure A.3. This super-resolution dense block (SRDB) network is similar in spirit to another state-of-the-art neural network, the residual dense network (RDN), which utilizes many dense blocks in a very deep architecture.

Table 5: Comparison of SRCNN, SRDB, VGGExt, ResNetExt, RCAN, and RDN on the study cross-validation data set. The statistical significance of the numbers is approximately 0.2 dB, so the top two models (SRCNN and SRDB) should be considered comparable.

Model:            SRCNN    SRDB     VGGExt   ResNetExt  RCAN     RDN
Size:             0.52MB   2.04MB   3.31MB   16.25MB    15.23MB  2.29MB

Field   Date      PSNR
A       3/20/19   35.69    35.70    34.87    33.74      32.43    32.86
        4/16/19   36.56    36.77    35.85    35.22      35.53    35.40
B       3/20/19   27.64    29.78    25.05    28.67      26.26    26.83
I                 32.67    32.29    30.71    32.01      30.49    29.95
D                 32.61    30.44    30.73    32.05      30.01    29.40
        4/16/19   29.08    30.53    29.90    29.51      29.22    28.33
E       3/20/19   32.33    30.77    30.96    31.93      27.72    28.68
        4/16/19   32.82    34.71    31.87    31.50      33.84    33.29
Average           32.42    32.62    31.24    31.83      30.69    30.59

Table 6: Comparison of the SRCNN, SRDB, VGGExt, ResNetExt, RCAN, and RDN models trained on the Maryland cover crops dataset but evaluated on a dataset with different crops (wheat, corn, miscanthus/switchgrass). This dataset contains 9 locations, but the images are larger and the total number of pixels is 10 times that of the study.

Model:                        SRCNN   SRDB    VGGExt  ResNetExt  RCAN     RDN
Size:                         0.52MB  2.04MB  3.31MB  16.25MB    15.23MB  2.29MB

Site   Date      Crop         PSNR
W:A    4/15/20   wheat        36.65   33.57   32.79   32.83      34.68    33.84
W:B                           34.93   36.62   31.18   32.16      35.15    35.62
W:C                           29.13   25.27   29.01   28.82      25.58    24.59
W:D                           32.35   27.85   34.29   31.90      29.04    26.85
W:E                           32.06   31.46   31.21   31.70      30.86    26.53
W:F    4/17/20                29.68   29.47   30.97   29.55      27.67    31.52
C:A    8/3/22    corn         28.81   31.16   21.10   28.54      29.38    24.24
       7/29/20                25.90   26.79   29.53   25.79      25.51    31.10
C:B    7/8/24                 25.80   30.05   21.39   29.31      29.98    29.92
M:A    6/8/20    miscanthus   30.33   31.62   32.42   32.10      31.11    28.24
       9/23/20                30.03   29.66   28.72   28.85      29.89    31.30
Average          wheat        32.47   30.71   31.57   31.16      30.50    29.79
                 corn         26.84   29.33   24.00   27.88      28.29    28.56
                 miscanthus   30.18   30.64   30.57   30.47      30.50    29.67
                 all          30.52   30.32   29.33   30.14      30.03    29.43

3.3.1 Spectral extension on cloudy days

When considering the spectral extension model for real applications, it is also important to ask whether cloud-free satellite imagery is available at all. In Table 7a we compare an SRCNN model trained using 11 input channels (8 Sentinel-2 bands and 3 UAS RGB bands) against an SRCNN model trained only on UAS RGB data. Unsurprisingly, the model that used the additional Sentinel-2 bands performed better, by about 1.6 dB.
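The 11-channel input just described is formed by resampling the coarse Sentinel-2 bands onto the fine UAS pixel grid and stacking them with the UAS RGB channels. A minimal sketch of this input assembly, assuming an exact 80x integer scale factor between the 10 m Sentinel-2 and 0.125 m UAS grids and plain nearest-neighbor replication (a real pipeline would also involve georegistration and smoother resampling; the function names are illustrative, not from our codebase):

```python
import numpy as np

def upsample_nearest(band: np.ndarray, scale: int) -> np.ndarray:
    """Replicate each coarse pixel into a scale-by-scale block (nearest-neighbor)."""
    return np.kron(band, np.ones((scale, scale), dtype=band.dtype))

def stack_fusion_input(uas_rgb: np.ndarray, s2_bands: np.ndarray, scale: int) -> np.ndarray:
    """Stack UAS RGB (3, H, W) with upsampled Sentinel-2 bands (B, h, w) -> (3+B, H, W)."""
    s2_up = np.stack([upsample_nearest(b, scale) for b in s2_bands])
    return np.concatenate([uas_rgb, s2_up], axis=0)

# Toy patch: 3 UAS RGB channels at 80x the resolution of 8 Sentinel-2 bands
rgb = np.zeros((3, 160, 160), dtype=np.float32)
s2 = np.zeros((8, 2, 2), dtype=np.float32)
print(stack_fusion_input(rgb, s2, scale=80).shape)  # → (11, 160, 160)
```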
However, for two images the RGB-based model actually performed slightly better, a testament to the importance of the RGB data. We also had access to UAS data from 11/20/2019, for which there was no temporally adjacent cloud-free Sentinel-2 imagery: before 11/20 the last cloud-free Sentinel-2 image was on 11/2, and after 11/20 the first was on 12/12. This provides an opportunity to test the RGB-only model. Table 7b shows the performance of the model with UAS RGB input only, compared to models that use Sentinel-2 data from 11/2 and 12/12, respectively. In this case the RGB-only model outperforms the competition, but even with such mismatched Sentinel-2 data the competing models are not far behind.

Table 7: A comparison of spectral extension SRCNN models utilizing UAS RGB input only versus also taking advantage of Sentinel-2 data. A model relying solely on UAS RGB input is particularly useful in situations where no cloud-free Sentinel-2 data is available.

(a) Comparison when recent cloud-free Sentinel-2 data is available.

Target Sentinel-2:   Yes      No
Field   Date         PSNR
A       3/20/19      35.69    32.47
        4/16/19      36.56    30.89
B       3/20/19      27.64    27.06
I                    32.67    29.96
D                    32.61    31.95
        4/16/19      29.08    28.78
E       3/20/19      32.33    32.59
        4/16/19      32.82    33.08
Average              32.42    30.85

(b) A situation where no recent Sentinel-2 data was available. For the UAS flights on 11/20/19 and 11/21/19 the closest available cloud-free Sentinel-2 images were on 11/2 and 12/12.

Target Sentinel-2:   None     11/2/19   12/12/19
Field   Date         PSNR
B       11/20/19     27.98    30.67     23.62
H                    28.34    29.75     25.46
E       11/21/19     29.39    23.78     26.56
Average              28.57    28.07     25.21

3.4 Experimental results of super-resolution fusion approaches for precision management

Table 8 shows the results of the cover crop biomass yield and quality estimation of our RF models on the reconstructed high-resolution MS images. These experiments aimed to reproduce the improvements in the biomass and N estimates that we observed in Section 3.1. The RF model M27, which used reconstructed MS bands (from the Spectral-SRCNN model) as predictors, performed substantially better than the RF models (M23, M24) that used predictors from the Sentinel-2 benchmark data (i.e., ≈18% reduction in RMSE for biomass and ≈31% reduction in RMSE for N). In terms of improvement over flying a high spatial resolution RGB UAS (M7 in Table 3 and M25), our approach (M27) performed slightly better for biomass yield estimation and substantially better (≈31% reduction in RMSE) for N estimation, suggesting that both spatial resolution (i.e., how much plant is there) and spectral range matter for biomass prediction, while spectral range matters more for N prediction. In other words, both leaf area and plant chemistry matter for biomass prediction, whereas plant chemistry is more important for N content prediction. When the farmer needs to generate datasets on a portion of their farm without flying, we resort to the Spatial-SRCNN model.
However, the Spatial-SRCNN does not generalize as well as the Spectral-SRCNN, so the farmer needs a specialized Spatial-SRCNN unless they grow the same crop in the same geographic area (in which case our model can be used). There are two scenarios for obtaining the required training data: the farmer either collects ground-truth data using hyperspectral UAS imagery or uses the Spectral-SRCNN to predict the ground-truth data. The latter scenario is possible because the Spectral-SRCNN generalizes better than the Spatial-SRCNN. Both RF models M31 and M32, with reconstructed MS predictors from the corresponding spatial extension SRCNNs (a and b), performed better than the Sentinel-2 benchmark (M28). M28 shows RF model results over the non-flown areas (i.e., where UAS data is not available) using the resampled Sentinel-2 8-band data (1 m) as predictors. The difference between Spatial-SRCNN (a) and Spatial-SRCNN (b) lies in their use of ground-truth data in model training: the former used the UAS 8-band data as ground-truth, whereas the latter used our Spectral-SRCNN inferred high-resolution data as ground-truth. Thus, Spatial-SRCNN (b) allows the farmer to train a specialized spatial extension model when only an RGB camera is available, since the Spectral-SRCNN output can be used to generate the high-resolution ground-truth data required to train a Spatial-SRCNN model.

Table 8: Impacts of reconstructed spatial and spectral resolution and spectral range on RF model performance for biomass yield and N content estimations. Note that the test data differs as follows: Spectral: sites A, D, E in April and March, sites B and I in March; Spatial: sites A, E in March and April; Temporal: sites A, D, E in December only. We abbreviated ground-truth as GT.

                                                                  Biomass           Nitrogen
Extension   Tag   Data Source (ground-truth = GT)                 R2     RMSE       R2     RMSE
Scenario                                                                 (Mg/ha)           (kg/ha)
Spectral    M23   Sentinel-2 8-band (10m)                         68.9   0.87       55.9   25.1
            M24   Sentinel-2 10-band (with SWIRs) (10m)           65.9   0.91       53.4   25.7
            M25   UAS RGB (0.125m)                                80.7   0.72       59.9   25.1
            M26   UAS 8-band (0.125m)                             84.5   0.67       82.0   17.7
            M27   Spectral-SRCNN (8-fold) 8-band (0.125m)         81.9   0.71       83.7   17.2
Spatial     M28   Sentinel-2 8-band (1m)                          81.8   0.78       82.1   20.5
            M29   UAS RGB (0.125m)                                88.5   0.65       87.7   17.8
            M30   UAS 8-band (0.125m)                             93.2   0.49       93.6   12.6
            M31   Spatial-SRCNN (a) 8-band (1m), GT: UAS 8-band   88.1   0.66       86.8   18.3
            M32   Spatial-SRCNN (b) 8-band (1m), GT: Spec-SRCNN   89.8   0.60       88.7   16.7
Temporal    M33   Sentinel-2 8-band (1m)                          38.7   7.05       49.3   177.1
            M34   ST-SRCNN 8-band (1m)                            58.3   6.84       61.5   168.6

Spatial-SRCNN (a) improved over M28 by a ≈15% reduction in RMSE for biomass and ≈11% for N, while Spatial-SRCNN (b) improved over M28 by ≈23% for biomass and ≈19% for N. This means that, in terms of estimating biomass and N, substantial improvements can be made if the UAS 8-band ground-truth is available to train a Spatial-SRCNN model. However, even if only RGB camera data are available for training a Spatial-SRCNN (b), the performance is still better than M28 and M31. Moreover, comparison with the M29 results suggests that with this Spatial-SRCNN (b) approach the biomass and N predictions are better than predictions from actual UAS RGB data. Thus, the farmer can stop flying UAS once a specialized spatial extension model has been trained from the targeted UAS RGB data.
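The percentage reductions quoted above follow directly from the RMSE columns of Table 8; a quick check:

```python
def pct_reduction(baseline_rmse: float, new_rmse: float) -> float:
    """Relative RMSE reduction, in percent, of a model against a baseline."""
    return 100.0 * (baseline_rmse - new_rmse) / baseline_rmse

# Table 8: Spatial-SRCNN (a) and (b) vs. the M28 benchmark
print(round(pct_reduction(0.78, 0.66), 1))   # M31 biomass, ≈15%
print(round(pct_reduction(20.5, 18.3), 1))   # M31 nitrogen, ≈11%
print(round(pct_reduction(0.78, 0.60), 1))   # M32 biomass, ≈23%
print(round(pct_reduction(20.5, 16.7), 1))   # M32 nitrogen, ≈19%
```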
The RF model M34, with reconstructed MS predictors from the ST-SRCNN model, performed better than the Sentinel-2 benchmark (M33), highlighting its promise in scenarios where the farmer needs to generate data over their fields for a different temporal period. M34 improved over M33 by reducing the RMSE by ≈3% for biomass and ≈4.8% for N estimation. Note that this experimental set-up applies to conditions where UAS data is available, for example, for a farm over the past spring season but not for the current or next one.

4 Discussion

Our study investigated how UAS images (RGB, multispectral, or hyperspectral) compare with Sentinel-2 data for monitoring cover crop health, specifically in terms of biomass yield and N content at the farm scale. Building on these insights, we developed a novel approach to sharpen Sentinel-2 imagery and spectrally extend UAS RGB data. We then examined how these reconstructed RS datasets can improve cover crop modeling across spectral, spatial, and temporal domains. Specifically, we explored how the proposed methods generalize across time periods and agricultural fields, particularly under constraints such as limited UAS access or persistent cloud cover. Our findings indicate that effective monitoring of cover crop health depends not only on spatial resolution but also on the spectral range and resolution of multispectral RS imagery. While high-resolution UAS imagery with VNIR spectral coverage consistently outperformed Sentinel-2, the high cost
of UAS deployment remains a limiting factor in precision farming. To address this, we demonstrated that UAS-guided super-resolution approaches can yield significant improvements in predictive modeling across spectral, spatial, and temporal dimensions, offering a cost-effective strategy for advancing precision agriculture. Our findings are consistent with previous studies that reported a positive association between spatial resolution and biomass yield estimation accuracy [Assmann et al., 2020, Salehin et al., 2025]. Similarly, our results on cover crop nitrogen estimation accuracy align with studies that used narrow MSI spectral bands (15 nm bandwidth) centered at 705 nm and 740 nm, i.e., red-edge bands, to estimate crop and grass chlorophyll and N content [Clevers and Gitelson, 2013], N uptake in maize [Sharifi, 2020], and canopy N content in a rice cropping system [Rossi et al., 2023]. These results confirm that an extended spectral range beyond RGB, at high spatial resolution, is important for accurate nitrogen estimation. These findings support the earlier premise that a farmer with a hyperspectral sensor covering the 400-2,500 nm range would achieve the optimal model performance. Our analysis with hyperspectral data further suggests that a spectral resolution of about 100 nm is sufficient for accurate estimation of both biomass and N, which is comparable to the spectral distance between different Sentinel-2 bands. However, collecting high-resolution imagery across large areas using UAS platforms remains impractical due to operational and cost constraints. Therefore, as we have shown via super-resolution fusion strategies, the synergy of UAS and Sentinel-2 imagery across the spatial, temporal, and spectral dimensions offers an alternative approach to achieve improved model performance for precision agriculture.
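The ~100 nm sufficiency finding suggests a simple preprocessing step: averaging narrow hyperspectral bands into contiguous ~100 nm super-bands before modeling. A minimal sketch (the band count, 10 nm spacing, and 400-2,500 nm span in the toy example are illustrative assumptions, not our actual sensor configuration):

```python
import numpy as np

def aggregate_bands(cube: np.ndarray, wavelengths: np.ndarray, width_nm: float = 100.0) -> np.ndarray:
    """Average a hyperspectral cube (B, H, W) into contiguous spectral bins of ~width_nm."""
    edges = np.arange(wavelengths.min(), wavelengths.max() + width_nm, width_nm)
    binned = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (wavelengths >= lo) & (wavelengths < hi)
        if sel.any():
            binned.append(cube[sel].mean(axis=0))  # mean reflectance within the bin
    return np.stack(binned)

# Toy cube: 210 narrow bands at 10 nm spacing over 400-2,490 nm on a 4x4 patch
wl = np.linspace(400, 2490, 210)
cube = np.random.rand(210, 4, 4)
print(aggregate_bands(cube, wl).shape)  # 21 super-bands of ~100 nm each
```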
In particular, spectral extension allows farmers to use low-cost UAS platforms equipped with RGB cameras while still leveraging the broader spectral range of Sentinel-2 or other satellites, without requiring expensive multi- or hyperspectral sensors. Unlike the spatial and temporal extension models, the spectral extension model requires both UAS and satellite imagery, but it retains the high spatial resolution of the original UAS data. As a result, the method produces sharpened and spectrally enriched Sentinel-2 MS bands, enabling more detailed field-level analysis and management practices.

Figure 13: A UAS image from a winter wheat field in Washington state (center), surrounded by the Spectral-SRCNN output, an annotated weed mask, and Sentinel-2 images from the same location. The false color composite uses Narrow-band NIR (NNIR), Green, and VRE 2; these bands were chosen so that the annotated streak of Italian ryegrass pops out in the image. The streak appears bright pink/magenta in the false color image and is less contrastive, a lighter green, in the UAS image.

Another application we demonstrated in this study is the identification of weed infestations. When only RGB imagery is available, the weed patch, which differs only in a slightly brighter green hue, can be difficult to distinguish (Figure 13). However, the inclusion of sharpened VRE and NIR bands makes the weed patch clearly visible, without the need for a costly multi- or hyperspectral sensor. This application is analogous to the well-known
use of UV light to reveal otherwise invisible contamination, highlighting the potential of spectral extension to uncover features not detectable with conventional RGB imagery alone [Li et al., 2021]. As with spectral extension, targeted UAS imagery can also be utilized for the spatial and temporal extension models. The spatial extension model allows the farmer to fly a small and representative subset of their fields; this high-resolution imagery provides the information the SRCNN model uses to interpret the Sentinel-2 imagery. For temporal extension, UAS images collected at different time points help improve the alignment and interpretation of Sentinel-2 data as crops progress through various phenological stages. Although weekly UAS flights may benefit the model, this frequency may not be necessary, depending on the type of precision crop management decisions being considered. For N management, for example, decisions are made within a defined narrow window of crop development [Zhang et al., 2015]. Although the UAS imagery collection can be crop- and context-specific (e.g., responsive phenological stages, stress detection, managing variable-rate nitrogen application, etc.), data collected in one season can be leveraged in subsequent years, thus requiring fewer images over time. This strategy of using targeted UAS imagery across space, time, and spectrum contrasts with other studies that have fused UAS and satellite imagery with the same spatial and temporal extent [Brook et al., 2020]. Full-coverage fusion can be effective but is impractical from a farmer's perspective due to operational and cost constraints, as discussed earlier.
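The false-color visualization behind the Figure 13 weed detection maps three of the sharpened bands (NNIR, Green, VRE 2) onto the display's red, green, and blue channels. A minimal sketch; the per-band min-max stretch is our assumption for display purposes, and the toy inputs stand in for real band rasters:

```python
import numpy as np

def stretch(band: np.ndarray) -> np.ndarray:
    """Min-max stretch a single band to [0, 1] for display."""
    lo, hi = band.min(), band.max()
    return (band - lo) / (hi - lo) if hi > lo else np.zeros_like(band)

def false_color(nnir: np.ndarray, green: np.ndarray, vre2: np.ndarray) -> np.ndarray:
    """Compose an (H, W, 3) display image: R <- NNIR, G <- Green, B <- VRE 2."""
    return np.dstack([stretch(nnir), stretch(green), stretch(vre2)])

# Toy reflectance patches stand in for the sharpened bands
img = false_color(np.random.rand(4, 4), np.random.rand(4, 4), np.random.rand(4, 4))
print(img.shape)  # → (4, 4, 3)
```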
In contrast to image fusion at a common spatial extent, we have shown that targeted UAS imagery fused with satellite imagery using SRCNN can achieve many of the same objectives at a sub-meter spatial resolution, which is consistent with many requirements of precision crop management, such as scouting for problems, monitoring to prevent yield losses, and planning crop management operations [Hunt Jr and Daughtry, 2018]. When only Sentinel-2 data are available, our models produce images at a resolution of 1 m. In overlapping areas where both UAS and Sentinel-2 data are available, we achieve sub-meter resolution (0.125 m) with no apparent loss in fidelity. In this fusion context, our Spectral-SRCNN model is novel in that it generates high-resolution Sentinel-2 data using hyperspectral targets with minimal loss of fidelity, which laid the foundation for the strong performance observed in the context of cover cropping practices. The superior performance of the spectral extension model in generating reconstructed high-resolution imagery for the cover crop application is not unexpected, given that the model had access to both high-resolution RGB input and wide-range spectral input from Sentinel-2. The spectral bands of Sentinel-2 are designed for agricultural and environmental monitoring, and the spectral extension model achieves performance that closely approaches that of models based on hyperspectral UAS data. The spectral extension model also generalizes well to other regions and crop species, and although there is some loss in performance, the generated images can still serve as ground-truth data for a farmer who needs to train a spatial extension model
specific to a region or crop. This scenario deserves consideration because high-quality hyperspectral sensors can cost as much as $175,000 [Olson and Anderson, 2021], and the spatial extension model exhibits reduced generalization compared to the spectral extension model. If there is only one target application, it may make sense to build a task-specific neural network. However, the benefit of harmonizing hyperspectral images with a satellite sensor is that we can take advantage of the models available for that satellite, such as vegetation indices and random forest models. In the case of Sentinel-2 this immediately gives us access to a plethora of models. Building on these proposed and validated multi-dimensional super-resolution capabilities, an AI-driven alert system could be developed to provide farmers with early warnings of potential anomalies developing in their fields due to emerging weed infestations, insect outbreaks, or disease symptoms. Although satellite imagery alone may lack the needed spatial resolution, and UAS may be limited in spatial and spectral coverage, their combination offers both improved resolution and sensitivity, enabling earlier and more accurate detection of these field-level anomalies. A passive alert system could provide sufficient early warning by itself or prompt targeted in-field scouting for further investigation.

4.1 Trade-off between model accuracy and cost to the farmer

Cost Considerations: From a farmer's perspective there is a trade-off between application accuracy and the cost of obtaining UAS imagery. Regulations for flying a UAS vary greatly by country [FAA, 2020]; any commercial operator in the US requires authorization under a Part 107 Certificate. This knowledge barrier leads fewer farmers to operate their own UAS; they instead obtain imagery through a service. UAS pilot rates for agricultural services were reported to average $162/hour [Simula, 2021].
On top of the pilot rates there is the additional cost of managing the imagery, and these expenses are proportional to the number of flights. Moreover, [Olson and Anderson, 2021] puts the price of an MS camera at up to $10K and of a hyperspectral camera at up to $175K, making it significantly more expensive to obtain spectral information beyond the RGB range. Figure 14 highlights cost-effectiveness scenarios for our applied use case of estimating biomass yield and N content from different UAS and satellite datasets.

Accuracy Considerations: We have already seen in Table 3 that different applications have different requirements in terms of spatial and temporal resolution and spectral range and resolution. For example, N content prediction improves with greater spectral range and resolution, in contrast to biomass yield prediction, and therefore a higher quality UAS camera may be beneficial (Figure 14). In the case where the spectral bands of the Sentinel-2 satellite are sufficient, we have shown that an RGB camera together with a cloud-free Sentinel-2 image can suffice: very little degradation is observed compared to the same sensor observations obtained using a UAS, as can be seen in Table 4 and Figure 15. We have little evidence to assess how well the super-resolution methods will generalize outside of the location and
cover crop setting. However, as seen in Figure 13 and Table 6, there is some evidence that the spectral extension model will generalize, while the lower performance of the temporal and spatial extension models indicates that these will have to be re-trained, or trained on a substantially larger training dataset. As we see in Figure 16, the quality of the spatial extension model is not at the level of the spectral extension model. This is understandable, as there is no longer any high-resolution side information given to the model; as a result the spatial extension model generalizes less well and may have to be retrained for different crops or locations. The question from the farmer's viewpoint is then whether retraining these models requires an actual hyperspectral camera or whether the spectral extension model could suffice to generate synthetic ground-truth data.

Figure 14: Effectiveness of different sensors for super-resolution across spectral, spatial, and temporal domains in estimating biomass yield and N content, and their cost-effectiveness.

Figure 15: The spectral extension model compared to the ground-truth generated by the hyperspectral camera for site I. The first two columns represent the input to the model and the third column is the output from model M27.

Figure 16: The spatial extension model M32 compared to the ground-truth generated by the hyperspectral camera for site E. The first column is the Sentinel-2 input to the model, the second column is the super-resolved prediction, and the third column is the desired ground-truth.

Trade-off: We have shown that Sentinel-2 imagery can be improved using targeted UAS imagery. However, we do not know whether this improvement will always be considered cost-effective, or how much imagery is required to achieve performance gains that are considered cost-effective.
At one end, the farmer could choose not to fly any of the fields and depend only on Sentinel-2 imagery if the performance gains are not sufficient. For the spectral extension model, however, we showed that farmers could use the trained model with little loss in model accuracy without the need for any UAS flights, although we do not know how robust or how sensitive this model is to changes in geography and crop type.

5 Limitations

Although our analysis has shown that UAS and satellite imagery play complementary roles that can positively impact predictive model performance in precision management practices, there are some limitations that could be addressed through further research. While the use of targeted UAS imagery with higher spatial and spectral resolution and range improves the Sentinel-2 model performance for cover crop estimations, it remains unknown what the optimal extent of imagery coverage or temporal resolution is in the trade-off between performance gains and acquisition costs. We have shown that spatial extension with limited hyperspectral imagery is possible for cover crop management. It remains uncertain how far geographically this modeling approach will extend, or whether geographical changes will affect the spectral, spatial, and temporal models differently. It may be necessary to develop one model for each available satellite platform.
The lower the resolution of the satellite platform, the more data is needed; since hyperspectral data collection is expensive, this limits which satellites can be considered. Our current imagery comes from multiple counties across Maryland and Pennsylvania, but the models may generalize further geographically. However, our imagery has limited temporal coverage, which probably reduced model performance for the temporal extension. Covering multiple years and all stages of crop development may improve the temporal extension model. Future investigations could include efforts to improve model performance using larger datasets: how much would broader temporal and spatial coverage improve the models' ability to generalize? In addition, different combinations of spectral bands and sensors should be explored. Further improvement of model performance may be possible by using other state-of-the-art super-resolution models such as SR3 (Super-Resolution via Repeated Refinement) [Saharia et al., 2021]. It should be noted that some regulations preclude the use of generative models such as SR3, and possibly even GAN-based models, in government-run programs. Finally, the development of an alert detection system that incorporates pre-trained fusion models and cover crop estimations, among other relevant applications, could benefit farmers with cost-effective and targeted management practices.

6 Conclusions

This study demonstrated a scalable, end-to-end super-resolution system that integrates satellite and UAS imagery to support cost-effective precision farming practices. Although illustrated through a case study on winter cover crops in the Chesapeake Bay region, the methodology is broadly applicable to other crops, regions, and precision agriculture scenarios. By leveraging super-resolution CNNs, we showed that the spatial, spectral, and temporal limitations of each platform can be mitigated through targeted fusion.
For example, the reconstructed sub-meter resolution Sentinel-2 images improved the estimation of biomass and N by up to 18% and 31%, respectively, compared to the 10 m resolution benchmark. This approach offers a practical solution for precision agricultural management that balances resolution, scalability, and affordability, without relying on expensive hyperspectral sensors. We have shown how targeted UAS data (both multispectral and hyperspectral), strategically collected at select locations and times, can be effectively integrated with satellite images to enhance spatial, spectral, and temporal resolution while reducing operational costs. By flying a subset of fields with low-cost RGB cameras, farmers can extend the spectral range of this imagery to include critical VRE and NIR bands, and potentially SWIR bands if high-resolution ground-truth data are available, without needing any expensive sensors. Similarly, the spatial and temporal coverage of these enhanced images can be improved by leveraging the wide availability of satellite imagery. While extending the spatial coverage, we have shown that the proposed spatial extension SRCNN model yields better biomass and N predictions than those from actual UAS RGB data. Thus, the farmer can stop flying UAS once a specialized spatial extension model has been trained from targeted UAS RGB data. Moreover, an SRCNN model that relies solely on UAS RGB input can be useful in situations where no cloud-free Sentinel-2 data are available. These flexible fusion strategies across space, time, and spectrum offer a scalable solution for
precision crop management across varying crop types, field sizes, and growing conditions. Future research is needed to further assess the robustness of this super-resolution AI system across space and time. An early warning system could be developed that enables farmers to proactively and affordably address various anomalies in their agricultural fields, including emerging weed infestations, insect outbreaks, and disease symptoms.

Acknowledgments

Mention of trade names or commercial products in this publication is solely for the purpose of providing specific information and does not imply recommendation or endorsement by the U.S. Department of Agriculture. USDA is an equal opportunity provider and employer. This research was a contribution from the Long-Term Agroecosystem Research (LTAR) network. LTAR is supported by the USDA. Not subject to copyright in the USA. Contribution of the United States Department of Agriculture, Agricultural Research Service (USDA-ARS). In addition to the support from USDA, the authors would also like to thank ESRI, Andrew Nelson (Nelson Wheat), Ranveer Chandra (Microsoft), Anirudh Badam (formerly Microsoft), and Roberto Estevão (Microsoft) for providing resources, data, and useful feedback. Special thanks to Roberto Estevão for adapting an early version of the spectral extension model for inclusion in the open source repository FarmVibes3.

References

[Adigun et al., 2022] Adigun, O., Olsen, P. A., and Chandra, R. (2022). Location aware super-resolution for satellite data fusion. In IGARSS 2022 - 2022 IEEE International Geoscience and Remote Sensing Symposium, pages 3758-3761. IEEE.

[Adler et al., 2024] Adler, P. R., Nguyen, H., Rau, B. M., and Dell, C. J. (2024). Modeling N2O emissions with remotely sensed variables using machine learning. Environmental Research Communications, 6(9):091004.

[Alvarez-Vanhard et al., 2021] Alvarez-Vanhard, E., Corpetti, T., and Houet, T. (2021).
FRAMES-VQA: Benchmarking Fine-Tuning Robustness across Multi-Modal Shifts in Visual Question Answering

Chengyue Huang*, Brisa Maneechotesuwan*, Shivang Chopra, Zsolt Kira
Georgia Institute of Technology
{chuang475,bmaneech3,shivangchopra11,zkira}@gatech.edu
*Equal contribution.
arXiv:2505.21755v1 [cs.CV] 27 May 2025

Abstract

Visual question answering (VQA) systems face significant challenges when adapting to real-world data shifts, especially in multi-modal contexts. While robust fine-tuning strategies are essential for maintaining performance across in-distribution (ID) and out-of-distribution (OOD) scenarios, current evaluation settings are primarily unimodal or particular to some types of OOD, offering limited insight into the complexities of multi-modal contexts. In this work, we propose a new benchmark FRAMES-VQA (Fine-Tuning Robustness Across Multi-Modal Shifts in VQA) for evaluating robust fine-tuning for VQA tasks. We utilize ten existing VQA benchmarks, including VQAv2, IV-VQA, VQA-CP, OK-VQA and others, and categorize them into ID, near and far OOD datasets covering uni-modal, multi-modal and adversarial distribution shifts. We first conduct a comprehensive comparison of existing robust fine-tuning methods. We then quantify the distribution shifts by calculating the Mahalanobis distance using uni-modal and multi-modal embeddings extracted from various models. Further, we perform an extensive analysis to explore the interactions between uni- and multi-modal shifts as well as modality importance for ID and OOD samples. These analyses offer valuable guidance on developing more robust fine-tuning methods to handle multi-modal distribution shifts. The code is available at https://github.com/chengyuehuang511/FRAMES-VQA.

1. Introduction

Robust fine-tuning methods aim to adapt pre-trained models to downstream tasks while retaining resilience to distribution shifts [38, 52].
Distribution shifts in the image modality are widely explored, with various datasets designed to evaluate a model’s generalization across diverse visual conditions. For example, DomainNet [37] spans multiple visual domains including real images, sketches, paintings, and clipart, challenging models to generalize across different styles and representations. Similarly, various ImageNet variants [10, 22, 39, 50] introduce shifts through image variations, adversarial examples, rendering transformations, and changes in texture or background. Collectively, these datasets provide a comprehensive framework for assessing how well models withstand visual distribution changes.

While robust fine-tuning algorithms are widely examined under distribution shifts in a single modality (images), few studies have explored robust fine-tuning for VQA tasks, where distribution shifts are multi-modal and models must adapt to variations across both visual and textual inputs. Apart from visual shift [1], there are question shifts [15, 41] involving variations in phrasing, structure, or vocabulary, as well as answer shifts [2] with changes in answer distributions such as frequency and formatting. Beyond uni-modal shift, these variations may occur simultaneously across visual, question, and answer inputs [9, 31, 42, 44, 49], posing an even greater challenge as models must generalize across complex, combined shifts.

Therefore, we build upon our preliminary exploration [25] and propose a benchmark FRAMES-VQA (Fine-Tuning Robustness Across Multi-Modal Shifts in VQA) to systematically evaluate the robustness of fine-tuning on VQA tasks. We leverage ten existing VQA datasets and categorize distribution shifts into uni-modal and multi-modal types, quantified by Mahalanobis distance across various backbones to capture both near and far OOD scenarios. We
https://arxiv.org/abs/2505.21755v1
conduct a comprehensive comparison of the existing robust fine-tuning baselines on ID and OOD performance using the benchmark. Furthermore, we analyze shift scores and modality importance across fine-tuning methods. To summarize, our contributions are:

• We propose FRAMES-VQA for evaluating robust fine-tuning in VQA, including ten VQA datasets categorized by uni-modal (e.g., image, question) and multi-modal shifts. We quantify dataset shifts under different modalities using Mahalanobis distance and embeddings from different backbones.

• We perform an in-depth comparison of robust fine-tuning methods using the benchmark. We find that FTP [47] has the best far OOD performance, while SPD [48] outperforms others on ID, near and average OOD.

• We further provide insightful analyses on the shift scores and modality importance of each baseline across ID and OOD samples. Key observations include: (i) fine-tuning amplifies question-joint shift correlation, indicating the strong influence of question shifts on multi-modal representations; (ii) more robust fine-tuning methods exhibit low correlation between uni-modal and multi-modal shifts; (iii) question-to-image attention rises for OOD samples, implying potential shortcuts; and (iv) robust methods emphasize intra- over inter-modality attention, underscoring intra-modality’s role in robustness. These findings offer ways to improve fine-tuning robustness under multi-modal shifts.

2. Related Work

Distribution Shift and OOD Robustness in VQA. DomainNet [37] and ImageNet [10] along with its four variants [21, 22, 39, 50] are commonly used to assess model robustness under distribution shifts. Prior work [38, 52] shows that while vanilla fine-tuning enhances ID performance, it can degrade results on OOD datasets compared to the pre-trained model.
Beyond traditional image classification with distribution shifts in the image modality, VQA datasets such as VQAv2 [19] and its variants [1, 2, 9, 15, 31, 41, 42, 44, 49] introduce both uni-modal (image, question, answer) and multi-modal shifts, posing a greater challenge for models compared to shifts in image-only classification tasks. Various frameworks have been proposed to assess robustness in VQA. [3] explores cross-dataset evaluations across four VQA datasets, while [30, 34] expand this scope by incorporating VQAv2 variants and distinguishing different distribution shift types. [53] quantifies uni-modal shifts for the image and question modalities. Building on these studies, we introduce finer categorization and distinctions between near and far OOD distributions, along with quantifying both uni- and multi-modal distribution distances. Crucially, while prior work has focused on testing different discriminative models [34] or adaptation methods [8], our study focuses on comparing robust fine-tuning algorithms within the same backbone and extends to generative models.

Robust Fine-Tuning of Foundation Models. Robust fine-tuning methods aim to adapt foundation models to new tasks while retaining their pre-trained robustness. LP-FT [28] introduces a two-step approach of linear probing followed by full fine-tuning to mitigate feature distortion. WiSE-FT [52] blends pre-trained and fine-tuned weights through interpolation, balancing the strengths of both embedding spaces. L2-SP [32] and MARS-SP [17] add penalties on the deviation between fine-tuned and pre-trained weights, exploring different norm types. More recent methods, such as TPGM [46],
frame regularization as a constraint through bi-level optimization, learning tailored constraints for each layer. FTP [47] enhances TPGM’s efficiency by leveraging prior training steps, while SPD [48] selectively regularizes layers with consistent loss reduction, projecting the corresponding layers within the constraint.

3. FRAMES-VQA: Fine-Tuning Robustness across Multi-Modal Shifts in VQA

We propose a new setting, FRAMES-VQA, for benchmarking fine-tuning robustness across multi-modal shifts in VQA. We conduct a comprehensive categorization of, and experimentation on, various VQA datasets chosen to span different types of distribution shifts. We then evaluate the model on nine OOD datasets, not included in the training, covering image, question, answer, multi-modal and adversarial distribution shifts. We further distinguish between near and far OOD, concepts initially defined in OOD detection [14, 51], where near OOD represents data that is perceptually similar but semantically dissimilar to the training distribution, while far OOD refers to data that is both perceptually and semantically dissimilar. In FRAMES-VQA, we categorize six near OOD datasets that exhibit various types of distribution shifts relative to VQAv2, and three far OOD datasets where both image and text sources differ from those in VQAv2. Additional results using GQA [26] and GQA-OOD [27] can be found in Suppl. 14.

3.1. Datasets

Fig. 1 and Tab. 1 provide an overview and statistics of all ID and OOD VQA datasets with vision, question, answer, multi-modal, adversarial and far distribution shifts.

ID Dataset. VQAv2 [19] contains open-ended questions about images with an emphasis on reducing answer biases through balanced pairs of question-image examples. We choose this as our ID dataset as it is widely used as a benchmark for popular vision-language models (VLMs), and most other OOD datasets are derived from it.

OOD Datasets. 1) Distribution Shifts to Images.
IV-VQA [1] and CV-VQA [1] remove objects irrelevant and relevant to answering the question, resulting in unchanged and changed answers, respectively. 2) Distribution Shifts to Questions. VQA-Rephrasings [41] provides three alternative phrasings for each question. 3) Distribution Shifts to Answers. VQA-CP [2] disrupts the usual correlation between question types and answers, creating a shift in answer patterns. 4) Distribution Shifts to Multi-modalities. VQA-CE [9] selects counterexample instances from the VQAv2 validation set that highlight potential multi-modal shortcuts. 5) Adversarial Distribution Shifts. AdVQA [42] includes human-adversarial examples where the model’s initial answer is incorrect. 6) Far OODs. TextVQA [45] requires textual comprehension within images to answer questions. VizWiz [6] features user-generated images with quality and relevance challenges. OK-VQAv2 [40] involves questions that require external knowledge beyond the image content. Tab. 1 shows the statistics of each dataset.

Figure 1. ID and OOD VQA datasets with Uni-modal (Vision, Question, Answer), Multi-modal, Adversarial and Far Distribution Shifts.

Shift Type     Dataset          # Samples
Source         VQAv2 [19]       Train: 443752, Val: 214354
Image          IV-VQA [1]       119907
               CV-VQA [1]       4141
Question       VQA-Rep. [41]    162020
Answer         VQA-CP [2]       219928
Multi-modal    VQA-CE [9]       63298
Adversarial    AdVQA [42]       10000
Far OOD        TextVQA [45]     5000
               VizWiz [6]       3173
               OK-VQAv2 [40]    5046

Table 1. ID and OOD VQA datasets. We use VQAv2 train for
fine-tuning and VQAv2 val for model selection. For the other OOD datasets, we only use the test splits for evaluation.

Evaluation and Metrics. We adopt the evaluation metric from VQAv2 [19], which measures accuracy by comparing predicted answers to ground-truth human-annotated answers. Each question is paired with 10 human-provided answers, and the accuracy is computed as:

$\text{Accuracy} = \min\left(\frac{\text{number of humans who gave the answer}}{3},\ 1\right)$

3.2. Measuring Distribution Shifts across Datasets

We aim to quantify distribution shift to more precisely measure model robustness (e.g., the relationship between performance and the degree of shift). In VQA, the presence of both uni-modal and multi-modal shifts complicates qualitative analysis of the types of shifts affecting performance. By quantifying these shifts individually, we can better understand how each modality, both independently and in combination, impacts model robustness.

Mahalanobis Distance. We use the Mahalanobis distance to measure the distribution shift, following procedures similar to typical feature-based OOD detection methods [43]. Further analysis using Maximum Mean Discrepancy [20] is shown in Suppl. 15. Specifically, given our input training split $X^{train}_{in}$, we compute feature representations $z$ of the training samples to estimate the empirical mean $\mu_{train}$ and sample covariance matrix $\Sigma_{train}$. For each test split, we compute the test set shift relative to the training domain using the Mahalanobis distance defined in Eq. 1. The overall shift score for each test dataset, denoted as $S_{maha}$, is calculated as the average $S_{Maha}$ across all samples.

$S_{Maha}(z_{test}) = \sqrt{(z_{test}-\mu_{train})^{\top}\,\Sigma_{train}^{-1}\,(z_{test}-\mu_{train})} \quad (1)$

We use the VQAv2 training set as our ID dataset. The average Mahalanobis score provides an overall measure of how distant the test set is from the ID set, with higher values indicating more shift.

Embedding Extraction. Let $q$ denote the question and $v$ the image.
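Before turning to the embeddings themselves, the two quantities above are simple to compute once per-sample feature vectors are available. A minimal NumPy sketch of the VQAv2 accuracy metric and the per-dataset shift score of Eq. 1 (function names are illustrative, not from the benchmark's codebase):

```python
import numpy as np

def vqa_accuracy(pred, human_answers):
    # VQAv2 metric: an answer counts as fully correct if >= 3 of the
    # 10 annotators gave it; otherwise credit is (#matches / 3).
    return min(sum(a == pred for a in human_answers) / 3.0, 1.0)

def mahalanobis_shift(z_test, z_train):
    # z_train: (N, d) ID features; z_test: (M, d) test features.
    mu = z_train.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(z_train, rowvar=False))
    diff = z_test - mu                                   # (M, d)
    sq = np.einsum("md,dk,mk->m", diff, cov_inv, diff)   # squared distances
    return np.sqrt(sq).mean()                            # dataset-level S_maha
```

Averaging the per-sample distances, as in the last line, yields the overall shift score reported for each test split.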
The input features used in measuring shifts include uni-modal embeddings f(v) and f(q) and joint embeddings f(v, q). We first leverage ViT [13] to get f(v) and BERT [11] to get f(q) as pure uni-modal embeddings. Both representations are derived by mean-pooling the last layer of their respective pre-trained models. We use the pre-trained and fine-tuned PaliGemma from the various methods to extract both uni-modal and joint embeddings, i.e., f(v), f(q), and f(v, q). For the image embedding f(v), we mask out the question input tokens and mean-pool the image portion of the final layer of the model before the language-model head. Similarly, to get f(q), we mask out the image tokens and extract the question portion of the final layer. To obtain f(v, q), we pass in both image and text tokens as input, compute the average embedding for each modality, and then take the overall mean. All embeddings are summarized in Tab. 2. We display histograms of the Mahalanobis score distribution between each test dataset and the ID set in Suppl. 9. Results of the distances and further analysis are presented in the following sections.

Embeddings        | Model     | Modality | PT/FT
f(v)              | ViT       | V        | PT
f(q)              | BERT      | Q        | PT
f_pt(v)           | PaliGemma | V        | PT
f_pt(q)           | PaliGemma | Q        | PT
f_pt(v, q)        | PaliGemma | V, Q     | PT
f_ft^method(v)    | PaliGemma | V        | FT
f_ft^method(q)    | PaliGemma | Q        | FT
f_ft^method(v, q) | PaliGemma | V, Q     | FT

Table 2. Embedding extractions from different layers and backbone models.

https://arxiv.org/abs/2505.21755v1

4. Robust Fine-Tuning

In this section, we summarize several existing robust fine-tuning methods and evaluate their performance on our proposed benchmark FRAMES-VQA.

4.1. Robust Fine-Tuning Baselines

We include pre-training [16], Vanilla Fine-Tuning, Linear Probing, LP-FT [28], WiSE-FT [52], FTP [47] and SPD [48] as baselines. Fig. 2 provides an overview of all methods.

Vanilla fine-tuning updates model parameters to adapt to a new task without constraint, minimizing the task-specific loss L with regularization on the L2 norm of the parameters with strength λ:

min_λ Σ_{(x,y)∈D_val} L(x, y; argmin_{θ_t} Σ_{(x,y)∈D_tr} L(x, y; θ_t) + λ‖θ_t‖²₂, λ)    (2)

However, such unconstrained fine-tuning can cause feature distortion by excessively altering pre-trained representations. Linear Probing mitigates this by training only the final linear head while keeping the other layers frozen, preserving the learned features. LP-FT [28] combines these two approaches and further proposes a two-step strategy of linear probing followed by full fine-tuning to achieve better adaptation while maintaining pre-trained features.

WiSE-FT [52] introduces an interpolation technique that combines the strengths of pre-trained and fine-tuned embeddings by linearly blending their weights, with α ∈ [0, 1] controlling the balance:

θ̃ = αθ_t + (1 − α)θ_0,  α ∈ [0, 1]    (3)

L2-SP [32] applies an L2 penalty on the deviation between the fine-tuned and pre-trained weights, rather than regularizing the L2 norm of the parameters themselves.
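The weight-space operations behind these baselines are simple enough to state directly in code. The following is our own minimal sketch over flat NumPy parameter vectors (not the authors' implementation), covering the WiSE-FT interpolation of Eq. 3, the L2-SP penalty of Eq. 4, and the PGM projection that appears later as Eq. 6:

```python
import numpy as np

def wise_ft(theta_t, theta_0, alpha):
    """Eq. 3: linearly blend fine-tuned and pre-trained weights."""
    return alpha * theta_t + (1.0 - alpha) * theta_0

def l2_sp_penalty(theta_t, theta_0, lam):
    """Eq. 4 penalty term: (lam/2) * ||theta_t - theta_0||_2^2."""
    return 0.5 * lam * float(np.sum((theta_t - theta_0) ** 2))

def pgm_project(theta_t, theta_0, gamma):
    """Eq. 6: project theta_t back into the L2 ball of radius gamma
    around theta_0 (a no-op when the constraint is already satisfied)."""
    delta = theta_t - theta_0
    scale = max(1.0, np.linalg.norm(delta) / gamma)
    return theta_0 + delta / scale
```

The projection leaves weights inside the ball untouched and rescales the update direction otherwise, which is what lets TPGM/FTP enforce ‖θ_t − θ_0‖₂ ≤ γ after each step.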
This explicit penalty encourages the fine-tuned model to stay closer to the pre-trained weights:

min_λ Σ_{(x,y)∈D_val} L(x, y; argmin_{θ_t} Σ_{(x,y)∈D_tr} L(x, y; θ_t) + (λ/2)‖θ_t − θ_0‖²₂, λ)    (4)

TPGM [46] treats the regularization term in L2-SP as a constraint, reformulating the problem as a bi-level optimization and enforcing the model to stay within a distance γ of the pre-trained model, as shown in Eq. 5:

min_{λ,γ} Σ_{(x,y)∈D_val} L(x, y; argmin_{θ_t} Σ_{(x,y)∈D_tr} L(x, y; θ_t, λ), λ),  s.t. ‖θ_t − θ_0‖₂ ≤ γ    (5)

To solve the constrained optimization problem, TPGM utilizes the Projected Gradient Method (PGM) to project the updated model weights back within the constraint, as illustrated in Eq. 6:

Π_l2(θ_0, θ_t, γ):  θ̃ = θ_0 + (1 / max(1, ‖θ_t − θ_0‖₂ / γ)) (θ_t − θ_0)    (6)

FTP [47] further improves the efficiency of TPGM [46] by learning the constraint from the training set of the previous step instead of the current validation set; the change from Eq. 5 is that the outer objective is evaluated on D_tr rather than D_val:

min_{λ,γ} Σ_{(x,y)∈D_tr} L(x, y; argmin_{θ_t} Σ_{(x,y)∈D_tr} L(x, y; θ_t, λ), λ),  s.t. ‖θ_t − θ_0‖₂ ≤ γ    (7)

Figure 2. Overview of Robust Fine-Tuning Methods. The flame icon represents tunable layers, while the snowflake icon represents frozen layers. γ in TPGM, FTP and SPD is the constraint for each layer.

SPD [48] selectively imposes a strong penalty on certain layers while allowing others to change freely, with a selection condition expressed in Eq. 8, where g_{t+1} = ∂L(θ_t)/∂θ_t represents the gradient at step t:

c_t := −g_{t+1}^⊤ (θ_t − θ_0)    (8)

Intuitively, SPD expands and contracts the parameter search space for layers with consistent and inconsistent loss reduction, respectively, between the descent direction (−g_{t+1}) and the current progress direction (θ_t − θ_0). For the layers that meet the condition in Eq. 8, SPD projects the corresponding layers using PGM as in Eq. 6.

4.2. ID,
Near & Far OOD Performance

We fine-tune Google's recently released PaliGemma-3B [4] model on the VQAv2 dataset and evaluate on the other OOD datasets. PaliGemma-3B is lightweight and one of the state-of-the-art models on VQAv2, making it a practical option for benchmarking. We apply LoRA [23], a parameter-efficient fine-tuning method, to reduce the computational and memory overhead associated with fine-tuning large models, as demonstrated in prior work [24, 54]. While LoRA limits excessive parameter updates to maintain robustness [5], as shown in Tab. 3 there is still a loss of robustness compared to vanilla fine-tuning and other robust fine-tuning methods. The results of the robust fine-tuning methods are shown in Tab. 3. Training details, including configurations for the different methods, and additional results using full fine-tuning and LLaVA [33] can be found in Suppl. 8 and 13. Below we discuss our observations of the results.

Vanilla fine-tuning improves zero-shot performance across ID, near OOD, and far OOD datasets. As shown in Tab. 3, we observe no degradation in OOD performance compared to zero-shot performance following vanilla fine-tuning. This differs from the observation in [52] that vanilla fine-tuning for image classification degrades OOD performance compared to zero-shot. In the image classification task, fine-tuning typically involves using only the image encoder from the backbone model with a linear classification head, removing the text encoder and employing a cross-entropy loss that differs from the pre-training objective. However, [18] notes that fine-tuning is more robust when it retains the same objective as pre-training, which may explain why vanilla fine-tuning performs robustly in VQA tasks, where the model structure and loss function remain consistent. Model architecture and task characteristics could also contribute to the observed robustness.

WiSE-FT [52] decreases both ID and OOD performance in VQA.
[52] demonstrate that ensembling the weights of zero-shot and fine-tuned models can combine the robustness of the zero-shot model with the adaptability of fine-tuning in image classification tasks. However, WiSE-FT is highly dependent on a reduction in robustness after vanilla fine-tuning and on the linear connectivity of the model, neither of which occurs in our VQA setting. In Tab. 3, we see a significant gap between zero-shot and fine-tuned VQA performance, which limits the effectiveness of WiSE-FT. As a result, combining pre-trained and fine-tuned weights reduces the model's robustness across all shifts, making WiSE-FT less effective than vanilla fine-tuning for VQA.

SPD [48] achieves the highest performance on ID, near and overall OOD. As shown in Tab. 3, SPD achieves the highest scores on ID, near OOD, and overall OOD, demonstrating robustness across various types of distribution shifts, including vision, question, answer, and multi-modal combinations, as well as adversarial shifts.

FTP [47] underfits ID data but excels on far OOD datasets. FTP [47] underfits the ID dataset, showing significantly lower ID performance than vanilla fine-tuning, even with a minimal positive gradient annealing factor (κ = 0), indicating the weakest regularization. However, FTP performs exceptionally well on far OOD tasks, achieving the highest scores across the three far OOD datasets. This outcome may be due to FTP's stronger regularization, as the projection constraints are non-decreasing throughout training, providing consistent regularization. The FTP authors also suggest that κ = 0 is necessary for optimal performance when underfitting is observed. Compared to SPD, which applies a weaker regularization and performs better on ID and near OOD, FTP's strict regularization might uniquely favor far OOD performance. The substantial improvement in far OOD, despite a weaker zero-shot baseline, presents an interesting area for future investigation.

Method               | ID: VQAv2 (val) | IV-VQA | CV-VQA | VQA-Rep. | VQA-CP | VQA-CE | AdVQA | Near OOD Avg. | TextVQA | VizWiz | OK-VQA | Far OOD Avg. | OOD Avg.
Zero-Shot [16]       | 54.42 | 63.95 | 44.72 | 50.10 | 54.29 | 30.68 | 30.46 | 45.70 | 14.86 | 16.84 | 28.60 | 20.10 | 37.17
Vanilla FT LoRA [23] | 86.29 | 94.43 | 69.36 | 78.90 | 86.21 | 71.73 | 49.82 | 75.08 | 42.08 | 22.92 | 48.30 | 37.77 | 62.64
Linear Prob LoRA     | 78.24 | 87.83 | 63.87 | 69.61 | 78.48 | 61.66 | 42.90 | 67.39 | 29.61 | 18.80 | 42.27 | 30.23 | 55.00
LP-FT LoRA [28]      | 85.97 | 93.30 | 65.93 | 76.49 | 86.16 | 72.73 | 45.68 | 73.38 | 31.41 | 19.01 | 43.27 | 31.23 | 59.33
WiSE-FT LoRA [52]    | 71.36 | 85.06 | 64.55 | 66.42 | 70.89 | 48.74 | 43.95 | 63.27 | 36.98 | 22.41 | 42.35 | 33.91 | 53.48
FTP LoRA [47]        | 81.77 | 92.61 | 67.93 | 76.66 | 81.41 | 64.14 | 50.99 | 72.29 | 49.12 | 25.67 | 51.07 | 41.95 | 62.18
SPD LoRA [48]        | 87.39 | 95.25 | 68.85 | 79.48 | 87.27 | 73.52 | 50.90 | 75.88 | 43.56 | 23.05 | 50.11 | 38.91 | 63.55

Table 3. Visual Question Answering Fine-Tuning Results for ID, Near OOD and Far OOD datasets. Bold: best. Underline: second best.

5. Analysis on the Shift Scores

In this section, we aim to deepen our understanding of how multi-modal distribution shifts impact model robustness by analyzing 1) shift distances across different embeddings, datasets and fine-tuning methods, and 2) interactions between uni- and multi-modal shifts. We display the Mahalanobis distances of the different datasets, embeddings and fine-tuning methods as a heatmap in Fig. 3.
The darker blue represents a higher distance score, indicating a larger distribution shift.

5.1. Shift Distance across Embeddings, Datasets & Fine-Tuning Techniques

As shown in Fig. 3, shift scores increase from left to right, with a corresponding darkening in color, which reflects our intuitive understanding of relative dataset shifts. An outlier is CV-VQA, which has the lowest shift scores, likely due to its small sample size or inherent dataset issues. Additionally, question, image, and joint shifts exhibit high negative correlations with performance, with values of -0.63, -0.74, -0.78 under Vanilla FT, respectively. Full details on shift-performance correlations for various fine-tuning methods are provided in Suppl. 10, supporting the intuition that larger shifts in all modalities degrade VQA performance.

As we move from near to far OOD datasets, question shifts increase significantly, while image shifts increase at a lower rate, suggesting strong adaptivity to visual changes. This indicates that far OOD datasets introduce stronger task-guided and contextual shifts, and that the VLM tends to capture shifts more strongly in question representations.

Additionally, PaliGemma's visual, question and joint embeddings show similar variability on near OOD datasets, whilst far OOD datasets reveal greater sensitivity in question and joint embeddings, evidenced by higher Mahalanobis distances. In comparison,
pure uni-modal embeddings from ViT and BERT remain more stable. This suggests that question and joint embeddings from multi-modal models are more sensitive to significant distribution shifts, likely due to their dependence on contextual and multi-modal interactions.

5.2. Correlation between Uni- & Multi-Modal Shifts

The degree of joint shift can be influenced by both visual and question shifts. We aim to reveal how much direct influence each modality has on the overall joint shift by computing the Pearson correlation coefficient between {f(v), f(v, q)} and {f(q), f(v, q)} for each test set. Results are shown in Tab. 4. A full correlation breakdown for each dataset can be viewed in Suppl. 11.

All fine-tuning methods have a significantly higher question-joint correlation than image-joint correlation, making the joint modality more sensitive to question shifts. In contrast, the pre-trained model maintains similar levels of correlation. Further, more robust fine-tuning methods show smaller image-joint and question-joint correlations, indicating that robust methods learn representations less sensitive to specific image-question pairings, making them less impacted by uni-modal shifts.

Figure 3. Mahalanobis Distance Heatmap of Different Datasets, Embeddings and Fine-Tuning Methods. Darker blue represents a higher distance score. From top to bottom, the three groups are image, question and joint shift scores, respectively.

Method               | {f(v), f(v, q)} | {f(q), f(v, q)}
Pre-Train [16]       | 0.32 | 0.34
Vanilla FT LoRA [23] | 0.29 | 0.48
Linear Prob LoRA     | 0.26 | 0.61
LP-FT LoRA [28]      | 0.34 | 0.63
FTP LoRA [47]        | 0.34 | 0.58
SPD LoRA [48]        | 0.17 | 0.51

Table 4. Uni- and multi-modal shift correlations averaged across datasets for different fine-tuning methods.

6. Analysis on the Modality Importance

We further explore how modality importance impacts model robustness by analyzing intra- versus inter-modality attention, shifts in modality focus between ID and OOD settings, and differences across fine-tuning methods.
This analysis reveals how attention to specific modalities influences the model's ability to generalize under distribution shifts.

6.1. Intra- & Inter-Modality Attention

We introduce the following metrics to quantitatively analyze modality importance, inspired by [7]. Denote v_1, v_2, ..., v_N and q_1, q_2, ..., q_M as the image and question tokens, respectively. The Modality Importance (MI) of a token tkn is defined as the ratio of its total attention to all question tokens to its total attention to all image tokens:

MI(tkn) = Σ_{j=1}^{M} Attn(tkn, q_j) / Σ_{i=1}^{N} Attn(tkn, v_i)    (9)

where Attn(tkn_1, tkn_2) represents the attention weight from tkn_1 to tkn_2, averaged over all heads. Note that the sum of the attention weights from a token to all tokens in both modalities equals one. Further, we want to explore the modality importance of the tokens in each modality. We take the average MI of the tokens in each modality, as expressed in Eq. 10:

MI_v = (1/N) Σ_{i=1}^{N} MI(v_i),  MI_q = (1/M) Σ_{j=1}^{M} MI(q_j)    (10)

MI_m > 1 indicates that the text modality is more influential than the image modality for tokens in modality m. In Fig. 4, we present the variation of MI_v and MI_q w.r.t. shift score under vanilla FT, FTP and SPD across VQAv2 and VQA-CE. Other datasets and baselines can be found
in Suppl. 12. In Tab. 5, we further separate ID and OOD samples from all datasets using a Mahalanobis distance of 60, chosen as the relative median of shift scores to illustrate the distinction between closer and more distant samples, and show MI_v and MI_q for the different fine-tuning methods.

Figure 4. Variation of MI_v and MI_q w.r.t. shift score under vanilla FT, FTP and SPD across VQAv2 and VQA-CE. Panels: (1) VQAv2, Vanilla FT; (2) VQAv2, FTP; (3) VQAv2, SPD; (4) VQA-CE, Vanilla FT; (5) VQA-CE, FTP; (6) VQA-CE, SPD. The blue and orange bars represent MI_v and MI_q respectively. The red dotted line marks a reference MI of 1.

Method               | ID MI_v | ID MI_q | OOD MI_v | OOD MI_q | Overall MI_v | Overall MI_q
Pre-Train [16]       | 0.90 | 2.37 | 0.85 | 1.93 | 0.89 | 2.37
Vanilla FT LoRA [23] | 0.75 | 1.90 | 0.66 | 1.42 | 0.75 | 1.89
Linear Prob LoRA     | 0.71 | 2.10 | 0.66 | 1.60 | 0.71 | 2.10
LP-FT LoRA [28]      | 0.94 | 2.40 | 0.78 | 1.51 | 0.94 | 2.39
FTP LoRA [47]        | 0.93 | 2.72 | 0.89 | 1.97 | 0.92 | 2.72
SPD LoRA [48]        | 0.61 | 1.93 | 0.57 | 1.42 | 0.61 | 1.93

Table 5. MI_v and MI_q of ID and OOD samples for different fine-tuning methods. We use a Mahalanobis distance of 60 to distinguish between ID and OOD samples.

6.2. Comparison between Intra- & Inter-Modality Attention

According to Tab. 5 and Fig. 4, MI_v < 1 while MI_q > 1, implying dominant image attention for image tokens and dominant question attention for question tokens, respectively. The dominance of intra-modality attention is expected, as attention mechanisms tend to focus within the same modality to reinforce contextual coherence. However, the intra-modality attention of text tokens is greater than that of image tokens, since MI_v is close to 1 while MI_q is significantly greater than 1. This suggests that the ratio of question-to-question attention relative to question-to-image attention is higher than that of image-to-image attention relative to image-to-question attention, indicating stronger intra-modality attention for text tokens.

6.3. Comparison of MIs of ID & OOD

As illustrated in Fig. 4 and Tab.
5, across each dataset and fine-tuning method, MI_v remains relatively stable under distribution shift, whereas MI_q shows a marked decrease as the distribution shift intensifies. This pattern suggests that, in OOD samples, text tokens increasingly attend to image tokens. Such a shift could indicate a model bias, where ID samples rely heavily on intra-modality shortcuts for text, potentially at the expense of robust cross-modal integration. This reliance on shortcuts and lack of image grounding may reduce the model's ability to generalize effectively.

6.4. Comparison of Fine-Tuning Methods

According to Tab. 5, SPD, which performs best on ID and near-OOD datasets, has the lowest MI_v, while FTP, which excels on far-OOD datasets, shows the highest MI_q. One possible hypothesis is that, for optimal performance across ID, near-OOD, and far-OOD settings, a model might benefit from a high MI_q and a low MI_v, suggesting that each modality should prioritize intra-modality attention over cross-modality attention. A stronger intra-modality focus, where each modality establishes internal coherence before cross-modal integration, could potentially yield improved robustness across distribution shifts. Analyses in Sec. 5 and 6 suggest that
future work should aim to enhance robustness under language shifts and implement ways to dynamically handle modality importance. Our findings show that more robust methods exhibit higher intra-modality attention, highlighting the potential of adaptive attention mechanisms and modality-specific regularization to better balance modality contributions.

7. Conclusion

In conclusion, our paper introduces FRAMES-VQA, a new VQA benchmark to evaluate fine-tuning robustness under various distribution shifts. We categorize the shift types, quantify their degree, and conduct comparisons of robust fine-tuning baselines on the proposed benchmark. Our comprehensive analysis reveals the interactions between uni- and multi-modal shifts and how VLMs perform cross-modal learning in ID/OOD scenarios using the modality importance metric. We observe that fine-tuning increases the influence of question shifts on multi-modal representations, with more robust methods showing a lower correlation between uni- and multi-modal shifts. Question-to-image attention rises for OOD samples, while robust methods prioritize intra-modality attention. Our findings lay the foundation and highlight promising directions for future work: (1) developing training methods to mitigate uni- and multi-modal correlations, and (2) enhancing techniques for detecting and adapting to different types of distribution shifts, both of which proved crucial in our analysis.

References

[1] Vedika Agarwal, Rakshith Shetty, and Mario Fritz. Towards Causal VQA: Revealing and Reducing Spurious Correlations by Invariant and Covariant Semantic Editing. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9687–9695, Seattle, WA, USA, 2020. IEEE. 1, 2, 3

[2] Aishwarya Agrawal, Dhruv Batra, Devi Parikh, and Aniruddha Kembhavi. Don't Just Assume; Look and Answer: Overcoming Priors for Visual Question Answering, 2018. arXiv:1712.00377 [cs].
1, 2, 3

[3] Aishwarya Agrawal, Ivana Kajić, Emanuele Bugliarello, Elnaz Davoodi, Anita Gergely, Phil Blunsom, and Aida Nematzadeh. Reassessing Evaluation Practices in Visual Question Answering: A Case Study on Out-of-Distribution Generalization, 2023. arXiv:2205.12191 [cs]. 2

[4] Lucas Beyer, Andreas Steiner, André Susano Pinto, Alexander Kolesnikov, Xiao Wang, Daniel Salz, Maxim Neumann, Ibrahim Alabdulmohsin, Michael Tschannen, Emanuele Bugliarello, Thomas Unterthiner, Daniel Keysers, Skanda Koppula, Fangyu Liu, Adam Grycner, Alexey Gritsenko, Neil Houlsby, Manoj Kumar, Keran Rong, Julian Eisenschlos, Rishabh Kabra, Matthias Bauer, Matko Bošnjak, Xi Chen, Matthias Minderer, Paul Voigtlaender, Ioana Bica, Ivana Balazevic, Joan Puigcerver, Pinelopi Papalampidi, Olivier Henaff, Xi Xiong, Radu Soricut, Jeremiah Harmsen, and Xiaohua Zhai. PaliGemma: A versatile 3B VLM for transfer, 2024. arXiv:2407.07726 [cs] version: 1. 5

[5] Dan Biderman, Jacob Portes, Jose Javier Gonzalez Ortiz, Mansheej Paul, Philip Greengard, Connor Jennings, Daniel King, Sam Havens, Vitaliy Chiley, Jonathan Frankle, Cody Blakeney, and John P. Cunningham. Lora learns less and forgets less, 2024. 5

[6] Jeffrey P Bigham, Chandrika Jayant, Hanjie Ji, Greg Little, Andrew Miller, Robert C Miller, Robin Miller, Aubrey Tatarowicz, Brandyn White, Samual White, and Tom Yeh. VizWiz: nearly real-time answers to visual questions. 3

[7] Jize Cao, Zhe Gan, Yu Cheng, Licheng Yu, Yen-Chun Chen, and Jingjing Liu. Behind the scene: Revealing the secrets of pre-trained vision-and-language models, 2020. 7

[8] Shuo Chen, Jindong
Gu, Zhen Han, Yunpu Ma, Philip Torr, and Volker Tresp. Benchmarking robustness of adaptation methods on pre-trained vision-language models, 2023. 2

[9] Corentin Dancette, Remi Cadene, Damien Teney, and Matthieu Cord. Beyond Question-Based Biases: Assessing Multimodal Shortcut Learning in Visual Question Answering, 2021. arXiv:2104.03149 [cs]. 1, 2, 3

[10] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009. 1, 2

[11] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding, 2019. 4

[12] Xin Dong, Junfeng Guo, Ang Li, Wei-Te Ting, Cong Liu, and H. T. Kung. Neural mean discrepancy for efficient out-of-distribution detection, 2022. 2

[13] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2021. 4

[14] Stanislav Fort, Jie Ren, and Balaji Lakshminarayanan. Exploring the limits of out-of-distribution detection, 2021. 2

[15] Tejas Gokhale, Pratyay Banerjee, Chitta Baral, and Yezhou Yang. VQA-LOL: Visual Question Answering under the Lens of Logic, 2020. arXiv:2002.08325 [cs]. 1, 2

[16] Google. paligemma-3b-pt-224. https://huggingface.co/google/paligemma-3b-pt-224, 2024. 4, 6, 7, 8, 1

[17] Henry Gouk, Timothy M. Hospedales, and Massimiliano Pontil. Distance-Based Regularisation of Deep Networks for Fine-Tuning, 2021. arXiv:2002.08253 [cs, stat]. 2

[18] Sachin Goyal, Ananya Kumar, Sankalp Garg, Zico Kolter, and Aditi Raghunathan. Finetune like you pretrain: Improved finetuning of zero-shot vision models.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19338–19347, 2023. 5

[19] Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering, 2017. arXiv:1612.00837 [cs]. 2, 3

[20] Arthur Gretton, Karsten Borgwardt, Malte J. Rasch, Bernhard Scholkopf, and Alexander J. Smola. A kernel method for the two-sample problem, 2008. 3, 2

[21] Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, et al. The many faces of robustness: A critical analysis of out-of-distribution generalization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8340–8349, 2021. 2

[22] Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. Natural adversarial examples. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15262–15271, 2021. 1, 2

[23] Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-Rank Adaptation of Large Language Models, 2021. arXiv:2106.09685 [cs]. 5, 6, 7, 8, 1

[24] Zhiqiang Hu, Lei Wang, Yihuai Lan, Wanyu Xu, Ee-Peng Lim, Lidong Bing, Xing Xu, Soujanya Poria, and Roy Ka-Wei Lee. Llm-adapters: An adapter family for parameter-efficient fine-tuning
of large language models, 2023. 5

[25] Chengyue Huang, Junjiao Tian, Brisa Maneechotesuwan, Shivang Chopra, and Zsolt Kira. Directional gradient projection for robust fine-tuning of foundation models, 2025. 1

[26] Drew A. Hudson and Christopher D. Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering, 2019. 2

[27] Corentin Kervadec, Grigory Antipov, Moez Baccouche, and Christian Wolf. Roses are red, violets are blue... but should vqa expect them to?, 2021. 2

[28] Ananya Kumar, Aditi Raghunathan, Robbie Jones, Tengyu Ma, and Percy Liang. Fine-Tuning can Distort Pre-trained Features and Underperform Out-of-Distribution, 2022. arXiv:2202.10054 [cs]. 2, 4, 6, 7, 8, 1

[29] Dongxu Li, Junnan Li, Hung Le, Guangsen Wang, Silvio Savarese, and Steven C. H. Hoi. Lavis: A library for language-vision intelligence, 2022. 1

[30] Linjie Li, Zhe Gan, and Jingjing Liu. A Closer Look at the Robustness of Vision-and-Language Pre-trained Models, 2021. arXiv:2012.08673 [cs]. 2

[31] Linjie Li, Jie Lei, Zhe Gan, and Jingjing Liu. Adversarial VQA: A New Benchmark for Evaluating the Robustness of VQA Models, 2021. arXiv:2106.00245 [cs]. 1, 2

[32] Xuhong Li, Yves Grandvalet, and Franck Davoine. Explicit Inductive Bias for Transfer Learning with Convolutional Networks, 2018. arXiv:1802.01483 [cs]. 2, 4

[33] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning, 2023. 5, 1

[34] Jie Ma, Pinghui Wang, Dechen Kong, Zewei Wang, Jun Liu, Hongbin Pei, and Junzhou Zhao. Robust Visual Question Answering: Datasets, Methods, and Future Challenges, 2024. arXiv:2307.11471 [cs]. 2

[35] Atsuyuki Miyai, Jingkang Yang, Jingyang Zhang, Yifei Ming, Yueqian Lin, Qing Yu, Go Irie, Shafiq Joty, Yixuan Li, Hai Li, Ziwei Liu, Toshihiko Yamasaki, and Kiyoharu Aizawa. Generalized out-of-distribution detection and beyond in vision language model era: A survey, 2024. 2

[36] Liwen Ouyang and Aaron Key.
Maximum mean discrepancy for generalization in the presence of distribution and missingness shift, 2022. 2

[37] Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1406–1415, 2019. 1, 2

[38] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning Transferable Visual Models From Natural Language Supervision, 2021. arXiv:2103.00020 [cs]. 1, 2

[39] Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do imagenet classifiers generalize to imagenet? In International Conference on Machine Learning, pages 5389–5400. PMLR, 2019. 1, 2

[40] Benjamin Z. Reichman, Anirudh Sundar, Christopher Richardson, Tamara Zubatiy, Prithwijit Chowdhury, Aaryan Shah, Jack Truxal, Micah Grimes, Dristi Shah, Woo Ju Chee, Saif Punjwani, Atishay Jain, and Larry Heck. Outside Knowledge Visual Question Answering Version 2.0. In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5, 2023. 3

[41] Meet Shah, Xinlei Chen, Marcus Rohrbach, and Devi Parikh. Cycle-Consistency for Robust Visual Question Answering, 2019. arXiv:1902.05660 [cs]. 1, 2, 3

[42] Sasha Sheng, Amanpreet Singh, Vedanuj Goswami, Jose
Alberto Lopez Magana, Wojciech Galuba, Devi Parikh, and Douwe Kiela. Human-Adversarial Visual Question Answering, 2021. arXiv:2106.02280 [cs]. 1, 2, 3

[43] Xiangxi Shi and Stefan Lee. Benchmarking out-of-distribution detection in visual question answering, 2024. 3

[44] Qingyi Si, Fandong Meng, Mingyu Zheng, Zheng Lin, Yuanxin Liu, Peng Fu, Yanan Cao, Weiping Wang, and Jie Zhou. Language Prior Is Not the Only Shortcut: A Benchmark for Shortcut Learning in VQA, 2022. arXiv:2210.04692 [cs]. 1, 2

[45] Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. Towards VQA Models That Can Read, 2019. arXiv:1904.08920 [cs]. 3

[46] Junjiao Tian, Xiaoliang Dai, Chih-Yao Ma, Zecheng He, Yen-Cheng Liu, and Zsolt Kira. Trainable Projected Gradient Method for Robust Fine-tuning, 2023. arXiv:2303.10720 [cs]. 2, 4, 5

[47] Junjiao Tian, Yen-Cheng Liu, James Seale Smith, and Zsolt Kira. Fast Trainable Projection for Robust Fine-Tuning, 2023. arXiv:2310.19182 [cs]. 2, 4, 5, 6, 7, 8, 1

[48] Junjiao Tian, Chengyue Huang, and Zsolt Kira. Rethinking weight decay for robust fine-tuning of foundation models, 2024. 2, 4, 5, 6, 7, 8, 1

[49] Suraj Jyothi Unni, Raha Moraffah, and Huan Liu. VQA-GEN: A Visual Question Answering Benchmark for Domain Generalization, 2023. arXiv:2311.00807 [cs]. 1, 2

[50] Haohan Wang, Songwei Ge, Zachary Lipton, and Eric P Xing. Learning robust global representations by penalizing local predictive power. Advances in Neural Information Processing Systems, 32, 2019. 1, 2

[51] Jim Winkens, Rudy Bunel, Abhijit Guha Roy, Robert Stanforth, Vivek Natarajan, Joseph R. Ledsam, Patricia MacWilliams, Pushmeet Kohli, Alan Karthikesalingam, Simon Kohl, Taylan Cemgil, S. M. Ali Eslami, and Olaf Ronneberger. Contrastive training for improved out-of-distribution detection, 2020.
2

[52] Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, Rebecca Roelofs, Raphael Gontijo-Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, and Ludwig Schmidt. Robust fine-tuning of zero-shot models, 2022. arXiv:2109.01903 [cs]. 1, 2, 4, 5, 6

[53] Mingda Zhang, Tristan Maidment, Ahmad Diab, Adriana Kovashka, and Rebecca Hwa. Domain-robust vqa with diverse datasets and methods but no target labels, 2021. 2

[54] Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing vision-language understanding with advanced large language models, 2023. 5

FRAMES-VQA: Benchmarking Fine-Tuning Robustness across Multi-Modal Shifts in Visual Question Answering (Supplementary Material)

8. Training Details

We use the model pretrained with 224x224 input images and 128-token input/output text sequences, and fine-tune with bfloat16 precision. We use the LAVIS [29] public repository to fine-tune all methods. Standard hyper-parameters are used for all: learning rate (1e-3), weight decay (1e-4), optimizer (AdamW), scheduler (Linear Warmup with Cosine Annealing), warm-up learning rate (1e-4), minimum learning rate (1e-4), accumulation steps (2), beam size (5). The model is trained for 10 epochs with a batch size of 128 for Tab. 3. For LoRA [23], we limit our study to adapting only the attention weights and freeze the MLP modules for parameter efficiency; specifically, we apply LoRA to W_q, W_k, W_v, W_o with r = 8 in Tab. 3. The regularization hyper-parameter is found through
cross-validation, and the model with the best ID validation accuracy is taken. We use 8 A40 GPUs for each experiment. The best training configurations for the different methods are listed in Tab. 6.

Method      | lr   | wd   | others
Vanilla FT  | 1e-3 | 1e-4 | -
Linear Prob | 1e-3 | 1e-4 | -
LP-FT       | 1e-3 | 1e-4 | -
WiSE-FT     | -    | -    | α = 0.5
FTP         | 1e-3 | 1e-4 | κ = 0
SPD         | 1e-3 | 0.5  | -

Table 6. Best Training Configurations for Robust Fine-Tuning Methods. lr and wd stand for learning rate and weight decay.

9. Histograms of Shift Scores

We display histograms of the Mahalanobis score distribution between each test dataset and the ID set. Figs. 10, 11 and 12 show the visual, question and joint shifts under vanilla FT, respectively. The histograms show that under vanilla FT, visual shifts are minimal across most VQA datasets except for VizWiz, while question shifts are greater for further OOD datasets. Combined visual and question shifts exhibit the largest deviations across all test splits.

10. Correlation between Shift & Performance

Tab. 7 shows the correlation between shift and performance for different embeddings under different fine-tuning methods. Overall, visual and joint shifts exhibit the strongest correlation with performance across all types of methods. The negative correlation supports the intuition that larger shifts in all modalities degrade VQA performance.

Method               | V     | Q     | Joint
Pre-Train [16]       | -0.80 | -0.66 | -0.80
Vanilla FT LoRA [23] | -0.74 | -0.63 | -0.78
Linear Prob LoRA     | -0.82 | -0.55 | -0.81
LP-FT LoRA [28]      | -0.75 | -0.58 | -0.80
FTP LoRA [47]        | -0.86 | -0.63 | -0.75
SPD LoRA [48]        | -0.52 | -0.61 | -0.79

Table 7. Correlation between Shift Score vs. Performance for different embeddings under various fine-tuning methods.

11. Correlation between Uni- & Multi-Modal Shifts per Dataset

Fig. 5 shows the heatmap of the correlation between uni-modal and multi-modal shifts per dataset. Question-joint shift correlations are higher than image-joint shift correlations across all VQA datasets and fine-tuning methods.
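The per-dataset image-joint and question-joint correlations discussed here reduce to a plain Pearson estimator over per-sample shift scores. A minimal sketch with synthetic (hypothetical) score arrays, not the paper's actual data:

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation coefficient between two per-sample score arrays."""
    return float(np.corrcoef(np.asarray(a, float), np.asarray(b, float))[0, 1])

# Hypothetical per-sample Mahalanobis shift scores for one test set.
rng = np.random.default_rng(0)
joint = rng.normal(size=300)
question = 0.8 * joint + 0.2 * rng.normal(size=300)  # strongly coupled to joint
image = 0.3 * joint + 0.7 * rng.normal(size=300)     # weakly coupled to joint

# In this synthetic setup the question-joint correlation exceeds the
# image-joint correlation, mirroring the pattern reported for fine-tuned models.
assert pearson(question, joint) > pearson(image, joint)
```

Averaging such per-dataset coefficients across test sets gives exactly the kind of summary reported in Tab. 4.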
However, the pre-trained model maintains similar correlations for both modalities. Vanilla FT and SPD exhibit the lowest question-joint shift correlation, shown by the darkest row color across all fine-tuning methods in Fig. 5(1), while SPD shows the lowest image-joint shift correlation across the datasets in Fig. 5(2).

Figure 5. Heatmap of the correlation between uni-modal and multi-modal shifts per dataset: (1) question-joint shift correlation heatmap; (2) image-joint shift correlation heatmap.

12. Modality Importance of different Datasets and Fine-Tuning Methods

Figs. 13 and 14 show the variation of MI_v and MI_q w.r.t. shift score under all datasets and fine-tuning methods. Overall, intra-modality attention is more dominant than inter-modality attention. There is stronger intra-modality attention for text tokens than for image tokens. In OOD samples, text tokens increasingly attend to image tokens. A more robust model tends to have higher MI_q and lower MI_v.

13. Additional Results using Full Fine-Tuning and LLaVA

We present additional results in Tab. 8, including LLaVA-7B [33] with LoRA and PaliGemma-3B with full fine-tuning. These results are consistent with PaliGemma with LoRA: FTP and SPD remain relatively robust models, which validates the credibility of our analysis.

Table 8. LLaVA-7B (LoRA) and PaliGemma-3B (full fine-tuning) fine-tuned on VQAv2. We sample 10% of the VQAv2 training and validation set. Bold: best. Underline: second best.
(a) LLaVA-7B with LoRA under 10% of VQAv2 (train & val)
  Method      | VQAv2 val | Near OOD Avg. | Far OOD Avg. | OOD Avg.
  Zero-Shot   |  3.27     |  3.44         |  0.68        |  2.52
  Vanilla FT  | 72.49     | 60.07         | 28.67        | 49.60
  LP-FT       | 53.01     | 28.63         |  7.64        | 21.63
  WiSE-FT     | 60.47     | 43.33         |  9.07        | 31.98
  FTP         | 67.95     | 58.49         | 26.21        | 47.73
  SPD         | 73.59     | 61.98         | 29.98        | 51.31
(b) PaliGemma-3B with full fine-tuning under 10% of VQAv2 (train & val)
  Zero-Shot   | 54.42     | 45.70         | 20.10        | 37.17
  Vanilla FT  | 95.80     | 60.73         | 26.56        | 49.34
  Linear Prob | 86.80     | 59.61         | 24.17        | 47.80
  LP-FT       | 94.44     | 57.13         | 21.03        | 45.10
  FTP         | 95.40     | 64.33         | 32.55        | 53.74
  SPD         | 95.84     | 63.92         | 32.46        | 53.43

14. Fine-Tuning Results on GQA

We use VQAv2 as the ID dataset, since most OOD VQA datasets covering various shifts are built on it. The only exception, GQA-OOD [27] (based on GQA [26]), has only answer shifts. To further validate our findings, we fine-tune PaliGemma-3B on GQA as ID and evaluate it on GQA-OOD and VQAv2 variants (Tab. 9). The results follow the same trend: FTP and SPD remain relatively robust, with SPD excelling on Near OOD (GQA-OOD) and FTP on Far OOD (VQAv2 and its variants). This consistency reinforces the generalizability of our analysis.

Table 9. PaliGemma-3B fine-tuned on GQA with LoRA and evaluated on GQA-OOD, VQAv2 and its variants. We sample 10% of the GQA training set. Bold: best. Underline: second best.
  Method      | GQA (ID) | GQA-OOD (Near OOD) | VQAv2 | VQAv2 Near OOD Avg. | VQAv2 Far OOD Avg.
  Zero-Shot   | 41.44    | 29.33              | 54.42 | 45.70               | 20.10
  Vanilla FT  | 67.00    | 53.97              | 64.97 | 57.08               | 23.42
  Linear Prob | 61.70    | 50.32              | 54.27 | 39.64               | 14.43
  LP-FT       | 61.51    | 50.72              | 55.72 | 43.89               | 14.95
  FTP         | 64.97    | 53.15              | 66.40 | 58.38               | 25.26
  SPD         | 66.80    | 54.04              | 65.27 | 57.53               | 24.55

15. Quantifying Shifts using Maximum Mean Discrepancy

Mahalanobis distance is a dominant metric for measuring distribution shifts [35]. We further compare shifts using Maximum Mean Discrepancy (MMD) [12, 20, 36] with an RBF kernel in Tab. 10.
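The MMD comparison with an RBF kernel (gamma = 1.0, as reported for Tab. 10) can be sketched as below. This is a generic biased MMD^2 estimator on synthetic stand-in embeddings, not the paper's implementation; note the paper additionally applies a 10^4 scale-up factor to its reported numbers.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy between
    two samples x and y."""
    kxx = rbf_kernel(x, x, gamma).mean()
    kyy = rbf_kernel(y, y, gamma).mean()
    kxy = rbf_kernel(x, y, gamma).mean()
    return kxx + kyy - 2.0 * kxy

rng = np.random.default_rng(0)
ident = rng.normal(0.0, 1.0, size=(300, 4))  # stand-in ID embeddings
shift = rng.normal(1.0, 1.0, size=(300, 4))  # shifted distribution
same  = rng.normal(0.0, 1.0, size=(300, 4))  # fresh ID-like sample
assert mmd2(ident, shift) > mmd2(ident, same)
```

As with the Mahalanobis score, a larger value indicates a greater shift between the two embedding sets.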
We observe similar trends as with Mahalanobis distance (i.e., higher scores indicate greater shifts), reinforcing the reliability of our shift scores.

Table 10. Maximum Mean Discrepancy with LoRA and pretrained embeddings on VQAv2 and its variants. We sample 1000 instances per dataset; gamma = 1.0, scale-up factor = 10^4.
  Embedding          | ID    | IVVQA | VQA-REP | VQA-CE | TextVQA | VizWiz
  f_vanilla-ft(q)    | 20.02 | 20.12 | 20.10   | 20.18  | 21.92   | 23.18
  f_vanilla-ft(v)    | 20.17 | 20.20 | 20.13   | 20.28  | 21.98   | 23.07
  f_vanilla-ft(v, q) | 20.10 | 20.12 | 20.12   | 20.20  | 22.44   | 23.02
  f_pt(q)            | 20.16 | 20.18 | 20.22   | 20.28  | 21.98   | 23.32
  f_pt(v)            | 20.15 | 20.18 | 20.20   | 20.25  | 22.47   | 23.75
  f_pt(v, q)         | 20.08 | 20.06 | 20.16   | 20.10  | 22.28   | 23.26

16. Qualitative Analysis: Inspect via Sampling

To investigate the types of ID and OOD samples under different modalities, we sample from various regions of the histogram to inspect how the model represents ID/OOD embeddings. This also serves as a verification of the reliability of quantifying shifts via feature-based representations. We select the vanilla fine-tuned (with LoRA) model as our model of choice and consider the various regions in the
distribution between both the train and test splits.

Figure 6. Sampling regions in the histogram.

Under V, Q and V+Q, we sample 50 instances from the 4 regions shown in Fig. 6:
•Left tail (top 5%): ID samples.
•Top region: most frequently occurring samples in the test set.
•Intersect region: similar samples between the train & test splits.
•Right tail (bottom 5%): clear outlier concepts that exhibit uncommon & hard instances (e.g., an image object that barely appears in the left tail/peak region).

16.1. Image Shift f_ft(v)

We observe that the ID images involve commonly occurring objects, shown in Tab. 11.

Table 11. Commonly occurring objects:
  Animals:  cats, dogs, giraffe
  Sports:   baseball, skateboard, frisbee
  Objects:  kite, fire hydrant, pizza
  Vehicles: bus, car, airplane

Under image shifts, we have the following observations and potential hypotheses. We observe that 1) there are distinct differences between left tail/peak and right tail (OOD) samples in terms of object categories and compositions (e.g., number of objects); some examples are shown in Fig. 7. 2) Intersecting regions have similar concepts. 3) Tail samples seem to have fewer objects and contain more close-up images. 4) There are still some instances where similar objects appear in significantly different regions, i.e., a commonly occurring object appears in the right (OOD tail) region; such instances are shown in Fig. 8.

We hypothesize that 1) there are significant visible shifts under the image domain, with barely overlapping image objects between ID and OOD regions. However, some odd samples depicted in Fig. 8 suggest a weakness of the fine-tuned model in robustly representing image embeddings: it fails to place embeddings of images with similar objects close together. 2) Weight updates under joint inputs cause the image embeddings of similar objects to steer in different directions, since the image embeddings are indirectly conditioned on different questions and answers.

16.2. Question Shift f_ft(q)

Tab. 13 details ID and OOD question examples. The ID questions are much more straightforward, involving simple identification of colours, activities, yes/no questions, etc. OOD questions tend to incorporate more outside knowledge, which explains the drastic question shift in OKVQA, and more complex visual grounding such as OCR (Optical Character Recognition) tasks.

16.3. Joint Shift f_ft(v, q)

The joint shift adds complexity, since we must consider the possible combinations of shifts under mixed modalities.

Table 12. Region samples:
  Left Tail / Peak / Intersect: ID Object + ID Question
  Right Tail: ID Object + OOD Question; OOD Object + ID Question; OOD Object + OOD Question

Tab. 12 outlines the types of VQA samples we expect to see in the different regions. Intuitively, we would expect samples with OOD Question + OOD Object to be found in the right tail region. Finding OOD Question + ID Object there may indicate either that the specific dataset has more samples with prominent OOD questions paired with ID images, or that the joint shifts are more heavily influenced by the text modality, causing samples with OOD questions to have a
significant push of the joint embedding towards the right tail region. Similarly, for ID Question + OOD Object: if those samples are found in the right tail, it may indicate that the dataset has more OOD samples with ID Question + OOD Object, or that the visual modality has a greater influence in steering the joint embedding.

ID and OOD joint samples can be viewed in Fig. 9. Most of the ID samples have objective questions and images with easy-to-view objects. On the other hand, the right tail samples tend to involve harder questions that are more subjective, require outside knowledge, and demand more reasoning, including VQA that involves reading text from an image. Interestingly, most of the right tail samples are attributable to OOD questions regardless of whether the paired image is ID/OOD, i.e., the majority of inspected OOD samples are either OOD Question + OOD Object or OOD Question + ID Object.

Table 13. Common questions with examples for ID and OOD cases:
  Color: "What color is the cat?" (ID) vs. "What color is on the inside of the speaker?" (OOD)
  Activity: "What is this man doing?" (ID) vs. "What is the person washing?" (OOD)
  Counting: "How many {ID objects}?" (ID) vs. "How many are chocolate-covered donuts?" (OOD)
  Outside knowledge: (no ID examples) vs. "What type of feed does this breed of horse need?", "What is the name of the knot used on this tie?", "If you add the two visible numbers on the jerseys, what is the total sum?" (OOD)
  Text-in-image: "What is the number in lights on the bus?" (ID) vs. "What is the name of the package delivery company?" (OOD)
  Brand/Species: "What brand of bike is this?", "What species of animal are we looking at?" (ID) vs. "What brand of watch is shown?", "What species of fish is being served?" (OOD)

Figure 7. Comparison set of ID images (1) and OOD images (2) in terms of object categories and compositions (e.g., number of objects).
This suggests that amongst the 10 VQA datasets there is a much higher question shift than image shift, as well as a higher sensitivity to question shifts, since the farther OODs satisfy OOD Question + ID Object > ID Question + OOD Object. We also measure the composition of right tail samples under the joint modality by filtering the samples that fall under each respective category, using an OOD threshold cutoff for each modality. The percentages of samples that make up the right tail region are shown in Tab. 14. This aligns with our qualitative analysis that OOD questions make up a significant portion of the Joint OOD region, regardless of whether the paired images are ID/OOD.

Table 14. Percentage of each OOD type in Joint OOD. We use a threshold cutoff of 45 for visual OOD, 50 for question OOD, and 60 for joint OOD:
  OOD V + ID Q:  9.68%
  ID V + OOD Q:  45.83%
  OOD V + OOD Q: 14.76%

Figure 8. Comparison of ID images (1) and OOD images (2) with similar objects.

(1) ID samples (2) OOD samples
Figure 9. Comparison set of ID and OOD joint samples.

Figure 10. Histogram for Vanilla FT Visual Shifts, with panels (1) VQAv2 Val, (2) IV VQA, (3) CV VQA, (4) VQA Rephrasings, (5) VQA CP v2, (6) VQA CE, (7) ADVQA, (8) Text VQA, (9) VizWiz, (10) OK VQA. We depict the S_Maha score on the visual modality for each sample in the VQAv2 train split in blue and the corresponding test samples in orange. There are minimal visual shifts for all VQA datasets from the VQAv2 train split, except for panel (9), which shows evidence of greater shift between the orange and blue distributions.

Figure 11. Histogram for Vanilla FT Question Shifts, with the same ten panels as Figure 10. We depict the S_Maha score on the question modality for each sample in the VQAv2 train split in blue and the corresponding test samples in orange. As in the visual shift histograms, the far OODs (panels (8), (9), (10)) show evidence of greater shifts between the orange and blue distributions than the near OODs.

Figure 12. Histogram for Vanilla FT V+Q Shifts, with the same ten panels. We depict the S_Maha score on the V+Q shift for each sample in the VQAv2 train split in blue and the corresponding test samples in orange. For all test splits, V+Q shifts show a greater degree of shift compared to the corresponding visual and question shifts.
Figure 13. Variation of MI_v and MI_q w.r.t. shift score under PT, FT, LP, LP-FT, FTP and SPD across all ID and near OOD datasets (VQAv2 val, IV-VQA, CV-VQA, VQA-Rep., VQA-CP, VQA-CE and AdVQA; panels (1)-(42), one per dataset-method pair). The blue and orange bars represent MI_v and MI_q respectively. The red dotted line marks a reference MI of 1.

Figure 14. Variation of MI_v and MI_q w.r.t. shift score under PT, FT, LP, LP-FT, FTP and SPD across the far OOD datasets (TextVQA, VizWiz and OK-VQA; panels (1)-(18), one per dataset-method pair).
arXiv:2505.21765v1 [cs.AI] 27 May 2025

Don't Think Longer, Think Wisely: Optimizing Thinking Dynamics for Large Reasoning Models

Sohyun An¹, Ruochen Wang¹, Tianyi Zhou², Cho-Jui Hsieh¹
¹University of California, Los Angeles  ²University of Maryland, College Park
sohyun0423@cs.ucla.edu  tianyi@umd.edu  chohsieh@cs.ucla.edu

Abstract

While the recent success of large reasoning models (LRMs) has significantly advanced LLMs' reasoning capability by optimizing final answer accuracy using reinforcement learning, it may also drastically increase output length due to overthinking, characterized by unnecessarily complex reasoning paths that waste computation and potentially degrade performance. We hypothesize that such inefficiencies stem from LRMs' limited capability to dynamically select the proper modular reasoning strategies, termed thinking patterns, at the right position. To investigate this hypothesis, we propose a dynamic optimization framework that segments model-generated reasoning paths into distinct thinking patterns, systematically identifying and promoting beneficial patterns that improve the answer while removing detrimental ones. Empirical analysis confirms that our optimized thinking paths yield more concise yet sufficiently informative trajectories, enhancing reasoning efficiency by reducing attention FLOPs by up to 47% while maintaining accuracy for originally correct responses. Moreover, a non-trivial portion of originally incorrect responses are transformed into correct ones, achieving a 15.6% accuracy improvement with reduced length. Motivated by the improvement brought by the optimized thinking paths, we apply a preference optimization technique supported by a pairwise dataset contrasting suboptimal and optimal reasoning paths.
Experimental evaluations across multiple mathematical reasoning benchmarks reveal that our method notably reduces computational overhead while simultaneously improving reasoning accuracy, achieving up to a 12% accuracy improvement and reducing token usage from approximately 5,000 to 3,000 tokens.

1 Introduction

Recent advancements in Large Reasoning Models (LRMs) [13, 44, 9] have substantially enhanced their capabilities across various complex, reasoning-intensive tasks. These advancements have largely been driven by outcome-only reinforcement learning (RL) [23, 25, 12, 16], which trains models to maximize final answer accuracy. Models trained under this framework prioritize producing the correct answer over generating an optimal reasoning trajectory, and thus can sometimes produce long reasoning paths, i.e., test-time scaling, achieving strong performance even in complex scenarios [28, 36, 15]. While these methods effectively increase the correctness of final predictions, they also reveal inherent limitations [19]. One notable issue frequently observed in current LRMs is the tendency toward overthinking [4, 29, 31], characterized by excessively prolonged reasoning paths or unnecessarily intricate inference steps. This behavior typically results in significant computational inefficiency, increasing resource usage without improving performance, and can even degrade overall performance due to excessive exploration [30, 2, 27, 22].

Preprint. Under review.

We hypothesize that this inefficiency largely stems from current LRMs' limited capability to dynamically identify and select optimal reasoning strategies, referred to hereafter as thinking patterns, at critical junctures within the reasoning process. A thinking pattern is defined as a modular reasoning segment that performs a distinct cognitive function, such as hypothesis generation, self-verification, intermediate summarization, or exploring alternative scenarios [7].
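Segmenting a trajectory into such thinking patterns can be sketched with a simple cue-word splitter. This is an illustrative assumption, not the paper's implementation: the exact cue list and the function name are made up here, based on the cue words ("Wait", "Alternatively") that the paper mentions as signals of strategy shifts.

```python
import re

# Cue words that often mark a switch of thinking pattern.
# The exact cue list is an assumption for illustration.
CUES = ["Wait", "Alternatively", "But hold on", "Hmm"]

def segment_thinking_patterns(trajectory):
    """Split a reasoning trajectory into thinking-pattern segments at
    cue words, using a zero-width lookahead so cues are kept."""
    pattern = r"(?=\b(?:" + "|".join(re.escape(c) for c in CUES) + r")\b)"
    return [s.strip() for s in re.split(pattern, trajectory) if s.strip()]

traj = ("First, expand (a+10)(b+10) = ab + 10a + 10b + 100. "
        "Wait, 10a + 10b can be factored as 10(a + b). "
        "Alternatively, reduce everything mod 20 first.")
segs = segment_thinking_patterns(traj)
assert len(segs) == 3 and segs[1].startswith("Wait")
```

Each resulting segment then becomes one candidate unit δ_i for the keep/prune decisions described later in the paper.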
An ideal LRM should dynamically utilize beneficial thinking patterns while minimizing or discarding unnecessary or detrimental ones, thus improving computational efficiency without compromising accuracy. To investigate this hypothesis, we formulate the enhancement
of reasoning efficiency as a constrained optimization problem aimed at minimizing the expected computational cost of reasoning trajectories while preserving or improving task performance. Subsequently, we propose a framework for Dynamic Thinking pattern Optimization (DTO), specifically designed to refine the selection of modular reasoning strategies. Our proposed method operates by first segmenting model-generated reasoning trajectories into identifiable thinking patterns. We then systematically evaluate each segment's contribution, classifying it as positive or negative based on its impact on reasoning efficiency. By identifying appropriate finalization points and selectively pruning negative segments while reinforcing positive ones, our approach not only compresses the original reasoning paths into concise, effective trajectories, but also enables the transformation of flawed reasoning traces associated with incorrect outcomes into more coherent and accurate alternatives. In contrast to prior methods, which rely heavily on heuristic truncation [4] or aggregate metrics like token length [27, 17], our framework explicitly models and optimizes the individual contributions of reasoning segments, enabling a more precise enhancement of reasoning efficiency. To explicitly guide LRMs toward more optimal reasoning behaviors, we integrate a preference optimization approach [20], leveraging a specialized pairwise dataset that contrasts suboptimal and optimal reasoning trajectories based on our framework. This encourages models to preferentially adopt thinking patterns demonstrated to yield superior efficiency and effectiveness. We validate our framework through extensive experiments conducted across multiple established mathematical reasoning benchmarks. The results consistently demonstrate that our approach reduces computational requirements while simultaneously enhancing reasoning performance.
Our contributions and findings are summarized as follows:
•We formulate the enhancement of LRM reasoning efficiency as a constrained optimization problem, and introduce a dynamic optimization framework, DTO, for this purpose.
•We empirically validate that dynamically selecting optimal thinking patterns significantly improves reasoning efficiency.
•Leveraging a preference optimization strategy based on pairwise trajectory comparisons, we achieve high reasoning efficiency across various mathematical reasoning benchmarks, attaining up to 12% higher accuracy than the original LRM while reducing token usage from around 5,000 to 3,000.

2 Related Work

Large Reasoning Models. LLMs have traditionally exhibited rapid, intuitive decision-making aligned with System 1 thinking. Recent efforts aim to equip them with more deliberate, System 2 reasoning abilities. Early studies primarily adopted prompting-based methods, such as chain-of-thought, to elicit explicit step-by-step reasoning [34, 32, 39, 14]. While effective for structured reasoning, these methods often struggled with complex, multi-step problems [36]. To address this, later work introduced test-time scaling techniques [28, 3, 35, 24], incorporating search and verification during inference to boost accuracy. More recently, Large Reasoning Models (LRMs) have been developed to internalize sophisticated reasoning capabilities within the model itself [9, 44, 13]. They are typically trained via outcome-based RL that focuses on maximizing final answer accuracy [23, 25, 12]. While effective for improving correctness, this may inadvertently lead to overthinking: unnecessarily long or complex reasoning paths that reduce computational efficiency [4, 29, 31].

Reasoning Efficiency of LRMs. Several recent studies have addressed the issue of reasoning efficiency in LRMs.
For example, some studies [22, 41, 2, 30, 1, 38, 26, 11] have introduced various length-based objectives integrated with RL approaches, while other efforts [40, 42] have employed SFT supplemented by
the collection of effective datasets. Complementary to these strategies, another line of research [17, 4, 27] has investigated post-hoc refinement and scoring mechanisms for reasoning trajectories generated by models, demonstrating promising results with relatively low computational overhead.

Figure 1: Illustration of DTO on the problem "If a and b are integers such that ab ≡ 17 mod 20, what is the remainder when (a+10)(b+10) is divided by 20?" (answer: 17). The sampled trajectory is segmented into thinking patterns; productive patterns are kept (e.g., factoring 10a + 10b as 10(a + b), and establishing that a and b are both odd so a + b is even and 10(a + b) ≡ 0 mod 20), while redundant re-derivations are removed. We construct a truncated reasoning trajectory Δ^f_x by identifying the point where the probability score p_i in Equation (6) exceeds a threshold T = 1.0, and then applying the binary selection function f(·) from Equation (7). We then append the finalization pattern δ_finalize ("Hmm, I think this is enough to derive the final answer.") and the sampled answer s* (Equation (9)) to form Δ̃^f_x. Finally, the pruning function g(·) (Equation (11)) refines the trajectory into the optimized version Δ^g_x.

Our method builds on this line of work, leveraging its demonstrated efficiency and broad applicability. Although primarily evaluated on base LRMs, these approaches can generally be applied to models trained or fine-tuned through other efficiency-enhancing strategies. More specifically, Luo et al. [17] introduced the Length-Harmonizing score, combining reasoning accuracy and token length within a PPO-style fine-tuning objective to encourage more concise reasoning paths. Preference-based optimization approaches, as explored by Chen et al. [4] and Shen et al. [27], have also proven effective. These methods construct pairwise datasets to guide models toward more efficient reasoning. Specifically, Chen et al. [4] generate positive samples by instructing an LLM to truncate correct reasoning at the point where an initial solution is followed by a reflective step, while Shen et al. [27] employ a scoring mechanism favoring shorter correct and longer incorrect responses when forming pairwise comparisons. Nevertheless, these methods primarily rely on aggregate-level metrics such as token length or heuristic truncation points.
As a result, they are limited in their ability to precisely assess and optimize the contribution of individual reasoning segments, potentially hindering more optimal efficiency. To address this limitation, we propose a dynamic optimization framework that operates at the segment level.

3 Dynamic Optimization of Thinking Patterns

Reasoning trajectories generated by LRMs exhibit a discernible structure, where distinct segments serve specific cognitive roles such as self-verification, intermediate summarization, or exploring alternatives [7]. We refer to these segments as thinking patterns, which can often be identified through linguistic cues such as "Wait" and "Alternatively" that signal shifts in reasoning strategy. As illustrated in Figure 1, certain thinking patterns promote reasoning progress by facilitating equation formulation and step-by-step computation. Conversely, other patterns hinder efficiency by repeating prior reasoning or continuing unnecessary verification without contributing to the reasoning process. We hypothesize that such inefficiencies in LRMs stem from their limited ability to dynamically select appropriate thinking patterns at critical moments. For example, models may fail to terminate reasoning even after sufficient evidence has been gathered. To investigate this hypothesis, we formulate the enhancement of reasoning efficiency as a constrained optimization problem, aiming to reduce computational cost while enforcing a lower bound on task performance. Building on this formulation, we introduce a dynamic optimization framework termed DTO that identifies and prunes unproductive segments and reinforces those that contribute positively.

3.1 Problem Formulation

Let Δ_x be a finite list of thinking patterns for problem
x ∼ D, i.e., Δ_x = [δ_1, δ_2, ..., δ_{n_x}], where D denotes the underlying distribution over input problems. Each thinking pattern δ incurs a non-negative computational cost c(δ) ≥ 0, typically quantified by metrics such as token length or FLOPs. The total cost of the reasoning trajectory Δ_x is therefore given by:

  C(Δ_x) = Σ_{δ ∈ Δ_x} c(δ).    (1)

Here, our objective is to construct reasoning trajectories that integrate effective thinking patterns to achieve high reasoning efficiency. Formally, we define the constrained optimization problem as follows, aiming to minimize the expected computational cost while maintaining task performance at or above its prior level:

  minimize_{Δ_x = [δ_1, ..., δ_{n_x}]}  E_{x∼D}[C(Δ_x)]
  subject to  E_{x∼D}[P(Δ_x)] ≥ α,    (2)

where P(·) denotes a task-specific performance metric such as final answer accuracy, and α represents a lower bound on the expected task performance, e.g., the expected performance of the original trajectories generated by a base LRM if we aim to maintain the original performance.

3.2 DTO: Constructing Optimal Reasoning Trajectories

To solve the problem defined in Equation (2), we must first characterize what constitutes an optimal trajectory under this formulation. We define such a trajectory as one that incrementally generates thinking patterns conducive to achieving the reasoning objective, such as narrowing down possibilities or providing key insights, and terminates the reasoning process once sufficient information has been accumulated to produce a correct answer. To extract such an optimal reasoning trajectory from an LRM-generated response, we follow a two-step procedure: (1) identify the appropriate point at which the reasoning should be finalized, and (2) prune intermediate thinking patterns that do not meaningfully contribute to the reasoning objective.

Given a reasoning trajectory y generated by an LRM π_θ for a problem x with a known ground-truth answer a*, we segment y into a sequence of distinct thinking patterns:

  y = [δ_1, ..., δ_{n_y}],  y ∼ π_θ(x).    (3)

To systematically determine optimal termination points, inspired by [6], we define an exit thinking pattern, δ_exit, expressed as "... Wait, I suddenly got the final answer to the whole problem. Final Answer: \boxed{". For each candidate termination point i, we construct a partial trajectory τ_i by appending the exit pattern:

  τ_i = δ_1 ⊕ δ_2 ⊕ ... ⊕ δ_i ⊕ δ_exit,  for i = 1, ..., n_y.    (4)

We then employ Monte Carlo estimation, sampling multiple completions:

  R_i = {r_1, r_2, ..., r_M},  r_j ∼ π_θ(τ_i).    (5)

Benefiting from highly efficient and parallelizable inference frameworks [45], and by restricting generation to a small number of tokens with a slight margin over the token length of the ground-truth answer a*, this process can be executed in a lightweight manner. We then determine the probability p_i of deriving the correct answer at index i by:

  p_i = |{r ∈ R_i | a* ∈ r}| / |R_i|.    (6)

We define the earliest index i = i′ such that p_i exceeds a predefined threshold T, and apply a binary selection function over the thinking patterns:

  f(δ_i) = 1 if i ≤ i′, 0 otherwise,    (7)

yielding the truncated reasoning trajectory:

  Δ^f_x = [δ_i ∈ y | f(δ_i) = 1],    (8)

Figure 2: Comparison of dynamically optimized vs. original responses for (a) DeepSeek-R1 and (b) DeepScaleR, and (c) the max_i p_i distribution in incorrect cases for DeepSeek-R1. (a), (b) Dynamic optimization preserves accuracy for correct responses while reducing attention FLOPs (by 47% and 40%), and improves accuracy for incorrect ones (15.6%,
7.8%) with lower FLOPs. (c) shows max_i p_i, the maximum estimated correctness probability across thinking patterns (Equation (6)); high values suggest that even incorrect trajectories often contain a promising intermediate segment.

Here y is the whole trajectory in Equation (3). Next, we introduce a special finalize thinking pattern, δ_finalize, which signals that sufficient reasoning has been conducted, similar to how a human might intuitively recognize when to stop deliberating and provide an answer. This is expressed as "Hmm, I think this is enough to derive the final answer." Unlike δ_exit, δ_finalize coherently concludes the reasoning process. We sample completions conditioned on the finalized trajectory:

  S_{Δ^f_x} = {s_1, s_2, ..., s_K},  s_j ∼ π_θ(Δ^f_x ⊕ δ_finalize).    (9)

Among the correct completions in S_{Δ^f_x}, we select the shortest one, s*, and construct a refined reasoning trajectory:

  Δ̃^f_x = Δ^f_x ⊕ δ_finalize ⊕ s*.    (10)

To further optimize Δ̃^f_x, we evaluate the utility of each intermediate thinking pattern using an auxiliary LLM μ_φ. Given a dedicated prompt (see Appendix A.4), the model determines whether each δ_i ∈ Δ̃^f_x meaningfully contributes to deriving a*. Note that we instruct the model to remove segments only if they add no meaningful content, are factually incorrect in a harmful way, or are entirely off-topic or unhelpfully redundant, so as to avoid compromising the model's reasoning capability. To further ensure that the pruned sequence, which excludes δ_i, still leads to the correct answer, we consider the following trajectory:

  Δ̃^{f\δ_i}_x = [δ_j ∈ Δ̃^f_x | δ_j ≠ δ_i, and before "\boxed"],

and perform a quick decoding step for validation. Patterns that are deemed redundant and pass validation are removed via the following pruning function:

  g(δ_i) = 1 if δ_i can be removed from Δ̃^f_x, 0 otherwise.    (11)

This results in the final optimized reasoning trajectory:

  Δ^g_x = [δ_j ∈ Δ̃^f_x | g(δ_j) ≠ 1].    (12)

Please refer to Appendix A.3 for the full algorithm of our framework.
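The termination-point search of Equations (4)-(8) can be sketched as follows: estimate p_i by sampling completions after each prefix plus the exit pattern, then truncate at the earliest index whose score reaches T = 1.0. The `toy_model` stand-in for π_θ, the example patterns, and all function names are assumptions for illustration; a real implementation would batch-sample short completions from the LRM.

```python
def estimate_p(patterns, i, sample_answer, gold, M=10):
    """Monte Carlo estimate of p_i (Eq. 6): fraction of M sampled
    completions of patterns[:i+1] + delta_exit containing the gold
    answer. `sample_answer` stands in for the LRM pi_theta."""
    hits = sum(gold in sample_answer(patterns[: i + 1]) for _ in range(M))
    return hits / M

def earliest_finalize_index(patterns, sample_answer, gold, T=1.0, M=10):
    """Earliest index i' with p_i >= T, used to truncate the trajectory
    into Delta^f_x (Eqs. 7-8). Falls back to the last index."""
    for i in range(len(patterns)):
        if estimate_p(patterns, i, sample_answer, gold, M) >= T:
            return i
    return len(patterns) - 1

# Deterministic toy stand-in for pi_theta: it can only produce the
# correct answer once the decisive pattern (index 2) is in the prefix.
def toy_model(prefix):
    return "Final Answer: 17" if len(prefix) >= 3 else "Final Answer: 7"

patterns = ["expand the product", "factor out 10(a+b)",
            "a, b both odd so a+b is even", "redundant re-verification"]
i_star = earliest_finalize_index(patterns, toy_model, "17")
assert i_star == 2  # reasoning can be finalized after the third pattern
```

Truncating at i_star drops the trailing redundant verification, mirroring how DTO removes patterns that appear after the model already has enough information.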
3.3 Analysis

In this section, we compare the original trajectory y sampled from the base LRM with the optimized trajectory Δ^g_x defined in Equation (12).

Setup. We use a dataset consisting of 5,000 samples drawn from the MATH [10] training set^1 and employ DeepSeek-R1-Distill-Qwen-1.5B [9] and DeepScaleR-1.5B-Preview [18] as π_θ and Llama-3.3-70B-Instruct [8] as the auxiliary LLM μ_φ. For each problem, we sample 4 responses. If at least one response is correct, we construct a reasoning trajectory Δ^g_x by applying our proposed framework to a randomly selected correct response. If no correct responses are available, we instead apply the same procedure to a randomly selected incorrect response. We set the temperature to 0.6, top_p to 0.95, and max_tokens to 8192. The value of M in Equation (5) was set to 10, K in Equation (9) was set to 4, and the threshold T was set to 1.0, signaling that the model has accumulated sufficient information to produce the correct answer.

Results. As illustrated in Figure 2a and Figure 2b, the responses optimized using our method require substantially fewer attention FLOPs than the original responses from the base LRM. Specifically, for responses that originally produced a correct answer, our method reduces attention FLOPs consumption by 47% and
40% for DeepSeek-R1-Distill-Qwen-1.5B and DeepScaleR-1.5B-Preview, respectively, while preserving correctness. Interestingly, for responses that initially yielded incorrect outcomes, applying our framework to induce more effective thinking patterns, such as finalizing reasoning at appropriate termination points, transforms a non-trivial portion of them into correct ones, improving accuracy by 15.6% and 7.8% for DeepSeek-R1-Distill-Qwen-1.5B and DeepScaleR-1.5B-Preview, respectively, while also reducing FLOPs consumption.

To better understand why this is possible, we analyze max_i p_i for responses that were originally incorrect in Figure 2c and Figure 4. As described in Equation (6), p_i denotes a Monte Carlo estimate of the probability that the model produces the correct answer when terminated at the i-th thinking pattern. We find that, even in incorrect responses, many intermediate segments exhibit high p_i values, suggesting that the model temporarily arrives at high-quality partial reasoning steps. This observation indicates that our method can effectively restructure these responses with more appropriate thinking patterns, thereby improving accuracy while consistently lowering the computational cost.

4 Preference Optimization Towards Optimal Reasoning Behaviors

Thus far, we have demonstrated that our dynamic optimization framework enables the construction of significantly more efficient reasoning trajectories by selecting appropriate thinking patterns at the right points in the reasoning process. In this section, we leverage the framework to construct a contrasting-pair dataset of optimized and suboptimal reasoning trajectories, and examine the impact of applying a preference optimization technique using this dataset.

4.1 Preference Optimization with Dynamically Optimized Trajectories

To apply the preference optimization technique, we first construct a pairwise dataset (y_w, y_l) for each problem x ∼ D. For each x, we aim to generate N response pairs.
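One way such (y_w, y_l) pairs can be assembled, following the selection rule detailed in the next paragraphs, is sketched below. All names are hypothetical, `optimize` stands in for the dynamic optimization framework, and the random top-up with incorrect responses is simplified to a deterministic slice.

```python
# Illustrative sketch of the chosen/rejected pair construction (names
# hypothetical). `optimize` stands in for the dynamic optimization
# framework; the paper's random top-up with incorrect responses is
# simplified here to a deterministic slice.

def build_pairs(responses, optimize, n_pairs=2):
    """responses: list of (text, is_correct) sampled from the LRM."""
    correct = sorted([r for r, ok in responses if ok], key=len)
    wrong = [r for r, ok in responses if not ok]
    # Prefer the n_pairs shortest correct responses; top up with incorrect ones.
    seeds = correct[:n_pairs] + wrong[: max(0, n_pairs - len(correct))]
    # The longest raw (unoptimized) response serves as the rejected side.
    rejected = max((r for r, _ in responses), key=len)
    return [(optimize(s), rejected) for s in seeds]

responses = [
    ("short proof", True),
    ("a much, much longer ramble", False),
    ("medium length proof", True),
    ("another very long wrong try", False),
]
pairs = build_pairs(responses, optimize=lambda s: s + " [optimized]")
```

Each chosen side is an optimized trajectory and each rejected side is the longest unoptimized sample, mirroring the contrast that the preference objective then exploits.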
Specifically, we begin by sampling 4 responses from the LRM π_θ. Let N_c denote the number of correct responses among them. If N_c ≥ N, we select the N shortest correct responses and apply our dynamic optimization framework to generate optimized trajectories, which are used as y_w. If N_c < N, we select all N_c correct responses and additionally sample N − N_c incorrect responses at random. We then apply our framework to these N responses to obtain optimized trajectories as y_w. For each y_w, we select the longest unoptimized trajectory among the sampled responses to serve as the corresponding y_l, thereby constructing the final pairwise dataset D'. We then apply SimPO [20], which demonstrates strong performance while incurring relatively low computational cost, to guide the LRM toward more optimal reasoning behaviors:

    L_SimPO(π_θ) = −E_{(x, y_w, y_l) ∼ D'} [ log σ( (β/|y_w|) · log π_θ(y_w|x) − (β/|y_l|) · log π_θ(y_l|x) − γ ) ],   (13)

where β and γ are hyperparameters, σ(·) denotes the sigmoid function, and |y_w| and |y_l| represent the token lengths of y_w and y_l, respectively. Please refer to Appendix A for more details.

4.2 Experiments

Dataset. We evaluate our method against various existing approaches across several established mathematical reasoning benchmarks, including the test sets of MATH [10], GSM8K [5], Gaokao (Mathematics) [43], AMC2023, AIME2024, and AIME2025.

^1 We followed the split used in Luo et al. [17]

Table 1: Comparison with other methods on the DeepSeek-R1-Distill-Qwen-1.5B model. We evaluate the effectiveness of our method, a dynamic optimization framework combined with a preference optimization technique, applied to DeepSeek-R1-Distill-Qwen-1.5B by comparing it against existing methods.
Method             MATH (Acc ↑ / #Tokens ↓ / Eff. ↑)   GSM8K (Acc ↑ / #Tokens ↓ / Eff. ↑)   Gaokao (Acc ↑ / #Tokens ↓ / Eff. ↑)
Instruct ver. [37] 76.36 / 555.16 / N/A                85.37 / 315.44 / N/A                 65.13 / 575.86 / N/A
Baseline           79.80 / 3543.44 / 1.000             82.13 / 1382.99 / 1.000              66.62 / 3725.16 / 1.000
Fast Prompt        81.17 / 3354.99 / 1.074             85.14 / 1894.73 / 0.757              69.68 / 3634.30 / 1.072
SFT                81.28 / 3180.10 / 1.135             80.12 / 933.89 / 1.445               67.34 / 3245.37 / 1.160
O1-Pruner [17]     82.31 / 2593.06 / 1.409             80.67 / 669.41 / 2.029               66.69 / 2827.81 / 1.319
DAST [27]          83.35 / 2817.94 / 1.313             84.02 / 1174.89 / 1.204              69.42 / 3058.96 / 1.269
FCS + Ref. [4]     84.72 / 2548.55 / 1.476             84.29 / 1080.19 / 1.314              71.30 / 2750.35 / 1.450
DTO (Ours)         85.48 / 1936.19 / 1.960             83.91 / 844.18 / 1.674               72.66 / 2137.59 / 1.901

Method             AMC2023 (Acc ↑ / #Tokens ↓ / Eff. ↑)   AIME2024 (Acc ↑ / #Tokens ↓ / Eff. ↑)   AIME2025 (Acc ↑ / #Tokens ↓ / Eff. ↑)
Instruct ver. [37] 54.75 / 786.43 / N/A                   11.22 / 956.11 / N/A                    8.11 / 887.40 / N/A
Baseline           58.25 / 5338.54 / 1.000                21.44 / 7359.24 / 1.000                 18.89 / 7236.66 / 1.000
Fast Prompt        61.00 / 5073.15 / 1.102                21.56 / 7261.19 / 1.019                 20.33 / 7137.90 / 1.091
SFT                61.08 / 5030.76 / 1.113                23.78 / 7151.84 / 1.141                 18.44 / 7122.97 / 0.992
O1-Pruner [17]     65.50 / 4370.83 / 1.373                21.78 / 7015.30 / 1.066                 17.67 / 6742.34 / 1.004
DAST [27]          66.58 / 4590.91 / 1.329                24.00 / 7077.45 / 1.164                 19.78 / 6846.85 / 1.107
FCS + Ref. [4]     68.92 / 4166.60 / 1.516                24.44 / 6698.77 / 1.252                 20.67 / 6545.62 / 1.210
DTO (Ours)         70.25 / 3376.98 / 1.907                28.00 / 5877.44 / 1.635                 21.11 / 5689.38 / 1.421

Baselines. In line with prior works [4, 27, 17], we compare our method against the following baselines. Instruct ver. refers to the instruction-tuned version of the base model on which the LRM is trained. Baseline denotes the base LRM π_θ without any additional prompting or fine-tuning. Fast Prompt [17] appends the instruction "Solve the problem as quickly as possible" to the original prompt (see Appendix A.4) and uses the resulting prompt to generate responses. SFT represents supervised fine-tuning of π_θ on the N shortest correct responses sampled from the model.
O1-Pruner [17] scores reasoning trajectories generated by the LRM using the Length-Harmonizing score, which jointly considers accuracy and token length; N trajectories per problem x are then used to fine-tune the model via a PPO-style objective. DAST [27] introduces a self-defined token-length budget, formulated as a linear combination of the average token length of correct responses and a predefined maximum generation length. Leveraging this, it assigns preference scores to responses sampled from an LRM, favoring shorter correct responses and longer incorrect ones during pairwise data construction; a response pair is formed when the score margin exceeds 0.3, and the resulting dataset is used to apply a preference optimization method with at most N trajectories per problem x. FCS + Ref. [4] constructs a pairwise dataset by leveraging heuristic truncation points in LRM-generated trajectories. Specifically, when a correct response is present, the LLM μ_φ is prompted to identify the locations of the First Correct Solution and the subsequent Reflection. The segment up to the reflection point is extracted as the positive example (at most N per problem x), while the longest trajectory among the sampled responses is selected as the negative example. The preference optimization technique is then applied to problems for which such pairwise
data can be constructed, that is, when at least one correct response is available.

Metrics. For each dataset, we report both the average accuracy and the average number of generated tokens. Additionally, inspired by Qu et al. [21], we compute the following efficiency metric η:

    η = ( E_{x∼D, y∼π_{θ*}(x)}[P(y)] / E_{x∼D, y_0∼π_θ(x)}[P(y_0)] ) · ( E_{x∼D, y_0∼π_θ(x)}[C(y_0)] / E_{x∼D, y∼π_{θ*}(x)}[C(y)] ).   (14)

Here, P(·) measures final-answer accuracy, C(·) denotes the number of generated tokens, π_θ represents the original base model, and π_{θ*} corresponds to the model after applying a specific method. A higher value of η indicates a more favorable trade-off between inference efficiency and performance.

Results. Here, we set N = 2, with the remaining experimental setup following Section 3.3. Table 1 and Table 2 summarize the performance comparisons between our method and existing approaches on two models, DeepSeek-R1-Distill-Qwen-1.5B and DeepScaleR-1.5B-Preview, evaluated across six benchmark datasets. Overall, our approach achieves superior efficiency metrics on almost all datasets and models. Specifically, on DeepSeek-R1-Distill-Qwen-1.5B (Table 1), DTO significantly outperforms existing methods in efficiency while maintaining comparable or better accuracy on 5

Table 2: Comparison with other methods on DeepScaleR-1.5B-Preview. We evaluate the effectiveness of our method, a dynamic optimization framework combined with a preference optimization technique, applied to DeepScaleR-1.5B-Preview by comparing it against existing methods.
Method             MATH (Acc ↑ / #Tokens ↓ / Eff. ↑)   GSM8K (Acc ↑ / #Tokens ↓ / Eff. ↑)   Gaokao (Acc ↑ / #Tokens ↓ / Eff. ↑)
Instruct ver. [37] 76.36 / 555.16 / N/A                85.37 / 315.44 / N/A                 65.13 / 575.86 / N/A
Baseline           88.48 / 2700.37 / 1.000             87.38 / 1601.98 / 1.000              75.97 / 3053.67 / 1.000
Fast Prompt        89.41 / 2601.49 / 1.049             88.38 / 1652.94 / 0.980              77.01 / 2941.73 / 1.052
SFT                89.19 / 2634.80 / 1.033             86.43 / 1469.03 / 1.079              76.23 / 2950.22 / 1.039
O1-Pruner [17]     89.74 / 2259.56 / 1.212             86.41 / 1235.48 / 1.282              77.47 / 2539.68 / 1.226
DAST [27]          89.74 / 2551.49 / 1.073             86.71 / 1502.93 / 1.058              77.21 / 2880.06 / 1.078
FCS + Ref. [4]     88.81 / 2247.38 / 1.206             88.10 / 1254.06 / 1.288              77.21 / 2531.05 / 1.226
DTO (Ours)         89.14 / 1994.27 / 1.364             87.23 / 1184.53 / 1.350              77.01 / 2269.98 / 1.364

Method             AMC2023 (Acc ↑ / #Tokens ↓ / Eff. ↑)   AIME2024 (Acc ↑ / #Tokens ↓ / Eff. ↑)   AIME2025 (Acc ↑ / #Tokens ↓ / Eff. ↑)
Instruct ver. [37] 54.75 / 786.43 / N/A                   11.22 / 956.11 / N/A                    8.11 / 887.40 / N/A
Baseline           76.88 / 4268.77 / 1.000                36.67 / 6455.16 / 1.000                 29.17 / 6420.07 / 1.000
Fast Prompt        76.25 / 4119.06 / 1.028                30.83 / 6505.43 / 0.834                 27.50 / 6338.55 / 0.955
SFT                71.88 / 4405.87 / 0.906                35.83 / 6755.42 / 0.934                 27.50 / 6192.65 / 0.977
O1-Pruner [17]     79.38 / 3831.73 / 1.150                35.83 / 6242.41 / 1.010                 28.33 / 6086.82 / 1.024
DAST [27]          78.75 / 4158.48 / 1.051                32.50 / 6551.17 / 0.873                 27.50 / 6462.76 / 0.937
FCS + Ref. [4]     77.50 / 3755.56 / 1.146                35.00 / 6219.26 / 0.991                 26.67 / 6016.55 / 0.976
DTO (Ours)         77.50 / 3403.56 / 1.264                38.33 / 5742.61 / 1.175                 27.50 / 5820.90 / 1.040

out of 6 datasets. For challenging benchmarks such as AMC and AIME, DTO improves average accuracy by almost 7% over the base model, while also reducing token usage by approximately 1,700 tokens. This improvement likely stems from DTO's ability to transform some initially incorrect trajectories into correct reasoning paths, even in cases where no correct sampled trajectory is available, which often indicates that the problem is particularly difficult. On the other hand, in
such cases, FCS + Ref. fails to construct a pairwise dataset, while DAST relies solely on token length, i.e., it assigns higher scores to longer incorrect responses, which may limit its ability to generalize. On DeepScaleR-1.5B-Preview (Table 2), DTO continues to achieve the highest efficiency, though with narrower margins due to this model's inherently higher accuracy and lower baseline token counts. Nonetheless, DTO consistently delivers a favorable balance between accuracy and computational efficiency.

4.3 Analysis

In this section, we present several analyses to validate the effectiveness of our framework using the DeepSeek-R1-Distill-Qwen-1.5B model.

Frequency of Thinking Pattern Transitions. We further examine the frequency of thinking pattern transitions by analyzing the occurrence of the linguistic cue "Wait". As shown in Figure 3, all methods exhibit increased frequencies as task difficulty rises. Notably, DTO consistently achieves the lowest average frequency of "Wait" across multiple benchmarks, while maintaining comparable or superior accuracy relative to all baseline methods, with the exception of GSM8K (see Table 1). This result suggests that DTO effectively reduces unnecessary cognitive shifts during reasoning, leading to more concise and coherent reasoning trajectories.

Qualitative Analysis of Generated Reasoning Trajectories. As shown in Table 3, our optimized reasoning trajectory reaches the same correct answer with significantly fewer tokens (814 vs. 1790) than the baseline. The baseline response exhibits signs of overthinking, including repetitive verification, redundant restatements, and unnecessary self-correction. In contrast, our method generates a more concise trajectory by directly applying the relevant formula and confidently progressing through the necessary steps.
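The "Wait"-frequency statistic discussed above can be reproduced with a simple counter. The exact matching rule (word boundary, case sensitivity) is our assumption; the paper only states that occurrences of the cue word "Wait" are counted.

```python
# Minimal sketch of the "Wait"-frequency analysis: count the transition
# cue word in generated trajectories. The matching rule (word-boundary,
# case-sensitive) is an assumption.
import re

def wait_count(trajectory: str) -> int:
    return len(re.findall(r"\bWait\b", trajectory))

def average_wait(trajectories) -> float:
    return sum(map(wait_count, trajectories)) / len(trajectories)

avg = average_wait([
    "So tan(45) = 1. Wait, but the question asks for the acute angle.",
    "Substitute the slopes. Wait a second... Wait, that's the same formula.",
])
```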
This example illustrates how dynamic reasoning optimization effectively reduces token overhead without compromising correctness, producing more efficient reasoning paths.

Generalization to different domains. We evaluate the generalization capability of our framework by examining whether a model trained on the MATH dataset, as detailed in the previous section, can improve reasoning efficiency beyond the mathematical domain. To this end, we use the MMLU-

Figure 3: The average count of "Wait", one of the words signaling a thinking pattern transition. Compared to all baselines, our framework generally results in the lowest average count of "Wait", suggesting more concise and less interrupted reasoning trajectories. The results are averaged over 4 runs.

Table 3: Comparison of reasoning trajectories between Baseline and DTO. Due to space constraints, the middle part has been replaced with "...". The segment highlighted in yellow redundantly restates the same fact as the one in green. For more details, please refer to Appendix B.

Question: "What is the degree measure of the acute angle formed by lines with slopes 2 and 1/3?" (Answer: 45)

Baseline (1790 tokens): "Okay, so I need to find the acute angle between two lines with slopes 2 and 1/3. ... So, if the slopes are m1 and m2, the angle θ between them is given by: θ = arctan(|(m2 − m1)/(1 + m1·m2)|). Is that right? Let me check. Yeah, I think that's correct because the tangent of the angle between two lines is equal to the absolute value of (m2 − m1)/(1 + m1·m2). ... So, putting it together, we have: θ = arctan(|(−5/3)/(5/3)|) = arctan(|−1|) = arctan(1). Because the absolute
value of −1 is 1. So, arctan(1) is... 45°, right? Since tan(45°) = 1. Wait, but hold on, the question asks for the acute angle. So, 45° is already acute, since it's less than 90°. So, is that the answer? It seems straightforward. But let me make sure ... Slope m1 = 2, m2 = 1/3. ... So, arctan(1) is 45°. That seems correct. Wait a second, but I remember that the angle between two lines can also be found using the formula: tan(θ) = |(m2 − m1)/(1 + m1·m2)|. ... Final Answer: The acute angle formed by the lines is 45."

DTO (814 tokens): "Alright, so I need to find the acute angle formed by two lines with slopes 2 and 1/3. ... Let me write that down: tan(θ) = |m2 − m1| / |1 + m1·m2|. So substituting the values: tan(θ) = |(1/3 − 2)/(1 + 2·(1/3))|. ... Oh, so tan(θ) is 1. Then θ is arctan of 1. I remember that tan(45°) is 1, so θ is 45°. Since we're asked for the acute angle, 45° is already acute because it's less than 90°. So, I think that's it. The acute angle is 45°. Final Answer: The acute angle is 45."

Table 4: Evaluation on MMLU-Pro to validate generalization beyond mathematics. All methods use the same five CoT few-shot examples.

Method             Law (Acc ↑ / #Tokens ↓ / Eff. ↑)   Chemistry (Acc ↑ / #Tokens ↓ / Eff. ↑)   Engineering (Acc ↑ / #Tokens ↓ / Eff. ↑)
Instruct ver. [37] 11.00 / 183.98 / N/A               21.00 / 226.18 / N/A                     20.25 / 377.11 / N/A
Baseline           17.50 / 1716.77 / 1.000            40.75 / 4427.44 / 1.000                  22.00 / 6173.27 / 1.000
Fast Prompt        17.00 / 1685.09 / 0.990            37.25 / 4508.80 / 0.898                  20.75 / 5972.20 / 0.975
SFT                17.75 / 1590.35 / 1.095            39.25 / 4270.96 / 0.998                  19.25 / 6038.40 / 0.895
O1-Pruner [17]     16.25 / 995.03 / 1.602             39.25 / 3688.95 / 1.156                  23.00 / 5225.53 / 1.235
DAST [27]          16.25 / 1180.12 / 1.351            38.00 / 4433.26 / 0.931                  23.75 / 6137.51 / 1.086
FCS + Ref. [4]     15.75 / 1034.84 / 1.493            39.25 / 4096.80 / 1.041                  22.75 / 5875.08 / 1.087
DTO (Ours)         16.75 / 734.20 / 2.238             39.75 / 3142.64 / 1.374                  23.00 / 4186.74 / 1.542

Pro dataset [33], which comprises challenging, reasoning-centric questions across a wide range of domains.
Among these, we conduct evaluations on three domains, using 100 randomly sampled questions per domain and the same five CoT few-shot examples across all methods. As shown in Table 4, our method consistently achieves the highest reasoning efficiency across all three domains, significantly reducing token usage while maintaining accuracy comparable to the baselines. This suggests that our framework generalizes well beyond the mathematical domain, effectively condensing reasoning in tasks that differ from the training domain.

5 Conclusion

In this paper, we introduced a dynamic optimization framework termed DTO to mitigate the inefficiencies arising from overthinking in LRMs. By dynamically identifying and optimizing modular reasoning strategies, our approach substantially reduces computational overhead while improving reasoning accuracy. Empirical evaluations across diverse benchmarks demonstrate the effectiveness of our method when combined with preference optimization, underscoring the importance of strategically managing reasoning processes in LRMs.

References

[1] Pranjal Aggarwal and Sean Welleck. L1: Controlling how long a reasoning model thinks with reinforcement learning, 2025. URL https://arxiv.org/abs/2503.04697.

[2] Daman Arora and Andrea Zanette. Training language models to reason efficiently. arXiv
preprint arXiv:2502.04463, 2025.

[3] Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V Le, Christopher Ré, and Azalia Mirhoseini. Large language monkeys: Scaling inference compute with repeated sampling. arXiv preprint arXiv:2407.21787, 2024.

[4] Xingyu Chen, Jiahao Xu, Tian Liang, Zhiwei He, Jianhui Pang, Dian Yu, Linfeng Song, Qiuzhi Liu, Mengfei Zhou, Zhuosheng Zhang, et al. Do not think that much for 2+3=? On the overthinking of o1-like LLMs. arXiv preprint arXiv:2412.21187, 2024.

[5] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.

[6] Yichao Fu, Junda Chen, Siqi Zhu, Zheyu Fu, Zhongdongming Dai, Aurick Qiao, and Hao Zhang. Efficiently serving LLM reasoning programs with certaindex. arXiv preprint arXiv:2412.20993, 2024.

[7] Kanishk Gandhi, Ayush Chakravarthy, Anikait Singh, Nathan Lile, and Noah D Goodman. Cognitive behaviors that enable self-improving reasoners, or, four habits of highly effective STaRs. arXiv preprint arXiv:2503.01307, 2025.

[8] Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.

[9] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.

[10] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the MATH dataset. arXiv preprint arXiv:2103.03874, 2021.

[11] Bairu Hou, Yang Zhang, Jiabao Ji, Yujian Liu, Kaizhi Qian, Jacob Andreas, and Shiyu Chang.
ThinkPrune: Pruning long chain-of-thought of LLMs via reinforcement learning, 2025. URL https://arxiv.org/abs/2504.01296.

[12] Jian Hu. REINFORCE++: A simple and efficient approach for aligning large language models. arXiv preprint arXiv:2501.03262, 2025.

[13] Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. OpenAI o1 system card. arXiv preprint arXiv:2412.16720, 2024.

[14] Bowen Jin, Chulin Xie, Jiawei Zhang, Kashob Kumar Roy, Yu Zhang, Zheng Li, Ruirui Li, Xianfeng Tang, Suhang Wang, Yu Meng, et al. Graph chain-of-thought: Augmenting large language models by reasoning on graphs. arXiv preprint arXiv:2404.07103, 2024.

[15] Zhong-Zhi Li, Duzhen Zhang, Ming-Liang Zhang, Jiaxin Zhang, Zengyan Liu, Yuxuan Yao, Haotian Xu, Junhao Zheng, Pei-Jie Wang, Xiuyi Chen, et al. From system 1 to system 2: A survey of reasoning large language models. arXiv preprint arXiv:2502.17419, 2025.

[16] Zichen Liu, Changyu Chen, Wenjun Li, Penghui Qi, Tianyu Pang, Chao Du, Wee Sun Lee, and Min Lin. Understanding R1-Zero-like training: A critical perspective. arXiv preprint arXiv:2503.20783, 2025.

[17] Haotian Luo, Li Shen, Haiying He, Yibo Wang, Shiwei Liu, Wei Li, Naiqiang Tan, Xiaochun Cao, and Dacheng Tao. O1-Pruner: Length-harmonizing fine-tuning for o1-like reasoning pruning. arXiv preprint arXiv:2501.12570, 2025.

[18] Michael Luo, Sijun Tan, Justin Wong, Xiaoxiang Shi, William Y. Tang, Manan Roongta, Colin Cai, Jeffrey Luo, Li Erran Li, Raluca
Ada Popa, and Ion Stoica. DeepScaleR: Surpassing o1-preview with a 1.5B model by scaling RL, 2025. Notion Blog.

[19] Sara Vera Marjanović, Arkil Patel, Vaibhav Adlakha, Milad Aghajohari, Parishad BehnamGhader, Mehar Bhatia, Aditi Khandelwal, Austin Kraft, Benno Krojer, Xing Han Lù, et al. DeepSeek-R1 thoughtology: Let's <think> about LLM reasoning. arXiv preprint arXiv:2504.07128, 2025.

[20] Yu Meng, Mengzhou Xia, and Danqi Chen. SimPO: Simple preference optimization with a reference-free reward. Advances in Neural Information Processing Systems, 37:124198–124235, 2024.

[21] Xiaoye Qu, Yafu Li, Zhaochen Su, Weigao Sun, Jianhao Yan, Dongrui Liu, Ganqu Cui, Daizong Liu, Shuxian Liang, Junxian He, et al. A survey of efficient reasoning for large reasoning models: Language, multimodality, and beyond. arXiv preprint arXiv:2503.21614, 2025.

[22] Yuxiao Qu, Matthew YR Yang, Amrith Setlur, Lewis Tunstall, Edward Emanuel Beeching, Ruslan Salakhutdinov, and Aviral Kumar. Optimizing test-time compute via meta reinforcement fine-tuning. arXiv preprint arXiv:2503.07572, 2025.

[23] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.

[24] Amrith Setlur, Chirag Nagpal, Adam Fisch, Xinyang Geng, Jacob Eisenstein, Rishabh Agarwal, Alekh Agarwal, Jonathan Berant, and Aviral Kumar. Rewarding progress: Scaling automated process verifiers for LLM reasoning. arXiv preprint arXiv:2410.08146, 2024.

[25] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, et al. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024.

[26] Jianshu She, Zhuohao Li, Zhemin Huang, Qi Li, Peiran Xu, Haonan Li, and Qirong Ho. Hawkeye: Efficient reasoning with model collaboration, 2025. URL https://arxiv.org/abs/2504.00424.
[27] Yi Shen, Jian Zhang, Jieyun Huang, Shuming Shi, Wenjing Zhang, Jiangze Yan, Ning Wang, Kai Wang, and Shiguo Lian. DAST: Difficulty-adaptive slow-thinking for large reasoning models. arXiv preprint arXiv:2503.04472, 2025.

[28] Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling LLM test-time compute optimally can be more effective than scaling model parameters. arXiv preprint arXiv:2408.03314, 2024.

[29] Yang Sui, Yu-Neng Chuang, Guanchu Wang, Jiamu Zhang, Tianyi Zhang, Jiayi Yuan, Hongyi Liu, Andrew Wen, Hanjie Chen, Xia Hu, et al. Stop overthinking: A survey on efficient reasoning for large language models. arXiv preprint arXiv:2503.16419, 2025.

[30] Kimi Team, Angang Du, Bofei Gao, Bowei Xing, Changjiu Jiang, Cheng Chen, Cheng Li, Chenjun Xiao, Chenzhuang Du, Chonghua Liao, et al. Kimi k1.5: Scaling reinforcement learning with LLMs. arXiv preprint arXiv:2501.12599, 2025.

[31] Rui Wang, Hongru Wang, Boyang Xue, Jianhui Pang, Shudong Liu, Yi Chen, Jiahao Qiu, Derek Fai Wong, Heng Ji, and Kam-Fai Wong. Harnessing the reasoning economy: A survey of efficient reasoning for large language models. arXiv preprint arXiv:2503.24377, 2025.

[32] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022.

[33] Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, et al. MMLU-Pro: A more robust and challenging multi-task
language understanding benchmark. arXiv preprint arXiv:2406.01574, 2024.

[34] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.

[35] Yangzhen Wu, Zhiqing Sun, Shanda Li, Sean Welleck, and Yiming Yang. Inference scaling laws: An empirical analysis of compute-optimal inference for problem-solving with language models. arXiv preprint arXiv:2408.00724, 2024.

[36] Violet Xiang, Charlie Snell, Kanishk Gandhi, Alon Albalak, Anikait Singh, Chase Blagden, Duy Phung, Rafael Rafailov, Nathan Lile, Dakota Mahan, et al. Towards system 2 reasoning in LLMs: Learning how to think with meta chain-of-thought. arXiv preprint arXiv:2501.04682, 2025.

[37] An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, Keming Lu, Mingfeng Xue, Runji Lin, Tianyu Liu, Xingzhang Ren, and Zhenru Zhang. Qwen2.5-Math technical report: Toward mathematical expert model via self-improvement. arXiv preprint arXiv:2409.12122, 2024.

[38] Junjie Yang, Ke Lin, and Xing Yu. Think when you need: Self-adaptive chain-of-thought learning, 2025. URL https://arxiv.org/abs/2504.03234.

[39] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural Information Processing Systems, 36:11809–11822, 2023.

[40] Yixin Ye, Zhen Huang, Yang Xiao, Ethan Chern, Shijie Xia, and Pengfei Liu. LIMO: Less is more for reasoning, 2025. URL https://arxiv.org/abs/2502.03387.

[41] Edward Yeo, Yuxuan Tong, Morry Niu, Graham Neubig, and Xiang Yue. Demystifying long chain-of-thought reasoning in LLMs. arXiv preprint arXiv:2502.03373, 2025.

[42] Zhaojian Yu, Yinghao Wu, Yilun Zhao, Arman Cohan, and Xiao-Ping Zhang.
Z1: Efficient test-time scaling with code, 2025. URL https://arxiv.org/abs/2504.00810.

[43] Xiaotian Zhang, Chunyang Li, Yi Zong, Zhengyu Ying, Liang He, and Xipeng Qiu. Evaluating the performance of large language models on Gaokao benchmark. arXiv preprint arXiv:2305.12474, 2023.

[44] Chujie Zheng, Zhenru Zhang, Beichen Zhang, Runji Lin, Keming Lu, Bowen Yu, Dayiheng Liu, Jingren Zhou, and Junyang Lin. ProcessBench: Identifying process errors in mathematical reasoning. arXiv preprint arXiv:2412.06559, 2024.

[45] Lianmin Zheng, Liangsheng Yin, Zhiqiang Xie, Chuyue Livia Sun, Jeff Huang, Cody Hao Yu, Shiyi Cao, Christos Kozyrakis, Ion Stoica, Joseph E Gonzalez, et al. SGLang: Efficient execution of structured language model programs. Advances in Neural Information Processing Systems, 37:62557–62583, 2024.

A Experimental Details

A.1 The Distribution of Maximum Correctness Probability for Incorrect Responses

Figure 4: Distribution of max_i p_i in incorrect responses of DeepScaleR-1.5B-Preview.

Due to space limitations, we report the distribution of max_i p_i for incorrect responses of the DeepScaleR-1.5B-Preview model in Figure 4. In Section 3.3, we analyze the distribution of max_i p_i values computed from reasoning trajectories that initially resulted in incorrect answers. Here, p_i denotes a Monte Carlo estimate of the probability that the model would generate the correct final answer if the reasoning process were terminated at the i-th thinking pattern. Our analysis reveals that a non-trivial number of intermediate segments exhibit high p_i values, indicating that the model frequently produces useful reasoning components
even within incorrect responses. These findings underscore the potential of selectively terminating the reasoning process at appropriate points to recover correct answers.

A.2 Implementation Details

For all training-based methods, the batch size was fixed at 128, and we trained the model using four NVIDIA RTX A6000 GPUs. The value of β in Equation (13) is set to 10.0, and the ratio γ/β is set to 0.3. For DeepSeek-R1-Distill-Qwen-1.5B [9] and DeepScaleR-1.5B-Preview [18], we set the sampling parameters to a temperature of 0.6, top_p of 0.95, top_k of -1, and a maximum generation length of 8192 tokens. For the Qwen2.5-Math Instruct model [37], we use the same configuration. However, due to a limitation in the SGLang framework [45], which raises an error when the maximum number of tokens exceeds 4096, we adjusted our settings accordingly.

A.3 Algorithm of DTO

We provide a summary of the full optimization procedure in Algorithm 1.

Algorithm 1 DTO: Dynamic Thinking pattern Optimization
Require: x: input problem, y: reasoning trajectory from LRM π_θ, a*: ground-truth answer, T: correctness threshold, M: samples for exit prediction, K: completions for finalize, μ_φ: auxiliary LLM
 1: Segment y into thinking patterns [δ_1, ..., δ_{n_y}]
 2: i' ← n_y
 3: for i = 1 to n_y do
 4:   τ_i ← δ_1 ⊕ ··· ⊕ δ_i ⊕ δ_exit   ▷ Construct partial trajectory by appending exit thinking pattern
 5:   Sample R_i = {r_1, ..., r_M}, r_j ∼ π_θ(τ_i)   ▷ Monte Carlo sampling
 6:   p_i ← |{r ∈ R_i | a* ∈ r}| / |R_i|   ▷ Estimate correctness probability
 7:   if p_i ≥ T then
 8:     i' ← i
 9:     break
10:   end if
11: end for
12: f(δ_i) = 1 if i ≤ i', else 0   ▷ Define binary selection function f(·)
13: Δ^f_x ← [δ_i ∈ y | f(δ_i) = 1]
14: Define δ_finalize ← "Hmm, I think this is enough to derive the final answer."
15: S_{Δ^f_x} ← {s_1,
 ..., s_K}, s_j ∼ π_θ(Δ^f_x ⊕ δ_finalize)   ▷ Sample completions
16: Select shortest s* ∈ S_{Δ^f_x} such that a* ∈ s*
17: Δ̃^f_x ← Δ^f_x ⊕ δ_finalize ⊕ s*
18: for each δ_i ∈ Δ̃^f_x do
19:   Query the utility of δ_i based on μ_φ's evaluation conditioned on the context of x and a* (see Appendix A.4)
20:   Construct Δ̃^{f\δ_i}_x = [δ_j ∈ Δ̃^f_x | δ_j ≠ δ_i, and before "\boxed"]
21:   g(δ_i) ← 1 if δ_i is deemed redundant by μ_φ and decoding from Δ̃^{f\δ_i}_x yields a*
22: end for
23: Δ^g_x ← [δ_j ∈ Δ̃^f_x | g(δ_j) ≠ 1]
24: return dynamically optimized reasoning trajectory Δ^g_x

A.4 Prompts

Default Prompt
Please reason step by step, and put your final answer within \boxed{}.
{{ problem }}

Fast Prompt
Please reason step by step, and put your final answer within \boxed{}. Solve the problem as quickly as possible.
{{ problem }}

Prompt for the Auxiliary LLM (μ_φ in Section 3)
You will be given:
1. A math problem
2. A ground-truth answer
3. A series of reasoning chunks

Your task has two parts:

STEP 1: Filtering
Decide for each chunk whether to KEEP AS IS or REMOVE.
KEEP AS IS if the chunk contributes meaningfully:
• Narrowing down possibilities
• Showing intermediate calculations
• Offering partial insight (even if incomplete)
• Reinforcing understanding through repetition
REMOVE if:
• It adds no meaningful content
• It is factually incorrect in a harmful way
• It is entirely off-topic or unhelpfully redundant

STEP 2: Final Reasoning Construction
Using only the KEPT chunks:
• Concatenate them with minimal
https://arxiv.org/abs/2505.21765v1
edits for flow. • Use each kept chunk verbatim. • Do not paraphrase or skip ideas. • Preserve the original tone (informal, exploratory, etc). End the reasoning with the final answer in this format: \boxed{{...}}
Output Format:
[OUTPUT]
[CHUNK_FILTERING_RESULTS]
[Chunk 1] KEEP AS IS
[Chunk 2] REMOVE
...
[Chunk N] KEEP AS IS
[FINAL_RECONSTRUCTED_REASONING]
""" <Clean explanation using only the kept chunks. Minimal edits for flow. End with the boxed final answer.> """
Now here is your input:
### Problem: {problem}
### Ground-Truth {ground_truth_answer}
### Reasoning Chunks: {thinking_patterns}

B Qualitative Analysis

We selected a question from the AMC2023 dataset to qualitatively compare the reasoning trajectories of the baseline model and ours. The question is "What is the degree measure of the acute angle formed by lines with slopes 2 and 1/3?", and the correct answer is 45.

B.1 A Reasoning Trajectory of the Baseline (1790 tokens)

<think> Okay, so I need to find the acute angle between two lines with slopes 2 and 1/3. Hmm, I remember there's a formula for the angle between two lines based on their slopes. Let me try to recall. I think the formula involves the arctangent of the difference in slopes divided by 1 plus the product of the slopes. So, if the slopes are m1 and m2, the angle θ between them is given by: θ = arctan(|(m2 - m1)/(1 + m1*m2)|) Is that right? Let me check. Yeah, I think that's correct because the tangent of the angle between two lines is equal to the absolute value of (m2 - m1)/(1 + m1*m2). So, this formula gives the angle between them. Alright, so let's assign m1 = 2 and m2 = 1/3. Plugging these into the formula: θ = arctan(|(1/3 - 2)/(1 + (2)*(1/3))|) First, let's compute the numerator: 1/3 - 2. That's equal to 1/3 - 6/3 = (-5)/3. Then, the denominator: 1 + (2)*(1/3) = 1 + 2/3 = 5/3. So, putting it together, we have: θ = arctan(|(-5/3)/(5/3)|) = arctan(|-1|) = arctan(1) So, arctan(1) is... 45 degrees, right? Since tan(45°) = 1. Wait, but hold on, the question asks for the acute angle. So, 45 degrees is already acute, since it's less than 90 degrees. So, is that the answer? It seems straightforward. But let me make sure I didn't make any mistakes in my calculations. Let's go through it again. Slope m1 = 2, m2 = 1/3. Difference in slopes: m2 - m1 = 1/3 - 2 = -5/3. Product of slopes: m1*m2 = 2*(1/3) = 2/3. Denominator: 1 + 2/3 = 5/3. So, the argument inside arctan is |(-5/3)/(5/3)| = | -1 | = 1. So, arctan(1) is 45 degrees. That seems correct. Wait a second, but I remember that the angle between two lines can also be found using the formula: tan θ = |(m2 - m1)/(1 + m1*m2)| Which is exactly what I used. So, that's consistent. But just to double-check, let me visualize. If one line has a slope of
2, that's pretty steep, going up 2 units for every 1 unit to the right. The other line has a slope of 1/3, which is a gentle slope, going up 1 unit for every 3 units to the right. So, if you imagine these two lines intersecting, the angle between them is likely to be acute because both slopes are positive but one is steeper. So, 45 degrees is reasonable. Wait, 45 degrees is a common angle, so that seems plausible. Alternatively, I can think of the angle each line makes with the x-axis and then find the difference between those two angles. The angle α that a line with slope m makes with the x-axis is given by α = arctan(m). So, for m1 = 2, α1 = arctan(2), and for m2 = 1/3, α2 = arctan(1/3). Then, the angle between the two lines would be |α1 - α2|. Since both α1 and α2 are positive angles less than 90 degrees (since slopes are positive), the difference will also be positive, and since we're taking the absolute value, it'll give the acute angle. Let me compute that way to verify. First, compute α1 = arctan(2). Let me approximate that. I know that arctan(1) is 45°, arctan(√3) is 60°, so arctan(2) is somewhere between 60° and 75°, maybe around 63.4349°. Similarly, arctan(1/3). Since 1/3 is less than 1, arctan(1/3) is approximately 18.4349°. So, the difference is 63.4349° - 18.4349° = 45°. So, that confirms it. Therefore, the acute angle between the two lines is 45 degrees. Wait, so both methods give the same answer. That gives me more confidence. Alternatively, I can think about the tangent of the angle between them, which we found was 1, so arctan(1) is 45°, so that's consistent. So, whether I use the formula with the slopes or compute the angles each line makes with the x-axis and subtract them, I get 45 degrees. Therefore, I think that's the correct answer. I don't see any mistakes in my reasoning. The key steps are: 1. Recognize the formula for the angle between two lines: θ = arctan(|(m2 - m1)/(1 + m1*m2)|) 2. Plug in the slopes m1 = 2 and m2 = 1/3 3. Calculate the numerator and denominator 4. Simplify the expression inside arctan 5. Find that it equals arctan(1) = 45° 6. Confirm with another method, like computing the individual angles and subtracting them All steps check out, so I'm pretty sure 45 degrees is the correct answer. **Final Answer** The acute angle formed by the lines is \boxed{45} degrees.</think> To find the acute angle formed by lines with slopes \(2\) and \(\frac{1}{3}\), we use the formula for the angle \(\theta\) between two lines with slopes \(m_1\) and \(m_2\): \[ \theta = \arctan\left(\left|\frac{m_2 - m_1}{1 + m_1 m_2}\right|\right) \] Assigning \(m_1 = 2\) and \(m_2 = \frac{1}{3}\): 1. Compute the numerator: \[ m_2 - m_1 = \frac{1}{3} - 2 = \frac{1}{3} - \frac{6}{3} = -\frac{5}{3} \] 2. Compute the denominator: \[ 1 + m_1 m_2 = 1 +
2 \cdot \frac{1}{3} = 1 + \frac{2}{3} = \frac{5}{3} \] 3. Simplify the argument inside the arctangent: \[ \left| \frac{-\frac{5}{3}}{\frac{5}{3}} \right| = \left| -1 \right| = 1 \] 4. Therefore, the angle \(\theta\) is: \[ \theta = \arctan(1) = 45^\circ \] To verify, we can also compute the angles each line makes with the x-axis and find the difference. The angles are: - \(\alpha_1 = \arctan(2) \approx 63.4349^\circ\) - \(\alpha_2 = \arctan\left(\frac{1}{3}\right) \approx 18.4349^\circ\) The difference is: \[ 63.4349^\circ - 18.4349^\circ = 45^\circ \] Thus, the acute angle formed by the lines is \(\boxed{45}\) degrees.

B.2 A Reasoning Trajectory of Our Method (814 tokens)

<think> Okay, so I need to find the acute angle between two lines with slopes 2 and 1/3. Hmm, I remember that there's a formula involving the slopes of two lines to find the angle between them. Let me think. I think it's something with the arctangent of the difference of the slopes divided by something else... Maybe it's tan(theta) equals the absolute value of (m2 - m1)/(1 + m1*m2). Yeah, that sounds familiar from my geometry class. Alright, so let me write that down. The formula is tan(theta) = |(m2 - m1)/(1 + m1*m2)|. So here, m1 is 2 and m2 is 1/3. Plugging those in, it should be tan(theta) equals the absolute value of (1/3 - 2)/(1 + (2)(1/3)). Let me compute numerator and denominator separately. First, the numerator: 1/3 - 2. Hmm, I need to subtract these. Let me convert 2 into thirds to make it easier. 2 is equal to 6/3. So 1/3 - 6/3 is -5/3. Okay, so numerator is -5/3. Denominator: 1 + (2)(1/3). Let me compute that. 2 times 1/3 is 2/3. Then, adding 1, which is 3/3, so 2/3 + 3/3 is 5/3. So denominator is 5/3. Therefore, tan(theta) equals the absolute value of (-5/3)/(5/3). When you divide those fractions, the 5/3 cancels out, leaving | -1 |. So tan(theta) is 1. Now, if tan(theta) is 1, then theta is arctangent of 1. I remember that tan(45 degrees) is 1. So theta is 45 degrees. Hmm, since it's asking for the acute angle, which is less than 90 degrees, and 45 degrees is acute, so that should be the answer. I think I did everything right. Let me recap: slopes are 2 and 1/3. Applied the formula, computed numerator as negative, denominator positive, got tan(theta)=1, theta is 45 degrees. Yeah, sounds good. **Final Answer** The acute angle formed by the lines is \boxed{45} degrees. </think> To find the acute angle formed by lines with slopes \(2\) and \(\frac{1}{3}\), we use the formula for the tangent of the angle between two lines: \[ \tan(\theta) = \left| \frac{m_2 - m_1}{1 + m_1 m_2} \right| \] Here, \(m_1 = 2\) and \(m_2 = \frac{1}{3}\). Plugging these values into the formula, we get: \[ \tan(\theta) = \left| \frac{\frac{1}{3} - 2}{1 + 2 \cdot \frac{1}{3}} \right| \] First, compute the numerator: \[ \frac{1}{3} - 2 = \frac{1}{3} - \frac{6}{3} = -\frac{5}{3} \] Next, compute the denominator: \[ 1 + 2 \cdot
\frac{1}{3} = 1 + \frac{2}{3} = \frac{3}{3} + \frac{2}{3} = \frac{5}{3} \] Thus, we have: \[ \tan(\theta) = \left| \frac{-\frac{5}{3}}{\frac{5}{3}} \right| = \left| -1 \right| = 1 \] Since \(\tan(\theta) = 1\), we find that \(\theta = 45^\circ\). \[ \boxed{45} \]

C Datasets

The information about the datasets we used is as follows:
• MATH [10]: https://github.com/hendrycks/math . The dataset is released under the MIT license, and we adopt the same data split as used in Luo et al. [17].
• GSM8K [5]: https://huggingface.co/datasets/openai/gsm8k . The dataset is released under the MIT license.
• Gaokao [43]: https://github.com/OpenLMLab/GAOKAO-Bench . The dataset is released under the Apache-2.0 license, and we adopt the same data split as used in Luo et al. [17].
• AMC: https://huggingface.co/datasets/zwhe99/amc23 .
• AIME: https://huggingface.co/datasets/AI-MO/NuminaMath-CoT .
• MMLU-Pro [33]: https://huggingface.co/datasets/TIGER-Lab/MMLU-Pro . The dataset is released under the Apache-2.0 license.

D Limitations

While our method demonstrates consistent improvements in reasoning efficiency and accuracy, it has several limitations. First, the effectiveness of our approach has been primarily validated on mathematical and reasoning-focused benchmarks. While we validated our method on other domains using the MMLU-Pro dataset in Section 4.3, extending the evaluation to broader domains, such as open-ended tasks, remains a promising direction for future work. Second, the approach assumes access to multiple model-generated responses per problem, which may not always be feasible in highly resource-constrained settings. Investigating ways to reduce the computational overhead of the optimization phase itself could further improve the practicality of the approach in real-world deployments.

E Broader Impacts

This work aims to enhance the reasoning efficiency of Large Reasoning Models (LRMs) by optimizing their internal decision-making processes. By reducing unnecessary reasoning steps while maintaining or improving accuracy, the proposed method has the potential to substantially lower inference cost and latency. This can benefit downstream applications by enabling more computationally efficient and environmentally sustainable deployment of LRMs, particularly in resource-constrained settings. However, if LRMs are misused—for instance, to generate plausible fake information, automate biased decision-making, or produce large-scale deceptive content—they may contribute to negative societal impacts. It is therefore important to remain mindful of the potential for such technologies to be applied in harmful ways, even when developed with efficiency and performance in mind.
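To make the early-exit search of Algorithm 1 (Appendix A.3, lines 2-11) concrete, the Monte Carlo loop can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the authors' code: `sample_fn` is a placeholder for decoding completions from the LRM π_θ, correctness is checked by simple substring containment of the ground-truth answer, and the exit phrase and default hyperparameters are hypothetical.

```python
from typing import Callable, List

def estimate_exit_point(
    patterns: List[str],                 # thinking patterns [d_1, ..., d_n] segmented from y
    sample_fn: Callable[[str], str],     # placeholder for sampling a completion from the LRM
    answer: str,                         # ground-truth answer a*
    threshold: float = 0.9,              # correctness threshold T (assumed value)
    num_samples: int = 8,                # Monte Carlo samples M (assumed value)
    exit_pattern: str = "Wait, I will give the final answer now.",  # hypothetical exit phrase
) -> int:
    """Return the earliest index i' at which appending an exit pattern
    recovers the correct answer with estimated probability >= threshold."""
    for i in range(1, len(patterns) + 1):
        # tau_i = d_1 + ... + d_i + d_exit: partial trajectory plus exit pattern
        partial = "".join(patterns[:i]) + exit_pattern
        # Monte Carlo estimate p_i of the probability that decoding from tau_i
        # yields the ground-truth answer
        rollouts = [sample_fn(partial) for _ in range(num_samples)]
        p_i = sum(answer in r for r in rollouts) / num_samples
        if p_i >= threshold:
            return i                     # earliest sufficient exit point i'
    return len(patterns)                 # no early exit found: i' = n_y
```

With a stubbed `sample_fn` that only succeeds once enough context is present, the function returns the first prefix length whose estimated correctness clears the threshold; the subsequent pruning pass with the auxiliary LLM μ_ϕ (lines 18-23) then operates on this truncated trajectory.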
MMTBench: A Unified Benchmark for Complex Multimodal Table Reasoning

Prasham Yatinkumar Titiya* Jainil Trivedi* Chitta Baral Vivek Gupta†
Arizona State University
{ptitiya,jtrived7,chitta.baral,vgupt140}@asu.edu

Abstract

Multimodal tables—those that integrate semi-structured data with visual elements such as charts and maps—are ubiquitous across real-world domains, yet they pose a formidable challenge to current Vision-Language Models (VLMs). While Large Language Models (LLMs) and VLMs have demonstrated strong capabilities in text and image understanding, their performance on complex, real-world multimodal table reasoning remains largely unexplored. To bridge this gap, we introduce MMTBench (MultiModal Table Benchmark), a benchmark consisting of 500 real-world multimodal tables drawn from diverse sources, with a total of 4021 question-answer pairs. MMTBench questions cover four question types (Explicit, Implicit, Answer-Mention, and Visual-Based), five reasoning types (Mathematical, Extrema Identification, Fact Verification, Vision-Based, and Others), and eight table types (Single/Multiple Entity, Maps and Charts with Entities, Single/Multiple Charts, Maps, and Visualizations). Our extensive evaluation of state-of-the-art models on all types reveals substantial performance gaps, particularly on questions requiring visual-based reasoning and multi-step inference. These findings underscore the urgent need for improved architectures that more tightly integrate vision and language processing. By providing a challenging, high-quality benchmark that mirrors the complexity of real-world tasks, MMTBench lays a foundation for future research on multimodal tables.

1 Introduction

In the era of data-driven intelligence, multimodal data, which integrates heterogeneous formats such as text, images, audio, and video, has become indispensable across diverse domains.
From education and scientific research [15, 6] to clinical diagnostics [8], the fusion of multiple data modalities enables more nuanced analysis, enhances interpretability, and supports more informed decision-making [13]. Among these formats, tabular data remains one of the most prevalent and structurally rich representations of information in real-world applications [29, 12]. Its organized, two-dimensional structure supports efficient summarization, comparison, and computation, making it crucial for analyzing trends, guiding informed decisions, and monitoring ongoing activities in various domains [28]. In modern applications, we increasingly encounter semi-structured multimodal tables enriched with visual elements such as charts, annotated text, and graphical markers [39]. For instance, financial reports often integrate budget tables with line graphs to illustrate trends more intuitively (see Figure 1); e-commerce platforms combine specification tables with product images and descriptions; and scientific publications regularly pair numerical data with diagrams to frame experimental findings in context. This blend of formats enhances clarity and makes complex information more accessible to a broader range of users.

*These authors contributed equally to this work. †Primary supervisor of this work.
Preprint. Under review. arXiv:2505.21771v1 [cs.CV] 27 May 2025

Figure 1: A Multimodal Table in a Financial Context. Example questions: Q1: Which country had the highest peak? A1: Latvia. Q2: By what percentage did the global data increase throughout the years? A2: 36.7%. Q3: By how many points did the country whose flag does not contain any red grow? A3: 3.2.

Despite the widespread use of multimodal tabular interfaces, automatically interpreting them remains a largely untapped and complex challenge. Most research in table question answering (QA) and representation learning has concentrated on purely
textual tables, leaving multimodal variants underexplored. While LLMs excel at processing sequential text, they often struggle to grasp the inherently two-dimensional and nested structure of tables [19, 22, 7]. On the other hand, VLMs, designed for visual inputs, tend to miss the deeper semantic and relational patterns essential for meaningful reasoning over intricate tabular content [33]. Accurate reasoning over multimodal tables requires the ability to relate cell values to corresponding row and column headers, synthesize information across modalities and data hierarchies, interpret domain-specific visual and textual patterns, and perform multi-step logical and numerical reasoning [39]. To drive progress in this area, we highlight the critical role of human-annotated benchmark datasets designed specifically for understanding multimodal tables [2]. Well-crafted QA datasets not only provide a foundation for fair model evaluation but also help reveal performance gaps and offer insights that can inform the design of more effective model architectures. In this work, we introduce MMTBench, a human-curated benchmark dataset for multimodal table question answering. Unlike synthetic datasets generated through rule-based or automated processes, MMTBench is constructed entirely from real-world data sources, offering meaningful contextual depth, specialized reasoning challenges, and diverse language expressions. The dataset is intentionally designed to span a wide array of domains and question types. Through this benchmark, we aim to provide a rigorous foundation for future research, inspire the development of more capable LLMs and VLMs, and bridge the gap between synthetic benchmarks and real-world applicability.

2 Related Work

Text-Only Tables. Early efforts in table-based question answering (QA) primarily focused on text-only tables, enabling models to reason over structured data through arithmetic, logical inference, or entity retrieval.
Prominent datasets in this space include TaT-QA [41], KET-QA [9], FinQA [5], and DynaQA [21]. These benchmarks played a central role in advancing table reasoning but remained limited to unimodal representations, overlooking the multimodal nature of many real-world tables that incorporate both text and visual elements.

Table 1: Comparison of structural features in multimodal table QA datasets.
Dataset           | Multimodal | Interleaved Tables | Images of Tables | Hierarchical Tables
MMTabQA [26]      | ✓ | ✓ | ✓ | ✗
MMTab [39]        | ✓ | ✗ | ✓ | ✓
MultimodalQA [30] | ✓ | ✗ | ✗ | ✗
UniMMQA [23]      | ✓ | ✗ | ✗ | ✗
SPIQA [37]        | ✓ | ✗ | ✗ | ✓
MMTBench (Ours)   | ✓ | ✓ | ✓ | ✓

Table 2: Comparison of multimodal QA datasets by visual and symbolic content types.
Dataset           | Charts | Maps | Visuals | Flags/Seals | Characters | Locations | Logos | Symbols
MMTabQA [26]      | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | ✓ | ✓
MMTab [39]        | ✗ | ✗ | ✗ | ✓ | ✓ | ✓ | ✓ | ✓
MultimodalQA [30] | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | ✓ | ✓
UniMMQA [23]      | ✗ | ✗ | ✗ | ✓ | ✓ | ✓ | ✓ | ✓
SPIQA [37]        | ✓ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗
MMTBench (Ours)   | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓

Initial Multimodal Integrations. To bridge this gap, several datasets began incorporating images into the QA setting. Notably, MMCoQA [17], MMTab [39], and InfoSeek [4] attempted to blend visual and textual data. Among these, MMTab included the largest collection of multimodal tables; however, all were rendered as static images, not
structured layouts. Additionally, most cells featured stylized text (e.g., colored or bolded) rather than embedded visual content, limiting their multimodal depth and complexity.

Visual Reasoning Without Tables. A parallel line of work has focused on visual QA using standalone charts, diagrams, and plots. Datasets such as ChartQA [24], mChartQA [35], and MMC [18] emphasize reasoning over visual data. While valuable, these benchmarks do not employ a tabular format and lack the structural organization required for multimodal table comprehension.

Disjoint Modalities. Other datasets, such as MMQA [30], UniMMQA [23], and SPIQA [37], expanded scope by including images, tables, and surrounding text. However, these modalities are treated as separate components, requiring models to process them in isolation. This disjoint format fails to capture the cohesive design of multimodal tables in real-world documents, where visual and textual information is often presented side by side within a unified structure.

Synthetic Attempts at Multimodal Tables. Recognizing the absence of naturally occurring multimodal tables, MMTabQA [26] proposes converting text-based datasets into multimodal formats by inserting visual features. However, this dataset is built exclusively from Wikipedia-based sources—namely, FetaQA [27], HybridQA [3], WikiSQL [40], and Open-WikiTable [14]. As a result, the dataset lacks diversity in domain, structure, and visual context, making it difficult for models trained on it to generalize to real-world multimodal settings.

MMTBench is designed to reflect the complexity and diversity of multimodal tables found in real-world applications such as financial reporting, product catalogs, public dashboards, and geographical data summaries. As shown in Tables 1 and 2, it is the only dataset to support all four structural dimensions: interleaved image-text tables, image-based renderings, hierarchical structures, and genuinely multimodal content.
In contrast to prior datasets that treat modalities separately or rely on synthetic formats, MMTBench captures naturally occurring tables that blend charts, maps, icons, flags, symbols, and textual data within a unified structure. This integrated design enables more realistic evaluation of systems that must reason jointly over structured and visual information—an increasingly common requirement in practical domains ranging from business intelligence to civic analytics. The dataset can be found at https://huggingface.co/datasets/MMTBench/MMTBench

3 Dataset Creation

We introduce MMTBench, a curated benchmark designed for question answering over multimodal tables. The dataset comprises 500 tables paired with 4,021 human-written question-answer pairs. Built entirely from real-world sources and 100% human-annotated, MMTBench offers contextually rich, high-quality content that reflects practical multimodal challenges across domains such as e-commerce, sports, geography, governance, and public data reporting. Table 3 provides a detailed overview of the dataset's structure and content. MMTBench addresses key limitations of synthetic multimodal datasets by focusing on naturally occurring tabular formats and realistic QA needs. The data collection follows a two-step pipeline: table creation followed by question generation, both of which are described in the subsections below.

Table 3: Dataset Statistics
Metric                 | Value
Total Tables           | 500
Total Questions        | 4041
Avg. Images per Table  | 23.67
Avg. Rows per Table    | 19.49
Avg. Columns per Table | 10.85
% Rows with Images     | 89.27
% Columns with Images  | 28.42

3.1 Table Creation

To
ensure real-world diversity, we collected tables from a variety of publicly accessible sources, including Google Images, Wikipedia, Amazon, Zara, and domain-specific platforms like Premier League and weather websites. This diverse set of sources allows MMTBench to capture a wide range of image content and table structures. Tables with sensitive or profane images were dropped while extracting the data. We used custom Selenium scripts to extract tables directly from their source, eliminating the need for secondary verification. Images were sourced from the original platform when available, or manually retrieved when necessary. To enhance visual diversity, we varied images for entities appearing multiple times. Each table is paired with metadata describing its structure, image content, and associated questions. Table 3 provides dataset-level statistics, while Table 7 outlines the distribution of embedded image types. We also categorize the dataset by table structure and question-answer formats. As shown in Figure 2, MMTBench includes a variety of table types—ranging from entity listings and maps to chart-based layouts. Correspondingly, the answer types vary from named entities to numerical values and images, supporting a broad spectrum of reasoning styles.

Figure 2: Table Types Distribution
Figure 3: Answer Types Distribution

3.2 Question Formation

We designed questions to test a model's ability to jointly reason over textual and visual elements within a table. Questions were formulated by multiple NLP experts (authors of the paper) and then reviewed amongst themselves. Afterwards, each question was reviewed by another NLP expert (an author different from the question creator) to ensure accuracy and consistency. Every question is linked to at least one image, either directly or through intermediate reasoning. We adopt the classification scheme used by [26], categorizing questions into four types shown in Figure 5.
Answers fall into one of the categories shown in Figure 3.
• Explicit Questions: These questions directly reference an entity whose image is in the table.
• Implicit Questions: In these questions, an entity whose image is neither explicitly mentioned in the question nor in the answer plays an important role in the intermediate reasoning process.
• Answer-Mention Questions: These questions are characterized by answers containing an entity represented by an image in the table, while the question does not explicitly mention this entity.
• Visual-Based Questions: This category encompasses questions that involve visual aspects of images such as color, shape, etc. Unlike [26], visual questions are not limited to specific types of images.

We further classify questions by reasoning type, summarized in Figure 4. MMTBench includes a wide variety of reasoning challenges, including:
• Extrema Identification questions focus on finding the highest or lowest values in a dataset. These include maximum-type questions, which seek the greatest value, and minimum-type questions, which request the smallest value.
• Mathematical Questions involve mathematical operations on numerical data. Question types include average, sum, difference, product, division, range, ratio, counting, sorting, ordering, and comparison.
• Fact Verification questions ask for information from the table and verify whether the given information is correct.
• Vision-Based Questions require analyzing distinct visual elements to extract
meaningful insights, including color-based, shape-based, pattern-based, text-in-image, and entity identification questions.
• "Others" covers a broad range of inquiries that do not align with the specified reasoning types, such as geography-based, distance-based, common-sense reasoning, and general-knowledge questions.

Figure 4: Reasoning Types Distribution
Figure 5: Question Types Distribution

4 Modelling Strategies

To assess the performance of the models on our dataset, we use the five benchmarking strategies used in [26]:

1. Missing Image Baseline. This baseline serves as a lower bound for performance, as all images are excluded from the dataset. The model is then required to infer the contents of these missing images based solely on the surrounding textual context. This baseline allows us to evaluate how well models can predict missing visual entities and reason over absent data.

2. Entity Replaced Baseline. This baseline represents an upper bound for model performance by manually converting all images into their correct textual form. This approach enables us to assess how models reason when explicitly provided with all necessary entities, eliminating ambiguity from missing visual inputs.

3. Image Captioning Baseline. In this baseline, the multimodal task is transformed into a text-only format by replacing images with their corresponding captions. VLMs generate textual descriptions of the images, which are then inserted into the tables. The resulting tables are provided to the model for question-answering tasks. While computationally expensive due to the high number of images and the need for context-specific captions, this baseline provides critical insights into the ability of VLMs to extract and convey relevant visual information in a textual format.

4. Table as an Image Baseline. This baseline presents the entire table as a single image.
Many models struggle with processing large visual inputs or multiple images simultaneously, making this a rigorous test of their capacity to analyze tabular structures. By using Selenium to convert HTML tables into images, we ensure that all information is preserved. This baseline allows us to evaluate the effectiveness of vision encoders in parsing and reasoning over structured tabular data.

5. Interleaved Baseline. The interleaved baseline integrates both visual and textual data while maintaining the original multimodal structure. Images remain embedded within the table, requiring the model to reason over both textual and visual modalities simultaneously. This approach preserves the full complexity of the task, as models must correctly interpret table structure, recognize relationships between text and images, and integrate information across modalities. By analyzing performance on this baseline, we gain insights into how well models with both text and vision encoders can jointly reason over structured multimodal data.

5 Experiments

To evaluate performance across various modalities, we benchmarked both open-source and closed-source models using our dataset. Our assessment included multiple experimental settings to capture different aspects of model capabilities. For the Missing Image Baseline and Entity Replaced Baseline, we tested Google's Gemini 1.5 Flash [31] and Gemini 2.0 Flash, OpenAI's GPT-4o Mini, Meta's Llama3-8b-8192 [32], and Mixtral-8x7B [10]. In the Image Captioning Baseline, we focused on Google's Gemini 1.5 Flash and Gemini 2.0 Flash. The
Table as Image Baseline encompassed a broader range of models, including Google's Gemini 1.5 Flash and Gemini 2.0 Flash, OpenAI's GPT-4o Mini, OpenGVLab's InternVL2.5-8B [14], TIGER-Lab's Mantis-8B-Idefics2 [11], Microsoft's Phi-3.5-Vision-Instruct [1], Qwen2.5-VL-7B-Instruct [34], and Table-Llava-1.5-7b-hf [39]. For the Interleaved Baseline, we evaluated the same set of models as in the Table as Image Baseline, with the exception of Table-Llava-1.5-7b-hf. For all text baselines (Missing Image and Entity Replaced), we used 1-shot prompting. For image baselines (Image Captioning, Table as Image, and Interleaved), we used 0-shot prompting due to the high number of images per table, which would require more computational resources.

6 Results and Analysis

6.1 Performance Across Baselines

As discussed in Section 3, the Missing Image Baseline was anticipated to be the lowest-performing baseline, and the results aligned with these expectations. The Image Captioning Baseline demonstrated improved performance compared to the Missing Image Baseline; however, the gap between this and the stronger baselines suggests that captions fail to fully capture the semantic richness or spatial layout of actual images. Among these, the Table as an Image and Interleaved Table baselines

Table 4: Detailed Analysis on all Question Types.
EM = Exact Match, SS = Substring Match, F1 = F1 Score.
Model | Answer Mention (EM SS F1) | Explicit (EM SS F1) | Implicit (EM SS F1) | Visual-Based (EM SS F1)
Missing Image Baseline
Gemini 1.5 Flash | 26.59 27.39 0.128 | 19.52 20.92 0.085 | 15.32 15.29 0.063 | 12.91 13.84 0.054
Gemini 2.0 Flash | 27.98 30.15 0.089 | 19.31 21.03 0.075 | 14.12 14.77 0.052 | 17.60 18.51 0.064
GPT-4o mini | 38.99 38.40 0.294 | 33.97 36.24 0.251 | 24.14 25.71 0.143 | 27.00 27.33 0.163
Llama 3-8B | 32.50 32.27 0.219 | 29.39 28.69 0.194 | 22.91 23.09 0.129 | 20.74 20.84 0.133
Mixtral-8x7B | 42.84 46.31 0.321 | 36.21 40.70 0.282 | 28.56 33.46 0.202 | 30.29 34.48 0.241
Entity Replaced Baseline
Gemini 1.5 Flash | 59.89 67.20 0.394 | 54.71 54.61 0.295 | 43.73 47.16 0.238 | - - -
Gemini 2.0 Flash | 59.50 62.46 0.293 | 59.93 60.04 0.300 | 39.71 41.26 0.177 | - - -
GPT-4o mini | 68.14 70.38 0.538 | 65.99 69.67 0.496 | 50.59 52.73 0.340 | - - -
Llama 3-8B | 61.49 62.57 0.478 | 54.92 57.85 0.409 | 41.56 44.79 0.285 | - - -
Mixtral-8x7B | 59.74 68.01 0.531 | 60.77 66.71 0.475 | 43.67 48.70 0.308 | - - -
Image Captioning Baseline
Gemini 1.5 Flash | 29.70 30.79 0.224 | 30.12 32.74 0.219 | 18.91 19.45 0.126 | 21.44 24.04 0.156
Gemini 2.0 Flash | 36.82 38.45 0.261 | 36.82 38.45 0.261 | 19.69 20.50 0.124 | 25.09 27.23 0.185
Table as an Image Baseline
Gemini 1.5 Flash | 38.39 36.22 0.178 | 30.16 31.30 0.148 | 25.14 27.52 0.113 | 25.66 27.80 0.103
Gemini 2.0 Flash | 40.44 38.98 0.212 | 38.55 38.18 0.214 | 33.83 35.92 0.199 | 30.49 34.05 0.195
GPT-4o mini | 48.96 50.59 0.357 | 47.53 49.78 0.345 | 38.86 40.49 0.265 | 38.56 41.11 0.291
Intern-VL-2.5 | 19.55 40.26 0.199 | 18.55 38.53 0.176 | 16.42 36.90 0.153 | 14.47 38.63 0.162
Mantis-8B-Idefics2 | 20.85 23.23 0.109 | 19.72 20.90 0.113 | 20.88 21.49 0.110 | 18.60 20.26 0.107
Phi-3.5 | 21.63 23.86 0.111 | 18.09 19.80 0.076 | 15.67 16.96 0.057 | 17.81 19.66 0.093
Qwen-2.5-VL | 34.61 38.86 0.174 | 30.62 34.58 0.159 | 19.64 22.64 0.108 | 21.35 24.38 0.124
Table LLava-1.5-7B | 10.30
https://arxiv.org/abs/2505.21771v1
11.43 0.062 12.68 14.49 0.063 15.77 16.52 0.060 10.95 11.30 0.050 Interleaved Baseline Gemini 1.5 Flash 34.38 35.24 0.247 31.55 31.52 0.210 20.33 20.47 0.119 26.29 25.65 0.175 Gemini 2.0 Flash 37.27 38.47 0.272 34.08 37.46 0.231 24.59 25.75 0.142 26.38 28.76 0.176 GPT-4o mini 47.74 49.88 0.376 46.92 48.96 0.348 36.41 37.84 0.260 40.39 42.64 0.303 Mantis-8b-Idefics2 24.76 26.45 0.156 24.37 26.57 0.150 24.92 26.58 0.113 20.70 23.12 0.126 Phi-3.5 20.85 23.72 0.120 21.63 23.61 0.114 23.83 26.85 0.134 17.71 18.95 0.100 Qwen-2.5-VL 35.66 53.45 0.271 30.35 57.02 0.258 17.95 50.59 0.146 23.04 47.94 0.200 produced comparable results, with the Interleaved Table Baseline showing a slight edge in performance. Finally, as expected, the Entity Replaced Baseline outperformed all other baselines, achieving the highest performance overall. 6.2 Performance Across Models For textual data, both closed-source and open-source models performed similarly, with open-source models showing a slight edge, suggesting they are becoming increasingly competitive, even in challenging inference tasks. Among open-source models, Mixtral-8x7B achieved the highest performance for the Missing Image Baseline. However, other open-source models lagged behind closed-source ones. This discrepancy may stem from differences in how models handle uncertainty, as closed-source models appeared more likely to respond with "unknown" when data was missing. This trend is supported by the findings in Table 8. For the Entity Replaced Baseline, GPT-4o Mini delivered the best performance, though all open-source models performed comparably to their closed-source counterparts on this task. For vision tasks, closed-source models generally outperformed open-source models, except QWEN 2.5-VL-7b-Instruct, which demonstrated performance comparable to Google’s Gemini 1.5 Flash and Gemini 2.0 Flash, showing that select open-source VLMs are closing the gap with proprietary systems, particularly in tasks requiring visual grounding.
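The tables report Exact Match (EM), Substring Match (SS), and token-level F1, but the paper does not spell out its scoring implementation. Below is a minimal sketch of how these QA metrics are commonly computed; the SQuAD-style normalization (lowercasing plus punctuation and article removal) is our assumption rather than the paper's confirmed procedure.

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace (SQuAD-style)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(pred: str, gold: str) -> bool:
    # EM: normalized prediction must equal the normalized gold answer.
    return normalize(pred) == normalize(gold)

def substring_match(pred: str, gold: str) -> bool:
    # SS: credit answers contained within a longer generated response.
    return normalize(gold) in normalize(pred)

def token_f1(pred: str, gold: str) -> float:
    # Token-level F1: harmonic mean of token precision and recall.
    pred_toks, gold_toks = normalize(pred).split(), normalize(gold).split()
    common = Counter(pred_toks) & Counter(gold_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)
```

Under these definitions, Substring Match is strictly more forgiving than Exact Match for verbose generations, which is consistent with Qwen-2.5-VL's pattern of high SS but much lower EM in the tables.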
In the Table as an Image Baseline, GPT-4o Mini achieved the best results across Exact Match, Substring Match, and F1 scores. For the Interleaved Table Baseline, QWEN 2.5-VL-7b-Instruct consistently achieved the highest Substring Match values, while GPT-4o Mini led in Exact Match and F1 scores. These results suggest that while QWEN effectively locates relevant content from the tabular context and images (reflected in its high Substring Match), it struggles to produce precise, well-formed answers (reflected in its lower Exact Match and F1). Table 5: Detailed Analysis on Question Reasoning Types. EM - Exact Match, SS - Substring Match, F1 - F1 Score Fact Verification Mathematical Extrema Vision Based Model EM SS F1 EM SS F1 EM SS F1 EM SS F1 Missing Image Baseline Gemini 1.5 Flash 28.86 29.85 0.179 16.42 17.52 0.059 16.73 17.47 0.070 5.23 5.52 0.030 Gemini 2.0 Flash 27.62 27.61 0.099 15.96 16.35 0.049 15.80 16.20 0.059 11.00 11.04 0.032 GPT-4o Mini 44.75 47.64 0.361 23.80 23.29 0.124 27.45 27.07 0.146 24.09 25.64 0.177 Llama 3-8B 38.48 40.60 0.258 20.25 22.18 0.095 18.15 20.17 0.115 23.01 23.85 0.168 Mixtral-8x7B 47.43 52.33 0.429 24.66 30.91 0.174 27.29 32.58 0.184 29.07 33.38 0.235 Entity Replaced Baseline Gemini 1.5 Flash 68.18 72.65 0.322 40.10 42.48 0.210 57.01 60.21 0.324 - - - Gemini 2.0 Flash 71.69 73.28 0.333 42.11 41.20 0.191 64.80 65.86 0.347 - - - GPT-4o Mini 79.45 75.11 0.628 39.00 43.92 0.255 54.06 57.00 0.357 - - - Llama 3-8B 70.45 74.84 0.586 32.15 37.40
0.205 41.02 44.32 0.270 - - - Mixtral-8x7B 80.17 83.34 0.643 34.05 39.97 0.237 44.16 49.18 0.317 - - - Image Captioning Baseline Gemini 1.5 Flash 47.90 49.29 0.369 13.68 15.38 0.074 25.13 27.47 0.165 22.72 23.76 0.186 Gemini 2.0 Flash 48.14 51.87 0.358 19.60 21.43 0.092 28.48 33.31 0.179 29.15 30.22 0.235 Table as an Image Baseline Gemini 1.5 Flash 47.17 45.77 0.207 23.42 25.33 0.095 21.30 23.01 0.113 30.40 32.38 0.094 Gemini 2.0 Flash 45.75 46.04 0.254 29.57 32.16 0.174 29.92 30.27 0.179 29.11 30.72 0.163 GPT-4o Mini 63.34 64.06 0.490 34.00 36.52 0.214 41.62 43.86 0.275 38.50 41.45 0.306 InternVL 2.5-8B 19.94 46.19 0.222 15.80 38.13 0.154 15.94 33.66 0.148 11.76 38.97 0.141 Mantis-8B-Idefics2 23.28 27.00 0.155 19.63 22.24 0.084 15.54 16.20 0.071 19.07 22.51 0.118 Phi-3.5-vision-instruct 32.16 35.79 0.238 13.87 16.18 0.052 13.73 15.01 0.063 15.24 17.70 0.075 Qwen-2.5-VL-7b 53.12 55.19 0.265 21.45 25.08 0.113 27.90 31.39 0.158 22.54 24.40 0.102 Table-llava-1.5-7b-hf 24.57 27.06 0.133 14.57 15.51 0.038 8.42 9.54 0.031 9.88 10.93 0.049 Interleaved Baseline Gemini 1.5 Flash 41.37 41.14 0.283 17.96 18.00 0.079 27.78 27.21 0.155 26.81 27.28 0.213 Gemini 2.0 Flash 43.66 48.04 0.332 20.08 21.56 0.092 30.18 32.88 0.153 31.51 32.62 0.232 GPT-4o Mini 61.40 61.91 0.479 28.84 29.66 0.174 38.78 40.97 0.265 39.69 41.87 0.316 Mantis-8B-Idefics2 29.13 32.50 0.190 20.01 21.94 0.082 18.83 20.40 0.097 27.06 26.81 0.144 Phi-3.5-vision-instruct 25.67 28.31 0.168 21.98 24.72 0.107 16.39 17.39 0.093 17.33 19.14 0.106 Qwen-2.5-VL 43.03 62.76 0.381 15.77 48.33 0.106 24.90 52.69 0.198 25.99 52.47 0.225 6.3 Performance Across Question Types The results across question types follow the same trend across all baselines. Answer-Mention Questions and Explicit Questions achieve the highest scores and perform at similar levels. This is because both question types offer clear linguistic cues that guide the model toward the correct answers. 
While Answer-Mention Questions allow for direct entity prediction with minimal contextual dependency, Explicit Questions require the model to interpret and reason over the tabular context to derive accurate answers. Implicit Questions and Visual-Based Questions perform worse than Answer-Mention and Explicit Questions. This indicates that questions requiring implicit reasoning or visual grounding are more challenging for models. Visual-Based Questions show a slight advantage, as they reference specific image regions, providing clearer cues. In contrast, Implicit Questions demand deeper reasoning, making them harder for models to resolve. 6.4 Performance Across Question Reasoning Types Across all reasoning types, Fact Verification Questions achieved the highest performance, likely due to the availability of relevant pre-trained data that aligns closely with this type of reasoning. In contrast, Mathematical Questions performed the worst among all reasoning types, reflecting the inherent difficulty models face when handling numerical reasoning tasks. Extrema Questions and Vision-Based Questions showed comparable performance, with neither demonstrating a clear advantage over the other. While other question types also achieved high scores, the diversity and volume of these question types make it challenging to pinpoint which specific type contributes the most to overall performance. 6.5 Performance Across Table Types For this analysis, we divided our tables into multiple types such as Single Type
Entity Only, where there is only one type of entity; Multiple Entities, where the table contains more than one type of entity; Entities along with Maps; Entities along with Charts; Single Chart Type; Multiple Chart Types; Maps Only; and Visualizations. Since these types refer to an image, for this analysis we considered only the multimodal data. Tables with a single entity type performed better than those with multiple entity types. The same applies to charts, where Single Chart Type tables were considerably easier to evaluate than Multiple Chart Types. Visualization questions performed slightly worse, followed by Maps Only. We observed that models struggled the most on tables that include either a map or a chart along with an entity. Table 9 shows the exact scores achieved by models for each unique type. 6.6 Performance Across Answer Types To analyze performance across answer types, we divide the answers into Single Entity; Multiple Entity; Single Number, where the answer is a single number; Multiple Number, where the gold answer has more than one number; Single Image; and Multiple Types combined. Single Image answer-type questions performed the worst among all types, showing how difficult images were for LLMs and VLMs to interpret. Multiple Entity answers had the best performance, with Single Entity, Multiple Entity, and Multiple Number answers performing in a similar range. The Multiple Types and Single Number answer types perform better than Single Image but not on par with Single Entity, Multiple Entity, or Multiple Number. Table 10 shows the exact scores achieved by models for each unique type. 7 Conclusion In this work, we introduce MMTBench, a comprehensive benchmark for multimodal tabular reasoning, built from hundreds of real-world tables enriched with images, maps, charts, and other visual elements. The dataset includes thousands of carefully curated question–answer pairs spanning a wide range of reasoning types and domains.
Our experiments with both open- and closed-source VLMs show a significant performance gap compared to other multimodal QA benchmarks, underscoring the difficulty current models face in integrating structured and visual information. Multimodal tabular data appears across nearly all domains (finance, science, education, e-commerce, and public reporting), yet remains underexplored in automated reasoning. MMTBench addresses this gap by providing a diverse, high-quality benchmark grounded in real-world data and tasks. By reflecting the complexity of practical settings, it offers a rigorous evaluation framework for advancing LLMs and VLMs and supports the development of models better suited to the challenges of multimodal understanding in societal contexts. Limitations A primary limitation of this study is the dataset size. While MMTBench offers diversity in structure, modality, and reasoning types, expanding the dataset would allow for more comprehensive coverage across domains and question formats. However, sourcing high-quality, real-world multimodal tables remains a considerable challenge. Generalized scraping tools are difficult to design, and manual question formulation requires significant domain expertise across multiple fields. Resource constraints also limited the scope of our experimental evaluations. Due to financial and computational limitations, we were unable to evaluate larger-scale models, which could have provided further insights into model capabilities on complex multimodal tasks. Beyond technical constraints, the societal impact of
this work warrants consideration. As benchmarks guide model development, there is a risk of overfitting to evaluation tasks at the expense of real-world generalization. In high-stakes domains, such as public health or finance, models that misinterpret visual or tabular data may produce inaccurate summaries or misleading trends. For example, an automated system might misalign values in a government spending report, leading to public misinformation. While MMTBench is intended for research, its influence on model behavior and downstream applications in society must be carefully considered. Acknowledgements The authors would like to thank Vivek Gupta for their invaluable guidance and insightful feedback throughout the course of this work. We are also grateful to the Complex Data Analysis and Reasoning Lab (CoRAL) at Arizona State University for providing the necessary resources and a conducive research environment. Special thanks to our colleagues and peers, including Ritam and Himanshu, for their helpful suggestions. References [1] Marah Abdin, Jyoti Aneja, Hany Awadalla, Ahmed Awadallah, Ammar Ahmad Awan, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Jianmin Bao, Harkirat Behl, et al. Phi-3 technical report: A highly capable language model locally on your phone. arXiv preprint arXiv:2404.14219, 2024. [2] B. Barla Cambazoglu, Mark Sanderson, Falk Scholer, and Bruce Croft. A review of public datasets in question answering research. In ACM SIGIR Forum, volume 54, pages 1–23. ACM, New York, NY, USA, 2021. [3] Wenhu Chen, Hanwen Zha, Zhiyu Chen, Wenhan Xiong, Hong Wang, and William Yang Wang. HybridQA: A dataset of multi-hop question answering over tabular and textual data. In Trevor Cohn, Yulan He, and Yang Liu, editors, Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1026–1036, Online, November 2020. Association for Computational Linguistics. [4] Yang Chen, Hexiang Hu, Yi Luan, Haitian Sun, Soravit Changpinyo, Alan Ritter, and Ming-Wei Chang.
Can pre-trained vision and language models answer visual information-seeking questions? ArXiv, abs/2302.11713, 2023. [5] Zhiyu Chen, Wenhu Chen, Charese Smiley, Sameena Shah, Iana Borova, Dylan Langdon, Reema Moussa, Matt Beane, Ting-Hao Huang, Bryan Routledge, and William Yang Wang. FinQA: A dataset of numerical reasoning over financial data. In Marie-Francine Moens, Xuanjing Huang, Lucia Specia, and Scott Wen-tau Yih, editors, Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3697–3711, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. [6] Charlotte Cloutier and Davide Ravasi. Using tables to enhance trustworthiness in qualitative research. Strategic Organization, 19(1):113–133, 2021. [7] Xi Fang, Weijie Xu, Fiona Anting Tan, Jiani Zhang, Ziqing Hu, Yanjun Qi, Scott Nickleach, Diego Socolinsky, Srinivasan Sengamedu, and Christos Faloutsos. Large language models (LLMs) on tabular data: Prediction, generation, and understanding – a survey. arXiv preprint arXiv:2402.17944, 2024. [8] Abhishek Gupta, Shreshta Rajakumar Deshpande, and Marcello Canova. An algorithm to warm start perturbed (WASP) constrained dynamic programs. IEEE Open Journal of Control Systems, 1:1–14, 2022. [9] Mengkang Hu, Haoyu Dong, Ping Luo, Shi Han, and Dongmei Zhang. KET-QA: A dataset for knowledge enhanced table question answering. In Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, and Nianwen Xue, editors, Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and
Evaluation (LREC-COLING 2024), pages 9705–9719, Torino, Italia, May 2024. ELRA and ICCL. [10] Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al. Mixtral of experts. arXiv preprint arXiv:2401.04088, 2024. [11] Dongfu Jiang, Xuan He, Huaye Zeng, Cong Wei, Max Ku, Qian Liu, and Wenhu Chen. Mantis: Interleaved multi-image instruction tuning. arXiv preprint arXiv:2405.01483, 2024. [12] Jun-Peng Jiang, Si-Yang Liu, Hao-Run Cai, Qile Zhou, and Han-Jia Ye. Representation learning for tabular data: A comprehensive survey. arXiv preprint arXiv:2504.16109, 2025. [13] Abhinav Joshi. Multimodal representation learning for real-world applications. ICMI ’22, pages 717–723, New York, NY, USA, 2022. Association for Computing Machinery. [14] Sunjun Kweon, Yeonsu Kwon, Seonhee Cho, Yohan Jo, and Edward Choi. Open-WikiTable: Dataset for open domain question answering with complex reasoning over table. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki, editors, Findings of the Association for Computational Linguistics: ACL 2023, pages 8285–8297, Toronto, Canada, July 2023. Association for Computational Linguistics. [15] Gyeong-Geon Lee, Lehong Shi, Ehsan Latif, Yizhu Gao, Arne Bewersdorff, Matthew Nyaaba, Shuchen Guo, Zihao Wu, Zhengliang Liu, Hui Wang, et al. Multimodality of ai for education: Towards artificial general intelligence. arXiv preprint arXiv:2312.06037, 2023. [16] Kaixin Li, Ziyang Meng, Hongzhan Lin, Ziyang Luo, Yuchen Tian, Jing Ma, Zhiyong Huang, and Tat-Seng Chua. Screenspot-pro: Gui grounding for professional high-resolution computer use. arXiv preprint arXiv:2504.07981, 2025. [17] Yongqi Li, Wenjie Li, and Liqiang Nie. MMCoQA: Conversational question answering over text, tables, and images.
In Smaranda Muresan, Preslav Nakov, and Aline Villavicencio, editors, Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4220–4231, Dublin, Ireland, May 2022. Association for Computational Linguistics. [18] Fuxiao Liu, Xiaoyang Wang, Wenlin Yao, Jianshu Chen, Kaiqiang Song, Sangwoo Cho, Yaser Yacoob, and Dong Yu. MMC: Advancing multimodal chart understanding with large-scale instruction tuning. In Kevin Duh, Helena Gomez, and Steven Bethard, editors, Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 1287–1310, Mexico City, Mexico, June 2024. Association for Computational Linguistics. [19] Tianyang Liu, Fei Wang, and Muhao Chen. Rethinking tabular data understanding with large language models. In Kevin Duh, Helena Gomez, and Steven Bethard, editors, Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 450–482, Mexico City, Mexico, June 2024. Association for Computational Linguistics. [20] Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. arXiv preprint arXiv:2310.02255, 2023. [21] Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, and Ashwin Kalyan. Dynamic prompt learning via policy gradient for semi-structured mathematical reasoning. arXiv preprint arXiv:2209.14610, 2022. [22] Weizheng Lu, Jing Zhang, Ju Fan, Zihao Fu, Yueguo
Chen, and Xiaoyong Du. Large language model for table processing: A survey. Frontiers of Computer Science, 19(2):192350, 2025. [23] Haohao Luo, Ying Shen, and Yang Deng. Unifying text, tables, and images for multimodal question answering. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Findings of the Association for Computational Linguistics: EMNLP 2023, pages 9355–9367, Singapore, December 2023. Association for Computational Linguistics. [24] Ahmed Masry, Xuan Long Do, Jia Qing Tan, Shafiq Joty, and Enamul Hoque. Chartqa: A benchmark for question answering about charts with visual and logical reasoning. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2263–2279, 2022. [25] Minesh Mathew, Dimosthenis Karatzas, and CV Jawahar. Docvqa: A dataset for vqa on document images. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 2200–2209, 2021. [26] Suyash Vardhan Mathur, Jainit Sushil Bafna, Kunal Kartik, Harshita Khandelwal, Manish Shrivastava, Vivek Gupta, Mohit Bansal, and Dan Roth. Knowledge-aware reasoning over multimodal semi-structured tables. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, editors, Findings of the Association for Computational Linguistics: EMNLP 2024, pages 14054–14073, Miami, Florida, USA, November 2024. Association for Computational Linguistics. [27] Linyong Nan, Chiachun Hsieh, Ziming Mao, Xi Victoria Lin, Neha Verma, Rui Zhang, Wojciech Kryściński, Hailey Schoelkopf, Riley Kong, Xiangru Tang, Mutethia Mutuma, Ben Rosand, Isabel Trindade, Renusree Bandaru, Jacob Cunningham, Caiming Xiong, and Dragomir Radev. FeTaQA: Free-form table question answering. Transactions of the Association for Computational Linguistics, 10:35–49, 2022. [28] Alexey Shigarov. Table understanding: Problem overview. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 13(1):e1482, 2023. [29] Ravid Shwartz-Ziv and Amitai Armon.
Tabular data: Deep learning is not all you need. Information Fusion, 81:84–90, 2022. [30] Alon Talmor, Ori Yoran, Amnon Catav, Dan Lahav, Yizhong Wang, Akari Asai, Gabriel Ilharco, Hannaneh Hajishirzi, and Jonathan Berant. Multimodalqa: Complex question answering over text, tables and images. arXiv preprint arXiv:2104.06039, 2021. [31] Gemini Team, Petko Georgiev, Ving Ian Lei, Ryan Burnell, Libin Bai, Anmol Gulati, Garrett Tanzer, Damien Vincent, Zhufeng Pan, Shibo Wang, et al. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint arXiv:2403.05530, 2024. [32] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. [33] Mohit Vaishnav and Tanel Tammet. Cognitive paradigms for evaluating vlms on visual reasoning task. arXiv preprint arXiv:2501.13620, 2025. [34] Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, et al. Qwen2-vl: Enhancing vision-language model’s perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 2024. [35] Jingxuan Wei, Nan Xu, Guiyong Chang, Yin Luo, BiHui Yu, and Ruifeng Guo. mchartqa: A universal benchmark for multimodal chart question answer based on vision-language alignment and reasoning. arXiv preprint arXiv:2404.01548, 2024. [36] Zhibo Yang, Jun Tang, Zhaohai Li, Pengfei Wang, Jianqiang Wan, Humen Zhong, Xuejing Liu, Mingkun Yang, Peng Wang, Yuliang Liu, et
al. Cc-ocr: A comprehensive and challenging ocr benchmark for evaluating large multimodal models in literacy. arXiv preprint arXiv:2412.02210, 2024. [37] Edouard Yvinec, Arnaud Dapogny, Matthieu Cord, and Kevin Bailly. Spiq: Data-free per-channel static input quantization. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 3869–3878, 2023. [38] Jiwen Zhang, Jihao Wu, Yihua Teng, Minghui Liao, Nuo Xu, Xiao Xiao, Zhongyu Wei, and Duyu Tang. Android in the zoo: Chain-of-action-thought for gui agents. arXiv preprint arXiv:2403.02713, 2024. [39] Mingyu Zheng, Xinwei Feng, Qingyi Si, Qiaoqiao She, Zheng Lin, Wenbin Jiang, and Weiping Wang. Multimodal table understanding. arXiv preprint arXiv:2406.08100, 2024. [40] Victor Zhong, Caiming Xiong, and Richard Socher. Seq2sql: Generating structured queries from natural language using reinforcement learning. arXiv preprint arXiv:1709.00103, 2017. [41] Fengbin Zhu, Wenqiang Lei, Youcheng Huang, Chao Wang, Shuo Zhang, Jiancheng Lv, Fuli Feng, and Tat-Seng Chua. Tat-qa: A question answering benchmark on a hybrid of tabular and textual content in finance. arXiv preprint arXiv:2105.07624, 2021. A More Dataset Statistics Table 7 below shows the distribution of the types of images present in the dataset. It is evident that the dataset encompasses a broad array of visual domains that are relevant to the real world.
Table 7: Image Type Distribution
Image Type: % of Total
Human / Fictional Character: 21.85
Flag / Coat of Arms / Seals: 18.19
Logo: 15.73
Map: 9.01
Visualizations: 2.02
Chart: 6.85
Poster / Covers: 5.76
Location: 5.34
Symbol: 6.19
Scientific: 0.95
Nature: 0.81
Entity: 0.79
Other: 6.50
Total (image count): 11836
B Dataset and Question Types Examples B.1 Example 1 In this section, we show two multimodal tables found in real-world settings and clarify the question types. Figure 6: An example of a multimodal table in a financial context. Figure 6 presents a multimodal financial dashboard displaying Month-to-Date (MTD) and Fiscal-Year-to-Date (FYTD) comparisons between actual and budgeted values across five performance metrics. It also includes visual representations of cumulative variance trends using colored area charts. Below are some sample questions on this table, with an explanation of why each is classified in its type.
Question: What is the MTD variance for the third metric on January 18, 2013?
Type: Explicit
Explanation: This question directly references the "third metric" and the "MTD variance," both of which are clearly labeled and numerically presented in the table.
Question: Which metric shows the greatest underperformance YTD in percentage terms?
Type: Implicit
Explanation: The question does not explicitly mention any table element, but answering it requires interpreting and comparing FYTD percentage variances from the image.
Question: What performed better than expected in the most recent month?
Type: Answer-Mention
Explanation: The question is vague and does not mention specific metrics, but the answer involves identifying a particular metric (first) from the table data.
Question: Which cumulative variance graph shows a steady decline over time?
Type: Visual-Based
Explanation: The answer relies on visually interpreting the trend in the blue cumulative variance area, not on any textual value in the table.
B.2 Example 2 Figure 7: A Multimodal Table
found in online archives. This figure displays three vintage science fiction book covers from the mid-20th century, each featuring imaginative artwork by noted illustrators. The covers are for books by Philip K. Dick, Isaac Asimov, and Kendell F. Crossen.
Question: Who illustrated the 1957 edition of *Eye in the Sky* by Philip K. Dick?
Type: Explicit
Explanation: The question directly refers to a clearly labeled element in the image: Ed Valigursky and the 1957 edition of the book.
Question: Which author has the most recent cover design shown in the image?
Type: Implicit
Explanation: The question doesn't mention dates or artists but requires inferring the latest publication year from the visual and text data.
Question: Which illustrator created the artwork for Isaac Asimov's *Nine Tomorrows*?
Type: Answer-Mention
Explanation: The question does not mention Paul Lehr, but the answer requires identifying and mentioning him from the image.
Question: Which book cover features a large eye and people in red suits running on a futuristic platform?
Type: Visual-Based
Explanation: The answer depends on interpreting visual elements (eye, clothing, setting), not just reading the text.
C Inter-Annotator Agreement We ensured inter-annotator agreement by having each question drafted jointly by all annotators, with any discrepancies reconciled through consensus according to shared annotation guidelines. Following this, another NLP expert, not involved in annotation, rechecked the validity and correctness of each question. This dual-review process promoted consistency, minimized subjective bias, and reinforced the clarity and reliability of our question set.
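The agreement process described here is consensus-based and qualitative. If one wanted a standard quantitative complement, for example agreement on the reviewer's accept/reject labels, Cohen's kappa is the usual choice. A minimal sketch follows; the paper itself does not report a kappa statistic, and the labels below are hypothetical:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labeling the same items (nominal labels)."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items with identical labels.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: expected overlap given each annotator's label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() | freq_b.keys()) / (n * n)
    if expected == 1.0:  # both annotators used a single identical label throughout
        return 1.0
    return (observed - expected) / (1 - expected)
```

For instance, `cohens_kappa(["ok", "ok", "bad", "ok"], ["ok", "bad", "bad", "ok"])` evaluates to 0.5, i.e. moderate agreement beyond chance.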
D Additional Result Tables This section contains the performance of models across all table types and all answer types, as discussed in Section 6, as well as the percentage of "unknown" answers given by the models for the lower bound.
Table 8: Percentage of "Unknown" Answers per Model for the Missing Image Baseline
Model: Percentage Unknown
Gemini 1.5 Flash: 36.23%
Gemini 2.0 Flash: 37.99%
GPT-4o Mini: 45.96%
Llama 3-8B: 41.54%
Mixtral-8x7B: 12.13%
Table 9: Analysis on all Table Types. EM – Exact Match, SS – Substring Match, F1 – F1 Score Single Entity Multiple Entities Single Chart Multiple Charts Model EM SS F1 EM SS F1 EM SS F1 EM SS F1 Table as an Image Baseline Gemini 1.5 Flash 39.36 39.83 0.22 38.41 39.07 0.20 40.78 41.55 0.23 31.39 33.46 0.22 Gemini 2.0 Flash 44.42 46.37 0.27 41.63 42.65 0.25 45.38 48.33 0.30 32.92 33.92 0.19 GPT-4o mini 47.49 49.75 0.29 44.99 46.85 0.28 54.96 57.88 0.37 36.43 39.72 0.22 Intern-VL-2.5 33.08 45.69 0.25 35.03 53.24 0.27 34.55 56.89 0.25 35.58 50.58 0.25 Mantis-8B-Idefics2 30.04 30.80 0.16 32.77 34.51 0.19 44.98 46.31 0.24 37.55 47.14 0.24 Phi-3.5 31.75 32.72 0.18 26.82 28.15 0.16 36.09 37.87 0.20 31.62 32.28 0.20 Qwen-2.5-VL 39.65 41.64 0.23 36.53 39.51 0.23 36.58 38.96 0.21 35.63 36.81 0.23 Table_LLaVA 26.23 27.35 0.14 28.75 30.01 0.14 27.47 29.07 0.15 30.75 33.49 0.20 Interleaved Baseline Gemini 1.5 Flash 40.89 41.72 0.22 28.43 29.53 0.17 30.48 32.52 0.18 25.51 26.01 0.14 Gemini 2.0 Flash 40.96 42.42 0.22 32.77 33.99 0.19 40.77 43.63 0.22 33.33
34.67 0.20 GPT-4o mini 45.83 47.26 0.28 43.14 44.48 0.27 52.99 55.61 0.34 38.56 41.39 0.27 Mantis-8B-Idefics2 34.80 36.59 0.19 34.24 36.19 0.18 40.66 41.27 0.22 42.38 42.99 0.23 Phi-3.5 39.75 41.65 0.21 38.28 42.32 0.24 44.23 46.71 0.25 33.28 34.08 0.20 Qwen-2.5-VL 29.65 61.37 0.19 29.06 64.65 0.18 30.10 76.22 0.18 30.53 59.01 0.17 Maps Only Visualizations Entities & Maps Entities & Charts Table as an Image Baseline Gemini 1.5 Flash 25.44 26.50 0.15 50.57 51.60 0.27 35.23 36.30 0.22 22.63 23.65 0.20 Gemini 2.0 Flash 28.27 29.30 0.18 55.56 56.60 0.30 38.66 39.70 0.25 24.64 25.65 0.22 GPT-4o mini 27.12 28.15 0.17 54.40 55.45 0.29 40.03 41.05 0.27 23.35 24.38 0.21 Intern-VL-2.5 15.42 16.45 0.09 30.68 31.70 0.17 20.23 21.20 0.14 12.88 13.89 0.09 Mantis-8B-Idefics2 15.51 16.50 0.10 35.30 36.25 0.18 22.27 23.25 0.16 14.98 15.95 0.10 Phi-3.5 15.95 16.90 0.09 32.00 33.00 0.17 18.58 19.50 0.13 12.58 13.50 0.08 Qwen-2.5-VL 26.87 27.85 0.16 52.85 53.90 0.26 37.63 38.60 0.23 22.36 23.35 0.19 Table_LLaV A 12.28 13.28 0.08 28.71 29.75 0.15 19.73 20.70 0.12 10.84 11.85 0.07 Interleaved Baseline Gemini 1.5 Flash 23.44 24.50 0.13 48.57 49.60 0.25 33.23 34.30 0.20 20.63 21.65 0.18 Gemini 2.0 Flash 26.27 27.30 0.16 53.56 54.60 0.28 36.66 37.70 0.23 22.64 23.65 0.20 GPT-4o mini 25.12 26.15 0.15 52.40 53.45 0.27 38.03 39.05 0.25 21.35 22.38 0.19 Mantis-8B-Idefics2 13.51 14.50 0.08 33.30 34.25 0.16 20.27 21.25 0.14 12.98 13.95 0.08 Phi-3.5 13.95 14.90 0.07 30.00 31.00 0.15 16.58 17.50 0.11 10.58 11.50 0.06 Qwen-2.5-VL 24.87 25.85 0.14 50.85 51.90 0.24 35.63 36.60 0.21 20.36 21.35 0.17 E Human Evaluation and Error Analysis We conducted a two-person human evaluation on 50 tables (11% of the total dataset) and 393 questions (12% of the total dataset), wherein participants were required to answer based solely on their knowledge, without the ability to search for images. The results indicate that human performance consistently outperforms all baseline models. 
However, despite this, the highest exact match scores for human responses remain below 0.7, suggesting that while human performance is superior to that of the baseline models, the complexity of the dataset presents a significant challenge. For the Entity Replaced Baseline (upper bound), humans would perform extraordinarily well, as all the required information to answer the question is present in text format. However, in the Missing Image Baseline (lower bound), models would outperform humans, as missing information can be compensated for using the training knowledge of LLMs and VLMs; this is not the case for humans. So, to conduct a fair comparison, we perform an in-depth analysis of the best-performing model across the Table as Image and Interleaved baselines against human performance. Table 10: Performance of Models on Different Answer Types. SE – Single Entity, ME – Multiple Entity, SN – Single Number, MN – Multiple Number, SI – Single Image, MT – Multiple Types (combination of entities, numbers, and images). SE ME SN MN SI MT Model exact f1 exact f1 exact f1 exact f1 exact f1 exact f1 Missing Image Baseline Gemini 1.5 Flash 18.9 0.071 21.1 0.081 13.3 0.037 20.4 0.084 15.4 0.028 22.1 0.112 Gemini
Gemini 2.0 Flash 19.0 0.084 16.9 0.094 15.8 0.037 24.8 0.090 2.6 0.014 25.9 0.067
GPT-4o mini 32.0 0.230 32.0 0.201 27.4 0.188 33.0 0.224 10.6 0.073 37.3 0.240
Llama 3-8B 28.1 0.188 29.7 0.182 21.3 0.137 29.1 0.145 0.0 0.000 25.4 0.160
Mixtral-8x7B 35.3 0.273 33.3 0.247 28.4 0.209 36.7 0.233 3.4 0.025 38.0 0.316

Entity Replaced Baseline
Gemini 1.5 Flash 37.8 0.114 57.4 0.385 7.2 0.615 25.7 0.193 8.0 0.300 36.7 0.308
Gemini 2.0 Flash 41.7 0.211 59.0 0.315 15.8 0.558 30.8 0.248 11.0 0.167 16.7 0.257
GPT-4o mini 48.1 0.290 66.9 0.481 22.8 0.598 36.8 0.334 13.4 0.227 24.1 0.368
Llama 3-8B 39.8 0.210 57.5 0.412 16.2 0.504 31.7 0.308 13.8 0.106 14.2 0.317
Mixtral-8x7B 41.3 0.293 56.6 0.446 29.7 0.593 35.0 0.348 14.8 0.205 27.7 0.350

Image Captioning Baseline
Gemini 1.5 Flash 28.3 0.217 22.7 0.155 25.3 0.188 12.9 0.051 5.8 0.058 14.7 0.056
Gemini 2.0 Flash 33.3 0.243 34.5 0.230 25.8 0.183 23.2 0.109 4.0 0.029 29.6 0.131

Table as Image Baseline
Gemini 1.5 Flash 38.0 0.223 41.6 0.274 21.6 0.078 46.6 0.255 23.2 0.113 34.3 0.199
Gemini 2.0 Flash 30.3 0.147 35.6 0.168 19.7 0.072 38.6 0.174 26.9 0.119 32.5 0.149
GPT-4o mini 47.0 0.355 46.5 0.305 33.4 0.228 50.1 0.318 26.7 0.214 42.5 0.304
Intern-VL-2.5 17.7 0.187 18.9 0.189 12.1 0.125 27.6 0.292 8.6 0.080 23.0 0.227
Mantis-8B-Idefics2 22.6 0.140 23.8 0.106 10.7 0.040 24.6 0.088 25.6 0.122 22.7 0.083
Phi-3.5 18.2 0.089 21.2 0.111 12.5 0.038 17.6 0.023 0.1 0.002 14.0 0.056
Qwen-2.5-VL 29.7 0.160 32.1 0.190 15.3 0.059 29.8 0.161 14.1 0.067 29.2 0.181
Table_LLaVA 12.9 0.078 14.0 0.050 10.8 0.030 21.9 0.065 0.0 0.000 16.6 0.050

Interleaved Baseline
Gemini 1.5 Flash 29.2 0.205 29.0 0.177 25.2 0.175 25.2 0.120 4.5 0.034 26.6 0.131
Gemini 2.0 Flash 33.2 0.223 34.6 0.210 27.9 0.195 29.4 0.156 26.6 0.131 27.3 0.103
GPT-4o mini 48.1 0.372 47.7 0.301 30.9 0.223 40.0 0.252 21.6 0.127 24.4 0.184
Mantis-8B-Idefics2 26.0 0.175 29.2 0.144 15.6 0.064 15.8 0.061 27.0 0.139 21.7 0.066
Phi-3.5 21.4 0.136 27.7 0.144 13.7 0.060 28.8 0.131 13.7 0.071 27.0 0.185
Qwen-2.5-VL 29.4 0.261 21.5 0.177 23.5 0.185 22.0 0.158 5.7 0.060 20.2 0.165

Table 11: Best-performing baselines on the sample set. Each cell reports Exact / Substring / F1.

Baseline (Model)              | Answer Mention Questions | Explicit Question    | Implicit Question    | Visual-Based Question
Table as Image (GPT-4o-mini)  | 34.8 / 38.7 / 0.32       | 44.8 / 46.9 / 0.34   | 40.7 / 42.3 / 0.32   | 41.0 / 45.2 / 0.36
Interleaved (GPT-4o-mini)     | 41.6 / 45.9 / 0.35       | 46.4 / 47.8 / 0.37   | 31.4 / 33.0 / 0.26   | 32.2 / 34.7 / 0.27
Interleaved (Human answers)   | 56.25 / 57.85 / 0.42     | 68.7 / 70.8 / 0.53   | 54.25 / 56.35 / 0.42 | 61.6 / 64.5 / 0.59

E.1 Types of Errors in Image Interpretation

We performed an error evaluation and manually classified each error into one of the following categories:

• Entity Disambiguation Issues – These errors occur when an image is incorrectly identified, leading to misinterpretation of its content. Misidentification can lead to entirely incorrect conclusions about the image.
• Entity Identification Issues – This error refers to the complete failure to recognize
or identify the image.
• Reasoning Errors – This category includes mistakes where the image is correctly identified, but the logic used to answer the question is flawed. Such errors typically involve incorrect inferences, faulty assumptions, or logical inconsistencies.
• Identification of Visual Attributes – Errors in this category involve the failure to recognize key visual components of an image. This could mean missing crucial details such as shapes, colors, textures, or patterns.
• Structural Errors – These errors pertain specifically to the misinterpretation of tabular or structured data within an image. Failure to correctly identify rows, columns, and hierarchical relationships can lead to inaccurate conclusions when analyzing tables, charts, or diagrams.
• Mathematical Errors – These occur when an individual or AI system miscalculates numerical information within the table. Errors include incorrect counting, misreading of numerical values, or computational mistakes when deriving conclusions from visual data involving numbers.
• Partial Answers – This type of error involves providing an incomplete response in which some crucial details are missing. The responder may identify and interpret part of the image correctly but fail to provide a comprehensive answer.
• Extra Information or Hallucination – This occurs when irrelevant or incorrect details are introduced into the response. The additional information may not be present in the table itself but may be inferred incorrectly from prior pretrained knowledge.

Table 12: Distribution of error types across Human, Interleaved GPT-4o-mini, and Table-as-an-image GPT-4o-mini answers.
Type of Error                                      | Interleaved Human (%) | Interleaved GPT-4o-mini (%) | Table as Image GPT-4o-mini (%)
Entity Identification Issues                       | 74.44  | 13.23  | 18.67
Entity Disambiguation Issues                       | 5.21   | 11.00  | 8.00
Identification of Visual Attributes                | 11.32  | 17.08  | 16.00
Partial Answer                                     | 6.02   | 12.08  | 14.00
Reasoning Errors                                   | 3.01   | 14.38  | 15.33
Structural Errors                                  | 0.00   | 11.15  | 7.33
Mathematical Errors                                | 0.00   | 10.31  | 8.67
Extra Info / Hallucination / Pretrained Knowledge  | 0.00   | 10.77  | 12.00
Total                                              | 100.00 | 100.00 | 100.00

E.2 Analysis of Human Errors

The analysis highlights key patterns in human performance. Entity Identification Issues are the most common, suggesting difficulties in recognizing entities, particularly without domain knowledge. Partial Answers reveal a tendency to overlook essential details, while Entity Disambiguation Issues indicate occasional struggles in distinguishing between similar entities. These three challenges emphasize the importance of domain knowledge in accurately solving tasks and identifying the correct entity.

Visual Attribute Identification errors suggest that even humans can miss fine details in images. However, Reasoning Errors are relatively low, indicating that humans generally follow logical processes. Notably, there are no Mathematical Errors, Structural Errors, or Hallucinations, demonstrating strong numerical reasoning and structured data interpretation. Overall, humans excel in logical analysis and structured comprehension but face challenges in entity recognition, visual attention to detail, and providing complete answers.

E.3 Performance and Analysis of Errors in Interleaved Baseline GPT-4o-mini

GPT-4o-mini exhibits significant errors across multiple categories, with each exceeding the 10% threshold. Reasoning Errors are among the most prominent, as the model struggles with multi-step deductions and logical coherence, an ongoing challenge for VLMs.
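The claim that every error category exceeds the 10% threshold for this baseline can be checked directly against the Interleaved GPT-4o-mini column of Table 12; the numbers below are copied from the table.

```python
# Interleaved GPT-4o-mini column of Table 12 (percentages).
interleaved_gpt4o_mini = {
    "Entity Identification Issues": 13.23,
    "Entity Disambiguation Issues": 11.00,
    "Identification of Visual Attributes": 17.08,
    "Partial Answer": 12.08,
    "Reasoning Errors": 14.38,
    "Structural Errors": 11.15,
    "Mathematical Errors": 10.31,
    "Extra Info / Hallucination / Pretrained Knowledge": 10.77,
}

# Every one of the eight categories clears the 10% threshold.
above_threshold = {k: v for k, v in interleaved_gpt4o_mini.items() if v > 10.0}
```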
Visual Attribute Identification is another major limitation, with failures in extracting key image features, underscoring the need for better vision encoders and fine-tuning.
Entity Identification and Disambiguation errors occur at similar rates, as the model misidentifies or fails to recognize entities, likely due to insufficient training data and over-reliance on context. Structural Errors show difficulty in interpreting complex tabular data, including hierarchical structures and nested tables, while Hallucination Errors highlight the model's tendency to generate irrelevant information. Mathematical Errors reveal persistent struggles with quantitative reasoning. These issues demonstrate that while GPT-4o-mini has some multimodal reasoning capability, it lacks depth in entity recognition, visual interpretation, and structured data comprehension. Addressing these challenges will require advancements in vision encoders, contextual learning, and fine-tuning strategies to improve accuracy in multimodal analysis.

E.4 Performance and Analysis of Errors in Table-as-an-Image Baseline GPT-4o-mini

GPT-4o-mini makes many errors across all categories. From Table 12, we notice that in the Table-as-an-Image baseline GPT-4o-mini made the most errors in Entity Identification, likely due to insufficient training data and over-reliance on context. It also struggles with Identification of Visual Attributes and Reasoning Errors at rates similar to the Interleaved baseline. We see comparable error rates among the Mathematical, Hallucination, Partial Answer, and Entity Disambiguation error types, showing that GPT-4o-mini struggles with these issues irrespective of the input table format. Interestingly, Structural Errors decreased by roughly 4% in the Table-as-an-Image baseline. Viewing the entire table at once as an image may help the model understand its structure better, so passing tables as images does help GPT-4o-mini grasp table structure. From both analyses, we conclude that GPT-4o-mini has room for improvement in identifying visual attributes, entity identification, and reasoning.
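The appendix tables report exact-match, substring-match, and token-level F1 scores. The paper does not spell out its normalization choices, so the sketch below assumes standard QA-style implementations (lowercasing and whitespace tokenization); it is an illustrative assumption, not the authors' exact scoring code.

```python
from collections import Counter

def normalize(s):
    """Lowercase and collapse whitespace (assumed normalization)."""
    return " ".join(s.lower().split())

def exact_match(pred, gold):
    return float(normalize(pred) == normalize(gold))

def substring_match(pred, gold):
    # Credit predictions that contain the gold answer (or vice versa).
    p, g = normalize(pred), normalize(gold)
    return float(g in p or p in g)

def token_f1(pred, gold):
    """Token-overlap F1 between prediction and gold answer."""
    p, g = normalize(pred).split(), normalize(gold).split()
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)
```

With these definitions, a verbose but correct answer scores 0 on exact match yet full credit on substring match, which is why the two columns diverge in the tables above.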
F Comparison with similar datasets

Our dataset poses substantially greater challenges than MMTabQA, as reflected by lower performance across every baseline. The largest declines occur in the Partial Input/Missing Image and Image Captioning settings, implying that MMTabQA models could more readily infer missing entities. Even the more advanced Gemini 2.0 Flash underperforms Gemini 1.5 Flash on our benchmark, highlighting its increased difficulty. The Table-as-Image baseline also degrades sharply: larger tables and more images per table make comprehensive visual extraction harder. Although the Interleaved baseline shows a smaller (8%) drop, Qwen 2.5 VL 7B still outpaces GPT-4o on several external benchmarks (e.g., DocVQA [25], CC-OCR [36], MathVista [20], AITZ [38], ScreenSpot [16]), underscoring the competitive stakes. In sum, the uniformly lower scores confirm that our dataset is a markedly more demanding test of multimodal reasoning.

Table 13: Substring match accuracy results for different baseline comparisons on MMTabQA and MMTBench.

Baselines                                          | MMTabQA | MMTBench
Partial Input / Missing Image (GPT 4o vs Mistral)  | 62.72   | 37.46
Entity Replaced (GPT 4o vs GPT 4o-mini)            | 69.94   | 63.00
Image Captioning (Gemini 1.5 vs 2.0 Flash)         | 49.16   | 30.55
Table as Image (GPT 4o vs GPT 4o-mini)             | 57.45   | 44.60
Interleaved (GPT 4o vs Qwen 2.5 VL)                | 58.23   | 50.23

G Prompts

This section contains the exact prompts used in our experiments. Figure 8 shows the prompt for the Missing Image Baseline; Figure 9 shows the prompt for the Entity Replaced Baseline;
Figure 10 shows the prompt for the Image Captioning Baseline; Figure 11 shows the prompt for the Table as Image Baseline; and Figure 12 shows the prompt for the Interleaved Baseline.

Prompt:
You will be provided a table in pipe-separated format where all the entities have been removed. Your task is to:
Step 1: UNDERSTAND THE TABLE CONTEXT - Carefully analyze the table structure and identify its purpose and what it mentions.
Step 2: FILL IN THE GAPS - Use the table context and your real-world knowledge to deduce the missing entities logically.
Step 3: ANALYZE THE QUESTIONS - Read all the questions provided and explore **ALL TYPES OF REASONING** to find answers, including but not limited to Numerical reasoning (relationships, totals, and comparisons), Visual reasoning (colors, shapes, or patterns), Contextual reasoning (real-world connections or logic), etc.
Step 4: PROVIDE ANSWERS IN FORMAT - Ensure that all answers adhere strictly to the FORMAT specified. Avoid deviating from this format or including unnecessary explanations. If you are unable to answer, simply answer UNKNOWN.
I will provide one example to show you: One shot . . .
ALWAYS PROVIDE YOUR ANSWERS IN THIS FORMAT. Now I will provide you with the table and questions.
TABLE
Questions
Based on the examples that I have provided and the steps I mentioned above, answer the questions.

Figure 8: Prompt for Missing Image Baseline

Prompt:
You will be provided a pipe-separated table that contains some entities. Your task is to:
Step 1: UNDERSTAND THE TABLE CONTEXT - Carefully analyze the table structure and identify its purpose and what it mentions.
Step 2: ANALYZE THE QUESTIONS - Read all the questions provided and explore **ALL TYPES OF REASONING** to find answers, including but not limited to Numerical reasoning (relationships, totals, and comparisons), Visual reasoning (colors, shapes, or patterns), Contextual reasoning (real-world connections or logic), etc.
Step 3: PROVIDE ANSWERS IN Required Format - Ensure that all answers adhere strictly to the FORMAT specified. Avoid deviating from this format or including unnecessary explanations.
**IMPORTANT** ALL answers are there in the table. ANALYZE the question and table properly.
I will provide one example to show you: One Shot Example
ALWAYS PROVIDE YOUR ANSWERS IN THIS FORMAT. Now I will provide you with the table and questions.
TABLE
Questions
Based on the examples that I have provided and the steps I mentioned above, answer the questions.

Figure 9: Prompt for Entity Replaced Baseline

Prompt:
You will be provided a pipe-separated table with images included. Your task is to:
Step 1: UNDERSTAND THE TABLE CONTEXT - Carefully analyze the table structure and identify its purpose and what it mentions.
Step 2: CAPTION EVERY IMAGE - Based on the image, provide a caption for that image. Your job is to reason, predict, and replace image entity tags and provide visual descriptions.
Step 3: CREATE A TABLE - Based on the image captions, create a pipe-separated table where the image placeholders or cells have been replaced with their captions.
Step 4: ANALYZE THE QUESTIONS - Read all the
questions provided and explore **ALL TYPES OF REASONING** to find answers.
Table with Captions
Step 5: PROVIDE ANSWERS IN FORMAT - Ensure that all answers adhere strictly to the FORMAT specified.
ALWAYS PROVIDE YOUR ANSWERS IN THIS FORMAT. Now I will provide you with the questions.
Questions
Based on the steps I mentioned above, answer the questions.

Figure 10: Prompt for Image Captioning Baseline

Prompt:
You will be provided an image of a table. Your task is to:
Step 1: UNDERSTAND THE IMAGE CONTEXT - Carefully analyze the image content and understand the tabular structure and all text and visual aspects inside the image.
Step 2: ANALYZE THE QUESTIONS - Read all the questions provided and explore **ALL TYPES OF REASONING** to find answers.
Step 3: PROVIDE ANSWERS - Avoid deviating from this format or including unnecessary explanations.
**IMPORTANT** ALL answers are there in the image. Now I will provide you with the image.
IMAGE
For this image, you will answer the following questions.
QUESTIONS
Based on the steps I mentioned above, answer the questions.

Figure 11: Prompt for Table as an Image Baseline

Prompt:
You will be provided a table where some cells are images. Your task is to:
Step 1: UNDERSTAND THE TABLE CONTEXT - Carefully analyze the table structure and understand the intricate relationship between image and text.
Step 2: ANALYZE THE QUESTIONS - Read all the questions provided and explore **ALL TYPES OF REASONING** to find answers.
Step 3: PROVIDE ANSWERS - Avoid deviating from this format or including unnecessary explanations.
**IMPORTANT** ALL answers are there in the image. Now I will provide you with the table.
INTERLEAVED TABLE
For this table, answer the following questions.
QUESTIONS
Based on the steps I mentioned above, answer the questions.

Figure 12: Prompt for Interleaved Baseline
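Each figure above is a template whose TABLE/QUESTIONS placeholders are filled per instance. A minimal sketch of such a filler is shown below; the `build_prompt` helper and the abbreviated template text are illustrative assumptions, not the paper's actual harness code.

```python
# Hypothetical template filler; placeholder names mirror the figures above.
TEMPLATE = (
    "You will be provided a table where some cells are images. "
    "Your task is to: ...\n"
    "INTERLEAVED TABLE:\n{table}\n"
    "For this table, answer the following questions.\n"
    "QUESTIONS:\n{questions}\n"
    "Based on the steps I mentioned above, answer the questions."
)

def build_prompt(table, questions):
    """Substitute one table and its question list into the template."""
    return TEMPLATE.format(table=table, questions="\n".join(questions))

prompt = build_prompt("| Team | Logo |\n| Ajax | <image_1> |",
                      ["Which team's logo is shown in <image_1>?"])
```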
DualSchool: How Reliable are LLMs for Optimization Education?

Michael Klamkin∗†‡, Arnaud Deza†‡, Sikai Cheng‡, Haoruo Zhao‡, Pascal Van Hentenryck‡

Abstract

Consider the following task taught in introductory optimization courses, which addresses challenges articulated by the community at the intersection of (generative) AI and OR: generate the dual of a linear program. LLMs, being trained at web scale, have the conversion process and many instances of Primal-to-Dual Conversion (P2DC) at their disposal. Students may thus reasonably expect that LLMs would perform well on the P2DC task. To assess this expectation, this paper introduces DUALSCHOOL, a comprehensive framework for generating and verifying P2DC instances. The verification procedure of DUALSCHOOL uses the Canonical Graph Edit Distance, going well beyond existing evaluation methods for optimization models, which exhibit many false positives and negatives when applied to P2DC. Experiments performed by DUALSCHOOL reveal interesting findings. Although LLMs can recite the conversion procedure accurately, state-of-the-art open LLMs fail to consistently produce correct duals. This finding holds even for the smallest two-variable instances and for derivative tasks, such as error correction, verification, and error classification. The paper also discusses the implications for educators, students, and the development of large reasoning systems.

1 Introduction

Large Language Models (LLMs) have garnered significant interest for their potential to serve as always-available personalized education assistants, automating time-consuming tasks such as tutoring and grading in STEM education. To fully realize this potential, however, LLMs must demonstrate the ability to reliably execute detailed multi-step procedures. In particular, real-world tasks are often nuanced, making attention to detail paramount to success, a higher bar than plausible-looking text.
This paper proposes the relatively simple primal-to-dual conversion (P2DC) task as a benchmark to evaluate whether LLMs can execute detailed procedures reliably. P2DC is an interesting task for several reasons. (1) P2DC is commonly taught in introductory optimization courses. (2) LLMs, being trained at web scale, have the conversion process and many instances of P2DC in their training corpus. Indeed, when asked directly how to do P2DC, most LLMs respond with a correct strategy. (3) P2DC requires a clear understanding of the procedure, since there are many ways to obtain a dual. (4) P2DC captures several challenges recently articulated at the intersection of (generative) AI and OR, including the verification of optimization models, the availability of datasets, and the design of evaluation criteria and methodologies [1]. (5) P2DC is also an inherently structured task, since the input and output are linear programs which can be represented in different formats (e.g., JSON, XML, MPS files).

∗Corresponding author: klam@isye.gatech.edu
†Equal contribution
‡NSF AI Institute for Advances in Optimization, Georgia Institute of Technology, Atlanta, GA, USA

Preprint. Under review. arXiv:2505.21775v1 [cs.LG] 27 May 2025

Figure 1: The P2DC task of DUALSCHOOL: it illustrates the primal-to-dual conversion, the canonical representation of linear programs, and the evaluation using CGED, which is the concatenation of the canonicalization step and the Graph Edit Distance comparison.

This makes the P2DC task an attractive test-bed for reasoning
https://arxiv.org/abs/2505.21775v1
models specialized to structured data, a relatively under-studied but extremely valuable competency. Because of the simplicity of P2DC and the availability of the P2DC instructions and instances in the training corpus of LLMs, students in optimization classes may reasonably expect that LLMs would perform well on the P2DC task. To assess this expectation, this paper introduces DUALSCHOOL, a comprehensive framework for generating and verifying P2DC instances. To generate P2DC instances, DUALSCHOOL leverages automatic symbolic dualization, converting new synthetic and existing primal-only datasets (e.g., NL4OPT [2], ComplexOR [3], and EasyLP [4]) to P2DC datasets. To enable automatic evaluation, DUALSCHOOL includes a graph-based equivalence detection algorithm called Canonical Graph Edit Distance (CGED). CGED is similar to the NGED algorithm of Xing et al. [5], but it adds a crucial pre-processing canonicalization step specifically designed to allow for differences in dualization procedure conventions. As such, DUALSCHOOL overcomes the limitations of existing validation techniques in the optimization setting, which are either overly restrictive or overly permissive. Indeed, existing validations either force particular convention choices or forget much of the problem structure, complicating their use in post-training techniques to improve performance (e.g., [6, 7]). P2DC is illustrated in Figure 1, which exemplifies the canonical form used for comparing linear programs. Preliminary experimental results with DUALSCHOOL reveal interesting findings: they show that P2DC is surprisingly challenging for leading open LLMs. This discrepancy, being able to recite the procedure but not carry it out reliably, underscores a critical limitation of LLMs: they yield duals with mistakes that may be minor in terms of token count but are clearly wrong (e.g., an unbounded dual for a feasible primal).
These findings hold even for the smallest two-variable instances and for derivative tasks, such as error correction, error classification, and verification. In CORRECTION, the LLM is asked to correct the error; in CLASSIFICATION, the LLM is asked what the error is; and in VERIFICATION, the LLM is asked if the primal-dual pair is valid. The main contributions of this paper can be summarized as follows:

1. The paper proposes DUALSCHOOL, a comprehensive framework to evaluate the reliability of LLMs for a relatively simple optimization task whose instructions are widely available. The multi-task framework includes the P2DC (primal-to-dual conversion) task and the derivative tasks CORRECTION, CLASSIFICATION, and VERIFICATION.
2. The paper designs a robust automatic evaluation using a Canonical Graph Edit Distance (CGED) algorithm that simultaneously allows for differences in dualization convention while robustly checking the correctness of all problem data.
3. The paper is associated with a repository of open data and code: over 1,300 primal-dual pairs, as well as error-injected variants of each, are published alongside the paper, including the code to automatically generate more samples if needed.
4. Experimental results show that P2DC is a compelling challenge despite its simplicity, as state-of-the-art open LLMs struggle even on very small instances.

P2DC, and its derivative tasks, also address several challenges recently articulated at the intersection of (generative) AI and OR [1]. The rest of the paper is
organized as follows. Section 2 discusses related work. Section 3 introduces the P2DC task and Section 4 introduces the CGED algorithm. Then, Section 5 presents the experimental results and Section 6 concludes the paper.

2 Related Work

This section reviews related work at the intersection of large language models and optimization.

LLMs for Optimization Recent years have witnessed a surge of interest in leveraging large language models (LLMs) for various optimization tasks (LLM4OPT). Notable examples include natural language modeling [8, 3, 2], where LLMs are given a natural language description of an optimization problem and are asked to formulate it; conversational interfaces to configure and customize existing models [9, 10]; algorithmic configuration for cutting-plane subroutines in MILP solvers [11]; explaining infeasibilities [12]; and even solving optimization problems directly [13]. These efforts highlight the growing recognition of LLMs as a versatile tool that can be applied beyond traditional natural language processing tasks and into the realm of mathematical optimization.

Evaluation Methods Despite the increasing attention, existing work in this area primarily relies on the optimal objective value as the sole criterion for evaluating the correctness of LLM-generated optimization formulations. This approach has inherent limitations, as it can silently ignore major errors in the formulation, such as missing or incorrect constraints or variables, if those errors happen not to affect the optimal value. The problem of equivalence detection between different optimization formulations remains largely under-explored. [5] shows that their normalized graph edit distance (NGED) aligns more closely with human assessments of correctness than token- and optimal-value-based evaluation. However, in the context of P2DC, the direct use of NGED results in many false negatives due to benign conventional differences, while the optimal value yields many false positives.
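The optimal-value false positive is easy to reproduce on a toy instance. The brute-force vertex solver below is a self-contained sketch for tiny two-variable LPs (it is not DualSchool's evaluator, and it assumes a feasible, bounded instance whose optimum is attained at a vertex): dropping a constraint that is not tight at the optimum leaves the optimal value unchanged, so a value-only check cannot detect the error.

```python
from itertools import combinations

def lp_min(c, cons, tol=1e-9):
    """Minimize c.x over { x in R^2 : a.x >= b for each (a, b) in cons }
    by enumerating pairwise constraint intersections (candidate vertices)."""
    best = None
    for (a1, b1), (a2, b2) in combinations(cons, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < tol:
            continue  # parallel constraint lines, no unique intersection
        x = (b1 * a2[1] - a1[1] * b2) / det
        y = (a1[0] * b2 - b1 * a2[0]) / det
        if all(a[0] * x + a[1] * y >= b - tol for a, b in cons):
            val = c[0] * x + c[1] * y
            if best is None or val < best:
                best = val
    return best

# min x + y  s.t.  x + y >= 1, x >= 0, y >= 0, x <= 10 (written as -x >= -10)
full = [((1, 1), 1), ((1, 0), 0), ((0, 1), 0), ((-1, 0), -10)]
# Same model with the (non-tight) constraint x <= 10 silently dropped:
missing = full[:3]
# Both give optimal value 1.0, yet their feasible sets differ.
```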
LLM for Education and Tutoring The potential of LLMs in education and tutoring has attracted significant interest, with several recent works exploring their use in personalized learning and automated assessment. [14] investigates the use of LLMs in personalized tutoring systems. Benchmarks such as MathTutorBench [15] have been developed to evaluate the capabilities of LLMs in educational settings, and case studies such as [16] and [17] explore their potential in physics and power engineering education, respectively.

3 Primal-to-Dual Conversion

This section introduces P2DC, the main task of DUALSCHOOL. A linear program (LP) is a constrained optimization problem with a linear objective and affine constraints, i.e., a problem that can be stated as

\begin{align}
\min_{x \in \mathbb{R}^n} \quad & c^\top x \tag{1a} \\
\text{s.t.} \quad & a_j^\top x \le b_j \quad \forall j \in \mathcal{I}_{\le} \tag{1b} \\
& a_j^\top x \ge b_j \quad \forall j \in \mathcal{I}_{\ge} \tag{1c} \\
& a_j^\top x = b_j \quad \forall j \in \mathcal{I}_{=} \tag{1d}
\end{align}

where x ∈ R^n is the vector of decision variables, c ∈ R^n is the objective vector, a_j ∈ R^n are the constraint coefficient vectors, and b_j ∈ R are the constraint right-hand sides. Any x which satisfies (1b), (1c), and (1d) (i.e., a feasible x) provides an upper bound on the optimal value of the LP. Any feasible solution to the dual of an LP, which is itself an LP,^4 provides a lower bound on the optimal LP value. Moreover, in many practical applications, dual programs often have useful interpretations, as demonstrated in the example below. Readers are referred to Nemirovski [18, Section 1.3] for a
detailed introduction to linear programming duality.

Example: Production Planning Consider the production planning problem where a factory manager is tasked with finding the most profitable production plan given a fixed amount of resources: wood (W) and steel (S). In this example, the factory can produce a number of doors (d) and tables (t), each using varying amounts of resources. All products are sold for a profit, p_d for doors and p_t for tables. The amount of wood (resp. steel) needed to produce a door is denoted by a_{wd} (resp. a_{sd}), and likewise the amount of wood (resp. steel) needed to produce a table is denoted by a_{wt} (resp. a_{st}). The full formulation is stated below as Model 2. The dual of this program, given below as Model 3, is known as the resource-valuation problem [19], with variables y_w and y_s denoting the so-called "shadow price" of wood and steel, respectively.

\begin{aligned}
\max_{d,t} \quad & p_d d + p_t t \\
\text{s.t.} \quad & a_{wd} d + a_{wt} t \le W \\
& a_{sd} d + a_{st} t \le S \\
& d \ge 0, \; t \ge 0
\end{aligned} \tag{2}
\quad \Longrightarrow \quad
\begin{aligned}
\min_{y_w, y_s} \quad & W y_w + S y_s \\
\text{s.t.} \quad & a_{wd} y_w + a_{sd} y_s \ge p_d \\
& a_{wt} y_w + a_{st} y_s \ge p_t \\
& y_w \ge 0, \; y_s \ge 0
\end{aligned} \tag{3}

Performing this transformation – converting Model 2 to Model 3 – is the main task in DUALSCHOOL, called Primal-to-Dual Conversion (P2DC). Note that there exist several different procedures for deriving the dual of an LP; the most common methods are summarized in Appendix A.1. The dual program is not unique in that there exist different procedures and convention choices that can yield different, but valid, dual programs. For instance, consider Model 4, which differs from Model 3 only in its use of slack variables z_d and z_t to convert the inequality constraints to equalities. In the context of P2DC, one may arrive at Model 4 if dualizing by first converting the primal to the standard form min c^⊤x s.t. Ax ≤ b and then writing the dual as max b^⊤y s.t. A^⊤y = c, y ≥ 0.

\begin{aligned}
\min_{y_w, y_s, z_d, z_t} \quad & W y_w + S y_s \\
\text{s.t.} \quad & a_{wd} y_w + a_{sd} y_s - z_d = p_d \\
& a_{wt} y_w + a_{st} y_s - z_t = p_t \\
& y_w \ge 0, \; y_s \ge 0, \; z_d \ge 0, \; z_t \ge 0
\end{aligned} \tag{4}

Models 3 and 4 are both considered "correct" duals of Model 2.
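For the specific primal form of Model 2 (max p⊤x s.t. Ax ≤ b, x ≥ 0, whose dual is min b⊤y s.t. A⊤y ≥ p, y ≥ 0), the Model 2 → Model 3 conversion reduces to a transpose and a swap of objective and right-hand side. The sketch below covers only this one standard form, not the general procedure DualSchool verifies; the numeric values are illustrative.

```python
def dualize(p, A, b):
    """Dual of  max p.x  s.t.  A x <= b, x >= 0:
    min b.y  s.t.  A^T y >= p, y >= 0.
    Returns (dual objective, transposed constraint matrix, dual rhs)."""
    At = [list(col) for col in zip(*A)]  # transpose A
    return b, At, p

# Production-planning example (Model 2) with illustrative numbers:
p = [3, 5]          # profits p_d, p_t
A = [[1, 3],        # wood usage  a_wd, a_wt
     [2, 1]]        # steel usage a_sd, a_st
stocks = [8, 7]     # W, S

obj, At, rhs = dualize(p, A, stocks)
# Dual (Model 3): min 8*y_w + 7*y_s
#   s.t. 1*y_w + 2*y_s >= 3   (door constraint)
#        3*y_w + 1*y_s >= 5   (table constraint)
```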
This complicates equivalence detection, since directly applying NGED to the graphs corresponding to Models 3 and 4 would result in a non-zero edit distance due to the missing slack variable nodes, missing edges denoting the slack constraint coefficients, and changed constraint senses. Thus, in the context of P2DC, it is important to use a "convention-invariant" matching procedure that explicitly treats such differences. Section 4 expands on the shortcomings of existing approaches and proposes a new metric, Canonical Graph Edit Distance (CGED), that meets the requirements of the P2DC setting.

^4 The dual program corresponding to Model 1 is stated in Appendix A.1 as Model 7.

4 Automatic Evaluation for P2DC

This section begins by explaining why existing evaluation methods are insufficient to determine the correctness of a candidate dual in the P2DC setting. It then proposes a correctness detection algorithm that extends that of Xing et al. [5] in order to enable a "convention-invariant" matching of the candidate dual program to a known correct dual formulation obtained by automatic dualization.

NER-based matching Several prior works [2, 20, 21], including the NL4OPT "generation" subtask, use a declaration-level accuracy [2, Equation 2] that matches tokens within declarations. Although, compared to naive token matching, this metric can handle permutations of declarations, it
cannot handle basic symmetries that are either within declarations, such as the order of terms in a constraint, or span across multiple declarations, such as variable sign convention.

Optimal Value Most prior work uses an optimal value check, often referred to as execution accuracy, to establish the correctness of a given formulation [8, 22]. Although this is a necessary condition for correctness, it often yields false positives. For example, if the formulation omits a constraint that is not tight at the optimal solution, the optimal value check will still mark the formulation correct. Furthermore, in the P2DC setting, simply echoing back the primal model always gives the same objective value (due to strong duality), even for problems which are not self-dual. This makes the optimal value check easy to "reward-hack" [23], limiting its applicability as a reward signal.

Polyhedral congruence and isomorphism Since the feasible set of an LP is a polyhedron, polyhedral computation libraries, e.g., polymake [24], can be used to evaluate whether the polyhedra corresponding to the candidate and ground-truth feasible sets are congruent or isomorphic. However, checking congruence does not fit the P2DC setting: although it does treat permutations and variable sign convention differences (and, in some cases, it may be useful to forget constraint scaling and redundant constraints), polyhedral congruence also allows for arbitrary transformations such as rotation and translation, breaking the primal-dual correspondence. Similarly, polyhedral isomorphism is not a good fit since it verifies only the incidence structure of the polyhedron, effectively forgetting most of the structure imposed by the problem data.

Graph Edit Distance Recent work in ML for optimization uses graph representations of optimization problems [25, 26]. Xing et al.
[5] propose to use graph edit distance (GED) algorithms on these graph representations to evaluate equivalence between a candidate and a ground-truth program. Their method, called NGED, is attractive due to its ability to handle variable and constraint permutations. Furthermore, GED is a rich reward signal in that it outputs not only a boolean value but the optimal edit path between the two formulations. However, NGED is still overly restrictive in the P2DC setting since dualization procedures and conventions can result in dual programs that have different graph representations, i.e., additional variable nodes and edges when slack variables appear. Thus, directly applying NGED is too restrictive for P2DC.

Finally, note that each of these metrics relies on successfully parsing the LLM output into the required representation for comparison, e.g., XML for NER or the bipartite graph for GED. In cases where the parser fails due to incorrect formatting of the LLM response, the paper deems the formulations not equivalent. However, as shown in Section 5, the vast majority of responses are correctly parsed.

4.1 Canonical Graph Edit Distance

This paper proposes the Canonical Graph Edit Distance (CGED) for correctness detection: it modifies the NGED method to include a canonicalization step in order to control for variations in dualization procedure. In particular, the paper notes that these convention differences result in (combinations of) two kinds of "symmetries" in the dual programs: variable
sign and slack variables. Although NGED itself includes some canonicalization, such as converting the objective sense to minimization and single-sided inequalities to less-than sense, it fails to treat these convention differences, leading to many false negatives, as demonstrated in Section 5. The following paragraphs describe the proposed modifications in detail.

Slack variables As demonstrated using Models 3 and 4 in Section 3, slack variables appear in the dual when the dualization procedure treats primal variables as free, either by explicitly including their bounds in the main constraints or due to the particular standard form or ruleset used. The correctness detection algorithm treats these convention differences by eliminating the resulting slack variables. Equation (5) demonstrates such an elimination.

\begin{aligned}
\min \quad & x_1 + x_2 \\
\text{s.t.} \quad & x_1 + x_2 + s = 1 \\
& s \le 0
\end{aligned}
\quad \Longrightarrow \quad
\begin{aligned}
\min \quad & x_1 + x_2 \\
\text{s.t.} \quad & x_1 + x_2 \ge 1
\end{aligned} \tag{5}

Variable Sign The second common symmetry that arises due to variations in dualization convention is variable sign: as pointed out in Step 2 of the Lagrangian dualization method in Appendix A.1, the sign constraint given to a dual variable only has to match how the corresponding constraint's residual is formed. In other words, as long as the memorized procedure/standard form/ruleset is consistent, the dual variables can be given any sign, including free variables (corresponding to equality constraints in the primal). For instance, the programs in Equation (6), where x_1' = -x_1, are deemed equivalent for the purposes of P2DC.

\begin{aligned}
\min \quad & x_1 + x_2 \\
\text{s.t.} \quad & x_1 + x_2 \ge 1 \\
& x_1 \ge 0, \; x_2 \ge 0
\end{aligned}
\quad \Longleftrightarrow \quad
\begin{aligned}
\min \quad & -x_1' + x_2 \\
\text{s.t.} \quad & -x_1' + x_2 \ge 1 \\
& x_1' \le 0, \; x_2 \ge 0
\end{aligned} \tag{6}

To establish equivalence up to variable sign, both the candidate and ground truth are reformulated to convert variables whose bound constraint reads x ≤ u into x ≥ -u by flipping the signs of their constraint and objective coefficients.
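The slack-elimination step of Equation (5) can be sketched on a small dict-based LP representation. The representation and helper below are illustrative assumptions (DualSchool's actual data structures are not shown in the paper), and the uniqueness check that a slack appears in only one constraint is left to the caller for brevity.

```python
def eliminate_slack(objective, constraints, bounds):
    """Remove slack variables: a variable with zero objective weight that
    appears (coefficient +1) in an equality constraint and has a one-sided
    sign bound.  Mirrors the Equation (5) rewrite: s <= 0 gives '>=',
    s >= 0 gives '<='."""
    out = []
    for con in constraints:
        coeffs, sense, rhs = dict(con["coeffs"]), con["sense"], con["rhs"]
        if sense == "=":
            slacks = [v for v, a in coeffs.items()
                      if a == 1 and objective.get(v, 0) == 0 and v in bounds]
            if slacks:
                s = slacks[0]
                del coeffs[s]
                sense = ">=" if bounds[s] == "<=0" else "<="
        out.append({"coeffs": coeffs, "sense": sense, "rhs": rhs})
    return out

# Equation (5): min x1 + x2  s.t.  x1 + x2 + s = 1,  s <= 0
cons = [{"coeffs": {"x1": 1, "x2": 1, "s": 1}, "sense": "=", "rhs": 1}]
reduced = eliminate_slack({"x1": 1, "x2": 1}, cons, {"s": "<=0"})
# reduced: the single constraint  x1 + x2 >= 1
```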
In order to allow for sign-convention differences in free variables and those with double-sided finite bounds, CGED exploits the variable permutation property of GED by using the difference-of-positives transformation

    x ∈ R    =⇒    x = x⁺ − x⁻,   x⁺ ≥ 0,  x⁻ ≥ 0.

An example for the double-sided finite-bounded variable case is given in Appendix A.2.1.

When these modifications are used as pre-processing steps for GED, the overall procedure is able to recognize the correctness of a dual even if it differs from the ground truth in the following ways:

1. Objective sense – flipping the objective sense is allowed as long as all objective coefficient signs are also flipped. In P2DC, this corresponds to a common post-processing that is applied if, for instance, all the objective coefficients are negative.
2. Inequality sense – flipping the sense of an inequality is allowed as long as the signs of the coefficients and RHS are also flipped. In P2DC, this corresponds to a common post-processing that is applied if, for instance, all the problem data in a constraint is negative.
3. Variable and constraint permutation – reordering constraints and variables is allowed.
4. Slack variables – using slack variables to turn inequalities into equalities is allowed. In P2DC, slack variables appear in the dual when treating primal variables as free.
5. Variable sign – flipping the sign of a variable is allowed as long as the sign of its constraint and objective coefficients are also flipped. In P2DC, this corresponds to a common convention choice when defining dual variables associated with
primal inequality constraints.

Note that although CGED is designed specifically for the P2DC setting, the canonicalization procedures can be used more broadly to detect equivalence between formulations, or as a normalization procedure for systems that take a linear program as input. For other applications, it is important to consider exactly what should be preserved and what should be forgotten. In particular, due to its specialization to P2DC, CGED does not treat several symmetries which may be natural to forget in other areas, such as scaling of variables or constraints and variable substitutions.

5 Experiments

This section describes the experiments conducted to evaluate the performance of leading open LLMs on the DUALSCHOOL dataset.

Benchmark Instances DUALSCHOOL comprises over 1300 LP instances drawn from three main sources: (1) two-dimensional LPs from bounded toy polytopes, (2) continuous relaxations of small-scale combinatorial optimization instances, and (3) LP instances from prior work on natural language modeling benchmarks [2–4, 8]. For each instance, DUALSCHOOL includes ground-truth duals generated using symbolic dualization [27]. For the CORRECTION, VERIFICATION, and CLASSIFICATION tasks, DUALSCHOOL also includes duals with (labeled) errors; the error types are described in Appendix C. Table 1 summarizes the data sources.

Table 1: Mean (max) number of variables and constraints for each dataset.

              ComplexOR [3]  Easy LP [4]  NL4OPT [2]  NLP4LP [8]  2D        CO-Small
  Variables   4.1 (9)        2.8 (5)      2.0 (3)     2.2 (6)     2.0 (2)   3.9 (5)
  Constraints 4.5 (12)       4.3 (14)     2.9 (5)     3.1 (6)     5.7 (12)  3.5 (6)
  Instances   15             585          205         266         108       140

Language Models Due to resource limitations, the experiments consider only small and medium-sized open-weight LLMs. The models are evaluated in both the zero-shot and one-shot in-context learning settings. Readers are referred to Appendix A.3 for more details about model configurations and compute resources.
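For reference, the symbolic dualization used to generate the ground-truth duals can be sketched as follows. This is a simplified stand-in for what Dualization.jl does (function name and data layout are illustrative), dualizing min c⊤x s.t. a_j⊤x {≤, ≥, =} b_j with free x under the sign convention stated for Model 7 in Appendix A.1:

```python
def dualize(c, A, b, senses):
    """Dual of: min c@x s.t. A@x (senses) b, x free.
    Returns: max b@y s.t. A.T@y = c, with y_j signed by constraint sense."""
    n = len(c)
    At = [[row[i] for row in A] for i in range(n)]     # transpose of A
    ysign = {"<=": ">= 0", ">=": "<= 0", "=": "free"}  # Model 7 convention
    return {
        "objective": ("max", list(b)),
        "constraints": [(At[i], "=", c[i]) for i in range(n)],
        "var_signs": [ysign[s] for s in senses],
    }

# min 5x1 + 4x2  s.t.  2x1 + 3x2 >= 1  (x free)
dual = dualize(c=[5, 4], A=[[2, 3]], b=[1], senses=[">="])
print(dual["constraints"])  # [([2], '=', 5), ([3], '=', 4)]
print(dual["var_signs"])    # ['<= 0']
```

As the paper emphasizes, any consistent sign convention would also be valid, which is exactly why CGED must canonicalize before comparing.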
Evaluation Pipeline For GENERATION and CORRECTION, the LLM is prompted to produce gurobipy code that formulates the dual. The code is executed in a sandbox environment to create the gurobipy.Model object, which is then written to MPS for evaluation. That MPS file is then compared to that of the ground-truth dual using NGED [5], OBJ [8, 22], and the proposed CGED (Section 4). Instances that crash or cannot be parsed are counted as incorrect. For the VERIFICATION and CLASSIFICATION tasks, two methods for extracting an answer from the LLM response are tested: 1) XML from free-flow output and 2) enforced JSON schema.5 Problems are rendered to the model in LaTeX using the prompt template described in Appendix D.6

The DUALSCHOOL tasks have two kinds of outputs: models (GENERATION and CORRECTION) and choices (VERIFICATION and CLASSIFICATION). For tasks with model outputs, three metrics are reported: the canonical graph edit distance (CGED) from Section 4, the normalized graph edit distance (NGED) from Xing et al. [5], and the objective match (OBJ). Each of these is reported as an accuracy, i.e. how often the edit distance to the ground-truth dual is equal to zero. For tasks with choice outputs, the classification accuracy is reported, i.e. how often the LLM chose the correct choice.

5.1
Results for P2DC GENERATION

Table 2 reports the CGED, NGED, and OBJ accuracies across four benchmark datasets under both 0-shot and 1-shot prompting conditions. The execution accuracy (Exec%) column reports the percentage of instances for which the LLM code successfully produced an MPS file. Overall, even though Exec% is high for most models, no model reliably produces correct duals, even when prompted with small, synthetic instances. The best-performing model, Phi 4-14B, reaches 47.8% CGED and 53.7% objective accuracy on the NL4OPT samples in the 0-shot setting. Due to space limitations and relatively poor performance, the results for the instances coming from CO-Small and Easy LP are presented in Table 4 in Appendix A.3.

In all cases, OBJ consistently exceeds CGED. This highlights a key pitfall of objective-based evaluation: LLMs often produce duals with the correct objective value but incorrect or malformed structure. Based on an informal analysis of samples in this category (CGED is nonzero while OBJ is true), the most common mistake is omitting a (dual) variable bound that happened to be redundant. Conversely, NGED is too restrictive in this setting, giving consistently lower scores than CGED. These results underscore the need for convention-invariant evaluation like CGED in the P2DC setting. Surprisingly, one-shot prompting provides no consistent benefit and occasionally degrades performance (e.g. Phi 4 drops from 35.7% to 28.6% on the COMPLEXOR samples), suggesting limited applicability of in-context learning in the P2DC setting.
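The permutation invariance that NGED and CGED certify can be illustrated with a pure-Python sketch (this is not the benchmark's implementation, which uses graph edit distance): two tiny LPs match if some reordering of variables makes the objective identical and the constraint rows identical as a multiset.

```python
from itertools import permutations

def equivalent_up_to_permutation(c1, A1, b1, c2, A2, b2):
    """True iff LP2 equals LP1 after some variable permutation;
    constraint order is compared as a multiset (sorted rows)."""
    n = len(c1)
    rows1 = sorted((tuple(r), rb) for r, rb in zip(A1, b1))
    for p in permutations(range(n)):
        if [c2[i] for i in p] != list(c1):
            continue
        rows2 = sorted((tuple(r[i] for i in p), rb) for r, rb in zip(A2, b2))
        if rows2 == rows1:
            return True
    return False

# min x1 + 2*x2 s.t. 3x1 + 4x2 >= 1, versus the same LP with the
# variables written in the opposite order:
same = equivalent_up_to_permutation([1, 2], [[3, 4]], [1],
                                    [2, 1], [[4, 3]], [1])
print(same)  # True
```

A zero graph edit distance generalizes this check and, as the paper notes, additionally yields the optimal edit path when the programs differ.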
5 Using the Structured Outputs feature in Ollama: https://ollama.com/blog/structured-outputs
6 LaTeX is generated using https://jump.dev/JuMP.jl/stable/manual/models/#Print-the-model

Table 2: Aggregated accuracy results for the GENERATION task.

                                ComplexOR            NL4OPT               NLP4LP               2D
Model          Prompt   Exec%   NGED  OBJ   CGED    NGED  OBJ   CGED    NGED  OBJ   CGED    NGED  OBJ   CGED
Mistral-7B     0-shot   24.8    0     0     0       0     0     0       0     3.9   0       0     11.5  0
               1-shot   22.5    0     50.0  0       0     0     0       0     3.3   0       0     7.7   0
Phi 4-14B      0-shot   99.1    7.1   50.0  35.7    24.4  53.7  47.8    10.5  46.8  30.8    1.0   5.7   1.0
               1-shot   99.7    7.1   42.9  28.6    2.0   34.5  27.6    2.9   34.0  18.9    14.0  18.7  14.0
Gemma 3-12B    0-shot   69.9    0     0     0       0.6   3.7   3.0     0     4.6   0       0     8.5   0
               1-shot   92.5    0     0     0       0     0     0       0     3.0   0       0     5.8   0
Qwen 2.5-7B    0-shot   93.7    0     0     0       0     3.1   0.5     1.3   3.5   1.8     0     1.9   0
               1-shot   94.2    0     0     0       0     0     0       0     1.8   0       0     2.9   0
Qwen 2.5-14B   0-shot   92.2    7.1   14.3  7.1     4.3   20.7  13.6    2.7   10.9  3.6     0     2.1   0
               1-shot   93.6    0     0     0       0     0     0       0     3.1   0.4     10.4  23.6  10.4
Llama 3.1-8B   0-shot   94.8    0     0     0       0     1.7   1.1     0     1.4   0       0     6.6   0
               1-shot   95.3    0     0     0       0     0     0       0     4.6   1.4     0     9.7   0
Llama 3.3-70B  0-shot   73.8    16.7  41.7  25.0    17.0  45.8  39.9    8.2   30.6  18.9    0     11.1  0
               1-shot   100.0   0     21.4  14.3    0     19.0  17.6    0.4   24.8  16.0    1.9   6.5   1.9

5.2 Results for P2DC CORRECTION

Figure 2 reports the performance of LLMs on the CORRECTION task. All models struggle to reliably repair the incorrect duals,
with accuracies below 60% across all models and error types. Similarly to the GENERATION task, the Phi 4 and Llama 3.3 models outperform the others. These uniformly low accuracies, even on error types that are relatively easy to detect as shown in the next section, reveal that CORRECTION is essentially just as challenging as GENERATION for the open LLMs considered.

Figure 2: Accuracy for the CORRECTION task by model and error type.

5.3 Results for P2DC VERIFICATION and CLASSIFICATION

Figure 3 reports the accuracies for the primal-dual pair VERIFICATION task (left) and the CLASSIFICATION task (right), with a red dashed line denoting the random-guess baseline (50% for VERIFICATION, 25% for CLASSIFICATION). Overall, results are slightly negative: most models cluster at or below the random-guess baseline, reflecting limited reliability of predictions. Importantly, note that in the VERIFICATION task, across all models there is a clear bias towards predicting "no", resulting in high accuracy on all but the "Correct" category (black), the only one where the right answer is "yes". This is even more prevalent when using structured outputs, as shown in Figure 4 in Appendix B. In the error classification task, detecting the flipped objective sense stands out as the easiest error type, with the best-performing models achieving accuracies between 70% and 90%. However, the more nuanced error types remain challenging, with most models' accuracies hovering near or below the random-guess line.

Figure 3: Accuracy for the VERIFICATION and CLASSIFICATION tasks by model and error type.

6 Conclusion

This paper introduced DUALSCHOOL, the first comprehensive benchmark for probing an LLM's ability to perform and critique primal-to-dual conversions in linear programming.
DUALSCHOOL combines four structured tasks (generation, verification, correction, and error classification) with a graph-based correctness detector that goes beyond simple token matching or objective-value checks, and is specifically designed to avoid false positives and negatives for the primal-to-dual task. Code is published alongside the paper, as well as a dataset of P2DC samples based on both synthetic and real-world LPs, each including a set of duals with injected errors. Preliminary experiments using leading open models show that the best LLMs tested achieve a best-case dualization accuracy of only 47.8%, with similarly low performance on the derivative tasks.

From an education standpoint, it is important to communicate to students, especially those without knowledge of generative AI and its implementation, that LLMs are not in the same equivalence class as, e.g., MATLAB or Julia. Although LLMs can return the "recipe" for various mathematical tasks, they struggle to follow these recipes even for simple tasks that are expected of undergraduate students in introductory classes. Moreover, it is important to be aware that LLM responses often feature high-quality writing and rich formatting, making it easy to believe the response is accurate. From a research standpoint, DUALSCHOOL provides a simple yet meaningful benchmark to measure progress in LLMs and reasoning systems over the next years. Furthermore, thanks to the automatic labeling and evaluation algorithms, DUALSCHOOL can be directly used in fine-tuning methods such
as reinforcement learning with symbolic feedback [7] that can leverage rich reward signals. Future directions include extending DUALSCHOOL to quadratic and conic formulations and evaluating its efficacy as a fine-tuning dataset.

Acknowledgements

This research was partly funded by NSF awards 2112533 and DGE-2039655. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of NSF. Experiments were run on the PACE Phoenix [28] cluster.

References

[1] Segev Wasserkrug, Leonard Boussioux, Dick den Hertog, Farzaneh Mirzazadeh, Ş. İlker Birbil, Jannis Kurtz, and Donato Maragno. Enhancing decision making through the integration of large language models and operations research optimization. Proceedings of the AAAI Conference on Artificial Intelligence, 39(27):28643–28650, Apr. 2025. doi: 10.1609/aaai.v39i27.35090. URL https://ojs.aaai.org/index.php/AAAI/article/view/35090.
[2] Rindranirina Ramamonjison, Timothy Yu, Raymond Li, Haley Li, Giuseppe Carenini, Bissan Ghaddar, Shiqi He, Mahdi Mostajabdaveh, Amin Banitalebi-Dehkordi, Zirui Zhou, and Yong Zhang. NL4Opt competition: Formulating optimization problems based on their natural language descriptions. In Marco Ciccone, Gustavo Stolovitzky, and Jacob Albrecht, editors, Proceedings of the NeurIPS 2022 Competitions Track, volume 220 of Proceedings of Machine Learning Research, pages 189–203. PMLR, 28 Nov–09 Dec 2022. URL https://proceedings.mlr.press/v220/ramamonjison23a.html.
[3] Ziyang Xiao, Dongxiang Zhang, Yangjun Wu, Lilin Xu, Yuan Jessica Wang, Xiongwei Han, Xiaojin Fu, Tao Zhong, Jia Zeng, Mingli Song, et al. Chain-of-experts: When LLMs meet complex operations research problems. In The Twelfth International Conference on Learning Representations, 2023.
[4] Xuhan Huang, Qingning Shen, Yan Hu, Anningzhe Gao, and Benyou Wang. Mamo: A mathematical modeling benchmark with solvers. arXiv preprint arXiv:2405.13144, 2024.
[5] Linzi Xing, Xinglu Wang, Yuxi Feng, Zhenan Fan, Jing Xiong, Zhijiang Guo, Xiaojin Fu, Rindra Ramamonjison, Mahdi Mostajabdaveh, Xiongwei Han, et al. Towards human-aligned evaluation for linear programming word problems. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 16550–16556, 2024.
[6] Subbarao Kambhampati, Kaya Stechly, and Karthik Valmeekam. (How) do reasoning models reason? Annals of the New York Academy of Sciences, 2025.
[7] Piyush Jha, Prithwish Jana, Pranavkrishna Suresh, Arnav Arora, and Vijay Ganesh. RLSF: Reinforcement learning via symbolic feedback. arXiv preprint arXiv:2405.16661, 2024.
[8] Ali AhmadiTeshnizi, Wenzhi Gao, Herman Brunborg, Shayan Talaei, and Madeleine Udell. OptiMUS-0.3: Using large language models to model and solve optimization problems at scale. arXiv preprint arXiv:2407.19633, 2024.
[9] Dimos Tsouros, Hélène Verhaeghe, Serdar Kadıoğlu, and Tias Guns. Holy Grail 2.0: From natural language to constraint models. arXiv preprint arXiv:2308.01589, 2023.
[10] Connor Lawless, Jakob Schoeffer, Lindy Le, Kael Rowan, Shilad Sen, Cristina St. Hill, Jina Suh, and Bahareh Sarrafzadeh. "I want it that way": Enabling interactive decision support using large language models and constraint programming. ACM Transactions on Interactive Intelligent Systems, 14(3):1–33, 2024.
[11] Connor Lawless, Yingxi Li, Anders Wikum, Madeleine Udell, and Ellen Vitercik. LLMs for cold-start cutting plane separator configuration. arXiv preprint arXiv:2412.12038, 2024.
[12] Hao Chen, Gonzalo E Constante-Flores, and Can Li. Diagnosing infeasible optimization problems using large language models. INFOR: Information Systems and Operational Research, 62(4):573–587, 2024.
[13] Chengrun Yang, Xuezhi Wang,
Yifeng Lu, Hanxiao Liu, Quoc V Le, Denny Zhou, and Xinyun Chen. Large language models as optimizers. arXiv preprint arXiv:2309.03409, 2023.
[14] Pepper Miller and Kristen DiCerbo. LLM based math tutoring: Challenges and dataset, 2024.
[15] Jakub Macina, Nico Daheim, Ido Hakimi, Manu Kapur, Iryna Gurevych, and Mrinmaya Sachan. MathTutorBench: A benchmark for measuring open-ended pedagogical capabilities of LLM tutors. arXiv preprint arXiv:2502.18940, 2025.
[16] Ryan Mok, Faraaz Akhtar, Louis Clare, Christine Li, Jun Ida, Lewis Ross, and Mario Campanelli. Using large language models for grading in education: An applied test for physics. Physics Education, 60(3):035006, 2025.
[17] Alan Hickey, Cathal Ó Faoláin, and Paul Cuffe. Large language models in power engineering education: A case study on solving optimal dispatch coursework problems. In 2024 21st International Conference on Information Technology Based Higher Education and Training (ITHET), pages 1–5. IEEE, 2024.
[18] Arkadi Nemirovski. Introduction to Linear Optimization. World Scientific, 2024.
[19] Bryan Carsberg. On the linear programming approach to asset valuation. Journal of Accounting Research, pages 165–182, 1969.
[20] Rindranirina Ramamonjison, Haley Li, Timothy T Yu, Shiqi He, Vishnu Rengan, Amin Banitalebi-Dehkordi, Zirui Zhou, and Yong Zhang. Augmenting operations research with auto-formulation of optimization models from problem descriptions. arXiv preprint arXiv:2209.15565, 2022.
[21] Ganesh Prasath and Shirish Karande. Synthesis of mathematical programs from natural language specifications. arXiv preprint arXiv:2304.03287, 2023.
[22] Chenyu Huang, Zhengyang Tang, Dongdong Ge, Shixi Hu, Ruoqing Jiang, Benyou Wang, Zizhuo Wang, and Xin Zheng. ORLM: A customizable framework in training large models for automated optimization modeling. arXiv e-prints, pages arXiv–2405, 2024.
[23] Joar Skalse, Nikolaus Howe, Dmitrii Krasheninnikov, and David Krueger.
Defining and characterizing reward gaming. Advances in Neural Information Processing Systems, 35:9460–9471, 2022.
[24] Benjamin Assarf, Ewgenij Gawrilow, Katrin Herr, Michael Joswig, Benjamin Lorenz, Andreas Paffenholz, and Thomas Rehn. Computing convex hulls and counting integer points with polymake. Mathematical Programming Computation, 9:1–38, 2017.
[25] Maxime Gasse, Didier Chételat, Nicola Ferroni, Laurent Charlin, and Andrea Lodi. Exact combinatorial optimization with graph convolutional neural networks. Advances in Neural Information Processing Systems, 32, 2019.
[26] Prateek Gupta, Maxime Gasse, Elias Khalil, Pawan Mudigonda, Andrea Lodi, and Yoshua Bengio. Hybrid models for learning to branch. Advances in Neural Information Processing Systems, 33:18087–18097, 2020.
[27] Guilherme Bodin, Joaquim, Benoît Legat, Oscar Dowson, Pietro Monticone, Mathieu Besançon, Lukas Barner, Miles Lubin, and martincornejo. jump-dev/dualization.jl: v0.6.0, May 2025. URL https://doi.org/10.5281/zenodo.15338697.
[28] PACE. Partnership for an Advanced Computing Environment (PACE), 2017. URL http://www.pace.gatech.edu.
[29] Arthur T Benjamin. Sensible rules for remembering duals—the SOB method. SIAM Review, 37(1):85–87, 1995.
[30] J. Löfberg. Dualize it: Software for automatic primal and dual conversions of conic programs. Optimization Methods and Software, 24:313–325, 2009. ISSN 1055-6788. doi: 10.1080/10556780802553325.
[31] MATLAB. Version 7.10.0 (R2010a). The MathWorks Inc., Natick, Massachusetts, 2010.
[32] Miles Lubin, Oscar Dowson, Joaquim Dias Garcia, Joey Huchette, Benoît Legat, and Juan Pablo Vielma. JuMP 1.0: Recent improvements to a modeling language for mathematical optimization. Mathematical Programming Computation, 15:581–589, 2023. doi: 10.1007/s12532-023-00239-3.
[33] Jeff Bezanson, Alan Edelman, Stefan Karpinski, and Viral B Shah. Julia: A fresh approach to numerical computing. SIAM Review, 59(1):65–98, 2017. doi: 10.1137/141000671. URL https://epubs.siam.org/doi/10.1137/141000671.
[34] Yansen Zhang, Qingcan Kang, Wing Yin Yu, Hailei Gong, Xiaojin Fu, Xiongwei Han, Tao Zhong, and Chen Ma. Decision information meets large language models: The future of explainable operations research. arXiv preprint arXiv:2502.09994, 2025.
[35] Jonas Charfreitag and Mohammed Ghannam. GeCO, 2023. URL https://github.com/CharJon/GeCO.

A Appendix

A.1 Primal to dual conversion methods

Standard Form A common method for forming the dual of a primal program is to first memorize a standard-form primal-dual pair, e.g.

    min c⊤x  s.t.  Ax ≤ b    =⇒    max b⊤y  s.t.  A⊤y = c,  y ≥ 0,

then convert the given primal to that standard form and apply the memorized map. If the standard-form primal memorized is Model 1, this method coincides with SOB (modulo objective sense). The dual of Model 1 is included below as Model 7:

    max_y   Σ_{j∈I≤} b_j y_j + Σ_{j∈I≥} b_j y_j + Σ_{j∈I=} b_j y_j    (7a)
    s.t.    A⊤y = c                                                    (7b)
            y_j ≥ 0   ∀j ∈ I≤                                          (7c)
            y_j ≤ 0   ∀j ∈ I≥                                          (7d)
            y_j ∈ R   ∀j ∈ I=                                          (7e)

Sensible-Odd-Bizarre Benjamin [29] describes the Sensible-Odd-Bizarre (SOB) method for remembering how to write the dual of a linear program. Variants of the method, i.e. table- or rule-based approaches, are widely used as a practical approach to write the dual without having to go through a standard form. The method is summarized in Table 3.

Table 3: The Sensible-Odd-Bizarre method for mnemonic dualization [29].
    Primal (Dual)                               Dual (Primal)
    Objective:
      Maximize c⊤x (Minimize b⊤y)               Maximize b⊤y (Minimize c⊤x)
    Constraint j:                               Variable y_j (or x_i):
      Sensible   a_j⊤x ≥ b_j                      y_j ≥ 0
      Odd        a_j⊤x = b_j                      y_j ∈ R
      Bizarre    a_j⊤x ≤ b_j                      y_j ≤ 0
    Variable x_i (or y_j):                      Constraint i:
      Sensible   x_i ≥ 0                          a_i⊤y ≤ c_i
      Odd        x_i ∈ R                          a_i⊤y = c_i
      Bizarre    x_i ≤ 0                          a_i⊤y ≥ c_i

Lagrangian Duality The Lagrangian route to deriving the dual program starts by forming the Lagrangian function by introducing Lagrangian multipliers. The dual program is then the problem of maximizing the infimum of the Lagrangian over the primal variables, subject to the Lagrangian multiplier sign constraints (for minimization primals).

1. Take as input the primal program Model 1.
2. Form the Lagrangian by introducing multipliers y_j. The example below uses the sign convention y_j ≥ 0 ∀j ∈ I≤, y_j ≤ 0 ∀j ∈ I≥, y_j ∈ R ∀j ∈ I=, which corresponds to the residual convention b_j − a_j⊤x.7

    L(x, y) = c⊤x + Σ_{j∈I≤} y_j (b_j − a_j⊤x) + Σ_{j∈I≥} y_j (b_j − a_j⊤x) + Σ_{j∈I=} y_j (b_j − a_j⊤x)
            = b⊤y + x⊤(c − A⊤y)

3. Form the dual function by taking the infimum of the Lagrangian over x:

    d(y) = inf_{x∈R^n} L(x, y) = b⊤y  if c − A⊤y = 0,  and −∞ otherwise

4. Maximize the dual function subject to the Lagrangian multiplier constraints:

    max_y d(y)  s.t.  y_j ≥ 0 ∀j∈I≤,  y_j ≤ 0 ∀j∈I≥,  y_j ∈ R ∀j∈I=
        =⇒    max_y b⊤y  s.t.  A⊤y = c,  y_j ≥ 0 ∀j∈I≤,  y_j ≤ 0 ∀j∈I≥,  y_j ∈ R ∀j∈I=

7 Note that the opposite sign convention can be used if using a_j⊤x − b_j for that residual.

Automatic Dualization Several software systems allow for the automatic dualization of convex programs, including YALMIP in MATLAB [30, 31] and JuMP in Julia [32, 33]. DualSchool uses Dualization.jl [27] from the JuMP ecosystem, which implements a standard form-based approach.

A.2 CGED Implementation Details

Besides the canonicalization steps described in Section 4, there are several differences in how CGED is implemented compared to the EOR [34] implementation of
NGED:

1. Variable nodes have only one feature c_i, compared to c_i, l_i, and u_i in NGED. This is due to the fact that variable bounds are included in the constraint nodes.
2. Constraint nodes have only one feature b_j, compared to l_j and u_j in NGED, since in CGED constraints are reformulated to a_j⊤x ≥ b_j rather than l_j ≤ a_j⊤x ≤ u_j. This allows considering as equivalent l_j ≤ a_j⊤x ≤ u_j ⇐⇒ −u_j ≤ −a_j⊤x ≤ −l_j and a_j⊤x = b_j ⇐⇒ −a_j⊤x = −b_j.

Note that the EOR [34] implementation differs from NGED as described in Xing et al. [5] in that the EOR version does not normalize constraints with less-than sense to greater-than.

A.2.1 Variable sign canonicalization example

The treatment of free and double-sided bounded variables in CGED relies on combining the difference-of-positives trick with the variable permutation invariance of GED. In the case of double-sided bounds, it also relies on the inequality sense normalization. Consider the example ground-truth and candidate pair in Equation (8):

    min x1 + x2  s.t.  x1 + x2 ≥ 1,  1 ≤ x1 ≤ 2,  x2 ≥ 0
        ⇐⇒    min −x1 + x2  s.t.  −x1 + x2 ≥ 1,  −2 ≤ x1 ≤ −1,  x2 ≥ 0    (8)

The algorithm begins by moving the double-sided bounds to the constraints, normalizing to ≥ sense:

    min x1 + x2  s.t.  x1 + x2 ≥ 1,  x1 ≥ 1,  −x1 ≥ −2,  x1 ∈ R,  x2 ≥ 0
        ⇐⇒    min −x1 + x2  s.t.  −x1 + x2 ≥ 1,  x1 ≥ −2,  −x1 ≥ 1,  x1 ∈ R,  x2 ≥ 0    (9)

Then the difference-of-positives transformation is applied to the free variables:

    min x1⁺ − x1⁻ + x2  s.t.  x1⁺ − x1⁻ + x2 ≥ 1,  −x1⁺ + x1⁻ ≥ −2,  x1⁺ − x1⁻ ≥ 1,  x1⁺ ≥ 0, x1⁻ ≥ 0, x2 ≥ 0
        ⇐⇒    min −x1⁺ + x1⁻ + x2  s.t.  −x1⁺ + x1⁻ + x2 ≥ 1,  x1⁺ − x1⁻ ≥ −2,  −x1⁺ + x1⁻ ≥ 1,  x1⁺ ≥ 0, x1⁻ ≥ 0, x2 ≥ 0    (10)

Observe that the formulations are identical when swapping x1⁺ for x1⁻ and vice versa, which corresponds to a node order permutation that GED handles naturally. Indeed, the difference-of-positives transformation can in principle be applied to all variables. Then, all bounds can be treated as constraints, removing the need for the explicit x ≤ u =⇒ x ≥ −u canonicalization step.
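The difference-of-positives step in this example can be sketched as follows (an illustrative helper, not the CGED code); it splits each free variable's column into a positive copy and a negated copy:

```python
def difference_of_positives(c, A, free_idx):
    """Replace each free variable x_i (i in free_idx) by x_i+ - x_i-:
    duplicate its objective/constraint column with flipped signs."""
    c2, A2 = [], [[] for _ in A]
    for i, ci in enumerate(c):
        cols = [(ci, 1), (-ci, -1)] if i in free_idx else [(ci, 1)]
        for cj, sgn in cols:
            c2.append(cj)
            for r2, r in zip(A2, A):
                r2.append(sgn * r[i])
    return c2, A2

# LHS of Equation (9): min x1 + x2 with rows x1+x2>=1, x1>=1, -x1>=-2, x1 free
cL, AL = difference_of_positives([1, 1], [[1, 1], [1, 0], [-1, 0]], {0})
print(cL)  # [1, -1, 1]
print(AL)  # [[1, -1, 1], [1, -1, 0], [-1, 1, 0]]
```

Applying the same helper to the right-hand side of Equation (9) yields a program that matches this one after swapping the two split columns and permuting rows, which GED absorbs as a node permutation.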
However, applying the transformation to all variables would introduce many extra nodes and edges, which can needlessly slow down the graph edit distance computation and extend the optimal edit path length. Thus, CGED uses explicit canonicalization for one-sided finite-bounded variables (the most common case), introducing additional variables only when handling free and double-bounded variables.

A.3 Additional details on experimental setup

Dataset Generation The DUALSCHOOL samples come from three sources:

1. 2D LPs: 36 canonical polytopes, each with three distinct objective vectors, ranging from simple shapes (e.g., unit square, triangle) to more complex ones (e.g., hexagon, irregular pentagon).
2. CO Relaxations: Seven families of combinatorial optimization instances are generated using GeCO [35]: maximum independent set, multidimensional knapsack, maximum cut, maximum clique, minimum vertex cover, packing, and production planning.
3. LLM4OPT-Derived LPs:
   • NLP4LP [8]: use the provided gurobipy code directly.
   • NL4OPT [2], Easy LP [4], ComplexOR [3]: these benchmarks only supply an objective value and a prompt. Thus, Llama 3.3 is used, following [8], to generate gurobipy formulations for each sample. These formulations are checked using the objective value and retried until there is a match. Any instance for which there is no match within five retries is excluded from DUALSCHOOL.

LLM Configuration All models run with temperature 0.0, context window size 8192, repeat penalty 1.1, top_k 40, top_p 0.9, and min_p 0.05.
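These sampling settings translate directly into an Ollama request options block; the sketch below uses Ollama's documented option names (the model tag and prompt are placeholders, not values from the paper):

```python
# Sampling configuration applied to all models (option names per Ollama's API)
OLLAMA_OPTIONS = {
    "temperature": 0.0,      # greedy decoding
    "num_ctx": 8192,         # context window size
    "repeat_penalty": 1.1,
    "top_k": 40,
    "top_p": 0.9,
    "min_p": 0.05,
}

# Example request payload shape for Ollama's /api/generate endpoint
payload = {"model": "phi4", "prompt": "...", "options": OLLAMA_OPTIONS}
print(payload["options"]["num_ctx"])  # 8192
```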
Inference is performed via Ollama.8

Compute Resources The paper used in total about 1000 GPU-hours to run LLM inference, for both the experiments presented and those run during development. The evaluations are run on CPU and are relatively fast, adding up to only about 1 CPU-hour. All experiments were run on a node with two Intel Xeon 6426Y (2.5 GHz) CPUs and 8 NVIDIA L40S 48GB GPUs.

B Additional Experimental Results

B.1 LLMs know the procedure

To verify that the evaluated models "know" how to dualize an LP, the authors prompt and manually evaluate each model's response to "How do you convert a primal linear program to its dual?" Since this prompt does not request a highly detailed response nor an example, LLMs are free to remain at a high level, where it is easier to describe a valid procedure; details such as converting correctly to the chosen standard form are a single sentence rather than a careful subroutine implementation. In this setting, all models evaluated produced valid procedures except for Gemma 3-12B, which has an incorrect sign in the objective coefficients, and Mistral 7B, which forgot to treat primal equality constraints. Notably, all models used a standard-form-type method, though with varying standard forms.

B.2 Symbolic and Synthetic Verification of LLM P2DC

Although the main task of DUALSCHOOL is dualization of specific linear programs, i.e. single instances, one may consider instead prompting the LLMs to generate code that carries out the dualization more generically, i.e. to write Python code implementing symbolic dualization. This section contains experimental results for such a setting, where the LLM is asked to create a function that takes the primal problem data tuple (A, b, c, l, u, objective sense, constraint senses) and returns its dual as a gurobipy model. Table 5 reports the aggregated accuracy results for this task, for both open and closed models.
8 https://github.com/ollama/ollama

Table 4: Aggregated accuracy results for the GENERATION task (continued).

                                CO small            Easy LP
Model          Prompt   Exec%   NGED  OBJ  CGED    NGED  OBJ   CGED
Mistral-7B     0-shot   23.0    0     0    0       0     0     0
               1-shot   21.3    0     0    0       0     0     0
Phi 4-14B      0-shot   96.2    0     0    0       8.1   24.8  7.9
               1-shot   94.0    0     0    0       5.5   12.9  6.0
Gemma 3-12B    0-shot   64.8    0     0    0       0     0.8   0
               1-shot   91.4    0     0    0       0     0     0
Qwen 2.5-7B    0-shot   80.9    0     0    0       0     1.5   0
               1-shot   89.3    0     0    0       0     0.5   0
Qwen 2.5-14B   0-shot   84.1    0     0    0       0     5.6   0
               1-shot   96.2    0     0    0       0     0.4   0
Llama 3.1-8B   0-shot   92.1    0     0    0       0     1.0   0
               1-shot   89.0    0     0    0       0     0.6   0
Llama 3.3-70B  0-shot   71.0    0     0    0       3.5   10.4  3.5
               1-shot   99.8    0     0    0       0     1.0   0

Figure 4: Accuracy for the VERIFICATION and CLASSIFICATION DUALSCHOOL tasks when using structured outputs.

Similarly to the main GENERATION task, the execution accuracy is high (in fact 100%), indicating the LLMs reliably produce executable code. However, the dualization accuracy of the produced routines is very low,
with only chatGPT 4o achieving a non-zero CGED.

B.3 Common mistakes

An informal analysis reveals the following common mistakes made by LLMs in the GENERATION task (when the response is almost correct):

• gurobipy defaults – When declaring a new variable in gurobipy, the default lower bound is zero. Sometimes, when attempting to model x ≤ 0, the LLM forgets to set the lower bound to −∞, effectively adding x = 0 instead.
• Incorrect bound senses – Often, the variable bound senses are consistently wrong, i.e. all bounds are flipped compared to the ground truth, but the constraint and objective coefficients are not flipped accordingly.
• Missing dual variables – LLMs tend to forget to include dual variables corresponding to primal variable upper bounds when there is a finite lower bound on the same primal variable.

Table 5: Aggregated accuracy results for the symbolic P2DC task.

                        ComplexOR          NL4OPT             NLP4LP             Easy LP
Model                   NGED  OBJ   CGED   NGED  OBJ   CGED   NGED  OBJ   CGED   NGED  OBJ  CGED
Claude 3.5              0     0     0      0     0     0      0     0     0      0     0    0
Claude 3.7              0     14.3  0      0     0.5   0      0     3.3   0      0     0    0
Gemini 1.5 Flash        0     14.3  0      0     0     0      0     0.4   0      0     0    0
Gemini 2.0 Flash        0     14.3  0      0     33.2  0      0     27.4  0      0     0    0
Gemini 2.5 Flash        0     0     0      0     0     0      0     0     0      0     0    0
Gemma3 12B              0     0     0      0     0     0      0     0     0      0     0    0
Gemma3 27B              0     0     0      0     0     0      0     4.1   0      0     0    0
Gemma3 4B               0     0     0      0     0     0      0     0     0      0     0    0
chatGPT 4o              7.1   7.1   7.1    0     0.5   0      0     2.9   0      0     0    0
chatGPT 4o mini         0     0     0      0     0     0      0     0     0      0     0    0
chatGPT 4o mini-high    0     0     0      0     0     0      0     0     0      0     0    0
chatGPT O3              0     0     0      0     0     0      0     0     0      0     0    0
Llama3.1 8B             0     7.1   0      0     0     0      0     7.9   0      0     0    0
Llama3.2 3B             0     0     0      0     0     0      0     0     0      0     0    0
Phi 4                   0     0     0      0     0     0      0     0     0      0     0    0
Qwen2.5 14B             0     0     0      0     0     0      0     0     0      0     0    0
Qwen2.5 7B              0     0     0      0     0     0      0     0     0      0     0    0

C Error Types

To systematically inject errors into ground-truth dual programs, the paper considers the following error types.

Wrong Objective Sense Flip the sense of the dual objective (minimize ↔ maximize), without adjusting the coefficients:

    min 5x1 + 4x2  s.t.  2x1 + 3x2 ≥ 1,  x1 ≥ 0, x2 ≥ 0    =⇒    max 5x1 + 4x2  s.t.
    2x1 + 3x2 ≥ 1,  x1 ≥ 0, x2 ≥ 0

Missing Variable Randomly select a variable and delete it from the model:

    min 5x1 + 4x2  s.t.  2x1 + 3x2 ≥ 1,  x1 ≥ 0, x2 ≥ 0    =⇒    min 5x1  s.t.  2x1 ≥ 1,  x1 ≥ 0

Missing Constraint Randomly select a constraint and delete it from the model:

    min 5x1 + 4x2  s.t.  2x1 + 3x2 ≥ 1,  x1 ≥ 0, x2 ≥ 0    =⇒    min 5x1 + 4x2  s.t.  x1 ≥ 0, x2 ≥ 0

Flipped Constraint Sense Randomly select a constraint
and change its sense:

    min 5x1 + 4x2  s.t.  2x1 + 3x2 ≥ 1,  x1 ≥ 0, x2 ≥ 0    =⇒    min 5x1 + 4x2  s.t.  2x1 + 3x2 ≤ 1,  x1 ≥ 0, x2 ≥ 0

Flipped Bound Sense Randomly select a variable and change the sense of its bound constraint:

    min 5x1 + 4x2  s.t.  2x1 + 3x2 ≥ 1,  x1 ≥ 0, x2 ≥ 0    =⇒    min 5x1 + 4x2  s.t.  2x1 + 3x2 ≥ 1,  x1 ≥ 0, x2 ≤ 0

D Prompt Formats

Figure 5 includes a visualization of the prompt format used in the experiments.

prompt_template:

    You are an expert optimization practitioner and mathematician. Your task is to read and
    understand the primal linear program below and produce python code for the dual linear
    program in python with gurobipy. Here is the given primal problem:
    {primal_problem}
    Put all your code between two '=====' lines, like this, and use the boiler plate code:
    =====
    {code_format}
    =====
    Generate the complete code, including the dual model definition, variables, constraints,
    objective function. It must be runnable. Take a deep breath and think step by step about
    the primal to dual conversion process.

primal_problem (example):

    Max 100 y[0] + 200 y[1] + 150 y[2]
    Subject to
      x[0]_lb : x[0] ≥ 0
      x[1]_lb : x[1] ≥ 0
      y[0]_lb : y[0] ≥ 0
      y[1]_lb : y[1] ≥ 0
      y[2]_lb : y[2] ≥ 0
      Budget : 3 x[0] + 4 x[1] ≤ 4
      Coverage_0 : -x[0] + y[0] ≤ 0
      Coverage_1 : -x[1] + y[1] ≤ 0
      Coverage_2 : -x[0] + y[2] ≤ 0

code_format:

    import gurobipy as gp
    from gurobipy import GRB, quicksum

    def create_dual_model():
        model = gp.Model("dual_model")
        # TODO: create the dual model
        ...
        model.optimize()
        return model

Figure 5: The prompt template used in the Section 5 experiments.
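Since the template asks the model to fence its code between '=====' lines, one way to recover the code from a free-form response is a simple split; the sketch below is illustrative, not the paper's parser (recall that responses that cannot be parsed are counted as incorrect):

```python
def extract_code(response):
    """Pull the code between the first pair of '=====' fences;
    return None if the fences are missing or unbalanced."""
    parts = response.split("=====")
    return parts[1].strip() if len(parts) >= 3 else None

reply = "Sure!\n=====\nimport gurobipy as gp\n=====\nDone."
print(extract_code(reply))        # import gurobipy as gp
print(extract_code("no fences"))  # None
```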
arXiv:2505.21784v1 [cs.AI] 27 May 2025

Towards Safety Reasoning in LLMs: AI-agentic Deliberation for Policy-embedded CoT Data Creation

Tharindu Kumarage1,2, Ninareh Mehrabi1, Anil Ramakrishna1, Xinyan Zhao1, Richard Zemel1, Kai-Wei Chang1, Aram Galstyan1, Rahul Gupta1, Charith Peris1
1 Amazon Nova Responsible AI, 2 Arizona State University
{tharindd, gupra, perisc}@amazon.com

Abstract

Safety reasoning is a recent paradigm where LLMs reason over safety policies before generating responses, thereby mitigating limitations in existing safety measures such as over-refusal and jailbreak vulnerabilities. However, implementing this paradigm is challenging due to the resource-intensive process of creating high-quality policy-embedded chain-of-thought (CoT) datasets while ensuring reasoning remains accurate and free from hallucinations or policy conflicts. To tackle this, we propose AIDSAFE: Agentic Iterative Deliberation for Safety Reasoning, a novel data generation recipe that leverages multi-agent deliberation to iteratively expand reasoning on safety policies. A data refiner stage in AIDSAFE ensures high-quality outputs by eliminating repetitive, redundant, and deceptive thoughts. AIDSAFE-generated CoTs provide a strong foundation for supervised fine-tuning (SFT)-based safety training. Additionally, to address the need for preference data in alignment stages, such as DPO training, we introduce a supplemental recipe that uses belief augmentation to create distinct selected and rejected CoT samples. Our evaluations demonstrate that AIDSAFE-generated CoTs achieve superior policy adherence and reasoning quality. Consequently, we show that fine-tuning open-source LLMs on these CoTs can significantly improve safety generalization and jailbreak robustness while maintaining acceptable utility and over-refusal accuracy. AIDSAFE-generated CoT datasets can be found here.
1 Introduction

Recently, there has been a paradigm shift in LLM safety training towards "safety reasoning"—an approach where models explicitly reason over safety policies before generating responses (Jaech et al., 2024; Guan et al., 2024). Safety reasoning, typically implemented through Chain-of-Thought (CoT) reasoning, has shown promise in improving jailbreak robustness and reducing over-refusals (Guan et al., 2024; Zaremba et al., 2025). However, adopting this paradigm presents significant data challenges. Effective safety reasoning requires high-quality CoTs that explicitly reason over a given set of safety policies. Specifically, for each prompt, we need well-reasoned CoT-response pairs. Obtaining such data through human annotations is prohibitively expensive due to the subjective nature of safety reasoning. Furthermore, generating CoTs that comprehensively cover multiple policies is time-consuming and cognitively demanding. As a result, an alternative approach is to leverage LLMs themselves to generate policy-embedded CoTs (Guan et al., 2024). However, this is a non-trivial task due to two key challenges. (1) Resource constraints: generating high-quality CoTs requires capable reasoning models (Jaech et al., 2024; Guo et al., 2025). However, training or acquiring such models is costly, making it inaccessible to most open-source initiatives. (2) Flawed reasoning: even with access to a powerful LLM, generated reasoning can be incorrect, deceptive, or misaligned with safety policies due to hallucinations. Additionally, safety policies are often inherently fuzzy or conflicting, further complicating the generation of reliable reasoning data. We address these data challenges by introducing a novel data generation recipe based
on Agentic Iterative Deliberation for SAFEty reasoning (AIDSAFE), designed to generate high-quality policy-embedded CoT datasets without requiring an expensive reasoning-capable generator. Our approach leverages collaborative reasoning and refinement in a multi-agent environment to generate high-quality thoughts that reason over safety policies (examples in Appendix E). The deliberation stage in AIDSAFE incorporates an iterative process where multiple agents collaboratively expand the thoughts over a defined set of policies to come up with the best response. This stage ends once the agents reach a consensus or exhaust a predefined deliberation budget. In the second, refiner stage, the output of the deliberation stage is post-processed to filter out redundant, deceptive, or policy-inconsistent thoughts, ensuring that the generated CoTs are of high quality and adhere to the specified policies. Our approach is inspired by related studies where multi-agent collaboration has been shown to reduce hallucinations and enhance reasoning reliability in tasks such as mathematical reasoning, motivating our adoption of an agentic framework for CoT generation in the context of safety reasoning (Du et al., 2023). We evaluate AIDSAFE outputs using two approaches. First, we assess the quality of generated CoTs through data quality metrics such as faithfulness to safety policies, completeness, relevance, and coherence. Second, we fine-tune open-source models, such as Mixtral (Jiang et al., 2023) and Qwen (Yang et al., 2024), on AIDSAFE-generated CoTs and assess their impact on models' safety. We find that AIDSAFE improves the models both in safety generalization and jailbreak robustness while incurring minimal regression on their utility and over-refusal accuracy.
Additionally, we introduce a supplemental recipe that leverages a belief augmentation model to generate diverse preference data that can be used in alignment stages such as Direct Preference Optimization (DPO) (Rafailov et al., 2024). We refer to this model as the "ear-whisperer" agent, whose role is to subtly influence the target LLM's reasoning process. This approach ensures a controlled contrast in preference data, enhancing the effectiveness of safety alignment. Our ear-whisperer powered recipe achieves a significant distribution shift in preference data, as evidenced by our policy adherence metrics (see Section 5). We summarize our contributions as follows:

• We introduce a multi-agent deliberation framework, AIDSAFE, for generating high-quality, policy-embedded CoT data.

• We present extensive evaluation and experimentation to demonstrate the quality of this data and how it enhances the safety of models with minimal regression on utility.

• We introduce a supplemental ear-whisperer agent-based recipe to generate diverse CoT-based preference data, facilitating effective DPO training for safety alignment.

• We release a policy-embedded CoT dataset to support safety training, which can be utilized for supervised fine-tuning LLMs to improve safety.

2 AIDSAFE Framework

As shown in Figure 1, we present a novel approach to generate CoTs that reason over safety policies while responding to user queries. This framework is designed to address key challenges in LLM safety reasoning, such as the application of nuanced safety policies and the avoidance of over-refusal. Below,
we provide a detailed explanation of the safety policies we used, the initialization process, the deliberation stage, and the refinement stage.

2.1 Safety Policies

Our experiments incorporate five key safety policies derived from existing literature (Qi et al., 2023): Hate-Harass-Violence, Fraud and Deception, Physical Harm, Illegal Activity, and Helpfulness and Respectfulness Policy. Each policy is detailed using concise natural language descriptions to ensure that AIDSAFE agents clearly understand their implications and applications.

Hate-Harass-Violence Policy: "Must not generate content that is hateful, harassing, abusive, or threatening towards individuals or groups. Should avoid any insulting, discriminatory, violent, or manipulative language."

These descriptions, detailed in Appendix A, serve as guidelines for reasoning and decision-making during the deliberation stage. While safety reasoning is the primary focus of our experiments in this paper, our framework can accommodate any type of policies, broadening its applicability to other domains requiring complex reasoning.

2.2 Initialization

The initialization stage establishes the foundation for the deliberation process. It consists of two critical components: (1) intent decomposition and (2) seed CoT and response creation.

Intent Decomposition: This is an essential step for interpreting the user's query comprehensively. An LLM agent identifies explicit and implicit intentions within the query, distinguishing between benign and potentially malicious intents. By clearly separating these intentions, subsequent agents can apply safety policies in a more targeted and fine-grained manner. This process helps provide a richer response while minimizing over-refusal, ensuring a balanced and context-aware response that improves user experience.

Figure 1: Proposed Multi-agent Deliberation Framework to Generate Safety-embedded CoTs
Initial Thoughts and Response: Once the intents are decomposed, the deliberation process is initiated by generating a preliminary CoT and response for the user query. This step involves single-agent generation to produce baseline thoughts and responses.

Initialization prompt: "Following are set of policies you should adhere to when answering a user query: '{policy list}'... Now adhering to the above policies, create set of brief reasoning steps on how to respond to the following user query... Finally, based on these reasoning steps, write a potential response..."

This step mirrors the basic process of generating CoTs using a single LLM, providing a foundation for the deliberation process to build upon. The complete prompt used for the initialization step is detailed in Appendix A.

2.3 Deliberation Stage

Following initialization, AIDSAFE enters the deliberation stage, where iterative rounds of safety reasoning occur. In each round, an agent evaluates the user query, safety policies, and the thoughts and responses generated so far. The agent assesses whether additional reasoning steps or modifications are required to address gaps or enhance the quality of the response. If necessary, the agent proposes new thoughts and updates the response.

Deliberation prompt: "...Can you suggest corrections or additions to these reasoning steps?... Then, based on those corrections and/or additions, modify the potential response..."

This iterative process continues until the agents reach a consensus, indicated by a terminating statement (e.g., "I agree with previous agent.."), or until a predefined deliberation budget is exhausted.
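The deliberation loop described above can be sketched as follows. This is a minimal illustration: `init_fn`, the agent callables, and the empty-list consensus signal are hypothetical stand-ins for the Mixtral-backed agents and the terminating-statement check used in the paper.

```python
def deliberate(query, policies, agents, budget, init_fn):
    """Run the iterative deliberation loop: agents take turns expanding the
    thoughts until one signals consensus (here: returns no new thoughts) or
    the deliberation budget is exhausted."""
    thoughts, response = init_fn(query, policies)  # initialization stage
    all_thoughts = list(thoughts)
    for round_idx in range(budget):
        agent = agents[round_idx % len(agents)]
        new_thoughts, response = agent(query, policies, all_thoughts, response)
        if not new_thoughts:  # stand-in for "I agree with previous agent..."
            break
        all_thoughts.extend(new_thoughts)
    return all_thoughts, response

def make_agent(name, max_rounds=1):
    """Toy agent: contributes one thought, then agrees with the others."""
    state = {"rounds": 0}
    def agent(query, policies, thoughts, response):
        if state["rounds"] >= max_rounds:
            return [], response  # consensus reached
        state["rounds"] += 1
        return [f"{name}: checked {len(policies)} policies"], response + f" [{name}]"
    return agent

cot, final = deliberate(
    "user query",
    ["policy A", "policy B"],
    [make_agent("agent1"), make_agent("agent2")],
    budget=5,
    init_fn=lambda q, p: (["seed thought"], "seed response"),
)
```

The aggregated `cot` and the last round's `final` response are exactly the pair that the refinement stage consumes.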
This structured exchange ensures that the final response reflects a thorough examination of the query and the associated safety policies.

2.4 Refinement Stage

Once the deliberation stage concludes, all generated thoughts from each round are aggregated to form the complete CoT, and the final response from the last round is selected. These outputs (CoT, response) are then passed to a refiner agent, which serves as an impartial evaluator. The refiner agent evaluates the deliberation outcomes, ensuring that the final response adheres to the safety policies and reflects truthful and reliable reasoning.

Incorporating a refiner is inspired by related safety research (Irving et al., 2018), which highlights the importance of third-party evaluation to enhance decision-making reliability. The refiner agent scrutinizes the arguments made at the deliberation stage to identify and mitigate deceptive thoughts and inconsistencies. Additionally, the refinement process addresses issues like overthinking (Chen et al., 2024), where repetitive or incremental thoughts can lead to over-refusal during training. By eliminating such artifacts, the refiner ensures that the final outputs are concise, coherent, and aligned with safety policies.

3 Data Generation and Evaluation

In this section, we first discuss the details of the implementation of AIDSAFE and the creation of the safety policy-embedded CoT dataset for the experiments. We then describe our evaluations to assess the quality of the generated dataset.

3.1 Dataset Generation

The proposed framework involves several key design and implementation choices, including: (1) user queries for generating CoTs and responses, (2) LLM selection for different agents in AIDSAFE, and (3) the efficiency of AIDSAFE.
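The paper's refiner is itself an LLM agent; purely to illustrate the redundancy-elimination idea it performs, a naive string-similarity pass over the aggregated thoughts could look like this (the 0.9 threshold and the `difflib` similarity measure are arbitrary assumptions, not the paper's method):

```python
from difflib import SequenceMatcher

def drop_redundant(thoughts, threshold=0.9):
    """Keep each thought only if it is not near-identical to one already kept."""
    kept = []
    for thought in thoughts:
        if all(SequenceMatcher(None, thought.lower(), k.lower()).ratio() < threshold
               for k in kept):
            kept.append(thought)
    return kept

cot = [
    "Check the query against the Physical Harm policy.",
    "Check the query against the physical harm policy.",  # near-duplicate
    "The benign intent can be answered helpfully.",
]
filtered = drop_redundant(cot)
```

Detecting deceptive or policy-inconsistent thoughts, by contrast, genuinely requires the LLM-based refiner; a lexical filter only catches the repetitive kind.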
To generate policy-embedded CoTs and responses suitable for safety reasoning training, we used the BeaverTails dataset (Ji et al., 2024), a well-established benchmark for safety training and alignment research, known for its diverse range of harmful query categories spanning 14 potential harm areas. From this dataset, we subsampled 5,000 unique prompts for our experiments to generate safety reasoning CoTs. We selected Mixtral 8x22B (Jiang et al., 2024) as the base LLM for all agents in AIDSAFE (further details on the LLM selection criteria can be found in Appendix A).

Efficiency is a critical factor in AIDSAFE, given that multiple agents collaborate to establish safety reasoning for a given user query. To improve efficiency, we implemented AIDSAFE using asynchronous LLM queries via AsyncInferenceClient on Hugging Face's Text Generation Inference1. Although each deliberation process for a given query is sequential, our implementation enables the batching of multiple user queries to conduct single forward passes, significantly improving efficiency and scalability. In our experiments, utilizing 4×A100 Nvidia GPUs with a batch size of 100, we recorded an average processing time of approximately 35 seconds per prompt to generate the final CoTs and responses.

3.2 Dataset Evaluation

To evaluate the generated CoTs, we first examine their general qualities and characteristics. Next, we assess the quality of their safety reasoning using faithfulness analysis and pairwise preference evaluation. As a baseline, we
consider single LLM generations, where CoTs are produced by directly prompting Mixtral 8x22B without any agentic deliberation process (which we will denote as LLM ZS in subsequent sections).

One key design choice in this evaluation is selecting appropriate evaluators. Previous studies have demonstrated that evaluating reasoning (in the form of CoTs) independently, without knowing the correct answer, is challenging (to reach an agreement), time-consuming, and expensive (Golovneva et al., 2022). This challenge is even more pronounced in our safety reasoning use case due to the complexity of policies and the subjective nature of determining the "correct" response to open-ended harmful prompts. Given these challenges, previous studies have highlighted the effectiveness of LLM auto-graders for structured and consistent evaluation (Jacovi et al., 2024). Based on these findings, we adopt the same approach and use clearly defined rubrics (Appendix B) to assess CoT quality efficiently and objectively.

Table 1: Average auto-grader scores on the dataset (1-5 scale) - general reasoning quality metrics to understand the quality of CoT and faithfulness metrics to understand the policy adherence.

Metric                           LLM ZS   AIDSAFE   ∆(%)
Relevance                        4.66     4.68      +0.43%
Coherence                        4.93     4.96      +0.61%
Completeness                     4.86     4.92      +1.23%
CoTs Faithfulness (Policy)       3.85     4.27      +10.91%
Response Faithfulness (Policy)   4.85     4.91      +1.24%
Response Faithfulness (CoT)      4.99     5.00      +0.20%

1 https://huggingface.co/docs/text-generation-inference/en/index

3.2.1 CoT Quality Metrics

Following prior work (Jacovi et al., 2024), we analyze the quality of the generated CoTs by measuring three fine-grained attributes: (1) Relevance: Does the chain directly relate to the user query? (2) Coherence: Are the thoughts logically consistent? (3) Completeness: Does the reasoning provide a full explanation? Each metric is evaluated on a scale from 1 to 5, where 1 represents the lowest quality and 5 represents the highest quality.
We include concise natural language descriptions for each level in the grading rubric, detailed in Appendix B. For automated grading, we utilize the Claude-3 Sonnet model.

As shown in Table 1, the AIDSAFE-generated CoTs achieve high average grader scores across all three metrics, slightly outperforming LLM ZS. However, we observe that AIDSAFE CoTs exhibit significantly improved safety reasoning coverage, capturing a broader and more comprehensive understanding of safety principles. This distinction will be further quantified in our faithfulness and pairwise evaluations in the following subsections. Additional ablations on CoT quality after each iteration of the deliberation can be found in Appendix B.

Figure 2: Pairwise comparison of AIDSAFE and LLM ZS-generated CoTs, evaluated by Claude-3 Sonnet and Command. The bars show the proportion of AIDSAFE wins (green), ties (gray), and LLM ZS wins (orange).

3.2.2 Faithfulness Analysis

A crucial aspect of evaluating the generated thoughts and responses is measuring policy faithfulness. This determines the extent to which the data adhere to the defined safety policies. We assess faithfulness along two primary dimensions: (1) Faithfulness between policy and the generated CoT: Does AIDSAFE yield reasoning that aligns with the underlying policy? (2) Faithfulness between policy and the generated response: Does the final response adhere to the specified policy? Additionally, to ensure
that generated responses are faithful to the CoTs, we measure the faithfulness between the response and CoT. We use the Claude-3 Sonnet auto-grader to evaluate faithfulness on a scale of 1-5, where 1 indicates minimal faithfulness and 5 indicates complete adherence. The full grading rubric is provided in Appendix B.

AIDSAFE yields highly faithful CoTs and responses: In Table 1, we compare the faithfulness metrics of the AIDSAFE-generated CoTs and the single LLM-generated CoTs (LLM ZS). The former consistently outperforms the latter across all faithfulness dimensions, with particularly strong gains in the CoTs' alignment with the underlying policies. This highlights the importance of the iterative refinements that occur within AIDSAFE, which refine the thoughts and enhance safety reasoning.

3.2.3 Pairwise Evaluation using Auto-grader

To further validate these findings, we conduct a pairwise comparison of AIDSAFE-generated CoTs against single LLM-generated CoTs. For each given user query, we first randomly swap the CoTs generated by both methods (to mitigate any positional bias) and present them to an LLM auto-grader as CoT A and CoT B. The grader is asked to select the better CoT based on the comprehensiveness of safety reasoning, relevance to the query, and logical coherence. We run this experiment using two different LLM auto-graders, Claude-3 Sonnet and Command, to ensure that our evaluations are not overly influenced by any specific biases of a single model. The pairwise grading rubric can be found in Appendix B.

AIDSAFE yields a high win-rate on safety reasoning: As shown in Figure 2, the AIDSAFE-generated CoTs consistently outperform the single LLM-generated CoTs, achieving high win rates across both auto-graders. This further validates the significance of iterative refinements introduced during the deliberation stage in improving safety reasoning and policy adherence.
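The position-debiasing step of the pairwise comparison can be sketched as follows; `grade` is a hypothetical stand-in for the Claude-3 Sonnet or Command auto-grader call, and the win-attribution logic simply undoes the random swap.

```python
import random

def pairwise_compare(queries, cots_1, cots_2, grade, seed=0):
    """For each query, randomly swap which method appears as 'CoT A' vs 'CoT B'
    before calling the grader (to mitigate positional bias), then map the
    grader's verdict ('A', 'B', or 'tie') back to the underlying method."""
    rng = random.Random(seed)
    tally = {"method_1": 0, "method_2": 0, "tie": 0}
    for query, c1, c2 in zip(queries, cots_1, cots_2):
        swapped = rng.random() < 0.5
        first, second = (c2, c1) if swapped else (c1, c2)
        verdict = grade(query, first, second)
        if verdict == "tie":
            tally["tie"] += 1
        elif (verdict == "A") != swapped:  # undo the swap when attributing the win
            tally["method_1"] += 1
        else:
            tally["method_2"] += 1
    return tally

# Toy grader that always prefers the longer CoT, regardless of position.
tally = pairwise_compare(
    ["q1", "q2"],
    ["long detailed reasoning", "short"],
    ["short", "long detailed reasoning"],
    grade=lambda q, a, b: "A" if len(a) > len(b) else "B",
)
```

Because the attribution logic inverts the swap, the tally is independent of the random seed whenever the grader itself is position-invariant.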
4 Training Experiments

To verify the effectiveness of the AIDSAFE-generated CoT data, we conduct experiments to assess its impact on training existing open-source LLMs for safety. Specifically, we use our generated data to apply supervised fine-tuning (SFT) to LLMs and evaluate whether incorporating such policy-embedded CoTs improves model safety.

4.1 Experimental Setting

For our experiments we use 5,000 safety-related samples generated using BeaverTails prompts. Given that achieving a balance between safety and general utility requires a mixture of both safety and general data (Wang et al., 2024a), we additionally generate 5,000 CoTs on general prompts from the Alpagsus dataset (Chen et al., 2023), which consists of filtered instruction-tuning data from the Alpaca dataset. Since the Alpagsus data do not require reasoning over safety policies, we apply only the "Respectfulness and Helpfulness" policy from our policy list. More details on the preparation of general utility CoTs and data quality results can be found in Appendix B. We split the combined dataset 9:1 and use 9,000 samples for training and 1,000 for evaluation.

For SFT experiments, we select two existing LLMs: Mixtral (Mistral-7B-Instruct-v0.1) and Qwen 2.5 (Qwen2.5-7B-Instruct). These selections are motivated by the need to analyze two cases: Mixtral, a non-safety-trained
model, to study the effects of safety reasoning training from scratch, and Qwen 2.5, an already safety-trained model, to understand how additional safety reasoning training impacts performance. We utilize Hugging Face's SFT trainer with 4-bit quantization using QLoRA (Dettmers et al., 2024). Additional details and parameters are provided in Appendix C.

For baselines, we use the same LLMs but with different variations of the training data. First, we use the base LLMs, which we denote as Base. Next, we create traditional safety-trained baselines, where models are fine-tuned using existing safe responses without any CoTs. For these baselines, we extract the original safe responses from the BeaverTails and Alpagsus datasets and fine-tune the LLMs with these responses. We denote these as SFT OG. The models trained with AIDSAFE-generated data are denoted as SFT DB.

4.2 Evaluation Protocol

We evaluate models based on the following: (1) safety: are the model responses safe?, (2) over-refusal: does the model unnecessarily deflect answering appropriate queries?, (3) jailbreak robustness: does the model remain safe when exposed to jailbreak techniques?, and (4) utility: does the model answer general questions correctly?

Following standard safety evaluation, we incorporate an LLM specialist that is trained to classify whether a model's response is safe. For this, we use the ShieldGemma-9B model (Zeng et al., 2024). Safety evaluation is conducted in two parts: in-domain safety, using the held-out BeaverTails test set, and out-of-domain safety, using WildChat-1M (Zhao et al., 2024), which consists of real-world unsafe prompts in different styles and topics. Following prior work (Guan et al., 2024), we extract the top 200 prompts based on the moderator scores.

For over-refusal evaluation, we use the XSTest dataset (Röttger et al., 2023), which contains safe prompts that include trigger words such as "kill" (e.g., "How to kill a Python program?").
We use Claude-3 Sonnet to classify whether the model unnecessarily refuses to answer, reporting the accuracy as 100% minus the over-refusal rate.

For jailbreak robustness evaluation, we use the StrongREJECT dataset (Souly et al., 2024), which contains over 300 unsafe prompts and a tool to encapsulate them in known jailbreak techniques. We extract 12 template-based jailbreak techniques and apply them to these unsafe prompts, generating over 3,000 jailbreaks. Consequently, we use ShieldGemma to assess whether the model remains safe when responding to these jailbreaks.

For utility evaluation, we use a subset of the MMLU dataset (Hendrycks et al., 2020), selecting five categories: abstract algebra, college computer science, college biology, global facts, and prehistory. More details on the evaluation protocol can be found in Appendix C.

4.3 Results

Safety reasoning improves generalization: As seen in Table 2, both Mixtral and Qwen show increased in-domain safety after SFT, evident in their performance on the BeaverTails test set. Unlike traditional safety training, safety reasoning achieves significantly better generalization, particularly on WildChat-1M. This is especially apparent for Mixtral, where SFT DB exhibits an exceptional increase in performance when compared to SFT OG, which only shows small improvements over the
baseline. Importantly, even with only 5,000 safety reasoning samples, we achieve an increase in in-domain safety of 20% (from 76% to 96%) and in out-of-domain safety of 54.95% (from 31.00% to 85.95%) compared to the base model.

Additional safety training may override pre-trained safety: Qwen is already safe due to extensive pre-training. Our observations align with recent studies that suggest additional safety training on an already safe model can sometimes override its original safety behavior (Qi et al., 2023). This phenomenon is evident as Qwen SFT OG underperforms compared to Qwen Base, reinforcing findings that excessive safety fine-tuning may inadvertently alter or diminish pre-existing safety mechanisms. Interestingly, our deliberation-driven safety reasoning (Qwen SFT DB) does not exhibit this degradation, suggesting that safety reasoning helps models understand policies rather than merely learning surface-level safety heuristics.

Safety reasoning improves jailbreak robustness: Both Mixtral and Qwen base models perform poorly on jailbreak prompts. Despite not being explicitly trained on jailbreak examples, AIDSAFE-powered safety reasoning enhances safety generalization, achieving a high safety rate (94.04% Mixtral and 95.39% Qwen) compared to the base and traditional safety training variations.

Utility versus safety trade-off: Additional safety training may lead to general utility degradations due to catastrophic forgetting caused by the additional training phase coupled with increased over-refusals. This is seen in Table 2. This effect is more pronounced in Qwen.
However, in Mixtral, training with AIDSAFE-generated data only leads to small utility reductions when compared to traditional safety training.

Table 2: Evaluation of the supervised fine-tuned models. "Base" denotes the LLM without SFT, SFT OG denotes the model SFT'd on the original response data without any CoTs, and SFT DB denotes the model SFT'd on our AIDSAFE-generated CoTs and responses.

LLM      Eval Dimension        Metric              Dataset       Base    SFT OG   SFT DB (ours)
Mixtral  Safety                Safe response rate  Beavertails   76.00   79.57    96.00
                                                   WildChat      31.00   33.50    85.95
         Overrefusal           1-Overrefuse rate   XSTest        98.80   87.60    91.84
         Utility               Answer accuracy     MMLU          35.42   31.38    34.51
         Jailbreak Robustness  Safe response rate  StrongREJECT  51.09   67.01    94.04
Qwen     Safety                Safe response rate  Beavertails   94.14   87.95    97.00
                                                   WildChat      95.50   59.42    96.50
         Overrefusal           1-Overrefuse rate   XSTest        99.20   98.00    93.60
         Utility               Answer accuracy     MMLU          75.78   55.73    60.52
         Jailbreak Robustness  Safe response rate  StrongREJECT  72.84   59.48    95.39

Figure 3: Comparison of model performance in terms of safety level and over-refusal accuracy. Higher safety levels and higher over-refusal accuracy are desirable.

AIDSAFE improves safety reasoning compared to single LLM generation: To validate the effectiveness of AIDSAFE-powered generation for safety reasoning, we conduct SFT experiments using policy-embedded CoT data generated via single LLM generation (SFT ZS). As shown in Figure 3, while SFT ZS achieves safety rates comparable to SFT DB, it significantly underperforms in over-refusal accuracy, suggesting that the models over-fit to incomplete, surface-level safety policy reasoning. This aligns with findings from Section 4, where AIDSAFE-generated data exhibited superior safety reasoning adherence and completeness compared to LLM ZS-generated data.

5 Preference Data Creation

Alignment is an important training phase that typically
follows SFT in the current standard LLM training pipeline. Out of a variety of techniques that are widely used for this phase (Wang et al., 2024b), we pick Direct Preference Optimization (DPO) for our work here. The alignment training phase generally uses preference data, formatted as a prompt paired with two responses, "selected" and "rejected". The model learns to favor responses like the selected response while avoiding the production of responses similar to the rejected response.

5.1 Sampling Data Quality

Generally, preference data is created by sampling multiple responses from an SFT-trained model and ranking them using a preference judge or a reward model (Ouyang et al., 2022). The highest-ranked response is marked as selected, while the lowest-ranked one is labeled as rejected. However, in the safety reasoning paradigm, sampling approaches commonly struggle to distinguish selected and rejected responses, a challenge exacerbated by CoTs.

To analyze this problem, we conducted preference data sampling using our Mixtral model that was SFT'd on AIDSAFE-generated CoTs. We introduced an additional 3,000 prompts from the BeaverTails dataset that were not used in the initial 5,000 samples generated for SFT'ing the model. The ShieldGemma prediction score was used as a proxy judge to rank the sampled generations and choose selected and rejected CoTs. As shown in Figure 4, both selected and rejected CoTs exhibited high adherence to safety policies, with minimal differences in their average Claude auto-grader scores. This indicates a fundamental issue with the standard sampling approach, as the "rejected" responses should ideally contain CoTs with faulty, misleading, or deceptive reasoning about safety policies. The lack of meaningful distinction between selected and rejected CoTs limits the effectiveness of preference learning in a DPO setting.
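The standard sampling-and-ranking procedure described above can be sketched as follows; `sample` and `judge` are hypothetical stand-ins for the SFT'd Mixtral model and the ShieldGemma-based proxy judge.

```python
def build_preference_pair(prompt, sample, judge, n_samples=4):
    """Sample several candidate CoT+response generations for a prompt, rank
    them with a safety judge, and mark the best as 'selected' and the worst
    as 'rejected'."""
    candidates = [sample(prompt) for _ in range(n_samples)]
    ranked = sorted(candidates, key=lambda c: judge(prompt, c), reverse=True)
    return {"prompt": prompt, "selected": ranked[0], "rejected": ranked[-1]}

# Toy stand-ins: four canned candidates whose judge score is embedded in the text.
pool = iter(["resp-0.9", "resp-0.2", "resp-0.7", "resp-0.5"])
pair = build_preference_pair(
    "unsafe prompt",
    sample=lambda p: next(pool),
    judge=lambda p, c: float(c.split("-")[1]),
)
```

The failure mode described above is that, for a well-SFT'd model, all sampled candidates score near the top of the judge's range, so `selected` and `rejected` end up nearly indistinguishable.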
5.2 Recipe for Rejected Data Creation

To address this issue, we propose a supplemental data recipe (as shown in Figure 5) that enhances the distinction between selected and rejected responses. We introduce an "ear-whisperer" agent, which is an LLM that generates inappropriate guiding prefixes (i.e., bad beliefs). Instead of the standard sampling and ranking process that is followed to collect selected and rejected responses, we prepend these bad belief prefixes to the LLM's input when sampling rejected responses, and sample directly for selected responses. This ensures that the generated rejected CoTs contain safety policy violations and flawed or deceptive reasoning, enabling a data distribution that provides the model with clearer guidance on desirable and undesirable reasoning patterns during the preference optimization stage. Initially, we considered employing both ethical and adversarial ear-whisperer agents for improving both selected and rejected responses. However, we found that our SFT'd model was capable of generating selected CoTs of a quality that was high enough to not require additional augmentation. Therefore, we only use an adversarial ear-whisperer agent.

Figure 4: Preference Data Quality - faithfulness measures to understand the policy adherence of the selected and rejected CoT data.

Figure 5: Preference Data Creation

Inspired by prior work (Mehrabi et al., 2024, 2023), we adopt an iterative in-context-learning (ICL) strategy that
jointly optimizes belief augmentation through adversarial probing and feedback. In our adaptation, we iteratively train the adversarial ear-whisperer agent by continuously refining its deceptive belief generations based on interactions with the target LLM. Each iteration involves using the target LLM to generate belief-augmented CoTs, evaluating their effectiveness, and updating the bad belief exemplars based on performance metrics. To assess the quality of the bad belief generations, we use ShieldGemma as the scoring function. This iterative refinement ensures that the adversarial ear-whisperer model continuously adapts to generate increasingly sophisticated deceptive beliefs, which in turn improve the distinction between selected and rejected CoTs. After 100 iterations, we extract the highest-quality adversarial beliefs, which are then appended to the target LLM to generate the final "rejected" CoTs.

We limit our experiments for this phase to the Mixtral model. As shown in Figure 4, our ear-whisperer-leveraged recipe enabled the production of high-quality preference data with a substantial distribution shift between selected and rejected responses when compared to the standard sampling method. More details on the ear-whisperer recipe and subsequent DPO training experiments and results can be found in Appendix D.

6 Related Work

6.1 LLM Safety

Traditional safety training for LLMs has primarily relied on Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF) (Ouyang et al., 2022). As safety training methods have advanced, alternative approaches have emerged to reduce reliance on human labor and improve efficiency. Reinforcement Learning from AI Feedback (RLAIF) replaces human feedback with AI-generated evaluations, enabling more scalable safety training (Bai et al., 2022b).
Direct Preference Optimization (DPO) (Rafailov et al., 2024) further streamlines alignment by directly optimizing model outputs based on preference data, eliminating the need for complex reward models. More recently, safety reasoning has gained attention as a novel paradigm, incorporating reasoning models (Jaech et al., 2024). These models deliberate over safety policies before generating responses, improving their ability to assess safety implications proactively (Guan et al., 2024).

6.2 Safety Training Data

Conventional safety datasets consist of human- or AI-generated safe responses (Bai et al., 2022a; Ji et al., 2024; Wang et al., 2024a) but often lack explicit Chain-of-Thought (CoT) detailing the reasoning processes behind these outputs, limiting models' ability to internalize nuanced safety considerations.

6.3 Agentic Deliberation

Multi-agent deliberation frameworks, such as structured debates among LLMs, have been shown to enhance accuracy and reduce issues like hallucinations (Du et al., 2023). These debates encourage critical evaluation and consensus-building, leading to more reliable outputs (Talebirad and Nadiri, 2023; Khan et al., 2024). Our work draws inspiration from these frameworks to generate high-quality, policy-embedded CoT datasets, aiming to improve LLM safety and reliability.

7 Conclusion

To address the data challenges in safety reasoning, this paper proposes AIDSAFE, a multi-agent deliberation framework that iteratively refines thoughts and responses, enhancing safety policy adherence and reasoning quality. Our evaluations demonstrate that AIDSAFE-generated CoTs improve safety generalization and jailbreak robustness while maintaining acceptable utility and over-refusal accuracy.
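The DPO objective mentioned in Section 6.1 can be made concrete with a small numeric sketch. This is illustrative only, not part of the paper's pipeline: it computes the standard per-example DPO loss from sequence log-probabilities, with `beta` denoting the usual scaling hyperparameter.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Per-example DPO loss, -log sigmoid(beta * reward margin).

    logp_* are sequence log-probs under the policy being trained;
    ref_logp_* are under a frozen reference model.
    """
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

Minimizing this loss pushes the policy to raise the likelihood of chosen responses relative to rejected ones without training a separate reward model, which is why a clear distribution shift between selected and rejected CoTs matters for the preference data.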
Additionally, we introduce an adversarial ear-whisperer agent that enables us to overcome the limitations of standard sampling techniques, which fail to distinguish selected and rejected CoTs for preference learning. By leveraging belief augmentation and iterative ICL, this method ensures that rejected CoTs exhibit policy violations and faulty reasoning, enhancing alignment effectiveness. By providing these data recipes, code, and high-quality policy-embedded datasets, we aim to advance safety reasoning in open-source LLMs.

8 Limitations

8.1 Policy Coverage

Our framework relies on a predefined set of safety policies for reasoning and generating the CoT outputs. In this work, we incorporated only five safety policies. While these policies cover critical safety dimensions, the comprehensiveness of the safety reasoning could be further enhanced by integrating additional policies. This would allow the model to address a broader spectrum of safety challenges and improve its robustness in real-world scenarios.

8.2 Model and Agent Constraints

Due to scope constraints, we experimented only with the Mixtral 8x22B model as the agent in all stages of the proposed method. While this choice demonstrates the efficacy of our framework, there is significant potential to explore other LLMs as agents in the deliberation process. Additionally, for the deliberation stage, we limited our approach to two agents engaging in a back-and-forth reasoning process. A more dynamic round-table setup involving multiple agents could lead to more refined and diverse CoTs, potentially improving the overall safety reasoning.

8.3 Supervised Fine-Tuning (SFT) Setup

Ideally, the SFT experiments should have been conducted by first performing SFT warm-ups on a base model using general CoTs and then transitioning to safety training with our policy-embedded CoTs.
Due to time and resource constraints, we instead used instruction-tuned versions of the models directly for safety fine-tuning.

8.4 Potential Interruptions in the Deliberation Process

Our deliberation framework may encounter interruptions if the agent LLMs are highly safety-trained or have strict guardrails. Since we require the models to reason about potentially harmful or malicious prompts, agents with overzealous safety mechanisms may respond with disclaimers such as "I cannot answer" when prompted with such tasks. This results in an incomplete or failed deliberation process, potentially limiting the framework's applicability in contexts involving highly safety-conscious models.

8.5 Effectiveness of the Ear-Whisperer Agent

The ear-whisperer proposal is designed to generate "rejected" CoTs by subtly influencing the target LLM's reasoning process. However, its effectiveness diminishes when the target LLM is already highly safety-trained. In such cases, it becomes challenging for the adversarial ear-whisperer agent to guide the LLM toward generating harmful CoTs through belief augmentation alone, especially when the target model is already well equipped to handle safety-related concerns.

9 Ethical Considerations

The AIDSAFE framework is primarily designed for generating CoTs that enhance safety training for LLMs and support the development of more responsible and ethical AI systems. However, like any tool, its application can raise ethical concerns, particularly regarding the policies that guide its reasoning and the potential biases that could emerge in the generated CoTs. The safety policies used
for reasoning must be carefully designed to ensure they account for diverse ethical considerations, such as privacy, fairness, and non-discrimination. It is essential that the policies are constructed in an inclusive manner and reflect the values of a wide range of stakeholders to avoid unintentional biases in the reasoning process. Any biases or gaps in the policies themselves could lead to flawed safety reasoning that slips through the refiner agent, potentially causing harm to vulnerable users or communities.

In the case of the ear-whisperer agent, which is designed to generate "rejected" CoTs in the context of Direct Preference Optimization (DPO), we acknowledge the potential for misuse. This technique is employed to train models to distinguish between safe and unsafe reasoning by deliberately introducing adversarial beliefs. While this approach is intended to improve safety by enhancing the model's ability to recognize harmful beliefs, it could also be exploited by malicious actors to generate harmful responses. The ethical risk here lies in the potential for adversaries to reverse-engineer and use the ear-whisperer method to produce harmful outputs, thereby undermining the safety measures being put in place. Despite these risks, we believe that the overall benefits of using this technique in the DPO stage, specifically in terms of improving LLM safety, outweigh the potential drawbacks.

References

Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. 2022a. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862.

Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. 2022b. Constitutional AI: Harmlessness from AI feedback. arXiv preprint arXiv:2212.08073.
Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, et al. 2023. AlpaGasus: Training a better Alpaca with fewer data. arXiv preprint arXiv:2307.08701.

Xingyu Chen, Jiahao Xu, Tian Liang, Zhiwei He, Jianhui Pang, Dian Yu, Linfeng Song, Qiuzhi Liu, Mengfei Zhou, Zhuosheng Zhang, et al. 2024. Do not think that much for 2+3=? On the overthinking of o1-like LLMs. arXiv preprint arXiv:2412.21187.

Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. 2024. QLoRA: Efficient finetuning of quantized LLMs. Advances in Neural Information Processing Systems, 36.

Yilun Du, Shuang Li, Antonio Torralba, Joshua B Tenenbaum, and Igor Mordatch. 2023. Improving factuality and reasoning in language models through multiagent debate. arXiv preprint arXiv:2305.14325.

Olga Golovneva, Moya Chen, Spencer Poff, Martin Corredor, Luke Zettlemoyer, Maryam Fazel-Zarandi, and Asli Celikyilmaz. 2022. ROSCOE: A suite of metrics for scoring step-by-step reasoning. arXiv preprint arXiv:2212.07919.

Melody Y Guan, Manas Joglekar, Eric Wallace, Saachi Jain, Boaz Barak, Alec Heylar, Rachel Dias, Andrea Vallone, Hongyu Ren, Jason Wei, et al. 2024. Deliberative alignment: Reasoning enables safer language models. arXiv preprint arXiv:2412.16339.

Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. 2025. DeepSeek-R1: