[Flattened table; one row per site/date with three metric columns, headers truncated in source]
(…) 0.0149 0.0069 36.56
B 3/20/19: 0.0415 0.0188 27.64
I: 0.0233 0.0105 32.67
D: 0.0234 0.0110 32.61
4/16/19: 0.0352 0.0170 29.08
E 3/20/19: 0.0242 0.0116 32.33
4/16/19: 0.0229 0.0101 32.82
Average: 0.0252 0.0117 32.42
Spatial:
A 3/20/19: 0.0274 0.0175 31.25
4/16/19: 0.0313 0.0188 30.09
E 3/20/19: 0.0371 0.0232 28.61
4/16/19: 0.0406 0...
https://arxiv.org/abs/2505.21746v1
[Flattened table; one row per site/date with six value columns, headers truncated in source]
(…) 32.86
4/16/19: 36.56 36.77 35.85 35.22 35.53 35.40
B 3/20/19: 27.64 29.78 25.05 28.67 26.26 26.83
I: 32.67 32.29 30.71 32.01 30.49 29.95
D: 32.61 30.44 30.73 32.05 30.01 29.40
4/16/19: 29.08 30.53 29.90 29.51 29.22 28.33
E 3/20/19: 32.33 30.77 30.96 31.93 27.72 28.68
4/16/19: 32.82 34.71 31.87 31.50 33.84 33.29
Average: 32.42 ...
https://arxiv.org/abs/2505.21746v1
[Flattened table; one row per site/date with two value columns, headers truncated in source]
(…) 27.06
I: 32.67 29.96
D: 32.61 31.95
4/16/19: 29.08 28.78
E 3/20/19: 32.33 32.59
4/16/19: 32.82 33.08
Average: 32.42 30.85
(b) A situation where no recent Sentinel-2 data was available. For the UAS flights on 11/20/19 and 11/21/19, the closest available cloud-free Sentinel-2 images were on 11/2 and 12/12. Target Sentinel-2 None...
https://arxiv.org/abs/2505.21746v1
and April; Temporal: sites A, D, E in December only. We abbreviate ground-truth as GT.
Extension Tag Data Source (ground-truth = GT) | Biomass: R2, RMSE (Mg/ha) | Nitrogen: R2, RMSE (kg/ha) | Scenario
Spectral M23 Sentinel-2 8-band (10m): 68.9 0.87 55.9 25.1
M24 Sentinel 10-band (with SWIRs) (10m): 65.9 0.91 53.4 25.7
M25 UAS RGB (0...
https://arxiv.org/abs/2505.21746v1
of UAS deployment remains a limiting factor in precision farming. To address this, we demonstrated that UAS-guided super-resolution approaches can yield significant improvements in predictive modeling across spectral, spatial, and temporal dimensions, offering a cost-effective strategy for advancing precision agricultu...
https://arxiv.org/abs/2505.21746v1
use of UV light to reveal otherwise invisible contamination, highlighting the potential of spectral extension to uncover features not detectable with conventional RGB imagery alone [Li et al., 2021]. As with spectral extension, targeted UAS imagery can also be utilized for spatial and temporal extension models. The spa...
https://arxiv.org/abs/2505.21746v1
specific to a region or crop. This scenario deserves consideration because high-quality hyperspectral sensors can cost as much as $175,000 [Olson and Anderson, 2021], and the spatial extension model exhibits reduced generalization compared to the spectral extension model. If there is only one target application, it m...
https://arxiv.org/abs/2505.21746v1
cover crop setting. However, as seen in Figure 13 and Table 6, there is some evidence that the spectral extension model will generalize, while the lower performance of the temporal and spatial extension models is an indication that these will have to be re-trained or trained on a substantially larger training dataset. A...
https://arxiv.org/abs/2505.21746v1
The lower the resolution of the satellite platform, the more data is needed. Since hyperspectral data collection is expensive, this limits which satellites can be considered. Our current imagery comes from multiple counties across Maryland and Pennsylvania, but the models may generalize further geographically. However, o...
https://arxiv.org/abs/2505.21746v1
precision crop management across varying crop types, field sizes, and growing conditions. Future research is needed to further assess the robustness of this super-resolution AI system across space and time. An early warning system can be developed that enables farmers to proactively and affordably address various anoma...
https://arxiv.org/abs/2505.21746v1
G. and Gitelson, A. A. (2013). Remote estimation of crop and grass chlorophyll and nitrogen content using red-edge bands on Sentinel-2 and -3. International Journal of Applied Earth Observation and Geoinformation, 23:344–351. [Dash et al., 2018] Dash, J. P., Pearse, G. D., and Watt, M. S. (2018). Uav multispectral imag...
https://arxiv.org/abs/2505.21746v1
K., Maity, A., Halli, H. M., GK, S., Vadivel, R., TK, D., et al. (2023). Nitrogen use efficiency—a key to enhance crop productivity under a changing climate. Frontiers in Plant Science , 14:1121073. [Grüner et al., 2021] Grüner, E., Astor, T., and Wachendorf, M. (2021). Prediction of biomass and n fixation of legume–gr...
https://arxiv.org/abs/2505.21746v1
Zheng, K., Zhang, B., and Chanussot, J. (2022). Deep learning in multimodal remote sensing data fusion: A comprehensive review. International Journal of Applied Earth Observation and Geoinformation , 112:102926. [Li et al., 2017] Li, S., Kang, X., Fang, L., Hu, J., and Yin, H. (2017). Pixel-level image fusion: A survey...
https://arxiv.org/abs/2505.21746v1
Burken, J. G., and Fritschi, F. (2019). Uav/satellite multiscale data fusion for crop monitoring and early stress detection. In Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLII-2/W13 , pages 715–722. International Society for Photogrammetry and Remote Sensing. [Saharia et al., 2021] Saharia, C., Ho, J., Chan...
https://arxiv.org/abs/2505.21746v1
, 61:1–19. [Thieme et al., 2020] Thieme, A., Yadav, S., Oddo, P. C., Fitz, J. M., McCartney, S., King, L., Keppler, J., McCarty, G. W., and Hively, W. D. (2020). Using nasa earth observations and google earth engine to map winter cover crop conservation performance in the chesa...
https://arxiv.org/abs/2505.21746v1
FRAMES-VQA: Benchmarking Fine-Tuning Robustness across Multi-Modal Shifts in Visual Question Answering. Chengyue Huang* Brisa Maneechotesuwan* Shivang Chopra Zsolt Kira. Georgia Institute of Technology {chuang475,bmaneech3,shivangchopra11,zkira}@gatech.edu Abstract Visual question answering (VQA) systems face significant ...
https://arxiv.org/abs/2505.21755v1
conduct a comprehensive comparison of the existing robust fine-tuning baselines on ID and OOD performance using the benchmark. Furthermore, we analyze shift scores and modality importance across fine-tuning methods. To summarize, our contributions are: • We propose FRAMES-VQA for evaluating robust fine-tuning in VQA, ...
https://arxiv.org/abs/2505.21755v1
frame regularization as a constraint through bi-level optimization, learning tailored constraints for each layer. FTP [47] enhances TPGM's efficiency by leveraging prior training steps, while SPD [48] selectively regularizes layers with consistent loss reduction, projecting corresponding layers within the constra...
https://arxiv.org/abs/2505.21755v1
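The projection idea shared by these constrained fine-tuning methods can be sketched as follows. This is a minimal illustration, not any paper's exact algorithm: weights are flattened to plain lists and the per-layer radius gamma is a stand-in for the learned constraint.

```python
# Sketch: keep fine-tuned weights within a radius-gamma ball around the
# pre-trained weights (TPGM-style projection; gamma is illustrative).
def project(w_ft, w_pre, gamma):
    diff = [a - b for a, b in zip(w_ft, w_pre)]
    norm = sum(d * d for d in diff) ** 0.5
    if norm <= gamma:
        return w_ft  # already inside the constraint set
    scale = gamma / norm
    return [b + d * scale for b, d in zip(w_pre, diff)]

# A drift of norm 5 from the pre-trained point is scaled back onto the ball.
w = project([3.0, 4.0], [0.0, 0.0], gamma=1.0)
```

Methods like FTP and SPD differ mainly in how the radius is chosen and which layers are projected, not in this basic projection step.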
fine-tuning and VQAv2 val for model selection. For other OOD datasets, we only use the test splits for evaluation. Evaluation and Metrics. We adopt the evaluation metric from VQAv2 [19], which measures accuracy by comparing predicted answers to ground truth human-annotated answers. Each question is paired with 10 h...
https://arxiv.org/abs/2505.21755v1
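The VQAv2 accuracy rule mentioned above can be sketched as follows, assuming the commonly quoted min(#agreeing humans / 3, 1) formula; the official metric additionally averages over leave-one-out subsets of the 10 human answers and applies answer normalization, both omitted here.

```python
# Sketch of VQAv2-style accuracy: a prediction gets full credit if at least
# 3 of the human annotators gave the same answer (normalization omitted).
def vqa_accuracy(predicted, human_answers):
    matches = sum(1 for a in human_answers if a == predicted)
    return min(matches / 3.0, 1.0)

humans = ["cat"] * 4 + ["dog"] * 6
score = vqa_accuracy("cat", humans)  # 4 matching annotators, capped at 1.0
```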
[Table 2 fragment: embedding extraction configurations, e.g. (v) PaliGemma V FT, fft method; (q) PaliGemma Q FT, fft method; (v, q) PaliGemma V,Q FT] Table 2. Embedding extractions from different layers and backbone models. 4. Robust Fine-Tuning. In this section, we summarize several existing robust fine-tuning methods and evaluate their performance on our proposed benchmark FRAMES...
https://arxiv.org/abs/2505.21755v1
Near & Far OOD Performance We fine-tune Google's recently released PaliGemma-3B [4] model on the VQAv2 dataset and evaluate on the other OOD datasets. PaliGemma-3B is lightweight and one of the state-of-the-art models on VQAv2, making it a practical option for benchmarking. We apply LoRA [23], a parameter-efficient ...
https://arxiv.org/abs/2505.21755v1
[Flattened results; one row per method, 13 columns across datasets, headers truncated in source]
(…): 54.42 63.95 44.72 50.10 54.29 30.68 30.46 45.70 14.86 16.84 28.60 20.10 37.17
Vanilla FT LoRA [23]: 86.29 94.43 69.36 78.90 86.21 71.73 49.82 75.08 42.08 22.92 48.30 37.77 62.64
Linear Prob LoRA: 78.24 87.83 63.87 69.61 78.48 61.66 42.90 67.39 29.61 18.80 42.27 30.23 55.00
LP-FT LoRA [28]: 85.97 93.30 65.93 76.49 86.16 72...
https://arxiv.org/abs/2505.21755v1
pure uni-modal embeddings from ViT and BERT remain more stable. This suggests that question and joint embeddings from multi-modal models are more sensitive to significant distribution shifts, likely due to their dependence on contextual and multi-modal interactions. 5.2. Correlation between Uni- & Multi-Modal Shift...
https://arxiv.org/abs/2505.21755v1
in Suppl. 12. In Tab. 5, we further separate ID and OOD samples from all datasets with a Mahalanobis Distance of 60, chosen as the relative median of shift scores to illustrate the distinction between closer and more distant samples, and show MI_v, MI_q for different fine-tune methods. [Figure panels: (1) VQAv2, Vanilla FT (2) VQAv2, ...]
https://arxiv.org/abs/2505.21755v1
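The Mahalanobis-distance shift score used above can be sketched as follows. This is a simplified illustration assuming a diagonal covariance; the paper's score is presumably computed with the full covariance of the ID embedding distribution, and all values here are toy data.

```python
import math

# Sketch of a Mahalanobis-style shift score with a diagonal covariance:
# distance of a sample from the mean of ID embeddings, per-dimension scaled.
def mahalanobis_diag(x, mean, var):
    return math.sqrt(sum((xi - mi) ** 2 / vi for xi, mi, vi in zip(x, mean, var)))

# Toy "ID" embeddings; fit the mean and (population) variance per dimension.
id_embeddings = [[0.9, 1.1], [1.0, 0.9], [1.1, 1.0]]
n = len(id_embeddings)
mean = [sum(col) / n for col in zip(*id_embeddings)]
var = [sum((v - m) ** 2 for v in col) / n
       for col, m in zip(zip(*id_embeddings), mean)]

score = mahalanobis_diag([3.0, -1.0], mean, var)  # far from ID -> large score
```

Thresholding such a score (the paper uses 60) then splits samples into "closer" and "more distant" groups.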
future work should aim to enhance robustness under language shifts and implement ways to dynamically handle modality importance. Our findings show that more robust methods exhibit higher intra-modality attention, highlighting the potential of adaptive attention mechanisms and modality-specific regularization to b...
https://arxiv.org/abs/2505.21755v1
Gu, Zhen Han, Yunpu Ma, Philip Torr, and Volker Tresp. Benchmarking robustness of adaptation methods on pre-trained vision-language models, 2023. 2 [9] Corentin Dancette, Remi Cadene, Damien Teney, and Matthieu Cord. Beyond Question-Based Biases: Assessing Multimodal Shortcut Learning in Visual Question Answering, 2...
https://arxiv.org/abs/2505.21755v1
of large language models, 2023. 5 [25] Chengyue Huang, Junjiao Tian, Brisa Maneechotesuwan, Shivang Chopra, and Zsolt Kira. Directional gradient projection for robust fine-tuning of foundation models, 2025. 1 [26] Drew A. Hudson and Christopher D. Manning. Gqa: A new dataset for real-world visual reasoning and compos...
https://arxiv.org/abs/2505.21755v1
Alberto Lopez Magana, Wojciech Galuba, Devi Parikh, and Douwe Kiela. Human-Adversarial Visual Question Answering, 2021. arXiv:2106.02280 [cs]. 1, 2, 3 [43] Xiangxi Shi and Stefan Lee. Benchmarking out-of-distribution detection in visual question answering, 2024. 3 [44] Qingyi Si, Fandong Meng, Mingyu Zheng, Zheng L...
https://arxiv.org/abs/2505.21755v1
cross-validation, and the model with the best ID validation accuracy is taken. We use 8 A40 GPUs for each experiment. The best training configurations for different methods are listed in Tab. 6.
Method | lr | wd | others
Vanilla FT | 1e-3 | 1e-4 | -
Linear Prob | 1e-3 | 1e-4 | -
LP-FT | 1e-3 | 1e-4 | -
WiSE-FT | - | - | α = 0.5
FTP | 1e-3 | 1e-4 | κ = 0
SPD | 1e-3 | 0.5...
https://arxiv.org/abs/2505.21755v1
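The α = 0.5 reported for WiSE-FT corresponds to weight-space ensembling, interpolating the pre-trained and fine-tuned checkpoints. A minimal sketch, with flat weight lists standing in for real model parameters:

```python
# Sketch of WiSE-FT weight-space ensembling: w = (1 - alpha) * w_pre + alpha * w_ft.
# alpha = 0.5 matches the configuration table above; weights are illustrative.
def wise_ft(w_pre, w_ft, alpha=0.5):
    return [(1 - alpha) * p + alpha * f for p, f in zip(w_pre, w_ft)]

w = wise_ft([0.0, 2.0], [2.0, 0.0])  # midpoint of the two checkpoints
```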
[Flattened results; four metric columns per row, headers partially truncated in source ("… OOD Avg. … OOD Avg.")]
(a) LLaVA-7B with LoRA under 10% of VQAv2 (train & val)
Zero-Shot: 3.27 3.44 0.68 2.52
Vanilla FT: 72.49 60.07 28.67 49.60
LP-FT: 53.01 28.63 7.64 21.63
WiSE-FT: 60.47 43.33 9.07 31.98
FTP: 67.95 58.49 26.21 47.73
SPD: 73.59 61.98 29.98 51.31
(b) PaliGemma-3B with Full Fine-Tuning under 10% of VQAv2 (train...
https://arxiv.org/abs/2505.21755v1
distribution between both train and test splits. Figure 6. Sampling region in histogram. Under V, Q and V+Q, we sample 50 instances from 4 regions as shown in Fig. 6, including: • Left tail (top 5%): ID samples. • Top region: top occurring samples in the test set. • Intersect region: similar samples between train & test ...
https://arxiv.org/abs/2505.21755v1
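The region-based sampling above can be sketched for the left-tail case. The region definition (lowest 5% of shift scores as "ID-like") and the helper name are illustrative assumptions about the procedure, not the paper's code.

```python
# Sketch: pick up to k samples from the left tail (lowest frac of shift scores).
def left_tail(scores, frac=0.05, k=50):
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    cutoff = max(1, int(len(scores) * frac))
    return order[:cutoff][:k]

scores = [float(i) for i in range(1000)]  # toy shift scores
picked = left_tail(scores)  # the 50 lowest-score indices
```

The other regions (top-occurring, intersect) would be selected analogously, by frequency and by train/test overlap rather than by score rank.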
significant push to the joint embedding towards the right tail region. Similarly, for ID Question + OOD Object, if those samples are found in the right tail, then it may indicate the dataset having more OOD samples with ID Question + OOD Object, or the visual modality having a greater influence in steering the joint emb...
https://arxiv.org/abs/2505.21755v1
Figure 9. Comparison set of ID and OOD joint samples. [Figure 10 panels: (1) VQAv2 Val (2) IV-VQA (3) CV-VQA (4) VQA-Rephrasings (5) VQA-CP v2 (6) VQA-CE (7) AdVQA (8) TextVQA (9) VizWiz (10) OK-VQA] Figure 10. Histogram for Vanilla FT Visual Shifts: We depict the S_Maha score on the visual modality for each sample in the VQAv2 train spl...
https://arxiv.org/abs/2505.21755v1
arXiv:2505.21765v1 [cs.AI] 27 May 2025. Don't Think Longer, Think Wisely: Optimizing Thinking Dynamics for Large Reasoning Models. Sohyun An1, Ruochen Wang1, Tianyi Zhou2, Cho-Jui Hsieh1. 1University of California, Los Angeles; 2University of Maryland, College Park. sohyun0423@cs.ucla.edu tianyi@umd.edu chohsieh@cs.ucla.edu ...
https://arxiv.org/abs/2505.21765v1
of reasoning efficiency as a constrained optimization problem aimed at minimizing the expected computational cost of reasoning trajectories while preserving or improving task performance. Subsequently, we propose a framework for Dynamic Thinking pattern Optimization ( DTO ), specifically designed to refine the selectio...
https://arxiv.org/abs/2505.21765v1
the collection of effective datasets. Complementary to these strategies, another line of research [17, 4, 27] has investigated post-hoc refinement and scoring mechanisms for reasoning trajectories generated by models, demonstrating promising results with relatively low computational Q: If a and b are integers such tha...
https://arxiv.org/abs/2505.21765v1
is odd and 20 is even, so their product has to be odd, so both a and b are odd. … [Figure 1 annotation fragments: Ptn 37, the truncated trajectory Δ̃_x with ⊕ δ_finalize ("Hmm, I think this is enough to derive the final answer.") ⊕ s*, and estimated pattern probabilities p_2 = 0.0, p_3 = 0.0, p_11 = 0.0, p_12 = 0.8, p_13 = 1.0, p_37 = 1.0] Figure 1: Illustration of DTO. We construct a truncated reasoning traj...
https://arxiv.org/abs/2505.21765v1
x ∼ D, i.e., Δ_x = [δ_1, δ_2, . . . , δ_{n_x}], where D denotes the underlying distribution over input problems. Each thinking pattern δ incurs a non-negative computational cost c(δ) ≥ 0, typically quantified by metrics such as token length or FLOPs. The total cost of the reasoning trajectory Δ_x is therefore given by: C(Δ_x) = Σ_{δ∈Δ_x} c...
https://arxiv.org/abs/2505.21765v1
7.8%) with lower FLOPs. (c) shows max_i p_i, the maximum estimated correctness probability across thinking patterns (Equation (6)). High values suggest that even incorrect trajectories often contain a promising intermediate segment. where y is the whole trajectory in Equation (3). Next, we introduce a special finalize thi...
https://arxiv.org/abs/2505.21765v1
40% for DeepSeek-R1-Distill-Qwen-1.5B and DeepScaleR-1.5B-Preview, respectively, while preserving correctness. Interestingly, for responses that initially yielded incorrect outcomes, applying our framework to induce more effective thinking patterns, such as finalizing reasoning at appropriate termination points, transfo...
https://arxiv.org/abs/2505.21765v1
[Flattened table; columns Acc. (↑), #Tokens (↓), Eff. (↑) repeated for three datasets]
Instruct ver. [37]: 76.36 555.16 N/A | 85.37 315.44 N/A | 65.13 575.86 N/A
Baseline: 79.80 3543.44 1.000 | 82.13 1382.99 1.000 | 66.62 3725.16 1.000
Fast Prompt: 81.17 3354.99 1.074 | 85.14 1894.73 0.757 | 69.68 3634.30 1.072
SFT: 81.28 3180.10 1.135 | 80.12 933.89 1...
https://arxiv.org/abs/2505.21765v1
data can be constructed—that is, when at least one correct response is available. Metrics. For each dataset, we report both the average accuracy and the average number of generated tokens. Additionally, inspired by Qu et al. [21], we compute the following efficiency metric η: η = E_{x∼D, y∼π_{θ*}(x)}[P(y)] / E_{x∼D, y_0∼π_θ(x)}[P(y_0)...
https://arxiv.org/abs/2505.21765v1
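The efficiency metric η is truncated in the excerpt; the visible part is the accuracy ratio against the baseline policy, and the reported Eff. values (e.g. Fast Prompt: 81.17 acc, 3354.99 tokens vs. baseline 79.80, 3543.44 giving 1.074) are consistent with multiplying by the inverse token-cost ratio. That completion is an inference from the table, so treat it as an assumption.

```python
# Sketch of η read as (accuracy ratio) x (inverse token-cost ratio) relative
# to the baseline; the exact truncated formula is assumed, not quoted.
def efficiency(acc, tokens, acc_base, tokens_base):
    return (acc / acc_base) * (tokens_base / tokens)

# Fast Prompt row from the table above reproduces the reported 1.074.
eta = efficiency(81.17, 3354.99, acc_base=79.80, tokens_base=3543.44)
```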
such cases, FCS+Ref. fails to construct a pairwise dataset, while DAST relies solely on token length, i.e., it assigns higher scores to longer incorrect responses, which may limit its ability to generalize. On DeepScaleR-1.5B-Preview (Table 2), DTO continues to achieve the highest efficiency, though with narrower margin...
https://arxiv.org/abs/2505.21765v1
value of -1 is 1. So, arctan(1) is... 45°, right? Since tan(45°) = 1. Wait, but hold on, the question asks for the acute angle. So, 45° is already acute, since it's less than 90°. So, is that the answer? It seems straightforward. But let me make sure . . . Slope m1 = 2, m2 = 1/3. . . . So, arctan(1) is 45°. That seems cor...
https://arxiv.org/abs/2505.21765v1
preprint arXiv:2502.04463 , 2025. [3]Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V Le, Christopher Ré, and Azalia Mirhoseini. Large language monkeys: Scaling inference compute with repeated sampling. arXiv preprint arXiv:2407.21787 , 2024. [4]Xingyu Chen, Jiahao Xu, Tian Liang, Zhiwei He, Jianhui P...
https://arxiv.org/abs/2505.21765v1
Ada Popa, and Ion Stoica. Deepscaler: Surpassing o1-preview with a 1.5b model by scaling rl, 2025. Notion Blog. [19] Sara Vera Marjanović, Arkil Patel, Vaibhav Adlakha, Milad Aghajohari, Parishad BehnamGhader, Mehar Bhatia, Aditi Khandelwal, Austin Kraft, Benno Krojer, Xing Han Lù, et al. Deepseek-r1 thoughtology: Le...
https://arxiv.org/abs/2505.21765v1
language understanding benchmark. arXiv preprint arXiv:2406.01574 , 2024. [34] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems , 35:24824–24837,...
https://arxiv.org/abs/2505.21765v1
even within incorrect responses. These findings underscore the potential of selectively terminating the reasoning process at appropriate points to recover correct answers. A.2 Implementation Details For all training-based methods, the batch size was fixed at 128, and we trained the model using four NVIDIA RTX A6000 GPU...
https://arxiv.org/abs/2505.21765v1
edits for flow. • Use each kept chunk verbatim. • Do not paraphrase or skip ideas. • Preserve the original tone (informal, exploratory, etc). End the reasoning with the final answer in this format: \boxed{{...}} Output Format: [OUTPUT] [CHUNK_FILTERING_RESULTS] [Chunk 1] KEEP AS IS [Chunk 2] REMOVE ... [Chunk N] KEEP A...
https://arxiv.org/abs/2505.21765v1
2, that 's pretty steep, going up 2 units for every 1 unit to the right. The other line has a slope of 1/3, which is a gentle slope, going up 1 unit for every 3 units to the right. So, if you imagine these two lines intersecting, the angle between them is likely to be acute because both slopes are positive but one is s...
https://arxiv.org/abs/2505.21765v1
2 \cdot \frac{1}{3} = 1 + \frac{2}{3} = \frac{5}{3} \] 3. Simplify the argument inside the arctangent: \[ \left| \frac{-\frac{5}{3}}{\frac{5}{3}} \right| = \left| -1 \right| = 1 \] 4. Therefore, the angle \(\theta\) is: \[ \theta = \arctan(1) = 45^\circ \] To verify, we can also compute the angles...
https://arxiv.org/abs/2505.21765v1
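The computation above can be checked numerically: tan θ = |(m1 − m2)/(1 + m1·m2)| with slopes m1 = 2 and m2 = 1/3 gives θ = 45°.

```python
import math

# Numeric check of the worked example: angle between lines with slopes 2 and 1/3.
m1, m2 = 2.0, 1.0 / 3.0
theta = math.degrees(math.atan(abs((m1 - m2) / (1 + m1 * m2))))  # ~45 degrees
```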
\frac{1}{3} = 1 + \frac{2}{3} = \frac{3}{3} + \frac{2}{3} = \frac{5}{3} \] Thus, we have: \[ \tan(\theta) = \left| \frac{-\frac{5}{3}}{\frac{5}{3}} \right| = \left| -1 \right| = 1 \] Since \(\tan(\theta) = 1\), we find that \(\theta = 45^\circ\). \[ \boxed{45} \] C Datasets. The information a...
https://arxiv.org/abs/2505.21765v1
MMTBench: A Unified Benchmark for Complex Multimodal Table Reasoning. Prasham Yatinkumar Titiya* Jainil Trivedi* Chitta Baral Vivek Gupta†. Arizona State University {ptitiya,jtrived7,chitta.baral,vgupt140}@asu.edu Abstract Multimodal tables—those that integrate semi-structured data with visual elements such as charts an...
https://arxiv.org/abs/2505.21771v1
textual tables, leaving multimodal variants underexplored. While LLMs excel at processing sequential text, they often struggle to grasp the inherently two-dimensional and nested structure of tables [19, 22, 7]. On the other hand, VLMs, designed for visual inputs, tend to miss the deeper semantic and relational pattern...
https://arxiv.org/abs/2505.21771v1
structured layouts. Additionally, most cells featured stylized text (e.g., colored or bolded) rather than embedded visual content, limiting their multimodal depth and complexity. Visual Reasoning Without Tables. A parallel line of work has focused on visual QA using standalone charts, diagrams, and plots. Datasets such...
https://arxiv.org/abs/2505.21771v1
ensure real-world diversity, we collected tables from a variety of publicly accessible sources, including Google Images, Wikipedia, Amazon, Zara, and domain-specific platforms like Premier League and weather websites. This diverse set of sources allows MMTBench to capture a wide range of varying images and table struct...
https://arxiv.org/abs/2505.21771v1
meaningful insights. This includes Color-based, Shape-based, Pattern, Text-in-Image analysis, and Entity Identification questions. • "Others", covering a broad range of inquiries that do not align with the specified reasoning types. These include question types such as Geographical-Based, Distance-Based, Common Sense...
https://arxiv.org/abs/2505.21771v1
Table as Image Baseline encompassed a broader range of models, including Google's Gemini 1.5 Flash and Gemini 2.0 Flash, OpenAI's GPT-4o Mini, OpenGVLab's InternVL2.5-8B [14], TIGER-Lab's Mantis-8B-Idefics2 [11], Microsoft's Phi-3.5-Vision-Instruct [1], Qwen 2.5-VL-7B-Instruct [34], and Table-Llava-1.5-7b-hf [39]....
https://arxiv.org/abs/2505.21771v1
[Flattened results; three metric columns repeated for four question groups, headers truncated in source]
(…) 11.43 0.062 | 12.68 14.49 0.063 | 15.77 16.52 0.060 | 10.95 11.30 0.050
Interleaved Baseline
Gemini 1.5: 34.38 35.24 0.247 | 31.55 31.52 0.210 | 20.33 20.47 0.119 | 26.29 25.65 0.175
Gemini 2.0 Flash: 37.27 38.47 0.272 | 34.08 37.46 0.231 | 24.59 25.75 0.142 | 26.38 28.76 0.176
GPT-4o mini: 47.74 49.88 0.376 | 46.92 48.96 0.348 | 36.41 37.84 0...
https://arxiv.org/abs/2505.21771v1
[Flattened results; three metric columns repeated per group, headers truncated in source]
(…) 0.205 | 41.02 44.32 0.270 | - - -
Mixtral-8x7B: 80.17 83.34 0.643 | 34.05 39.97 0.237 | 44.16 49.18 0.317 | - - -
Image Captioning Baseline
Gemini 1.5 Flash: 47.90 49.29 0.369 | 13.68 15.38 0.074 | 25.13 27.47 0.165 | 22.72 23.76 0.186
Gemini 2.0 Flash: 48.14 51.87 0.358 | 19.60 21.43 0.092 | 28.48 33.31 0.179 | 29.15 30.22 0.235
Table as an I...
https://arxiv.org/abs/2505.21771v1
Entity Only, where there is only one type of entity; Multiple Entities, where there is more than one type of entity in the table; Entities along with Maps; Entities along with Charts; Single Chart Type; Multiple Chart Types; Maps Only; and Visualizations. Since these types refer to an image, for this analysis we have c...
https://arxiv.org/abs/2505.21771v1
this work warrants consideration. As benchmarks guide model development, there is a risk of overfitting to evaluation tasks at the expense of real-world generalization. In high-stakes domains—such as public health or finance—models that misinterpret visual or tabular data may produce inaccurate summaries or misleading...
https://arxiv.org/abs/2505.21771v1
Evaluation (LREC-COLING 2024) , pages 9705–9719, Torino, Italia, May 2024. ELRA and ICCL. [10] Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al. Mixtral of experts. arXiv preprint arXiv...
https://arxiv.org/abs/2505.21771v1
Chen, and Xiaoyong Du. Large language model for table processing: A survey. Frontiers of Computer Science , 19(2):192350, 2025. [23] Haohao Luo, Ying Shen, and Yang Deng. Unifying text, tables, and images for multimodal question answering. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Findings of the Associati...
https://arxiv.org/abs/2505.21771v1
al. Cc-ocr: A comprehensive and challenging ocr benchmark for evaluating large multimodal models in literacy. arXiv preprint arXiv:2412.02210, 2024. [37] Edouard Yvinec, Arnaud Dapogny, Matthieu Cord, and Kevin Bailly. Spiq: Data-free per-channel static input quantization. In Proceedings of the IEEE/CVF Winter Confer...
https://arxiv.org/abs/2505.21771v1
found in online archives. This figure displays three vintage science fiction book covers from the mid-20th century, each featuring imaginative artwork by noted illustrators. The covers are for books by Philip K. Dick, Isaac Asimov, and Kendell F. Crossen. Question: Who illustrated the 1957 edition of *Eye in the Sky* by...
https://arxiv.org/abs/2505.21771v1
[Flattened results; three metric columns repeated for four groups, headers truncated in source]
(…) 34.67 0.20
GPT-4o mini: 45.83 47.26 0.28 | 43.14 44.48 0.27 | 52.99 55.61 0.34 | 38.56 41.39 0.27
Mantis-8B-Idefics2: 34.80 36.59 0.19 | 34.24 36.19 0.18 | 40.66 41.27 0.22 | 42.38 42.99 0.23
Phi-3.5: 39.75 41.65 0.21 | 38.28 42.32 0.24 | 44.23 46.71 0.25 | 33.28 34.08 0.20
Qwen-2.5-VL: 29.65 61.37 0.19 | 29.06 64.65 0.18 | 30.10 76.22 0.18 | 30...
https://arxiv.org/abs/2505.21771v1
[Flattened results; two metric columns repeated for six groups, headers truncated in source]
2.0 Flash: 19.0 0.084 | 16.9 0.094 | 15.8 0.037 | 24.8 0.090 | 2.6 0.014 | 25.9 0.067
GPT-4o mini: 32.0 0.230 | 32.0 0.201 | 27.4 0.188 | 33.0 0.224 | 10.6 0.073 | 37.3 0.240
Llama 3-8B: 28.1 0.188 | 29.7 0.182 | 21.3 0.137 | 29.1 0.145 | 0.0 0.000 | 25.4 0.160
Mixtral-8x7B: 35.3 0.273 | 33.3 0.247 | 28.4 0.209 | 36.7 0.233 | 3.4 0.025 | 38.0 0.316
Entity Replac...
https://arxiv.org/abs/2505.21771v1
or identify the image. • Reasoning Errors – This category includes mistakes where the image is correctly identified, but the logic used to answer the question is flawed. Such errors typically involve incorrect inferences, faulty assumptions, or logical inconsistencies. • Identification of Visual Attributes – Errors in th...
https://arxiv.org/abs/2505.21771v1
Entity Identification and Disambiguation errors occur at similar rates, as the model misidentifies or fails to recognize entities, likely due to insufficient training data and over-reliance on context. Structural Errors show difficulty in interpreting complex tabular data, including hierarchical structures and nested t...
https://arxiv.org/abs/2505.21771v1
Figure 10 shows the prompt for the Image Captioning Baseline; Figure 11 shows the prompt for the Table as Image Baseline; and Figure 12 shows the prompt for the Interleaved Baseline. Prompt: You will be provided a table in a pipe-separated table where all the entities have been removed. Your task is to: Step 1: UNDERST...
https://arxiv.org/abs/2505.21771v1
questions provided and explore **ALL TYPES OF REASONING** to find answers. Table with Captions Step 5: PROVIDE ANSWERS IN Format Ensure that all answers adhere strictly to the FORMAT specified. ALWAYS PROVIDE YOUR ANSWERS IN THIS FORMAT. Now I will provide you with the questions. Questions Based on the steps I mentione...
https://arxiv.org/abs/2505.21771v1
DualSchool: How Reliable are LLMs for Optimization Education? Michael Klamkin∗†‡ Arnaud Deza†‡ Sikai Cheng‡ Haoruo Zhao‡ Pascal Van Hentenryck‡ Abstract Consider the following task taught in introductory optimization courses which addresses challenges articulated by the community at the intersection of (generative) AI and...
https://arxiv.org/abs/2505.21775v1
models specialized to structured data, a relatively under-studied but extremely valuable competency. Because of the simplicity of P2DC and the availability of the P2DC instructions and instances in the training corpus of LLMs, students in optimization classes may reasonably expect that LLMs would perform well on the P2...
https://arxiv.org/abs/2505.21775v1
organized as follows. Section 2 discusses related work. Section 3 introduces the P2DC task and Section 4 introduces the CGED algorithm. Then, Section 5 presents the experimental results and Section 6 concludes the paper. 2 Related Work This section reviews related work at the intersection of large language models and o...
https://arxiv.org/abs/2505.21775v1
detailed introduction to linear programming duality. Example: Production Planning Consider the production planning problem where a factory manager is tasked with finding the most profitable production plan given a fixed amount of resources: wood (W) and steel (S). In this example, the factory can produce a number of ...
https://arxiv.org/abs/2505.21775v1
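The primal-to-dual conversion (P2DC) task introduced above can be sketched for the standard form min c'x s.t. Ax ≥ b, x ≥ 0, whose dual is max b'y s.t. A'y ≤ c, y ≥ 0. The dict representation and the tiny example are illustrative, not the paper's instance format.

```python
# Sketch of P2DC for min c'x s.t. Ax >= b, x >= 0:
# the dual is max b'y s.t. A'y <= c, y >= 0 (objective and rhs swap, A transposes).
def dual(lp):
    A_T = [list(col) for col in zip(*lp["A"])]
    return {"sense": "max", "c": lp["b"], "A": A_T, "b": lp["c"], "row_sense": "<="}

primal = {"sense": "min", "c": [5, 4], "A": [[2, 3]], "b": [1], "row_sense": ">="}
d = dual(primal)  # max 1*y s.t. 2y <= 5, 3y <= 4, y >= 0
```

By strong duality the two problems share the optimal value (here 4/3, attained at x = (0, 1/3) and y = 4/3), which is one way a checker can sanity-test a student's or an LLM's conversion.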
cannot handle basic symmetries that are either within declarations, such as the order of terms in a constraint, or span across multiple declarations, such as variable sign convention. Optimal Value Most prior work uses an optimal value check, often referred to as execution accuracy, to establish the correctness of a gi...
https://arxiv.org/abs/2505.21775v1
sign and slack variables. Although NGED itself includes some canonicalization such as converting the objective sense to minimization and single-sided inequalities to less-than sense, it fails to treat these convention differences, leading to many false negatives as demonstrated in Section 5. The following paragraphs de...
https://arxiv.org/abs/2505.21775v1
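The two canonicalizations named above, converting the objective sense to minimization and single-sided inequalities to less-than sense, can be sketched as follows. The (coeffs, sense, rhs) row format is an illustrative assumption, not NGED's actual representation.

```python
# Sketch: canonicalize an LP by (1) flipping max objectives to min via negation,
# and (2) rewriting ">=" rows as "<=" by negating both sides.
def canonicalize(obj_sense, obj, rows):
    if obj_sense == "max":
        obj = [-c for c in obj]
    canon_rows = []
    for coeffs, sense, rhs in rows:
        if sense == ">=":
            coeffs, sense, rhs = [-c for c in coeffs], "<=", -rhs
        canon_rows.append((coeffs, sense, rhs))
    return obj, canon_rows

obj, rows = canonicalize("max", [3, 2], [([2, 3], ">=", 1)])
```

Differences of convention that such steps do not remove (sign of dual variables, slack handling) are exactly what leads to the false negatives discussed in the text.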
primal inequality constraints. Note that although CGED is designed specifically for the P2DC setting, the canonicalization procedures can be used more broadly to detect equivalence between formulations or as a normalization procedure for systems that take a linear program as input. For other applications, it is impor...
https://arxiv.org/abs/2505.21775v1
Results for P2DC GENERATION Table 2 reports the CGED, NGED, and OBJ accuracies across four benchmark datasets under both 0-shot and 1-shot prompting conditions. The Execution accuracy (Exec%) column reports the percentage of instances for which the LLM code successfully produced an MPS file. Overall, even though Exec%...
https://arxiv.org/abs/2505.21775v1
with accuracies below 60% across all models and error types. Similarly to the GENERATION task, the Phi 4 and Llama 3.3 models outperform the others. These uniformly low accuracies – even on error types that are relatively easy to detect as shown in the next section – reveal that CORRECTION is essentially just as challen...
https://arxiv.org/abs/2505.21775v1
as reinforcement learning with symbolic feedback [ 7] that can leverage rich reward signals. Future directions include extending DUALSCHOOL to quadratic and conic formulations and evaluating its efficacy as a fine-tuning dataset. Acknowledgements This research was partly funded by NSF awards 2112533 and DGE-2039655. An...
https://arxiv.org/abs/2505.21775v1
Yifeng Lu, Hanxiao Liu, Quoc V Le, Denny Zhou, and Xinyun Chen. Large language models as optimizers. arXiv preprint arXiv:2309.03409 , 2023. [14] Pepper Miller and Kristen DiCerbo. Llm based math tutoring: Challenges and dataset, 2024. [15] Jakub Macina, Nico Daheim, Ido Hakimi, Manu Kapur, Iryna Gurevych, and Mrinmaya...
https://arxiv.org/abs/2505.21775v1
[33] Jeff Bezanson, Alan Edelman, Stefan Karpinski, and Viral B Shah. Julia: A fresh approach to numerical computing. SIAM Review , 59(1):65–98, 2017. doi: 10.1137/141000671. URL https://epubs.siam.org/doi/10.1137/141000671 . 11 [34] Yansen Zhang, Qingcan Kang, Wing Yin Yu, Hailei Gong, Xiaojin Fu, Xiongwei Han, Tao Zh...
https://arxiv.org/abs/2505.21775v1
NGED: 1. Variable nodes have only one feature c_i, compared to c_i, l_i, and u_i in NGED. This is due to the fact that variable bounds are included in the constraint nodes. 2. Constraint nodes have only one feature b_j, compared to l_j and u_j in NGED, since in CGED constraints are reformulated to a_j^T x ≥ b_j rather than l_j ≤ a_j^T x ≤ u_j. T...
https://arxiv.org/abs/2505.21775v1
Inference is performed via Ollama. Compute Resources The paper used in total about 1000 GPU-hours to run LLM inference for both the experiments presented and those run during development. The evaluations are run on CPU and are relatively fast, adding up to only about 1 CPU-hour. All experiments were run on a node with...
https://arxiv.org/abs/2505.21775v1
with only ChatGPT 4o achieving a non-zero CGED. B.3 Common mistakes An informal analysis reveals the following common mistakes made by LLMs in the GENERATION task (when the response is almost correct): • gurobipy defaults – When declaring a new variable in gurobipy, the default lower bound is zero. Sometimes, when atte...
https://arxiv.org/abs/2505.21775v1
and change its sense.
min 5x1 + 4x2 s.t. 2x1 + 3x2 ≥ 1, x1 ≥ 0, x2 ≥ 0 ⟹ min 5x1 + 4x2 s.t. 2x1 + 3x2 ≤ 1, x1 ≥ 0, x2 ≥ 0
Flipped Bound Sense Randomly select a variable and change the sense of its bound constraint.
min 5x1 + 4x2 s.t. 2x1 + 3x2 ≥ 1, x1 ≥ 0, x2 ≥ 0 ⟹ min 5x1 + 4x2 s.t. 2x1 + 3x2 ≥ 1, x1 ≥ 0, x2 ≤ 0
D Prompt Formats Figure 5 includes a...
https://arxiv.org/abs/2505.21775v1
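The "flipped sense" perturbations illustrated above can be sketched as a small generator. The dict row representation is illustrative; the paper's benchmark presumably operates on its own LP format.

```python
import random

# Sketch of the Flipped Constraint Sense perturbation: pick one constraint row
# and invert its inequality direction, leaving everything else untouched.
def flip_constraint_sense(rows, rng=random):
    rows = [dict(r) for r in rows]  # copy so the original LP is preserved
    r = rng.choice(rows)
    r["sense"] = "<=" if r["sense"] == ">=" else ">="
    return rows

rows = [{"coeffs": [2, 3], "sense": ">=", "rhs": 1}]
flipped = flip_constraint_sense(rows)  # the single row's sense is inverted
```

A Flipped Bound Sense perturbation would be analogous, targeting a variable's bound row instead of a general constraint.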
arXiv:2505.21784v1 [cs.AI] 27 May 2025. Towards Safety Reasoning in LLMs: AI-agentic Deliberation for Policy-embedded CoT Data Creation. Tharindu Kumarage1,2, Ninareh Mehrabi1, Anil Ramakrishna1, Xinyan Zhao1, Richard Zemel1, Kai-Wei Chang1, Aram Galstyan1, Rahul Gupta1, Charith Peris1. 1 Amazon Nova Responsible AI, 2 Arizona St...
https://arxiv.org/abs/2505.21784v1
on Agentic Iterative Deliberation for SAFEty reasoning (AIDSAFE), designed to generate high-quality policy-embedded CoT datasets without requiring an expensive reasoning-capable generator. Our approach leverages collaborative reasoning and refinement in a multi-agent environment to generate high-quality thoug...
https://arxiv.org/abs/2505.21784v1
we provide a detailed explanation of the safety policies we used, the initialization process, the deliberation stage, and the refinement stage. 2.1 Safety Policies Our experiments incorporate five key safety policies derived from existing literature (Qi et al., 2023): Hate-Harass-Violence, Fraud and Deception, Ph...
https://arxiv.org/abs/2505.21784v1
This structured exchange ensures that the final response reflects a thorough examination of the query and the associated safety policies. 2.4 Refinement Stage Once the deliberation stage concludes, all generated thoughts from each round are aggregated to form the complete CoT, and the final response from the last rou...
https://arxiv.org/abs/2505.21784v1
consider single LLM generations, where CoTs are produced by directly prompting Mixtral 8x22B without any agentic deliberation process (which we will denote as LLM ZS in subsequent sections). One key design choice in this evaluation is selecting appropriate evaluators. Previous studies have demonstrated that evaluat...
https://arxiv.org/abs/2505.21784v1
that generated responses are faithful to the CoTs, we measure the faithfulness between the response and CoT. We use the Claude-3 Sonnet auto-grader to evaluate faithfulness on a scale of 1-5, where 1 indicates minimal faithfulness and 5 indicates complete adherence. The full grading rubric is provided in the Appendix...
https://arxiv.org/abs/2505.21784v1
model, to study the effects of safety reasoning training from scratch, and Qwen 2.5, an already safety-trained model, to understand how additional safety reasoning training impacts performance. We utilize Hugging Face's SFT trainer with 4-bit quantization using QLoRA (Dettmers et al., 2024). Additional details and pa...
https://arxiv.org/abs/2505.21784v1
baseline. Importantly, even with only 5,000 safety reasoning samples, we achieve an increase of in-domain safety by 20% (from 76% to 96%) and out-of-domain by 54.95% (from 31.00% to 85.95%) compared to the base model. Additional safety training may override pre-trained safety: Qwen is already safe due to extens...
https://arxiv.org/abs/2505.21784v1
follows SFT in the current standard LLM training pipeline. Out of a variety of techniques that are widely used for this phase (Wang et al., 2024b), we pick Direct Preference Optimization (DPO) for our work here. The alignment training phases generally use preference data, formatted as a prompt paired with two respon...
https://arxiv.org/abs/2505.21784v1
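The preference-based training mentioned above can be sketched with the standard DPO objective on one (chosen, rejected) pair, here written over summed log-probabilities under the policy and a frozen reference model. This is the textbook formulation (Rafailov et al.), not this paper's training code, and the numeric values are illustrative.

```python
import math

# Sketch of the DPO loss for one preference pair:
# loss = -log sigmoid(beta * [(logp_w - ref_logp_w) - (logp_l - ref_logp_l)]),
# where y_w is the chosen (selected) and y_l the rejected response.
def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)

# Chosen response gained probability vs. the reference, rejected one lost it.
loss = dpo_loss(logp_w=-10.0, logp_l=-12.0, ref_logp_w=-11.0, ref_logp_l=-11.0)
```

In this setting the selected CoTs would come from AIDSAFE-style deliberation and the rejected CoTs from the adversarial ear-whisperer generations.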
jointly optimizes belief augmentation through adversarial probing and feedback. In our adaptation, we iteratively train the adversarial ear-whisperer agent by continuously refining its deceptive belief generations based on interactions with the target LLM. Each iteration involves using the target LLM to generate beli...
https://arxiv.org/abs/2505.21784v1
Additionally, we introduce an adversarial ear-whisperer agent that enables us to overcome the limitations of standard sampling techniques, which fail to distinguish selected and rejected CoTs for preference learning. By leveraging belief augmentation and iterative ICL, this method ensures that rejected CoTs exhibit p...
https://arxiv.org/abs/2505.21784v1
for reasoning must be carefully designed to ensure they account for diverse ethical considerations, such as privacy, fairness, and non-discrimination. It is essential that the policies are constructed in an inclusive manner and reflect the values of a wide range of stakeholders to avoid unintentional biases in the re...
https://arxiv.org/abs/2505.21784v1