\( \frac{\pi}{4} \) radians to degrees: \[ \frac{\pi}{4} \times \frac{180^\circ}{\pi} = 45^\circ \] 3. Compare the Answers: - Extracted Answer in degrees: 45° - Ground Truth Answer converted to degrees: 45°. Conclusion: Both answers represent the same angle (45 degrees) but expressed differently. \boxed{1} Figure 7: Example ...
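The unit conversion in the worked example above can be sanity-checked in a couple of lines of Python (a minimal sketch; the function name is ours):

```python
import math

def rad_to_deg(rad: float) -> float:
    """Convert an angle in radians to degrees: multiply by 180/pi."""
    return rad * 180.0 / math.pi

# pi/4 radians is 45 degrees, matching the conversion above.
print(math.isclose(rad_to_deg(math.pi / 4), 45.0))
```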
https://arxiv.org/abs/2505.22203v1
numbers $x$ and $y$, we have $$f\left(x^{2}\right)+f\left(y^{2}\right)=f(x+y)^{2}-2 x y$$ Let $S=\sum_{n=-2019}^{2019} f(n)$. Determine the number of possible values of $S$. Ground Truth Answer: 2039191 Extracted Answer: i-YCZ>o:g#1\'g1&8>GOxwuy2>T.k&&Wv\'S$~{4UWCn]\'8OU-bAem"Bc\'>ZY0,Zf#HAQa=P{&<TsiZ1,g23tm2)yvUqyD;D...
https://arxiv.org/abs/2505.22203v1
arXiv:2505.22224v1 [cs.LG] 28 May 2025 Solver-Free Decision-Focused Learning for Linear Optimization Problems. Senne Berden, Ali İrfan Mahmutoğulları, Dimos Tsouros, Tias Guns. Department of Computer Science, KU Leuven. {senne.berden,irfan.mahmutogullari,dimos.tsouros,tias.guns}@kuleuven.be Abstract Mathematical optimiz...
https://arxiv.org/abs/2505.22224v1
that gradient-based DFL involves solving the optimization problem with the predicted parameters during each loss evaluation, to assess the impact of the predictions on decision quality. In this paper, we address this issue and greatly improve the efficiency of DFL for linear optimization problems, which have received m...
https://arxiv.org/abs/2505.22224v1
predictions ĉ. Unlike conventional regression, the objective in training is not to maximize the accuracy of the predicted costs ĉ. Rather, the aim is to make predictions that maximize the quality of the resulting decisions. This is measured by the regret, which expresses the suboptimality of the decisions z⋆(ĉ) made with...
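The regret can be made concrete with a small enumeration sketch (the helper names and the toy feasible set below are ours, for illustration): for a minimization objective, it compares the true cost of the decision induced by the predicted costs against the true optimum.

```python
import numpy as np

def best_decision(cost, feasible):
    """Pick the feasible decision minimizing the linear objective cost . z."""
    return min(feasible, key=lambda z: float(np.dot(cost, z)))

def regret(c_hat, c_true, feasible):
    """Suboptimality, under the true costs, of the decision made
    using the predicted costs."""
    z_pred = best_decision(c_hat, feasible)
    z_opt = best_decision(c_true, feasible)
    return float(np.dot(c_true, z_pred) - np.dot(c_true, z_opt))

# Toy feasible set: choose exactly one of three options (unit vectors).
Z = [np.array(v, dtype=float) for v in ([1, 0, 0], [0, 1, 0], [0, 0, 1])]
c_true = np.array([1.0, 2.0, 3.0])
c_hat = np.array([2.5, 2.0, 3.0])  # mispredicts option 0 as expensive
print(regret(c_hat, c_true, Z))    # true cost 2.0 incurred vs. optimum 1.0
```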
https://arxiv.org/abs/2505.22224v1
or changes it discontinuously (leading to nonexistent gradients). Most work on DFL has focused on circumventing this obstacle. Three general types of approaches can be distinguished. We briefly discuss them here, but refer the reader to [26] for a comprehensive overview of the field. The first type of approach is based...
https://arxiv.org/abs/2505.22224v1
present a novel loss function that uses only the adjacent vertices of the optimal solutions. We then describe how these adjacent vertices can be efficiently precomputed. 4.1 Loss based on adjacent vertices We construct a loss function based on the following proposition (which we prove in the appendix). Proposition 4.1....
https://arxiv.org/abs/2505.22224v1
is computationally efficient to evaluate, as it does not require computing z⋆(ĉ) through a call to a solver. Instead, it only involves dot products between ĉ and a set of precomputed solution vectors, making it highly efficient. The LAVA loss is somewhat related to the recently proposed CaVE [40], particularly in term...
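As a rough illustration of a loss built only from dot products with precomputed vectors, consider the following hinge-style sketch. This is our own simplification, not the paper's exact LAVA loss (see its Section 4.1 and Proposition 4.1 for the actual definition): it penalizes predicted costs under which some precomputed adjacent vertex (nearly) beats the true optimal vertex.

```python
import numpy as np

def adjacent_vertex_loss(c_hat, z_star, z_adjacent, eps=0.1):
    """Hinge-style surrogate: penalize predictions under which an adjacent
    vertex (nearly) beats the true optimum z_star. Only dot products with
    precomputed vectors are needed, so no solver call is made."""
    loss = 0.0
    for z_adj in z_adjacent:
        gap = float(np.dot(c_hat, z_star) - np.dot(c_hat, z_adj))
        loss += max(0.0, gap + eps)
    return loss

z_star = np.array([1.0, 0.0])        # true optimal vertex
adjacent = [np.array([0.0, 1.0])]    # its precomputed adjacent vertices
print(adjacent_vertex_loss(np.array([1.0, 5.0]), z_star, adjacent))  # keeps z_star optimal
print(adjacent_vertex_loss(np.array([5.0, 1.0]), z_star, adjacent))  # adjacent vertex wins
```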
https://arxiv.org/abs/2505.22224v1
Let d be the unit vector e_j ∈ R^n
16: for k = 1 . . . m do
17:     Set direction d_{B_curr(k)} = d′_k for basic variable B_curr(k)
18: Add new adjacent vertex Z_adj = Z_adj ∪ {z⋆ + θ⋆ d}
19: else ▷ Degenerate pivot
20:     Identify B_curr(i) as the leaving basic variable according to the TNP rule
21:     Construct new basis B_new and corresponding N_new by repl...
https://arxiv.org/abs/2505.22224v1
applies the TNP rule to select a leaving variable (line 20). If the resulting basis is not in Visited yet, it is added to the Queue for later exploration. The Visited set ensures that each basis is processed only once, preventing redundant pivot operations. 4.3 Putting it together. Given a dataset D = {(x, c, z⋆(c))} or D = {...
https://arxiv.org/abs/2505.22224v1
grown throughout training [28]. (Figure panels: Test Regret vs. Training Time (s) for Random LP, Multi-dim. Knapsack, and Shortest Path; methods: MSE, SPO+, PFYL, NCE, CaVE, LAVA (Ours).) Figure 2: Comparison of the efficiency of diffe...
https://arxiv.org/abs/2505.22224v1
scale) of the various methods (full results with standard errors are given in tabular format in the appendix). LAVA is Pareto-efficient on all (Figure panels: Test Regret and Training Time (s) vs. Number of Items (50, 100, 200, 400) and vs. Number of Dimensions (1, 2, 4, 8).) ...
https://arxiv.org/abs/2505.22224v1
it scales well with problem size. Directions for future work include improving the performance of LAVA on highly degenerate problems, possibly through problem-specific adjacent vertex computation methods or heuristics. Another avenue is to investigate ways for LAVA to utilize the ground-truth cost coefficients when ava...
https://arxiv.org/abs/2505.22224v1
improved n-tree algorithm for the enumeration of all neighbors of a degenerate vertex. Annals of Operations Research, 46:361–391, 1993. [18] Bruce Golden, Linus Schrage, Douglas Shier, and Lida Anna Apergi. The unexpected power of linear programming: an updated collection of surprising applications. Annals of Operatio...
https://arxiv.org/abs/2505.22224v1
P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011. [34] Marin Vlastelica Pogančić, Anselm Paulus, Vit Musil, Georg Martius, and Michal Rolinek. D...
https://arxiv.org/abs/2505.22224v1
Method               Random LP    Multi-dim. Knapsack  Shortest Path
LAVA (ϵ = 0.1)       0.016±0.001  0.086±0.002          0.060±0.012
LAVA (ϵ = 0)         0.023±0.001  0.127±0.007          0.061±0.008
LAVA (no threshold)  0.166±0.009  0.156±0.003          0.243±0.022
C More information on the TNP rule
When a vertex z⋆ is degenerate, it has multiple corresponding bases. This m...
https://arxiv.org/abs/2505.22224v1
we must start from a transition node associated with z⋆. If our initial basis is not a transition node, we can obtain one by applying random pivots (with respect to an anti-cycling rule [8]). Since the starting node is now a transition node, there will be a column d of D = −B⁻¹N such that min_{i | d_i < 0} −z⋆_{B_curr(i)} / d_i > ...
https://arxiv.org/abs/2505.22224v1
max Σ_{i∈I} c_i z_i (10)
s.t. Σ_{i∈I} w_{ij} z_i ≤ W_j, ∀j ∈ {1, . . . , m} (11)
z_i ∈ {0, 1} ∀i ∈ I. (12)
The objective function (10) maximizes the total value of the selected items. Constraints (11) ensure that the weight of the selected items does not exceed the capacity for each dimension j. The domain constraint (12) specifies that each item ...
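The model (10)–(12) can be checked on a toy instance with a brute-force sketch (function and variable names are ours; only practical for small n):

```python
from itertools import product

def solve_mdkp(values, weights, capacities):
    """Brute-force the multi-dimensional knapsack (10)-(12): maximize total
    value subject to one capacity constraint per dimension.
    weights[i][j] is item i's weight in dimension j."""
    n, m = len(values), len(capacities)
    best_val, best_z = 0, (0,) * n
    for z in product((0, 1), repeat=n):
        if all(sum(weights[i][j] * z[i] for i in range(n)) <= capacities[j]
               for j in range(m)):
            val = sum(values[i] * z[i] for i in range(n))
            if val > best_val:
                best_val, best_z = val, z
    return best_val, best_z

# Two capacity dimensions, three items.
val, z = solve_mdkp(values=[6, 5, 4],
                    weights=[[3, 2], [2, 3], [2, 2]],
                    capacities=[4, 4])
print(val, z)  # only single-item selections fit; item 0 is most valuable
```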
https://arxiv.org/abs/2505.22224v1
0.144±0.023
SPO+         0.015±0.001  0.082±0.001  0.052±0.007
PFYL         0.017±0.001  0.082±0.001  0.055±0.010
NCE          0.025±0.001  0.159±0.003  0.151±0.016
CaVE         0.017±0.001  0.111±0.002  0.080±0.014
LAVA (Ours)  0.016±0.001  0.086±0.002  0.060±0.012
Table 3: Training times with standard errors for different methods
Method  Random LP  Mu...
https://arxiv.org/abs/2505.22224v1
arXiv:2505.22232v1 [cs.CL] 28 May 2025 Judging Quality Across Languages: A Multilingual Approach to Pretraining Data Filtering with Language Models. Mehdi Ali1,2†, Manuel Brack3,5†, Max Lübbering1,2†, Elias Wendt5†, Abbas Goher Khan1†, Richard Rutmann1,2, Alex Jude2, Maurice Kraus5, Alexander Arno Weber1,2, Felix Stollenwerk6, David Kaczé...
https://arxiv.org/abs/2505.22232v1
(Judging Quality across Languages)1 comprising the four stages outlined in Fig. 1. With minimal human supervision and small amounts of distilled annotation data, we are able to train lightweight regressors for efficient filtering of multilingual, large-scale data at low computational cost. JQL is language agnostic an...
https://arxiv.org/abs/2505.22232v1
the Appendix. transfer capabilities of lightweight annotator models, evaluating how well judgment abilities generalize to unseen languages (Sec. 4). (4) Demonstration that our approach leads to high-quality pre-training datasets that improve the downstream performance of LLMs (Sec. 5). 2 Collecting Human Annotations...
https://arxiv.org/abs/2505.22232v1
a spread >3. Upon manual inspection, we found that the educational value of these examples is indeed highly subjective, which resulted in disagreement between annotators. Overall, our rigorous annotator training and data cleaning procedure has resulted in a reliable ground truth, suitable for robustly evaluating M...
https://arxiv.org/abs/2505.22232v1
(Figure axis labels: Llama-3.3-70B-it, Mistral-3.1-24B-it, Phi-4-14B, Qwen-2.5-14B-it, Qwen-2.5-32B-it, Qwen-2.5-72B-it, Qwen-2.5-7B-it; y-axis 0.40–0.80.) Figure 2: LLMs show varying ranking performance for educational quality. Some models exhibit strong multilingual capabilities. We show Spearman Correlation between model pred...
https://arxiv.org/abs/2505.22232v1
focused on cross-lingual embedding models with long context windows (Zhang et al., 2024; Sturua et al., 2024; Yu et al., 2024). These models efficiently process long web documents and produce well-aligned representations that map semantically equivalent texts across languages to similar embeddings, thus enabling e...
https://arxiv.org/abs/2505.22232v1
of data required to effectively train our lightweight annotation models, we conducted a controlled experiment involving all 35 languages. The performance converged with 500k training samples (App C.4). Building upon the insights gained, we trained our final lightweight annotator models. We used a frozen Snowflake...
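A minimal numpy sketch of the "frozen embedder plus lightweight regression head" recipe described here (the random projection stands in for the actual embedding model, and all data below is synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen embedding backbone (the paper uses a frozen
# Snowflake embedder): a fixed linear map that is never updated.
W_frozen = rng.normal(size=(32, 32))
def embed(x):
    return x @ W_frozen

# Synthetic "documents" whose quality score follows one latent direction.
X_raw = rng.normal(size=(500, 32))
scores = X_raw[:, 0] + 0.1 * rng.normal(size=500)

# Only the lightweight regression head on top of the embeddings is trained.
E = np.c_[embed(X_raw), np.ones(len(X_raw))]   # embeddings + bias column
head, *_ = np.linalg.lstsq(E, scores, rcond=None)
preds = E @ head

corr = float(np.corrcoef(preds, scores)[0, 1])
print(corr > 0.9)  # the head recovers the latent quality signal
```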
https://arxiv.org/abs/2505.22232v1
This dataset originates from Common Crawl WARC files and includes standard preprocessing such as HTML extraction, language identification, and deduplication. Using the unfiltered raw data ensures that our comparisons directly reflect differences introduced by our annotator-driven filtering methods, rather than pr...
https://arxiv.org/abs/2505.22232v1
+6.70 +7.2 Table 2: Percentile-based filtering on JQL annotations provides reliable trade-offs in performance improvements and achieves higher data quality and document retention. Retained tokens and benchmark performance are reported relative to the FW2 baseline and aggregated over 13 languages. Benchmark "Avg." and...
https://arxiv.org/abs/2505.22232v1
in multilingual scenarios, where limited data is available for many languages.
Key Insights:
• JQL outperforms multilingual heuristic filtering.
• Percentile-based filtering is better suited than threshold-based filtering.
• Higher percentile thresholds trade off better data quality against a reduced number of tokens.
6 Gene...
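Percentile-based filtering reduces to computing a score threshold per batch and keeping the documents above it (a sketch; `percentile_filter` is our name, and in the paper's setup the threshold is computed per language):

```python
import numpy as np

def percentile_filter(scores, q):
    """Keep documents whose quality score is at or above the q-th
    percentile of the batch."""
    threshold = np.percentile(scores, q)
    return [i for i, s in enumerate(scores) if s >= threshold]

scores = [0.1, 0.4, 0.35, 0.8, 0.05, 0.6, 0.7, 0.2, 0.9, 0.5]
kept = percentile_filter(scores, 60)
print(kept)  # indices of the highest-scoring documents
```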
https://arxiv.org/abs/2505.22232v1
purpose LLMs. Specifically, annotations and filters judging the educational quality of a document have produced high-quality datasets (Su et al., 2024; Penedo et al., 2024a; Wettig et al., 2024). Multilingual Data Curation Pipelines. Despite these advances in dataset curation, they remain largely English-centric (with ...
https://arxiv.org/abs/2505.22232v1
Lamarr Institute for Machine Learning and Artificial Intelligence (LAMARR22B), as well as by the European Union's Horizon 2020 research and innovation program under grant agreement No. 101135671 (TrustLLM). The authors gratefully acknowledge EuroHPC (https://eurohpc-ju.europa.eu/index_en) and the Barcelona Superco...
https://arxiv.org/abs/2505.22232v1
Conference on Empirical Methods in Natural Language Processing and the International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Roberts Darģis, Guntis Bārzdiņš, Inguna Skadiņa, Normunds Grūzītis, and Baiba Saulīte. 2024. Evaluating open-source LLMs in low-resource languages: Insi...
https://arxiv.org/abs/2505.22232v1
Ian Magnusson, Nguyen Tai, Ben Bogin, David Heineman, Jena D. Hwang, Luca Soldaini, Akshita Bhagia, Jiacheng Liu, Dirk Groeneveld, Oyvind Tafjord, Noah A. Smith, Pang Wei Koh, and Jesse Dodge. 2025. Datadecide: How to predict best pretraining data with small experiments. arXiv preprint arXiv:2504.11393. Tamzeed ...
https://arxiv.org/abs/2505.22232v1
Kenealy, Lucas Beyer, Xiaohai Zhai, Anton Tsitsulin, Robert Busa-Fekete, Alex Feng, Noveen Sachdeva, Benjamin Coleman, Yi Gao, Basil Mustafa, Iain Barr, Emilio Parisotto, David Tian, Matan Eyal, Colin Cherry, Jan-Thorsten Peter, Danila Sinopalnikov, Surya Bhupatiraju, Rishabh Agarwal, Mehran Kazemi, Dan Malkin, Rav...
https://arxiv.org/abs/2505.22232v1
tinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuch...
https://arxiv.org/abs/2505.22232v1
annotations, along with anonymized information about the annotators, for subsequent analysis and anonymized public release. No ethics review board approval was sought, as the study did not fall under institutional requirements for ethical review. Annotator (Anonymized) Background Age Group Annotator 1 MSc. in Computer ...
https://arxiv.org/abs/2505.22232v1
(Figure axis labels: Qwen-2.5-14B-it, Qwen-2.5-32B-it, Qwen-2.5-72B-it, Qwen-2.5-7B-it; y-axis 0–60.) Figure 9: Invalid score predictions (in percent). B LLM-Based Annotator Evaluation. In this section we provide further details and ablations on our LLM-based annotators discussed in Section 3. B.1 Invalid Predictions. Similar to the human annota...
https://arxiv.org/abs/2505.22232v1
the Spearman correlation scores in Figure 11. The p-values were calculated using a two-sided Student’s t-test and indicate the statistical significance of the measured correlations (lower is better). Across all models and languages, even the highest p-values are extremely small. This underpins the statistical significa...
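The significance computation can be sketched as follows. This is a simplified, ties-free version of the Spearman test statistic (scipy's `spearmanr` implements the full version, including the conversion of t to a two-sided p-value); the synthetic data below is ours:

```python
import numpy as np

def spearman_with_t(x, y):
    """Spearman rank correlation, plus the t-statistic behind the two-sided
    significance test: t = r * sqrt((n - 2) / (1 - r^2)). Ties are ignored."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    r = float(np.corrcoef(rank(x), rank(y))[0, 1])
    n = len(x)
    return r, float(r * np.sqrt((n - 2) / (1 - r ** 2)))

rng = np.random.default_rng(0)
model_scores = rng.normal(size=200)
human_scores = model_scores + 0.3 * rng.normal(size=200)  # mostly agreeing labels

r, t = spearman_with_t(model_scores, human_scores)
print(r > 0.9, t > 10)  # a large |t| corresponds to a tiny p-value
```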
https://arxiv.org/abs/2505.22232v1
(Numeric table rows; row and column labels not recoverable.)
https://arxiv.org/abs/2505.22232v1
(Numeric table rows; row and column labels not recoverable.)
https://arxiv.org/abs/2505.22232v1
(Numeric table rows of correlation values; row and column labels not recoverable.)
https://arxiv.org/abs/2505.22232v1
(Numeric table rows; row and column labels not recoverable.)
https://arxiv.org/abs/2505.22232v1
motivates the model-specific threshold for pre-training data sampling. Notably, we found a Spearman correlation of 0.83 between the three models, indicating similar ranking orders despite the scale shifts. embedder gte-multilingual-base jina-embeddings-v3 snowflake-arctic-embed-m-v2 annotator + balancing Gemma-3-27B-it...
https://arxiv.org/abs/2505.22232v1
The Snowflake embedding model consistently outperforms the other backbones across annotators and training set variants. Its best configuration, combined with the Mistral-3.1 annotation model and class-balanced training, yields the highest overall correlation (0.744 ± 0.016). C.3 End-to-End Training: Embedder and Regress...
https://arxiv.org/abs/2505.22232v1
the regression-head-only variant, despite the additional degrees of freedom introduced by updating the full model. This suggests that fine-tuning the embedding model does not offer any additional benefit in our setup and may even hinder performance, likely due to overfitting or insufficient optimization stability under ...
https://arxiv.org/abs/2505.22232v1
corresponds to a regression head trained solely on one specific language, while each column represents the test language. The values in each cell indicate the Spearman correlation between the model’s predictions and human- annotated scores. This exhaustive view highlights the generalization capability of the model acro...
https://arxiv.org/abs/2505.22232v1
(Cells of the cross-lingual Spearman correlation matrix; language labels not recoverable.)
https://arxiv.org/abs/2505.22232v1
(Cells of the cross-lingual Spearman correlation matrix; language labels not recoverable.)
https://arxiv.org/abs/2505.22232v1
(Cells of the cross-lingual Spearman correlation matrix; language labels not recoverable.)
https://arxiv.org/abs/2505.22232v1
(Cells of the cross-lingual Spearman correlation matrix; language labels not recoverable.)
https://arxiv.org/abs/2505.22232v1
(Cells of the cross-lingual Spearman correlation matrix; train languages include mk, mt, nb, nl, nn, pl, pt, ro, sk, sl, sr, sv, tr, uk.)
https://arxiv.org/abs/2505.22232v1
(Cells of the cross-lingual Spearman correlation matrix; language labels not recoverable.)
https://arxiv.org/abs/2505.22232v1
(Cells of the cross-lingual Spearman correlation matrix; language labels not recoverable.)
https://arxiv.org/abs/2505.22232v1
Language   Code  ARC             MMLU                    HellaSwag (source)
Bulgarian  bg    openGPT-X/arcx  openGPT-X/mmlux         openGPT-X/hellaswagX
German     de    openGPT-X/arcx  openai/MMMLU            openGPT-X/hellaswagX
Greek      el    openGPT-X/arcx  CohereLabs/Global-MMLU  openGPT-X/hellaswagX
Spanish    es    openGPT-X/arcx  openai/MMMLU            openGPT-X/hellaswagX
Finnish    fi    openGPT-X/arcx  openGPT-X/mmlux         openGPT-X...
https://arxiv.org/abs/2505.22232v1
(Figure: Gold Label Prop. (%) vs. Training Tokens in Billion; benchmarks: MMLU, Hellaswag, ARC; quality filters: FW2, JQL-Edu-0.6 (Ours, +9.15% tokens), JQL-Edu-0.7 (Ours, −13.96% tokens).) Figure 24: Dataset training performance for Spanish. (Analogous panel for Finnish: Gold Label Prop. (%) vs. Training Tokens in Billion; MMLU, Hellaswag, ARC; FW2, JQL-Edu-0.6 (Our...
https://arxiv.org/abs/2505.22232v1
as for the European languages. For Thai, we even observed better performance than the European language average across all annotators. Consequently, JQL generalizes well to new languages (families). E.2 Further Results In Figs. 34, 35 and 36, we compare the training curves of Arabic, Thai, and Chinese, respectively. Si...
https://arxiv.org/abs/2505.22232v1
(Numeric table rows of correlation values; row and column labels not recoverable.)
https://arxiv.org/abs/2505.22232v1
(Panel (c): token counts across all test languages, with a marker at 512 tokens.) We observe a meaningful percentage of documents longer than 512 tokens. Figure 39: Increased context length of lightweight JQL-annotators improved performance. F.2 Influence of Ranking Performance and Ensembles on Data Quality. In Sec. 3, we observed th...
https://arxiv.org/abs/2505.22232v1
performed on a cluster equipped with several hundred A100 GPUs. K Usage of AI Tools. We made use of AI-assisted tools such as ChatGPT and GitHub Copilot to support writing and coding tasks. All AI-generated outputs were thoroughly validated to ensure their correctness. 12Note that Mistral is shared under Apache Lice...
https://arxiv.org/abs/2505.22232v1
arXiv:2505.22244v1 [cs.AI] 28 May 2025 A Preprocessing Framework for Efficient Approximate Bi-Objective Shortest-Path Computation in the Presence of Correlated Objectives. Yaron Halle1, Ariel Felner2, Sven Koenig3, Oren Salzman1. 1Technion - Israel Institute of Technology, 2Ben-Gurion University, 3University of California, ...
https://arxiv.org/abs/2505.22244v1
precisely (Ehrgott 2005; Breugem, Dollevoet, and van den Heuvel 2017). While exact algorithms have been proposed for BOSP (Skyler et al. 2022; Hernández et al. 2023), we are often interested in approximating the Pareto-optimal solution set (see, e.g., (Perny and Spanjaard 2008; Tsaggouris and Zaroliagis 2009; Goldi...
https://arxiv.org/abs/2505.22244v1
summarized in Fig. 1, consists of a preprocessing phase and a query phase. In the preprocessing phase, regions, or clusters, of the bi-objective graph G with strong correlation between objectives are identified. The set of paths within each cluster that connect vertices that lie on the cluster's boundary is efficientl...
https://arxiv.org/abs/2505.22244v1
is formalized in the following definition. Problem 1. Let G = (V, E, c) be a bi-objective search graph and ε ∈ R^2_{≥0} a user-provided approximation factor. Our problem calls for preprocessing the inputs G and ε such that, given a query in the form v_s, v_t ∈ V, we can efficiently compute Π∗_ε(v_s, v_t). 3 Algorithmic Background. This se...
https://arxiv.org/abs/2505.22244v1
(Diagram labels: A′, C1, C2, π, A, π′, c(e), EA.) Figure 2: (a) A*pex expanding an apex-path pair AP = ⟨A, π⟩ by edge e to obtain AP′ = ⟨A′, π′⟩. (b) GA*pex expanding an apex-path pair AP = ⟨A, π⟩ by an apex-edge pair AE = ⟨EA, e⟩ to obtain AP′ = ⟨A′, π′⟩. (Diagram labels: merge, option 1, option 2.) Figure 3: A*pex m...
https://arxiv.org/abs/2505.22244v1
using only trivial apex-edge pairs is identical to how A*pex expands apex-path pairs using the corresponding edge. 4.2 Generalized A*pex. Formally, let V and E be a vertex set and edge set, respectively, and let c, c′ : E → R^2_{≥0} be two bi-objective cost functions over the edge set E such that c′(e) ⪯ c(e). We define Ĝ := (V, E...
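The componentwise order ⪯ used here is just a pairwise comparison of the two cost components (a two-line sketch; the function name is ours):

```python
def preceq(c_prime, c):
    """c' precedes-or-equals c for bi-objective costs: no component of c'
    exceeds the corresponding component of c."""
    return all(a <= b for a, b in zip(c_prime, c))

print(preceq((2.0, 3.0), (2.0, 5.0)))  # a valid lower-bound cost
print(preceq((4.0, 1.0), (2.0, 5.0)))  # violates the first component
```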
https://arxiv.org/abs/2505.22244v1
Let e ∈ E, δ > 0 be some threshold and ℓ be some two-dimensional line (i.e., ℓ : ax + by + 1 = 0 for some a, b s.t. a^2 + b^2 > 0). We say that an edge e δ-conforms with line ℓ iff dist⊥(ℓ, c(e)) ≤ δ. Here dist⊥(ℓ, c(e)) := |a c_1(e) + b c_2(e) + 1| / √(a^2 + b^2). (1) Definition 3 (correlated cluster). Given a graph G = (V, E), a (δ, ℓ)-correlated cluster ...
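Equation (1) and the δ-conformance test translate directly to code (a sketch with our own function names):

```python
import math

def perp_dist(a, b, cost):
    """Perpendicular distance of a 2-D cost vector to the line
    ax + by + 1 = 0, as in Eq. (1)."""
    c1, c2 = cost
    return abs(a * c1 + b * c2 + 1) / math.sqrt(a * a + b * b)

def delta_conforms(a, b, cost, delta):
    """An edge delta-conforms with the line iff its cost lies within
    distance delta of the line."""
    return perp_dist(a, b, cost) <= delta

# A cost on the line x - y + 1 = 0 conforms for any delta >= 0.
print(delta_conforms(1.0, -1.0, (2.0, 3.0), 0.0))
```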
https://arxiv.org/abs/2505.22244v1
not perfect. To this end, in order to detect distinct linear relationships in the 2-dimensional (C1, C2) space, we utilize RANSAC (Random Sample Consensus) (Fischler and Bolles 1981), similar to Mahmood, Han, and Lee (2020). RANSAC is an iterative method for estimating model parameters from observed data while distingu...
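A minimal RANSAC loop for a single line illustrates the general idea (our simplified sketch; the paper fits lines in the (C1, C2) cost space to separate distinct linear relationships from outliers):

```python
import random

def ransac_line(points, n_iters=200, delta=0.1, seed=0):
    """Minimal RANSAC sketch: repeatedly fit a line y = m*x + k through two
    random points and keep the model with the most inliers within delta."""
    rng = random.Random(seed)
    best_inliers, best_model = [], None
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # skip vertical candidates in this simple sketch
        m = (y2 - y1) / (x2 - x1)
        k = y1 - m * x1
        inliers = [p for p in points if abs(p[1] - (m * p[0] + k)) <= delta]
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (m, k)
    return best_model, best_inliers

# Points mostly on y = 2x, plus two outliers.
pts = [(x, 2 * x) for x in range(10)] + [(3.0, 9.0), (7.0, 1.0)]
model, inliers = ransac_line(pts)
print(model, len(inliers))  # recovers slope 2, intercept 0, 10 inliers
```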
https://arxiv.org/abs/2505.22244v1
which e conforms with. If there exists a line ℓ_u that all these edges conform to (i.e., ⋂_{e∈E_u} L_e ≠ ∅), then a new cluster ψ_u is created and u is added to the cluster's vertex set. Now, a Depth-First Search (DFS) recursion is invoked for each neighboring vertex to expand the cluster. All neighbors are then removed from the s...
https://arxiv.org/abs/2505.22244v1
(Diagram labels: π1, boundary vertices b_i and b_j, costs 100, 80, 90, 28, 30, ε = (0.1, 0.1).) Figure 5: Introducing super-edges connecting the boundary vertices b_i and b_j in cluster ψ. See Example 1 for details. such that running GA*pex on this generalized query graph will allow us to efficiently compute Π∗_ε. Unfortunately, the branching factor of vertices in ˜G may turn out to...
https://arxiv.org/abs/2505.22244v1
an ε-approximation of Π∗ in G. Lazy Edge Expansion. Recall that within a correlated cluster ψ, we connect all pairs of boundary vertices b_i, b_j ∈ B(ψ) by one or more super-edges of the set Ê_{ψ,i,j}. Namely, the number of super-edges introduced is at least quadratic in the number of boundary vertices. Thus, the branching facto...
https://arxiv.org/abs/2505.22244v1
ing factor), as well as preprocessing time and space required for storing the optimal paths abstracted by super-edges. As expected, ˜G consistently has dramatically fewer vertices but a higher branching factor when compared to G. However, these two factors counterbalance each other, and both graphs have comparable numbe...
https://arxiv.org/abs/2505.22244v1
some instances, up to 1000×. 7 Discussion and Future Work. In this work we presented the first practical, systematic approach to exploit correlation between objectives in BOSP. Our approach is based on a generalization of A*pex that is of independent interest, and an immediate question is what other problems can make u...
https://arxiv.org/abs/2505.22244v1
R. E. 1985. Depth-first iterative-deepening: An optimal admissible tree search. Artificial Intelligence, 27(1): 97–109. Mahmood, B.; Han, S.; and Lee, D.-E. 2020. BIM-based registration and localization of 3D point clouds of indoor scenes using geometric features for augmented reality. Remote Sensing, 12(14): 230...
https://arxiv.org/abs/2505.22244v1
Salzman, O.; Felner, A.; Kumar, T. K. S.; Skyler, S.; Ulloa, C. H.; and Koenig, S. 2023a. Towards Effective Multi-Valued Heuristics for Bi-objective Shortest-Path Algorithms via Differential Heuristics. In Symposium on Combinatorial Search (SoCS), 101–109. Zhang, H.; Salzman, O.; Felner, A.; Kumar, T. S.; Ulloa, C...
https://arxiv.org/abs/2505.22244v1
MRT at SemEval-2025 Task 8: Maximizing Recovery from Tables with Multiple Steps. Maximiliano Hormazábal Lagos†, Álvaro Bueno Sáez†, Héctor Cerezo-Costas†, Pedro Alonso Doval†, Jorge Alcalde Vesteiro†. mhormazabal@gradiant.org, abueno@gradiant.org, hcerezo@gradiant.org, palonso@gradiant.org, jalcalde@gradiant.org. †Fundación C...
https://arxiv.org/abs/2505.22264v1
or Python code, respectively. These methods offer advantages such as greater flexibility, as they are theoretically independent of the table size (which might not fit entirely in the context window of an LLM), and greater transparency, by including an intermediate step that allows auditing and reviewing the generat...
https://arxiv.org/abs/2505.22264v1
in the desired data type or selecting the correct number of decimal places in numbers. 3.1 Column Descriptor. This module aims to analyze and understand the content of the table. First, it analyzes the input table, obtaining some statistical data for each column, such as the data type, the number of unique values, if it has ...
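Per-column statistics of the kind mentioned here can be gathered in a few lines with pandas (a sketch; `describe_columns` is our name, and the exact statistics the system collects may differ):

```python
import pandas as pd

def describe_columns(df):
    """Per-column statistics in the spirit of the Column Descriptor:
    data type, number of unique values, and whether nulls are present."""
    return {
        col: {
            "dtype": str(df[col].dtype),
            "n_unique": int(df[col].nunique()),
            "has_nulls": bool(df[col].isna().any()),
        }
        for col in df.columns
    }

df = pd.DataFrame({"name": ["Bulbasaur", "Charmander", None],
                   "type": ["Grass", "Fire", "Fire"]})
desc = describe_columns(df)
print(desc["type"])
```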
https://arxiv.org/abs/2505.22264v1
already preexisting libraries such as autopep8, autoflake, and lib_23 to fix minimal inconsistencies in the Python code syntax. In particular, lib_23 is used to parse Python 2 code into Python 3, and will check for missing commas/parentheses (for example). We also employ AST tree parsing to detect when something d...
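AST-based checking of generated code can be sketched as follows (the specific checks below are our illustrative examples, not the authors' exact rules):

```python
import ast

def code_issues(src):
    """Check whether generated Python parses, and flag a couple of
    easily detected problems from the AST."""
    try:
        tree = ast.parse(src)
    except SyntaxError as exc:
        return [f"syntax error: {exc.msg}"]
    issues = []
    # Example AST-level check: a print call with no arguments.
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "print"
                and not node.args):
            issues.append("empty print call")
    return issues

print(code_issues("print(1, 2"))  # unbalanced parenthesis -> syntax error
print(code_issues("print()"))     # parses, but the AST check flags it
```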
https://arxiv.org/abs/2505.22264v1
GB of graphics memory for performance. In 4.2, we define the model configurations used in the test phase. During development, we also used reduced versions of these models with 8B parameters. 4.1 Dataset splits. Although no training of any model has been performed, the splits of the dataset are shown below. Split Tabl...
https://arxiv.org/abs/2505.22264v1
the Runner, Interpreter and Formatter steps. As can be seen in the results, the heuristics that format the final predictions sometimes introduce additional errors. If we filter the predictions by type of response requested (Table 4), we can see that the lists of items (either numeric or categorical) have a much higher diff...
https://arxiv.org/abs/2505.22264v1
account or not, etc.). The others set accounts for all unclassifiable errors, generally due to ambiguous questions or expected answers, or incorrect ground-truth samples.
Description                    % error
Wrong cell value filtering     14.29
Wrong Instructions             37.66
Wrong code (incl. exceptions)  14.29
Formatting (transformations)   6.4...
https://arxiv.org/abs/2505.22264v1
Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 450–482, Mexico City, Mexico. Association for Computational Linguistics. Jorge Osés Grijalba, Luis Alfonso Ureña-López, Eugenio Martínez Cámara, and Jose Camacho-Collados. 2025. SemEval-2025 task 8: Question answering over ta...
https://arxiv.org/abs/2505.22264v1
{
  "name": "Have you ever use an online dating app?",
  "description": "Indicates whether the respondent has ever used an online dating application."
}
Figure 2: Examples of outputs of the Column Descriptions for two columns. Question: "What is the primary type of the Pokémon with ...
https://arxiv.org/abs/2505.22264v1
arXiv:2505.22271v1 [cs.CR] 28 May 2025 Test-Time Immunization: A Universal Defense Framework Against Jailbreaks for (Multimodal) Large Language Models. Yongcan Yu1,2, Yanbo Wang2,1, Ran He1,2, Jian Liang1,2,†. 1NLPR & MAIS, Institute of Automation, Chinese Academy of Sciences; 2School of Artificial Intelligence, University of Ch...
https://arxiv.org/abs/2505.22271v1
resistance against various jailbreak attacks during testing. In biological immunity, when the body encounters a pathogen for the first time, the immune system identifies it and initiates a targeted response, producing specific antibodies to neutralize the threat. Likewise, TIM treats jailbreak attempts as digital "path...
https://arxiv.org/abs/2505.22271v1
regularization. Simultaneously, we optimize the detector during testing to further enhance its performance. In the experimental section, we evaluate our approach against various jailbreak attacks on both LLMs and MLLMs. The results demonstrate that our framework effectively mitigates jailbreak attempts after detecting ...
https://arxiv.org/abs/2505.22271v1
LVLM-LP [ 54] addresses jailbreak detection by adopting a classifier beyond the first generated token. Another approach by Zhang et al. [52] involves augmenting the input multiple times and using a similarity matrix between responses for detection. However, most of these methods are time-consuming, relying on additiona...
https://arxiv.org/abs/2505.22271v1
detection data into detection memory for training. Then we utilize jailbreak memory M_j to train the LLM's defense LoRA module by supervised fine-tuning and employ detection memory M_d to further train the detector (i.e., TTA) by Equation (4). Additionally, we employ a question-answering dataset D_qa and a detection dataset ...
https://arxiv.org/abs/2505.22271v1
invasion. In our approach, we treat jailbreak activities as pathogens and use the above detector to distinguish them from normal activities. Once pathogens are identified, the organism will initiate an immune response and produce antibodies to neutralize the damage caused by antigens. Following an immune response, the ...
https://arxiv.org/abs/2505.22271v1
token nor adapts during testing (i.e., LLMs with a linear probing binary detector on the last generated token). Furthermore, we compare our detector against detection baselines, including Self Defense [31] and LVLM-LP [54], in LLM experiments. ▷Metrics. We evaluate jailbreak methods from two perspectives: the effecti...
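A linear-probing binary detector of the kind used as a baseline here can be sketched on synthetic "hidden states" (everything below is synthetic; a real probe would be trained on the model's actual last-token activations):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for last-token hidden states (dim 16): jailbreak
# activity is shifted along one direction, normal activity is not.
normal = rng.normal(size=(200, 16))
jailbreak = rng.normal(size=(200, 16)) + np.array([3.0] + [0.0] * 15)
X = np.vstack([normal, jailbreak])
y = np.array([0] * 200 + [1] * 200)

# Linear probe trained with plain logistic-regression gradient steps.
w, b = np.zeros(16), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * float(grad.mean())

acc = float((((X @ w + b) > 0) == (y == 1)).mean())
print(acc > 0.85)  # the probe separates the two activity types well
```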
https://arxiv.org/abs/2505.22271v1
demonstrates strong defensive capabilities, especially against Figstep, where it reduces the ASR to 0%. Similarly, the ASR on MM-SafetyBench is reduced to 7% by Adashield. Despite its effectiveness, Adashield suffers from a noticeable over-defense phenomenon with normal samples, with over 5% of them being rejected. Aft...
https://arxiv.org/abs/2505.22271v1
the last token for classification. This strategy has improved the expressive capacity of our detector.
Table 4: The detection performance under I-FSJ attack.
Methods            ACC (↑)  TPR (↑)  FPR (↓)
Self Defense [31]  64.4     42.9     14.2
LVLM-LP [54]       67.7     36.3     0.8
LP                 88.5     77.4     0.7
TIM (w/o adapt.)   99.1     98.9     0.6
TIM (w/o gist)     99.4     1...
https://arxiv.org/abs/2505.22271v1
that under our method, the ASR can still be reduced to a very low level, while the model’s ability to respond to normal queries remains largely unaffected. ▷Results under Different Jailbreak Data Ratios. In practical applications, the proportion of jailbreak data within the model’s test data is typically not fixed. The...
https://arxiv.org/abs/2505.22271v1
Suvra Ghosal, Souradip Chakraborty, Vaibhav Singh, Tianrui Guan, Mengdi Wang, Ahmad Beirami, Furong Huang, Alvaro Velasquez, Dinesh Manocha, and Amrit Singh Bedi. Immune: Improving safety against jailbreaks in multi-modal llms via inference-time alignment. arXiv preprint arXiv:2411.18688 , 2024. [6]Yichen Gong, Delong ...
https://arxiv.org/abs/2505.22271v1
2024. [23] Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, and Yong Jae Lee. Llava-next: Improved reasoning, ocr, and world knowledge, January 2024. URL https://llava-vl.github.io/ blog/2024-01-30-llava-next/ . [24] Xiaogeng Liu, Nan Xu, Muhao Chen, and Chaowei Xiao. Autodan: Generating stealthy ...
https://arxiv.org/abs/2505.22271v1
2021. [42] Yanbo Wang, Jiyang Guan, Jian Liang, and Ran He. Do we really need curated malicious data for safety alignment in multi-modal large language models? arXiv preprint arXiv:2504.10000 , 2025. [43] Yihan Wang, Zhouxing Shi, Andrew Bai, and Cho-Jui Hsieh. Defending llms against jailbreaking attacks via backtransl...
https://arxiv.org/abs/2505.22271v1
Proc. ICML , 2024. [59] Andy Zou, Zifan Wang, Nicholas Carlini, Milad Nasr, J Zico Kolter, and Matt Fredrikson. Universal and transferable adversarial attacks on aligned language models. arXiv preprint arXiv:2307.15043 , 2023. A The Details of Experimental Setup A.1 Dataset Construction To construct the detection datas...
https://arxiv.org/abs/2505.22271v1
For LLM experiments, we use LLaMA2-7B-chat and LLaMA3-8B-Instruct [40] as the base model. The weights for all base models are sourced from Hugging Face. We set the learning rate, number of epochs, and batch size for detector training to 1e-3, 5, and 32, respectively. We use the Adam optimizer [16] for defense trai...
https://arxiv.org/abs/2505.22271v1
TIM (w/o adapt.) Results under GCG attack . We supplemented the results of the white-box attack, GCG, in Table 8. TIM decreased the ASR from 21.5% to 7.7%, demonstrating its effectiveness against GCG. Performance curve during testing . To demonstrate the performance of our method as the test progresses, we report the r...
https://arxiv.org/abs/2505.22271v1
Natural Language Processing in Support of Evidence-based Medicine: A Scoping Review Zihan Xu1,*, Haotian Ma1,*, Gongbo Zhang2, Yihao Ding3, Chunhua Weng2,Yifan Peng1 1Weill Cornell Medicine,2Columbia University,3University of Sydney Correspondence: yip4002@med.cornell.edu *Authors contributed equally Abstract Evidence-...
https://arxiv.org/abs/2505.22280v1