generally performs poorly across the datasets, which shows the importance of filtering candidate toxic words. (2) ft-SCOPE consistently outperforms SCOPE across all metrics and datasets, revealing a significant domain gap between general CSC pretraining data and toxic speech corpora. Further, our method C2TU-BER...
https://arxiv.org/abs/2505.22184v1
FP and TN rates, and higher precision scores. For example, on the ToxicloakCN dataset, in the correction task, Naive has a precision of 9.74%, while that of our model C2TU-LLM is 90.41%, around 10× larger. In summary, our methods significantly improve precision at the cost of a mild drop in recall, which final...
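As a quick check on the "around 10×" figure, the ratio of the two quoted precisions can be computed directly (a trivial sketch; the TP/FP counts behind these percentages are not given in the excerpt):

```python
# Ratio of the quoted correction-task precisions on ToxicloakCN:
# C2TU-LLM (90.41%) vs. Naive (9.74%).
naive_precision = 9.74
c2tu_llm_precision = 90.41
ratio = c2tu_llm_precision / naive_precision
print(round(ratio, 2))  # 9.28, i.e. "around 10x"
```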
https://arxiv.org/abs/2505.22184v1
probability of a word based only on its preceding context X_pre. This helps us understand the importance of the sentence-probability method that leverages the full context in the filtering stage. We call this method C2TU-WP (Word Probability). Further, filtering candidate toxic words in Algorithm 2 involves multiple rounds. ...
https://arxiv.org/abs/2505.22184v1
pages 4171–4186, 2019. [9] G. Dong, J. Zhao, T. Hui, D. Guo, W. Wang, B. Feng, Y. Qiu, Z. Gongque, K. He, Z. Wang, et al. Revisit input perturbation problems for LLMs: A unified robustness evaluation framework for noisy slot filling task. In CCF International Conference on Natural Language Processing and Chinese Comput...
https://arxiv.org/abs/2505.22184v1
key. Neurocomputing, 490:312–318, 2022. [23] Q. Team. Qwen2.5: A party of foundation models, September 2024. [24] Y. Xiao, Y. Hu, K. T. W. Choo, and R. K.-w. Lee. ToxiCloakCN: Evaluating robustness of offensive language detection in Chinese with cloaking perturbations. arXiv preprint arXiv:2406.12223, 2024. [25] ...
https://arxiv.org/abs/2505.22184v1
in each level and task. At the sentence level, given the sample pair (source sentence, target sentence) and the corresponding corrected sentence, if the length of the corrected sentence is not equal to that of the target sentence, the result is an FN for both the detection and correction tasks. Otherwise, we have the following defin...
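The length-mismatch rule above can be sketched as follows; the equal-length branch is truncated in the excerpt, so its handling here (exact match as TP, otherwise FP) is only a hypothetical placeholder:

```python
def sentence_level_outcome(target: str, corrected: str) -> str:
    """Sentence-level outcome for one (target, corrected) pair."""
    # Stated rule: a corrected sentence whose length differs from the
    # target counts as FN for both detection and correction.
    if len(corrected) != len(target):
        return "FN"
    # Placeholder for the truncated equal-length definitions (assumption):
    return "TP" if corrected == target else "FP"

print(sentence_level_outcome("abcd", "abc"))  # prints FN
```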
https://arxiv.org/abs/2505.22184v1
and connect characters with identical or phonetically similar pinyins, as defined in Section 3.1.1. The details of homographs are shown in Table 5. Toxic Lexicon (1) ToxicloakCN is an enhanced dataset derived from ToxiCN [19], where JioNLP [5] and NMSL4 are applied to perform homophone substitution and semanticall...
https://arxiv.org/abs/2505.22184v1
if len(w(i)) = len(l(j)) then
4:   N ← len(w(i)), flag ← True
5:   for each k ∈ {1, 2, ···, N} do
6:     if G.HasEdge(w(i)_k, l(j)_k) ≠ 1 then
7:       flag ← False
8:     end if
9:   end for
10:  if flag = True then
11:    Wp ← Wp ∪ {(w(i), l(j))}
12:  end if
13: end if
14: end for
15: return Wp

F.2 Filtering Toxic Words Algorithm
Algorithm 2 Filtering Toxic...
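A minimal Python sketch of the pairing loop above (the pinyin graph G is abstracted as a `has_edge` predicate; the toy relation below is an assumption for illustration):

```python
def build_pairs(words, lexicon, has_edge):
    """Collect (w, l) pairs whose characters are pairwise connected in G,
    mirroring the loop above: equal length, then a per-character edge check."""
    Wp = set()
    for w in words:
        for l in lexicon:
            if len(w) == len(l) and all(has_edge(a, b) for a, b in zip(w, l)):
                Wp.add((w, l))
    return Wp

# Toy stand-in for G.HasEdge: treat identical characters as "connected".
pairs = build_pairs(["ab", "cd"], ["ab", "ce"], lambda a, b: a == b)
print(pairs)  # {('ab', 'ab')}
```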
https://arxiv.org/abs/2505.22184v1
“母蛆”, “杠精”, “猎默”, “狗贼”, “鬼母”, “恶心”, “白男”, “傻子”, “混黑”, “杀默”, “断袖”, “西戎”, “国铝”, “沙口”, “强奸”, “母人”, “屠同”, “北狄”, “白皮”, “跪族”, “默妖”, “厌女”, “活该”, “绿帽”, “黑畜”, “畜生”, “黑逼”, “阿娜”, “网暴”, “黑族”, “普信”, “粪蛋”, “傻逼”, “黑粪”, “男同”, “舔黑”, “西八”, “圣母”, “呆子”, “牛马”, “东百”, “喷子”, “同志”, “虫类”, “阿三”, “窑姐”, “拳畜”, “基佬”, “矮子”, “瘪三”, “蛮夷”, “倭寇”, “杂种...
https://arxiv.org/abs/2505.22184v1
Justification: The key claims we make in the abstract and introduction accurately reflect the contribution and scope of the paper. Guidelines: •The answer NA means that the abstract and introduction do not include the claims made in the paper. •The abstract and/or introduction should clearly state the claims made, incl...
https://arxiv.org/abs/2505.22184v1
the paper should be numbered and cross-referenced. •All assumptions should be clearly stated or referenced in the statement of any theorems. •The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof...
https://arxiv.org/abs/2505.22184v1
to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results. 5. Open access to data and code Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully repro...
https://arxiv.org/abs/2505.22184v1
the main claims of the paper. •The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions). •The method for calculating the error bars should be explained (cl...
https://arxiv.org/abs/2505.22184v1
(e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations. •The conference expects that many papers will be foundational research and not t...
https://arxiv.org/abs/2505.22184v1
•If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset. •For existing datasets that are re-packaged, both ...
https://arxiv.org/abs/2505.22184v1
LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required. Answer: [NA...
https://arxiv.org/abs/2505.22184v1
arXiv:2505.22193v1 [quant-ph] 28 May 2025 Physics-inspired Generative AI models via real hardware-based noisy quantum diffusion Marco Parigi1, Stefano Martina1,2*, Francesco Aldo Venturelli1,3, Filippo Caruso1,2 1Department of Physics and Astronomy, University of Florence, Via Sansone 1, Sesto Fiorentino, 50019, Florenc...
https://arxiv.org/abs/2505.22193v1
tasks [9–11], text generation [12], sequential data modeling [13], audio synthesis [8], and are one of the fundamental elements of famous and widespread GenAI technologies such as Stable Diffusion [14], DALL-E 4 [15], and DiffWave [8]. On the other hand, quantum computing is a rapidly emerging technology that harne...
https://arxiv.org/abs/2505.22193v1
65], QVAE [66], and Quantum Diffusion Models (QDMs) [67]. An interesting aspect of QGenAI models is that they allow the integration of computational protocols with physical quantum devices. For example, QGANs have recently been realized using a silicon quantum photonic chip [68]. Moreover, a practical quantum advant...
https://arxiv.org/abs/2505.22193v1
sample $x_t$ at time $t$ is obtained by drawing from the transition kernel:
$$x_t \sim q(x_t \mid x_{t-1}), \tag{1}$$
$$q(x_t \mid x_{t-1}) = \mathrm{Cat}(x_t;\, p = x_{t-1} Q_t), \tag{2}$$
where $\mathrm{Cat}(x; p)$ is the categorical distribution sampling the one-hot row vector $x$ with probability $p$, $x_{t-1}$ is the sample at time $t-1$, and $Q_t$ is the matrix that contains the transition probabilities ...
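A numpy sketch of one forward step from Eqs. (1)-(2), with a toy row-stochastic transition matrix standing in for the paper's $Q_t$:

```python
import numpy as np

def forward_step(x_prev, Q, rng):
    """Draw x_t ~ Cat(x_t; p = x_{t-1} Q) with one-hot row vectors."""
    p = x_prev @ Q                   # transition probabilities for x_t
    k = rng.choice(len(p), p=p)      # sample the next category index
    x_t = np.zeros_like(x_prev)
    x_t[k] = 1.0
    return x_t

rng = np.random.default_rng(0)
K = 4
Q = np.full((K, K), 1.0 / K)         # toy kernel: uniform transition
x0 = np.eye(K)[0]                    # one-hot initial sample
x1 = forward_step(x0, Q, rng)
```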
https://arxiv.org/abs/2505.22193v1
(4) dampens the oscillations of the pure quantum case, leading to faster convergence with respect to the classical case, for example, for ω = 0.4, 0.6. [Plot omitted: KL vs. t for ω ∈ {0, 0.2, 0.4, 0.6, 0.8, 1}.] Fig. 2: The KL divergence between the populations of a single QSW and the uniform distribution on...
https://arxiv.org/abs/2505.22193v1
and standard error of the mean are also reported. The plot shows how the hybrid quantum-classical diffusion dynamics (ω = 0.3) generates statistically better image datasets. where µ and µ′ are, respectively, the means of the multivariate normal distributions of the features of the original and generated image datase...
https://arxiv.org/abs/2505.22193v1
classical stochastic dynamics. 2.3 Implementation on NISQ Devices A hybrid QSW dynamics can be interpreted as a QW interacting with an external environment that introduces noise. In this section, we therefore perform image generation by implementing a QW that exploits the intrinsic noise of a NISQ device in the forward p...
https://arxiv.org/abs/2505.22193v1
a coefficient used to guarantee the convergence of the forward process to the uniform distribution within the a priori fixed number of time steps T = 20, and we choose c = 5·10^4. This scaling of the injected noise is chosen in analogy to the cosine noise schedule in the classical DM of Nichol et al. [86]. The bac...
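For reference, the cosine-style schedule referenced above can be sketched in its generic classical $\bar\alpha$ form (an illustration only; the paper's $c$-scaled quantum-noise schedule is not reproduced here):

```python
import math

def cosine_alpha_bar(t, T, s=0.008):
    """Cosine alpha-bar(t), the standard classical-DM cosine schedule."""
    return math.cos((t / T + s) / (1 + s) * math.pi / 2) ** 2

T = 20  # matching the a priori fixed number of time steps above
# beta_t = 1 - alpha_bar(t)/alpha_bar(t-1), clipped as is conventional.
betas = [min(1 - cosine_alpha_bar(t, T) / cosine_alpha_bar(t - 1, T), 0.999)
         for t in range(1, T + 1)]
# betas grow toward the end of the chain, flattening the distribution.
```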
https://arxiv.org/abs/2505.22193v1
[Figure panels omitted: forward-process and generated samples at t = 5, with KL values 1.375, 0.014, 1.016; FID = 352.] Fig. 8: Image generation with the QW-based DM with noise from the real ibm_brisbane NISQ device. We report for selected values of t the evolution of 9 random samples of the real dataset in the forward process (first row) and 9 random generated samples in the back...
https://arxiv.org/abs/2505.22193v1
a maximum degree of connectivity of 3 between qubits of a QPU. This allows for implementation on currently available NISQ devices. In conclusion, we show how noise can be used as a resource in the context of QGenAI, and not only be a detrimental factor for quantum algorithms. Some future research directions can f...
https://arxiv.org/abs/2505.22193v1
sample of the initial distribution $q(x_0)$. The denoising is implemented by an ANN that is trained to learn the reverse trajectory of the diffusion process:
$$p(x_T) = \pi(x_T) \tag{13}$$
$$p_\theta(x_{0:T}) = p(x_T) \prod_{t=1}^{T} p_\theta(x_{t-1} \mid x_t), \tag{14}$$
where $p_\theta(x_{t-1} \mid x_t)$ is a parameterized transition kernel having the same functional form as $q(x_t \mid x_{t-1})$. A de...
https://arxiv.org/abs/2505.22193v1
The number $d_v$ of edges connected to the vertex $v$ is called the degree of the vertex. A graph is called undirected if the edges of the graph do not have a direction, and directed otherwise. A graph is completely defined by its adjacency matrix $A$, which contains information on the topology of the graph and whose elements are de...
https://arxiv.org/abs/2505.22193v1
V. (27) The dynamics of the quantum walker is governed by the unitary single time-step operator $\hat{U}$ acting on the total Hilbert space:
$$\hat{U} = \hat{S} \cdot (\hat{C} \otimes \hat{I}), \tag{28}$$
where $\hat{I}$ is the identity on position space, $\hat{C}$ is the coin operator acting on the auxiliary space, and $\hat{S}$ is the shift operator acting only on position space and moving the w...
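Equation (28) can be sketched numerically for a toy cycle graph with a Hadamard coin (a standard DTQW example; the graphs studied in the paper may differ):

```python
import numpy as np

n = 4                                          # cycle with 4 position sites
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # coin operator C (Hadamard)
I = np.eye(n)                                  # identity on position space
right = np.roll(np.eye(n), 1, axis=0)          # shift |x> -> |x+1 mod n>
left = np.roll(np.eye(n), -1, axis=0)          # shift |x> -> |x-1 mod n>
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])  # coin projectors

# Conditional shift S, then one step U = S · (C ⊗ I).
S = np.kron(P0, right) + np.kron(P1, left)
U = S @ np.kron(H, I)
# U is unitary, as required for a single-time-step walk operator.
```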
https://arxiv.org/abs/2505.22193v1
Rilevante Interesse Nazionale (PRIN) Bando 2022 - project n. 20227HSE83 - ThAI-MIA funded by the European Union-Next Generation EU. Author Contributions M.P. and S.M. performed the implementation and experiments. M.P., S.M., F.A.V. and F.C. discussed and analyzed the results. M.P., S.M., F.C. conceived the method...
https://arxiv.org/abs/2505.22193v1
Song, J., Song, Y., Ermon, S.: CSDI: Conditional score-based diffusion models for probabilistic time series imputation. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 24804–24816. Curran Associates, Inc. (2021) [14] St...
https://arxiv.org/abs/2505.22193v1
Caruso, F.: Universally optimal noisy quantum walks on complex networks. New Journal of Physics 16(5), 055015 (2014) [37] Caruso, F., Crespi, A., Ciriolo, A.G., Sciarrino, F., Osellame, R.: Fast escape of a quantum walker from an integrated photonic maze. Nature Communications 7(1), 11682 (2016) [38] Dalla Pozza, N...
https://arxiv.org/abs/2505.22193v1
015004 (2022) [57] Das, S., Zhang, J., Martina, S., Suter, D., Caruso, F.: Quantum pattern recognition on real quantum processing units. Quantum Machine Intelligence 5(1), 16 (2023) [58] Geng, A., Moghiseh, A., Redenbach, C., Schladitz, K.: A hybrid quantum image edge detector for the NISQ era. Quantum Machine Intellig...
https://arxiv.org/abs/2505.22193v1
M.: Argmax flows and multinomial diffusion: Learning categorical distributions. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 12454–12465. Curran Associates, Inc. (2021) [80] Kossakowski, A.: On quantum statisti...
https://arxiv.org/abs/2505.22193v1
[Figure panels omitted: forward-process and generation-process samples on ibm_brisbane, with per-panel KL values reported at t = 0, 1, 2, 3, 4, 5, 10, 15, and t = T = 20.]
https://arxiv.org/abs/2505.22193v1
Published as a conference paper at ICLR 2025 ENHANCING UNCERTAINTY ESTIMATION AND INTERPRETABILITY VIA BAYESIAN NON-NEGATIVE DECISION LAYER Xinyue Hu*,1, Zhibin Duan*,2, Bo Chen†,1, Mingyuan Zhou3 1National Key Laboratory of Radar Signal Processing, Xidian University, Xi'an, 710071, China. 2School of Mathematics...
https://arxiv.org/abs/2505.22199v1
(Nguyen et al., 2016), meaning they respond to multiple, unrelated features. This phenomenon may arise from the entangled nature of DNNs, wherein multiple features are utilized for various tasks. To address this challenge, significant efforts have been made in the literature, including the use of specialized regularize...
https://arxiv.org/abs/2505.22199v1
a flexible Bayesian Non-negative Decision Layer (BNDL) for deep neural networks, empowering its interpretability and uncertainty estimation capabilities. • The complexity analysis shows that the computational overhead introduced by BNDL is minimal compared to DNNs. Further, we provide a theoretical analysis to verify its disentan...
https://arxiv.org/abs/2505.22199v1
surrogates (Ribeiro et al., 2016) and salience maps (Simonyan et al., 2013). However, as noted by various recent studies, these local attributions can be easy to fool or may otherwise fail to capture global aspects of model behavior (Adebayo et al., 2018; Leavitt & Morcos, 2020; Wong et al., 2021). A major confounder...
https://arxiv.org/abs/2505.22199v1
in Fig. 1(b), Eq. 2 is improved to
$$p(y \mid x) = \int_{\theta} p(y \mid \theta)\, p(\theta \mid x) \tag{3}$$
To further account for epistemic uncertainty, which refers to the uncertainty inherent in the model itself, we treat the weights of the final fully connected layer as stochastic latent variables. The generative model is then defined as follows, and its ...
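The marginalization in Eq. (3) can be approximated by Monte Carlo averaging over samples of θ; the distributions below are toy placeholders, not the paper's actual construction:

```python
import numpy as np

def predictive(y, sample_theta, likelihood, n_samples=1000, seed=0):
    """MC estimate of p(y|x) = E_{theta ~ p(theta|x)}[ p(y|theta) ]."""
    rng = np.random.default_rng(seed)
    thetas = sample_theta(rng, n_samples)          # draws from p(theta|x)
    return np.mean([likelihood(y, th) for th in thetas])

# Toy example: theta ~ N(0, 1), p(y=1|theta) = sigmoid(theta).
p = predictive(
    1,
    lambda rng, n: rng.normal(0.0, 1.0, n),
    lambda y, th: 1.0 / (1.0 + np.exp(-th)),
)
```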
https://arxiv.org/abs/2505.22199v1
easier to optimize; ii) the Weibull distribution is similar to a gamma distribution, capable of modeling sparse, skewed, and positive distributions. Specifically, the latent variable $x \sim \mathrm{Weibull}(k, \lambda)$ can be easily reparameterized as:
$$x = \lambda(-\ln(1-\varepsilon))^{1/k}, \quad \varepsilon \sim \mathrm{Uniform}(0,1). \tag{6}$$
where $\lambda$ and $k$ are the scale and shape parameters of the W...
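The reparameterization in Eq. (6) translates directly to code; as a sanity check, the empirical mean for k = 2, λ = 1 should approach λΓ(1 + 1/k) = √π/2 ≈ 0.886:

```python
import numpy as np

def sample_weibull(k, lam, size, rng):
    """x = lam * (-ln(1 - eps))**(1/k), eps ~ Uniform(0, 1)."""
    eps = rng.uniform(0.0, 1.0, size)
    return lam * (-np.log1p(-eps)) ** (1.0 / k)

rng = np.random.default_rng(0)
x = sample_weibull(k=2.0, lam=1.0, size=100_000, rng=rng)
# Empirical mean should be close to sqrt(pi)/2 ~ 0.886 for k=2, lam=1.
```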
https://arxiv.org/abs/2505.22199v1
divergence that constrains the variational distribution q(−) to be close to its prior p(−). The parameters in the Generalized GBN can be directly optimized by advanced gradient algorithms, such as Adam (Kingma & Ba, 2015). Complexity analysis Modifying the last layer of base deep neural networks minimally increases paramete...
https://arxiv.org/abs/2505.22199v1
a more flexible mechanism for inducing sparsity without sacrificing model per- formance. Similarly, we can demonstrate the partial identifiability of Φ, as it is often considered to be the transpose of θ(Fu et al., 2017; HaoChen et al., 2021). In conclusion, BNDL follows the aforementioned assumptions, and its optimiza...
https://arxiv.org/abs/2505.22199v1
1. The baseline models are grouped into two categories: 1) uncertainty estimation networks, including Bernoulli MC Dropout (Gal & Ghahramani, 2016), BM (Joo et al., 2020), and CARD (Han et al., 2022); 2) dense decision layer baselines, including ViT-Base (Dosovitskiy et al., 2021) (we used the pretrained weights for the vit...
https://arxiv.org/abs/2505.22199v1
that the model made correct predictions for images with low uncertainty, while for images with high uncertainty, the visualizations reveal the causes of misclassification, e.g., in the image of a wine bottle, the model primarily focused on the wine glass filled with red wine in the background, leading to a misclass...
https://arxiv.org/abs/2505.22199v1
of top-k super-pixels. As shown in Table 2, BNDL features exhibit much better disentanglement than ResNet-50 across all top-k dimensions. The advantage is more pronounced when considering the top features, as learned features also contain noise dimensions. This verifies the disentanglement of learned features, as analyz...
https://arxiv.org/abs/2505.22199v1
safety. arXiv preprint arXiv:1606.06565 , 2016. Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural network. In International conference on machine learning , pp. 1613–1622. PMLR, 2015. Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Y...
https://arxiv.org/abs/2505.22199v1
Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016. José Miguel Hernández-Lobato and Ryan Adams. Probabilistic backpropagation for scalable learning of Bayesian neural networ...
https://arxiv.org/abs/2505.22199v1
2017. Andrey Malinin and Mark Gales. Predictive uncertainty estimation via prior networks. Advances in neural information processing systems , 31, 2018. Andrey Malinin, Sergey Chervontsev, Ivan Provilkov, and Mark Gales. Regression prior networks. arXiv preprint arXiv:2006.11590 , 2020. Arshia Soltani Moakhar, Eugenia ...
https://arxiv.org/abs/2505.22199v1
and Yisen Wang. Non-negative contrastive learning. In The Twelfth International Conference on Learning Representations , 2024. Joe Watson, Jihao Andreas Lin, Pascal Klink, Joni Pajarinen, and Jan Peters. Latent derivative bayesian last layer networks. In International Conference on Artificial Intelligence and Statistic...
https://arxiv.org/abs/2505.22199v1
whether a sample is classified as certain. • Fine-tuning setup: For Places-10, we set the learning rate to 0.1, the batch size to 128, and the number of epochs to 100. For ImageNet-1k, CIFAR-10, and CIFAR-100, we set the learning rate to 0.001 and the number of epochs to 200. • Sparsity vs. Accuracy in Sec. 5.1.2: We follow the default settings of (Wong e...
https://arxiv.org/abs/2505.22199v1
2013). Therefore, we concentrate on the partial identifiability of BNDL, which similarly ensures the identifiability and uniqueness of a subset of columns of θ and Φ under more relaxed conditions. Partially Identifiable Features To demonstrate that BNDL is partially identifiable, we first present the definition of partial...
https://arxiv.org/abs/2505.22199v1
accuracy: higher uncertainty corresponds to lower accuracy. This suggests that the model provides reliable uncertainty estimates, helping to avoid potential misclassifications. [Plots omitted: test accuracy (%) vs. uncertainty; panel (a) ResNet-18 on CIFAR-10, panel (b) ...]
https://arxiv.org/abs/2505.22199v1
disentanglement of the network. It leverages the non-negativity and sparsity of the gamma distribution to design a Variational Autoencoder (VAE) generative model, achieving better disentanglement performance compared to Gaussian-VAE. In general, while BNDL shares some similar tools with (Wang et al., 2024) and (Duan ...
https://arxiv.org/abs/2505.22199v1
[Figure panels omitted: rows BNDL, ResNet-50, Debuggable, BM; classes Argiope aurantia, hay, lampshade, Egyptian cat, water ouzel, lorikeet.] Figure 6: The LIME visualization results for BNDL, ResNet-50, Debuggable Networks, and BM, focusing on the largest θ for each image, demonstrate that BNDL's feature visualization aligns more closely with...
https://arxiv.org/abs/2505.22199v1
arXiv:2505.22200v1 [cs.CV] 28 May 2025 Investigating Mechanisms for In-Context Vision Language Binding Darshana Saravanan Makarand Tapaswi Vineet Gandhi CVIT, IIIT Hyderabad, India Abstract To understand a prompt, Vision-Language models (VLMs) must perceive the image, comprehend the text, and build associations within...
https://arxiv.org/abs/2505.22200v1
to represent associations between image tokens and text tokens. We study the most commonly used VLM architecture, which consists of a visual encoder, a multimodal projector, and a language model. VLMs and LLMs have some key differences that necessitate careful experimentation. (i) Text tokens have fixed embeddings, wh...
https://arxiv.org/abs/2505.22200v1
concept and those that encode the binding information. Each binding ID consists of similar vector pairs in a subspace, with associated concepts sharing one vector from the same ID. Extending this, we describe our hypothesis for the existence of binding IDs in VLMs using the Shapes task below: • Consider 3D objects ...
https://arxiv.org/abs/2505.22200v1
[Figure heatmaps omitted: mean log-probabilities for Item 0 and Item 1 over the candidate items I0, I1, I′0, I′1.]
https://arxiv.org/abs/2505.22200v1
Context: The green object contains item P. The red object contains item I. Figure 4: Mean intervention samples. Fig. 3b: When Z_{I_k} is replaced by Z_{I′_k}, the model prefers item I′_k for object O_k. However, when we intervene on the color activations...
https://arxiv.org/abs/2505.22200v1
            Δ vectors          Random vectors
Ablation    O0↔I0   O1↔I1     O0↔I0   O1↔I1
None        1.00    1.00      -       -
O           0.00    0.05      1.00    1.00
I           0.05    0.00      1.00    1.00
C           1.00    1.00      1.00    1.00
O, I        1.00    0.95      1.00    1.00
O, I, C     1.00    0.95      1.00    1.00

Table 1. Mean ablation accuracies: Object (O), Item (I), Color (C). We compute ∆O as the mean of the difference of activations over multipl...
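The ∆_O computation described after Table 1 (the mean difference of activations across samples) can be sketched as follows; the shapes and the toy offset are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
acts_base = rng.normal(size=(16, 64))   # activations for 16 samples
acts_swap = acts_base + 0.5             # toy "swapped-object" condition
# Mean of the per-sample activation differences: one vector per dimension.
delta_O = (acts_swap - acts_base).mean(axis=0)
```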
https://arxiv.org/abs/2505.22200v1
to understand visual processing [8], shown that object information is localized to corresponding image token positions [11], and developed methods to manipulate image token representations to mitigate hallucinations [7]. Our work complements these efforts by examining the association between image and text represen...
https://arxiv.org/abs/2505.22200v1
, 2022. 4 [14] Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, and Yunfeng Liu. RoFormer: Enhanced Transformer with Rotary Position Embedding. arXiv preprint arXiv:2104.09864, 2021. 3 [15] Gemini Team. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint arX...
https://arxiv.org/abs/2505.22200v1
arXiv:2505.22202v1 [cs.CL] 28 May 2025 Let's Predict Sentence by Sentence Hyeonbin Hwang1∗ Byeongguk Jeon1∗ Seungone Kim2 Jiyeon Kim1 Hoyeon Chang1 Sohee Yang3 Seungpil Won4 Dohaeng Lee4 Youbin Ahn4 Minjoon Seo1 1KAIST 2Carnegie Mellon University 3University College London 4LG AI Research {hbin0701, byeongguk, minjoon}@kaist.ac.k...
https://arxiv.org/abs/2505.22202v1
representations directly by abstracting over their existing token-level representations, without the prohibitive cost of pre-training from scratch. Specifically, we introduce a framework that repurposes pretrained next-token Transformers to reason in a latent sentence-level embedding space. Instead of producing outputs...
https://arxiv.org/abs/2505.22202v1
$x = (x_1, \ldots, x_N)$, the encoder produces a sequence of hidden states $H = (h_1, \ldots, h_N)$. We then define the embedding $h_{[-1]} := h_N$ as the latent representation of the entire input sequence. This embedding conditions the decoder, trained autoregressively with cross-entropy loss:
$$\hat{y} = \theta_{\mathrm{DEC}}(h_{[-1]}) \quad \text{and} \quad \mathcal{L}_{\mathrm{CE}} = -\sum_{t=1}^{N} \log p(y_t \mid y_{<t}, \ldots)$$
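A toy numpy sketch of the h[−1] := h_N construction (the real encoder is a pretrained Transformer; this placeholder recurrent encoder only illustrates taking the final hidden state as the sequence embedding):

```python
import numpy as np

def toy_encoder(token_ids, emb, W):
    """Map tokens to hidden states H = (h_1, ..., h_N); toy RNN-style cell."""
    h = np.zeros(W.shape[0])
    states = []
    for t in token_ids:
        h = np.tanh(W @ h + emb[t])   # h_t from h_{t-1} and token t
        states.append(h)
    return np.stack(states)

rng = np.random.default_rng(0)
V, D = 10, 8                          # toy vocab and hidden sizes
emb = rng.normal(size=(V, D))
W = rng.normal(scale=0.1, size=(D, D))
H = toy_encoder([1, 4, 7], emb, W)
h_last = H[-1]                        # h[-1] := h_N, the sequence embedding
```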
https://arxiv.org/abs/2505.22202v1
compact representations. Yet, as we form the CommonsenseQA (CSQA) task's SEMANTIC embedding using a subset of the FineWeb-Edu corpus (∼100k documents), we highlight that a larger language space (compared to synthetic, constrained ones, i.e., ProsQA and Blocksworld) involves a higher difficulty. In the Contextual configuration, model ...
https://arxiv.org/abs/2505.22202v1
ground-truth sentence embeddings $h_i$, each computed using a fixed encoder $\theta_{\mathrm{ENC}}$. Additionally, to enhance the alignment between predicted and teacher-forced embeddings, we incorporate an InfoNCE loss [14]:
$$\mathcal{L}_{\mathrm{InfoNCE}} = -\sum_{t=1}^{n-1} \log \frac{\exp(\mathrm{sim}(\hat{h}_{t+1}, h_{t+1})/\tau)}{\sum_{j}\exp(\mathrm{sim}(\hat{h}_{t+1}, h_{j})/\tau)}.$$
The overall training objective combines ...
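A numpy sketch of this InfoNCE term, assuming cosine similarity for sim(·,·) and treating all teacher embeddings h_j as in-batch negatives:

```python
import numpy as np

def info_nce(h_hat, h, tau=0.1):
    """InfoNCE over predicted embeddings h_hat[t] (targets h[t+1])."""
    hn = h / np.linalg.norm(h, axis=1, keepdims=True)        # normalize teachers
    ph = h_hat / np.linalg.norm(h_hat, axis=1, keepdims=True)  # normalize preds
    sims = ph @ hn.T / tau                                   # [n-1, n] logits
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    n = h.shape[0]
    # Positive for prediction t is the teacher embedding at position t+1.
    return -np.mean([log_probs[t, t + 1] for t in range(n - 1)])
```

With orthonormal teacher embeddings and perfect predictions the loss is near zero; misaligned predictions give a much larger loss.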
https://arxiv.org/abs/2505.22202v1
match token-level CoT performance? We hypothesize that effective reasoning is driven more by transitions between high-level concepts than by fine-grained token-level details. Empirically, sentence-level models match or even exceed CoT performance on logical and commonsense reasoning tasks. On mathematical and planning ...
https://arxiv.org/abs/2505.22202v1
to computational constraints, our experiments are limited to sub-1B ∗To see the cost with a lightweight classifier, please refer to Appendix D. (a) CoT vs. CTX-B on CommonsenseQA across GPT-2 variants. (b) GPT-4o qualitative evaluation of the reasoning steps, evaluated using a metric similar to that employed in [25], where SF...
https://arxiv.org/abs/2505.22202v1
see the fish B: have fun C: catching fish D: wet clothes E: killing 0→1 LAYER 19: A person who eats a lot experiences increased energy levels. LAYER 22: A person who is hungry seeks to alleviate their hunger. When you are hungry, you engage in an activity to satisfy your hunger. ··· 1 If you are hungry, you are likely ...
https://arxiv.org/abs/2505.22202v1
intermediate outputs could offer a novel training signal that could enhance both reasoning efficiency and stability. Furthermore, unlike prior latent reasoning approaches, our framework allows for sampling in the ... Figure 4: Performance change when injecting Gaussian random noise into different modes of inference, fo...
https://arxiv.org/abs/2505.22202v1
been extended to language: Hao et al. [19] introduced continuous latent reasoning, where token-level embeddings are gradually replaced with continuous embeddings from the last-layer hidden states through a curriculum-based strategy from Deng et al. [21]. Further extensions include, among others, methods by Shen et al....
https://arxiv.org/abs/2505.22202v1
minor perturbations, the continuous pathway lacks such built-in stabilization. This discrete bottleneck serves as a form of regularization, filtering out numerical noise and constraining the model’s trajectory to a finite set of linguistically meaningful sequences. However, this regularization comes at the expense of e...
https://arxiv.org/abs/2505.22202v1
and Jaehyeok Doo for their insightful discussions and valuable feedback. References [1] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin. A neural probabilistic language model. J. Mach. Learn. Res., 3:1137–1155, March 2003. ISSN 1532-4435. [2] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maart...
https://arxiv.org/abs/2505.22202v1
to reason in a continuous latent space, 2024. URL https://arxiv.org/abs/2412.06769. [20] Guilherme Penedo, Hynek Kydlíček, Loubna Ben Allal, Anton Lozhkov, Margaret Mitchell, Colin Raffel, Leandro von Werra, and Thomas Wolf. The FineWeb datasets: Decanting the web for the finest text data at scale, 2024. URL https:...
https://arxiv.org/abs/2505.22202v1
Arun Rao, Aston Zhang, Aurelien Rodriguez, Austen Gregerson, Ava Spataru, Baptiste Roziere, Bethany Biron, Binh Tang, Bobbie Chern, Charlotte Caucheteux, Chaya Nayak, Chloe Bi, Chris Marra, Chris McConnell, Christian Keller, Christophe Touret, Chunyang Wu, Corinne Wong, Cristian Canton Ferrer, Cyrus Nikolaidis, Damien ...
https://arxiv.org/abs/2505.22202v1
Arkabandhu Chowdhury, Ashley Gabriel, Ashwin Bharambe, Assaf Eisenman, Azadeh Yazdan, Beau James, Ben Maurer, Benjamin Leonhardi, Bernie Huang, Beth Loyd, Beto De Paola, Bhargavi Paranjape, Bing Liu, Bo Wu, Boyu Ni, Braden Hancock, Bram Wasti, Brandon Spence, Brani Stojkovic, Brian Gamido, Britt Montalvo, Carl Parker, ...
https://arxiv.org/abs/2505.22202v1
Vontimitta, Victoria Ajayi, Victoria Montanez, Vijai Mohan, Vinay Satish Kumar, Vishal Mangla, Vlad Ionescu, Vlad Poenaru, Vlad Tiberiu Mihailescu, Vladimir Ivanov, Wei Li, Wenchen Wang, Wenwen Jiang, Wes Bouaziz, Will Constable, Xiaocheng Tang, Xiaojian Wu, Xiaolan Wang, Xilun Wu, Xinbo Gao, Yaniv Kleinman, Yanjun ...
https://arxiv.org/abs/2505.22202v1
A person spends time traveling between different locations. LAYER 20: A person spends time commuting to work. LAYER 21: A person spends time traveling, which often involves moving from one place to another. LAYER 22: A person spends time traveling, which often involves traveling across distances. 1 People often take th...
https://arxiv.org/abs/2505.22202v1
through the MLP → O(LR). • Language-Grounded: With a semantic encoder, each step decodes and re-encodes L tokens on compact codes, processing 2L tokens per step for an MLP cost of O(LR). If instead a contextual encoder must re-attend over up to N0 + (t−1)L tokens each pass, it incurs an additional O(LR²) MLP overhead, which can ero...
https://arxiv.org/abs/2505.22202v1
Freq    every 10   every 10   every 2   every 10
LR      5e-4       5e-4       5e-4      5e-4
Batch   128        128        32        64

Table 10: Training configurations of GPT-2 for each dataset and training stage. *SFT includes both CoT and No-CoT variants.

Stage        GPT-2 Small   GPT-2 Medium   GPT-2 Large (LoRA, r=256, a=1024)
SFT Epochs   20            20             20
LR           1e-4          1e-4           1e-4
Batch        64            64...
https://arxiv.org/abs/2505.22202v1
lempus is a impus. Every rempus is a sterpus. Every yimpus is a zumpus. Every lempus is a yumpus. Every shumpus is a jelpus. Every brimpus is a zhorpus. Every scrompus is a rempus. Every lempus is a wumpus. Sally is a boompus. Sally is a gerpus. Every gerpus is a scrompus. Bob is a wumpus...
https://arxiv.org/abs/2505.22202v1
arXiv:2505.22203v1 [cs.LG] 28 May 2025 Pitfalls of Rule- and Model-based Verifiers – A Case Study on Mathematical Reasoning Yuzhen Huang∗1 Weihao Zeng∗1 Xingshan Zeng2 Qi Zhu3 Junxian He1 1The Hong Kong University of Science and Technology 2The Chinese University of Hong Kong 3Tsinghua University https://github.com/hkust-nlp...
https://arxiv.org/abs/2505.22203v1
GSM8K, MATH500, Minerva Math, OlympiadBench, AIME24, and AMC23. Right depicts changes in reward values during training. The “training rewards” indicate the rewards provided by the corresponding reward system to the policy model, whereas the “oracle rewards” represent rewards the model receives when judged by combining ...
https://arxiv.org/abs/2505.22203v1
– an observation consistent with our RL experimental findings. 2 Our findings in this work clearly identify the pitfalls of both rule-based and model-based verifiers in the context of mathematical reasoning: current rule-based verifiers are not sufficiently accurate even for widely used open-source mathematical dataset...
https://arxiv.org/abs/2505.22203v1
curation process involves three main steps: First, we select and sample from four mathematical RL datasets – MATH [Hendrycks et al., 2021], DeepScaleR [Luo et al., 2025a], Open-Reasoner-Zero (ORZ-Math) [Hu et al., 2025], and Skywork-OR1 [He et al., 2025] – with 1,000 queries sampled from each dataset. In the second step,...
https://arxiv.org/abs/2505.22203v1
5 in Appendix B, all three verifiers exhibit near-perfect precision, consistently achieving over 99% precision. This means that if an answer passes the rules, it is almost certainly correct because the rule-based verifiers rely on deterministic programming language logic and computation. Notably, the HuggingFace Math V...
https://arxiv.org/abs/2505.22203v1
the core capabilities of LLMs, including their advanced reasoning skills, to produce more accurate judgments. They are, in principle, better equipped to evaluate answers presented in diverse formats. Model-based verifiers are explored in several concurrent works [Su et al., 2025, Ma et al., 2025, Seed et al., 2025] wit...
https://arxiv.org/abs/2505.22203v1
that integrates the strengths of both approaches. We first evaluate its performance in static settings, then analyze its improvements over rule-based verifiers in RL training, as well as its training efficiency compared to fully model-based verifiers. 4.1 The Hybrid Verifier Designs. In the hybrid design, the rule-base...
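The hybrid design can be sketched as a simple short-circuit: trust the rule-based verifier when it accepts (near-perfect precision), and consult the model-based verifier only on rejection. The two callables below are toy stand-ins:

```python
def hybrid_verify(response: str, gold: str, rule_verify, model_verify) -> bool:
    """Rule-based check first; fall back to the model-based verifier."""
    if rule_verify(response, gold):
        return True                      # rules accept: almost surely correct
    return model_verify(response, gold)  # recover format-mismatch cases

# Toy stand-ins: exact match as the "rule", numeric equality as the "model".
rule = lambda r, g: r.strip() == g.strip()
model = lambda r, g: float(r) == float(g)
print(hybrid_verify("0.50", "0.5", rule, model))  # True, via the fallback
```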
https://arxiv.org/abs/2505.22203v1
58.3
↪ + R1-Distill-Verifier-1.5B verifier   93.0   79.8   40.4   40.1   17.8   77.5   58.1
↪ + general-verifier                    92.5   82.0   43.0   40.9   18.4   70.0   57.8

Table 3: Detailed performance of models across multiple benchmarks with GPT-4o as the verifier. This table evaluates the correctness of the models' responses by using GPT-4o as the verif...
https://arxiv.org/abs/2505.22203v1
integrate the fine-tuned models into the hybrid verifier and evaluate their impact on RL training. 5.1 Classification-RL Performance Mismatch Trained Verifier. We incorporate dedicated open-source verifiers explicitly fine-tuned for verification tasks, including: (1) xVerify 0.5B and 3B [Chen et al., 2025], fine-tune...
https://arxiv.org/abs/2505.22203v1
performance, R1-Distill-Verifier-1.5B becomes compromised during dynamic RL training, leading to a drop in evaluation accuracy and eventual training collapse, as shown in Figure 1 (Left). In contrast, the untrained verifier, R1-Distill-Verifier-1.5B, and the rule-based verifier do not exhibit such instability. These fi...
https://arxiv.org/abs/2505.22203v1
and ground-truth answer, resulting in a comprehensive set of “hacking data”. We then evaluate the attack success rates – i.e., how often a hacking pattern successfully causes the verifier to misjudge an incorrect answer as correct – for different types of hacking patterns against a range of model-based verifiers. These...
https://arxiv.org/abs/2505.22203v1
Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, Aixin Liu, Bing Xue, Bingxuan Wang, Bochao Wu, Bei Feng, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai ...
https://arxiv.org/abs/2505.22203v1
source approach to scaling up reinforcement learning on the base model. arXiv preprint arXiv:2503.24290 , 2025. Jujie He, Jiacai Liu, Chris Yuhao Liu, Rui Yan, Chaojie Wang, Peng Cheng, Xiaoyu Zhang, Fuxiang Zhang, Jiacheng Xu, Wei Shen, Siyuan Li, Liang Zeng, Tianwen Wei, Cheng Cheng, Bo An, Yang Liu, and Yahui Zhou. ...
https://arxiv.org/abs/2505.22203v1
Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168 , 2021. Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Leng Thai, Junhao Shen, Jinyi Hu, Xu Han, Yujie Huang, Yuxiang Zhang, et al. Olympiadbench: A challenging benchmark for promoting agi with olympiad-level bilingual multimodal sc...
https://arxiv.org/abs/2505.22203v1
not equivalent. Your final response format must be: ```[Reward Score] = <1 or 0>``` [Question] [Ground Truth Answer] [Extracted Answer] Figure 4: Prompt for using GPT-4o as an annotator to provide ground-truth annotations based on the model's response and the target answer, indicating whether the model's response aligns with...
https://arxiv.org/abs/2505.22203v1
partitions models and dynamically switches between training and inference modes, significantly improving GPU utilization and reducing communication overhead during RL training. Building on this capability, we extend HybridEngine to the model-based verifier, allowing it to be offloaded from GPUs during idle periods. For...
https://arxiv.org/abs/2505.22203v1
responses from DeepSeek-R1-Distill-Qwen-1.5B. Responses that do not match GPT-4o’s judgment or are duplicates are filtered out, yielding approximately 20K examples for fine-tuning. The model is fully fine-tuned using a learning rate of 1e-4 for 3 epochs. 15 Table 6: Performance of model-based verifier and hybrid verifi...
https://arxiv.org/abs/2505.22203v1