structure (Gandhi et al., 2025; Li et al., 2025a; Ye et al., 2025). Both Wu et al. and Ballon et al. highlighted the overthinking phenomenon, where overly long reasoning chains can degrade rather than improve final answer quality. However, our analysis (Figure 1) shows that response length alone remains an inadequate p...
https://arxiv.org/abs/2505.22148v1
reasoning, driving more reliable answers. More recently, models such as Deepseek-R1 (Guo et al., 2025), Kimi-1.5 (Team et al., 2025) and QwQ-32B (Team, 2024) have leveraged rule-based reinforcement learning to embed reasoning capabilities directly into model parameters, achieving remarkable progress in handling com...
al., 2025)) on the MATH (Hendrycks et al., 2021) dataset. It demonstrates that as the reasoning chain becomes unnecessarily long, model performance deteriorates, highlighting how overthinking can harm the reasoning ability of LLMs. To tackle this issue, researchers have proposed using a length penalty during the tr...
[Figure: an example long chain-of-thought. Question: "… integer N as a ones digit of 0. What is the probability that N is divisible by 4?" The response interleaves exploration ("Alternatively, since three-digit number …"), backtracking ("Wait, actually, the last number should …"; "Wait, wait: 10 B should be …"), and verification ("Let me verify: k is from 10 to…").]
, and each edge represents a transition to a deeper level of reasoning, with the edge type defined by the Thought Function of its child node. When inserting a new thought T_i, we first identify the ordered list of reasoning steps it maps to, denoted as [S_i^1, ..., S_i^n]. Here, n indicates that the current thought enco...
tree-based representations for understanding complex reasoning processes. Experimental Setup. We use the same dataset described in Section 3.1, which contains responses from five reasoning models across four public benchmarks. The key difference is that we extract the tree structure from each LCoT response and use it...
patterns within the reasoning chain. For example, in models trained to predict incorrect answers, the highlighted subgraphs often reflect flawed reasoning behaviors that lead to poor performance. Similarly, in models trained on MATH tasks, the important subgraphs typically capture common reasoning patterns observed i...
contrast, the behaviors in code completion (Figure 11) show wide, parallel branches with minimal exploration or verification, reflecting a more straightforward pattern of generation. GPQA (Figure 12) samples reveal high out-degree nodes, where the model repeatedly revisits complex concepts, indicating the model's u...
of using answer correctness alone to assess reasoning quality, as it tolerates shallow or unsound reasoning paths. Addressing these flawed-but-correct patterns can guide LLMs toward producing reasoning that is not only accurate but also logically sound.

5 Application of LCoT2Tree: Tree-based Best-of-N Decoding

Beyon...
LCoT2Tree enables more interpretable and structural analysis of complex reasoning processes, significantly improving the prediction of reasoning success across a wide range of tasks and models. Beyond evaluation, we apply LCoT2Tree for behavioral analysis, revealing error patterns and accounting for disparat...
Wang, Siyuan Zhuang, Shu Liu, Luis Gaspar Schroeder, Tian Xia, Huanzhi Mao, and 1 others. 2025. The danger of overthinking: Examining the reasoning-action dilemma in agentic tasks. arXiv preprint arXiv:2502.08235. Guhao Feng, Bohang Zhang, Yuntian Gu, Haotian Ye, Di He, and Liwei Wang. 2023. Towards revealing the my...
Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, and 1 others. 2024a. Deepseek-v3 technical report. arXiv preprint arXiv:2412.19437. Chris Yuhao Liu, Liang Zeng, Jiacai Liu, Rui Yan, Jujie He, Chaojie Wang, Shuicheng Yan, Yang Liu, and Yahui...
Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, and 1 others. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems (NeurIPS). Sean Welleck, Jiacheng Liu, Ximing Lu, Hannaneh Hajishirzi, and Yejin Choi....
continue segment in a reasoning chain that involves no logical transition, such as exploration or verification. We then analyze the collected LCoTs to identify common linguistic patterns (i.e., separators) that signal shifts between distinct reasoning steps. The separator set is [“Alternatively”, “Hmm”, “Let me v...
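The segmentation step just described can be sketched as a regex split on such separator cues. This is an illustrative sketch, not the paper's code: the separator list here is partial (taken from the examples visible in the text), and `segment_lcot` is our own name.

```python
import re

# Partial, illustrative separator set; the paper's full list is longer.
SEPARATORS = ["Alternatively", "Hmm", "Wait"]

def segment_lcot(text: str) -> list[str]:
    # Split *before* each separator word using a zero-width lookahead,
    # so the separator stays attached to the segment it introduces.
    pattern = r"(?=\b(?:" + "|".join(map(re.escape, SEPARATORS)) + r")\b)"
    segments = [s.strip() for s in re.split(pattern, text)]
    return [s for s in segments if s]

lcot = ("First, count the three-digit multiples of 4. "
        "Wait, the last digit must be 0. "
        "Alternatively, enumerate N = 10k for k in 10..99.")
print(segment_lcot(lcot))
```

Each returned segment then becomes one "thought" to be mapped to reasoning steps in the next stage.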
step. The insertion process follows two rules: (1) If S_i^1 is greater than the step of the latest node N_{i-1}^j in the tree, the new node N_i^1 is added as a child of N_{i-1}^j. (2) Otherwise, we backtrack to the most recent node at step S_i^1 - 1. Then we create a new branch from that node and link it to the new node N_i^1. Once N_i^1...
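A minimal sketch of these two insertion rules, assuming simple `Node` objects with `step`/`parent`/`children` fields (our names, not LCoT2Tree's):

```python
# Hedged sketch of the thought-insertion rules described above.
class Node:
    def __init__(self, step, parent=None):
        self.step = step
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

def insert_thought(latest, steps):
    """Insert a thought mapped to ordered reasoning steps `steps`;
    `latest` is the most recently added node. Returns the new latest node."""
    first = steps[0]
    if first > latest.step:
        node = Node(first, parent=latest)           # rule (1): extend the chain
    else:
        anchor = latest
        while anchor.parent is not None and anchor.step != first - 1:
            anchor = anchor.parent                  # rule (2): backtrack
        node = Node(first, parent=anchor)           # start a new branch
    for s in steps[1:]:                             # chain the remaining steps
        node = Node(s, parent=node)
    return node

root = Node(0)
n = insert_thought(root, [1, 2, 3])   # linear chain 1-2-3
n = insert_thought(n, [2])            # backtracks to the step-1 node, branches
assert n.parent.step == 1 and len(n.parent.children) == 2
```

Branches thus correspond to revisited steps (e.g., after "Alternatively" or "Wait"), which is what gives the tree its structure.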
end, each sample produces a single graph instance for classification.

B.3 Node and Edge Features

We design informative features for both nodes and edges to enhance the performance of our tree-based classification model. For each node in the reasoning tree, we extract the following features: (1) the index of the curren...
…                   61.60%   95.20%   99.42%   84.68%
     Gain          +24.47%  +10.48%  +27.24%  +49.99%  +19.34%
LCB  Length-based   54.49%   54.17%   52.37%   54.49%   53.39%
     Tree-based     86.32%   71.73%   96.12%   86.32%   82.51%
     Gain          +31.83%  +17.56%  +43.75%  +31.83%  +29.12%
MMLU Length-based   55.36%   60.10%   54.17%   53.23%   59.55%
     Tree-based     62.86%   64.99%   73.65%   85.62%   71.89%
     Gain           +7...
Length-Best (56.70%) and ORM-Best (60.82%). Similar trends are observed for QwQ-32B, with our model showing competitive or superior performance. These results confirm that incorporating structural reasoning patterns via LCoT2Tree leads to reliable output selection in complex reasoning tasks.

D Diagnostic Insight in...
deductive approach with limited exploration, which is consistent with the knowledge-intensive nature of MMLU-Pro questions rather than deeply compositional reasoning. These observations highlight how LCoT2Tree provides fine-grained insights into the cognitive strategies employed by the model in diverse reasoning scen...
in the LCoT2Tree tool to assign a reasoning step to each thought. Your task is to match each reasoning thought from List B to the corresponding step number(s) in List A. Follow this process: 1. First understand List B: - For each thought in List B, identify whether it describes a specific calculation process (mathem...
a more direct and deductive reasoning style with less exploration. Figure 14: Visualization results of the tree structure of a response from DeepSeek-R1 on the MATH dataset extracted using LCoT2Tree. It exhibits similar behavior to DS-32, but with an important distinction: it tends to truncate detailed exploration earlier and b...
arXiv:2505.22165v1 [cs.CL] 28 May 2025

Unifying Continuous and Discrete Text Diffusion with Non-simultaneous Diffusion Processes
Bocheng Li*1,2, Zhujin Gao*1,2, Linli Xu†1,2
1School of Computer Science and Technology, University of Science and Technology of China
2State Key Laboratory of Cognitive Intelligence
{bcli,gao...
https://arxiv.org/abs/2505.22165v1
noise levels based on the surrounding context (Chen et al., 2023; Wu et al., 2024). To address these limitations, we propose integrating the complementary strengths of discrete and continuous diffusion approaches, enabling fine-grained noise control at the token level. This unified approach aims to provide precise...
unifying existing text diffusion models. • We propose the Poisson diffusion process as the forward process, enabling fine-grained corruption of text data, a context-aware time predictor that adaptively modulates the reverse process based on semantic context, and an optimized extrinsic time schedule for precise noisi...
simplex. Li et al. (2022) proposed Diffusion-LM, where the token sequence y is first mapped to a random representation z_0 using a word embedding as the mean. After the reverse diffusion process, the generated vectors are rounded back to discrete tokens. Gong et al. (2022) extended this approach to sequence-to-sequence...
al., 2023), can also be formalized under this framework by setting τ_t = max(t + τ_mask(t), 1), where τ_mask(t) ∼ Bernoulli(γ, β̄(t)) and γ is the ratio of tokens replaced by [MASK] when t = 1. NeoDiff defines τ_t ∈ [0, 1] as a continuous random function of extrinsic time t ∈ [0, 1], enabling fine-grained control over the diffusion pr...
all tokens effectively share nearly identical τ values, causing NeoDiff to degenerate into a continuous diffusion model. To address this limitation, we further introduce a variance-controlled rescaling transformation:

τ_t = Clip( Round( ((s_t − λ(t)) / √λ(t)) · σ(t) + λ(t) ), s_max ) / s_max    (7)

Under this transformation, the variables withi...
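One way to realize such a variance-controlled rescaling of Poisson counts, under our reading of Eq. (7): standardize the count s_t around its mean λ(t), shrink its spread by σ(t), shift back, then round, clip, and normalize. The clip range [0, s_max] and the parameter names are assumptions.

```python
import numpy as np

def tau(s_t, lam_t, sigma_t, s_max):
    # Standardize the Poisson count, rescale its spread by sigma_t,
    # shift back to the mean, then round/clip/normalize to [0, 1].
    z = (s_t - lam_t) / np.sqrt(lam_t)            # standardized count
    s = np.clip(np.round(z * sigma_t + lam_t), 0, s_max)
    return s / s_max

rng = np.random.default_rng(0)
lam_t, s_max = 8.0, 16
s_t = rng.poisson(lam_t, size=1000)
wide = tau(s_t, lam_t, sigma_t=np.sqrt(lam_t), s_max=s_max)   # unrescaled spread
narrow = tau(s_t, lam_t, sigma_t=0.5, s_max=s_max)            # controlled spread
assert narrow.std() < wide.std()
```

Shrinking σ(t) concentrates the per-token τ values, which is exactly the knob the transformation exposes to avoid the degenerate all-identical-τ regime while still controlling variance.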
notes the Poisson cumulative distribution function. The resulting s̃(t) values are then transformed via Eq. (7) to obtain the final pseudo labels.

3.4 Optimized Extrinsic Time Schedule

The choice of time schedule in diffusion models significantly impacts both generation quality and computational efficiency. While previo...
Table 2: Comparison on SacreBLEU for machine translation tasks. *: Results from Ye et al. (2023); †: Results from Yuan et al. (2024); remaining data are reproduced. ⇑: NeoDiff outperforms baselines with beam size ≤ b. cess involved providing the LLM with source text, generated text from different models, and specific ...
                         Fluency  Completeness  Creativity
WMT14  Difformer    10   79.72    80.31   85.24   75.12
       Transformer   5   85.66    86.35   90.81   80.07
       NeoDiff      10   80.30    80.81   85.61   76.20
Table 4: LLM evaluation of text generation tasks using DeepSeek-v3 685B. We evaluate Paraphrasing (QQP Dataset) and Machine Translation (WMT14 En-De Dataset). (1) We assess ...
original meaning.
Src: das zeigt die enorm große rolle, die ein meeresschutzgebiet spielen kann.
Ref: and hence, the enormous role that a marine protected area can play.
Base: and this shows the enormously big role that a area can play with a sea protected
+P: so this shows the enormously big role that a marine p...
Matt Post, Herve Saint-Amand, et al. 2014. Findings of the 2014 workshop on statistical machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 12–58. Ondrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Ph...
diffusion models. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 9868–9875. Xiaochuang Han, Sachin Kumar, and Yulia Tsvetkov. 2023. Ssd-lm: Semi-autoregressive simplex-based diffusion language model for text generation and modular control. In Proceedings of the 61st Annual Meeting ...
Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of...
z_{t′} as p_θ(z_{t′} | z_t, τ_t, τ_{t′}) = q(z_{t′} | ẑ_0(z_t, τ_t, t), τ_{t′}), where ẑ_0 is the model prediction of z_0. Following Ho et al. (2020), the training objective is derived from the variational lower bound

L_VLB = E_q[ −log( p_θ(z_{0:1}, τ_{0:1}) / q(z_{>0}, τ_{>0} | z_0) ) ]    (12)
      = E_q[ −log( p_θ(z_1, τ_1) / q(z_1, τ_1 | z_0) )    (13)
        + Σ_{0<t′<1} −log( q(z_{t′} | ẑ_0(z_t, τ_t, t), τ_{t′}) / q(z...
Extrinsic Time Schedule We propose a systematic approach to optimize the extrinsic time schedule S = {t_1, t_2, ..., t_K}, where K denotes the number of diffusion steps and t_i ∈ [0, 1] with t_1 < t_2 < ... < t_K. While previous works (Dhariwal and Nichol, 2021; Chen, 2023) focus on optimizing noise schedules with fixed time steps, ...
We measured inference speed (sentences/second) and memory cost (MB). NeoDiff demonstrates competitive inference speed, processing 5.12 sentences per second, which is comparable to Difformer's 6.49 sentences per second. While significantly faster than diffusion-based models like DiffuSeq and SeqDiffuSeq, NeoDiff's s...
10: Perform sampling on T_src using M and S_i, yielding predicted text T′_pred.
11: Compute the BLEU score BLEU(T_tgt, T′_pred) using T_tgt and T′_pred.
12: Update the observation set: O ← O ∪ {(S_i, BLEU(T_tgt, T′_pred))}.
13: end for
14: S_opt = argmax_{(S, BLEU) ∈ O} BLEU

Hyper-parameters  WMT14 En-De  WMT16 En-Ro  IWSLT14 De-En  QQP  Wiki-A...
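The search loop in the steps above amounts to propose, score, argmax. A minimal stand-in sketch: the `score` callback abstracts "decode T_src with schedule S_i and compute BLEU(T_tgt, T′_pred)", and the random proposal strategy is an assumption, not necessarily the paper's.

```python
import random

def propose_schedule(K, rng):
    # K sorted time steps in [0, 1], playing the role of S = {t_1 < ... < t_K}.
    return sorted(rng.uniform(0.0, 1.0) for _ in range(K))

def search_schedule(score, K=10, trials=50, seed=0):
    rng = random.Random(seed)
    observations = []                       # the set O of (S_i, BLEU) pairs
    for _ in range(trials):
        S = propose_schedule(K, rng)
        observations.append((S, score(S)))
    return max(observations, key=lambda o: o[1])[0]   # S_opt = argmax BLEU

# Toy stand-in scorer (not BLEU): prefers schedules concentrated early.
toy_score = lambda S: -sum(S)
S_opt = search_schedule(toy_score)
assert len(S_opt) == 10 and S_opt == sorted(S_opt)
```

In practice `score` would run the trained model once per candidate schedule on a dev set, so the trial budget directly trades search quality against compute.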
. Difformer <tgt'_pred> and the world that we're living in right now, it looks different.
<src> sein ganzer arbeitsprozess hat sich danach geändert.
<tgt> and his whole work process changed after that.
<src'> sein ganzer arbeitsprozess hat sich davon geändert.
<tgt'> His whole work process changed because of that. ...
happened years after that? Table 11: Additional examples showing the improvements from introducing the Poisson diffusion process on the IWSLT14 De-En dataset. The Base model often produces unnatural word ordering and incorrect lexical choices, while +P shows better handling of complex phrases and more natural English con...
choose some fellows every year, and we have them work with city adminicies. | we choose some fellows every year, and we let them work with urban management.
also bot ich einen 10000 $ preis an software für die gewinner . | so i offered a 10,000 dollar prize of software to the winning team . | so i offered a $10,000 pric...
, they have too little . | their problem is that they have too little .
9     now , your problem is , you have too little . | their problem is that they have too little .
...
20    now , your problem is , you have too little . | their problem is that they have too little .
Final now , your problem is , you have too little . ...
mes@@ exhi@@
1  so i i to a few months later at a conference . | so i spoke at at a a few months later .
2  so i i to a few months later at a conference . | so i spoke at a conference a few months later .
3  so i i about a few months later at a conference . | so i spoke at a conference a few months later .
4  so i i about a few...
arXiv:2505.22174v1 [cs.GT] 28 May 2025

Online Fair Division for Personalized 2-Value Instances
Georgios Amanatidis1,2, Alexandros Lolos1,2, Evangelos Markakis1,2,3, and Victor Turmel4
1Department of Informatics, Athens University of Economics and Business, Athens, Greece.
2Archimedes, Athena Research Center, Athens, Gre...
https://arxiv.org/abs/2505.22174v1
agents, each equipped with a valuation function, are given as input, and one would like to produce a (complete or partial) partition of the resources to the agents, so that some predetermined fairness criterion is satisfied. Here we study a version of the problem where the set of n agents is indeed given, but the indi...
agents. Moreover, there is a natural way of approximating any additive instance via a 2-value instance by just setting a threshold for each agent i and rounding everything up (to the “high value” α_i) or down (to the “low value” β_i) accordingly. Although this approximation is not always meaningful, it does give nontrivia...
as well, albeit milder. We show that, for any ε > 0, no algorithm can guarantee (1/2 + ε)-EF1 at every time step, even for two agents (Theorem 3.1), or (1/(2n−1) + ε)-MMS at every time step for general n (Theorem 3.2). • We present an algorithm with a tight approximation guarantee with respect to MMS. In particular, our Algor...
via carefully defined auxiliary valuation functions. Further Related Work. There is a vast literature on fair division, both with divisible and with indivisible resources. For a recent survey on the latter, see Amanatidis et al. [2023]. Here we focus on online fair division settings, mostly with indivisible items. Alek...
setting is very similar to ours but the items that arrive online are divisible [Gkatzelis et al., 2021, Barman et al., 2022, Banerjee et al., 2022, 2023]. The similarities, however, are only superficial, as the fractional assignment of even a few goods allows us to bypass most strong impossibility results. Finally, onl...
(namely, they see the allocation as being temporal-EF) by an empty bundle. So, without loss of generality, we may assume that the instances we consider only contain agents of types 1, 2, and 3. Of course, agents of types 2 and 3 are easier to satisfy (see, e.g., Corollary 4.4). As a final observation about agent types,...
however, we assume that our instances can be augmented with limited information about the future. We say that an online instance is augmented with foresight of length ℓ if every time a good g arrives (and still needs to be allocated immediately and irrevocably), we also get a preview of the ℓ next goods. We mentioned abov...
al. [2025]. Definition 2.5 (Temporal Fairness). Consider a sequence of partial allocations A^t = (A^t_1, A^t_2, ..., A^t_n), for t ∈ Z_{≥0}, such that A^t_i ⊆ A^{t+1}_i for any i ∈ N and any t ≥ 0. If A^t is ρ_1-EFk (resp. ρ_2-MMS) for all t ∈ Z_{≥0}, then we say that the sequence of allocations (A^t)_{t≥0} is ρ_1-temporal-EFk (resp. ρ_2-temporal-MMS). Wh...
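For additive valuations, the temporal-EF1 part of this definition can be checked directly by verifying EF1 on every prefix allocation as goods arrive. A small sketch (with ρ_1 = 1; function and field names are ours):

```python
from itertools import combinations

def is_ef1(bundles, value):
    # bundles: list of sets of goods; value[i][g]: agent i's value for good g.
    def v(i, B):
        return sum(value[i][g] for g in B)
    for i, j in combinations(range(len(bundles)), 2):
        for a, b in ((i, j), (j, i)):
            # a envies b beyond one good if even after removing b's best
            # single good (from a's view), a still values b's bundle more.
            if bundles[b] and v(a, bundles[a]) < min(
                v(a, bundles[b] - {g}) for g in bundles[b]
            ):
                return False
    return True

def is_temporal_ef1(assignments, value, n):
    # assignments[t]: the agent receiving good t, in arrival order.
    bundles = [set() for _ in range(n)]
    for t, agent in enumerate(assignments):
        bundles[agent].add(t)              # allocated immediately, irrevocably
        if not is_ef1(bundles, value):     # every prefix must be EF1
            return False
    return True

# Two agents with identical unit values: round-robin is temporal-EF1.
vals = [{0: 1, 1: 1, 2: 1}] * 2
assert is_temporal_ef1([0, 1, 0], vals, n=2)
assert not is_temporal_ef1([0, 0, 1], vals, n=2)
```

The second example fails at t = 1: giving both early goods to agent 0 already violates EF1 at that prefix, even though the final allocation could later be balanced; this is exactly what the temporal requirement rules out.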
[Table of agent values: 1, 5, …; 1, 1, 1, 5, 5, ….] Case 1: g_3 is given to agent 1. In this case, consider a next good g_4, such that v_1(g_4) = v_2(g_4) = 5. Whoever gets g_4, the re...
that degrades with the number of agents. Theorem 3.2. Let ε > 0 be a constant. There is no deterministic algorithm that, given a 2-value instance with n agents, always builds a (1/(2n−1) + ε)-temporal-MMS allocation. Proof. Suppose we have a deterministic algorithm A for the problem. We are going to consider a 2-value instan...
goods. Now suppose that no agent i ≤ λ has received value more than n + 1 yet and that λ goods, g′_1, ..., g′_λ, which are high-valued for all agents, arrive in that order. By the induction hypothesis, either algorithm A builds at some point an allocation that is not 1/2n-MMS or, for all j ∈ [λ−1], agent j will get good g′_j....
We claim that either algorithm A fails to maintain a (1/(2n−1) + ε)-MMS allocation or it allocates g_{2n}, g_{2n+1}, ..., g_{3n−k−1} to agents n, n−1, ..., k+1, in that order. Towards a contradiction, suppose that this is not the case, i.e., A maintains a good approximation to temporal maximin share fairness, yet some o...
for personalized interval-restricted instances. Corollary 3.4. Let ε > 0 be a constant. There is no algorithm that, given a 2-value instance with n agents and values α > 1 = β, always builds a (1/√(2α) + ε)-temporal-MMS allocation. Proof. Notice that in the proof of Theorem 3.2 we have α = 2n² + 2n. Also, by standard calculus we ...
agent, would not leave enough room for our competing goals to work simultaneously. In our proofs and statements we often need to refer to the agents' bundles at different time steps. For clarity, we write A^t_i (rather than just A_i) to denote the bundle of agent i at the end of time step t. In fact, we use this notation t...
separately. While parts 1. and 3. are relatively straightforward, part 2. requires an elaborate analysis using delicate inductive arguments. During any phase, an agent i is called active if χ_i = 0 and inactive otherwise. As is clear from the definition of the sets N_h(g, t) and N_ℓ(g, t) (lines 11 and 12), no agent can recei...
1 out of the first n high-valued goods they see and 1 out of every 3n−2 high-valued goods overall, is to show that it is always possible to allocate the goods in such a way that H[i] > 0, for all i ∈ N, at the end of each time step t (i.e., right before the (t+1)-th good arrives). The reason, of course, is that H[i] has be...
viewed as high-valued by someone, it is always allocated as a high-valued good. Indeed, this is the case: if we only allocated goods so as to maximize the social welfare, then we would be able to maintain (1) for every t, even if in line 17 we only added n rather than 3n−2. The technical reason why will become clear in t...
temporarily dropped to 0), for j to get g it suffices to show that χ_j = 0. Towards a contradiction, assume this is not the case, i.e., j has received a high-valued good at time t′ which belongs to the same phase as t. Since each phase has at most 2n−1 time steps (see the proof of part 3. of Theorem 4.1), t−t′ ≤ 2n−2. But then,...
that agent i is of type 3. Given that low-valued goods are completely irrelevant to agent i, we can consider an alternative time τ that starts at 0, like t, but only increases when a high-valued good for i arrives. That is, while t reflects how many goods have arrived in general, τ reflects how many high-valued goods with r...
+ (2n−2))α_i. We have

μ_i(t) ≤ v_i(S)/n ≤ ((κ(2n−1) + (2n−2))/n)·α_i ≤ ((2nκ + 2n)/n)·α_i = 2(κ+1)α_i ≤ 4κα_i = 4v_i(A^t_i).

Case 3: κ ≥ 1 and λ ≥ 1. Similarly to Case 2, v_i(A^t_i) = κα_i + λ and i might have seen at most n + (κ+λ−1)(2n−1) + (2n−2) goods (by parts 1....
is without loss of generality to assume that the agent who gets good g_1 is agent 1, as it is a matter of renaming the agents, if needed, and is consistent with our lexicographic tie-breaking. The sub-instance that only involves agents 2 through n has a properly defined initial priority vector H′_0 (see within the cas...
In this case, the initial vector H′_0 of the sub-instance given as input is defined by H′_0[i] = H_0[i] − 1 ≥ n − 1 for i ∈ {2, ..., n}. Like in Case 1, the priority among agents remains exactly the same as in the original instance, but here H′_{t′}[i] = H_{t′+1}[i] − 1 for all t′ ∈ {0, ..., n−1} and i ∈ {2, ..., n}; again, H′_{t′}[1...
to condition (1). Indeed, if H_t[i] has changed, then v_i(g) = α_i. Moreover, it must be χ_i = 0, as otherwise N_h(g, t) ≠ ∅ and g would be allocated as a high-valued good instead. But χ_i = 0 means that agent i has received a high-valued good, say at a time step t_h, during the current phase. Recall that each phase has at most 2n−1 tim...
of H reduces by more than 1 in a single time step. The last one is that j ∈ ⋃_{ℓ=0}^{k+1} H_ℓ(t−1); if not, we would have H_{t−1}[j] ≥ k+2 and the j-th entry of H right before g was allocated to j would be H_{t−1}[j] − 1 ≥ k+1 > k ≥ min_{i∈S} H[i] ≥ min_{i∈N_h(g,t)} H[i], contradicting the choice of j. Using these observations, as well as the induction hypo...
next section, it is simpler to state, much simpler to analyze, and still illustrates the power of—even very limited—foresight. Theorem 5.2. For any two-agent 2-value instance augmented with foresight of length 1, Algorithm 2 builds an allocation that is temporal-EF2, while it is EF1 for every even time step t ≥ 0. Proof. ...
combined with the fact that ctr does not change here, implies that (ii) still holds for t = 2k + 2. So, we may assume that the condition of line 10 is false, i.e., goods g_{2k+1}, g_{2k+2} induce the pattern of block I or block II of Table 1, with one universally high-valued and one universally low-valued good. Here we consider t...
the previous section, we can generalize Theorem 5.2 to any number of agents, albeit with a more complicated algorithm. Similarly to Algorithm 2, what we would like to achieve between time steps kn and (k+1)n (i.e., with goods g_{kn+1}, ..., g_{(k+1)n}) is to obtain an EF1 allocation by taking the union of the EF1 allocation ...
allocation that is temporal-EF2, while it is EF1 (and, thus, 1/n-MMS) for every time step t = kn, k ∈ Z_{≥0}. Moreover, if at any step t_0 the allocation fails to be 1/2-EF1, then it remains 1/2-EF1 at the end of every time step t ≥ ⌈t_0/n⌉·n. Proof. The proof has a similar structure to the proof of Theorem 5.2, but the induction i...
goods agents i and j received, respectively, in round k+1. This means that h_i, h_j were matched with i and j, respectively, in M. By the induction hypothesis, we know that the envy from agent i towards agent j at the end of time step t = kn—if it existed at all—was upper bounded by α_i − β_i, i.e., v_i(A^{kn}_j) − v_i(A^{kn}_i) ≤ α_i − β_i. If v_i(h_i)...
contradicting the choice of M. So, it must be v_{i_2}(h_{i_2}) = α_{i_2}. The third observation we need for the proof is that for any edge (i_r, i_{r+1}) in C such that v_{i_r}(h_{i_r}) = α_{i_r}, it must also be the case that v_{i_r}(h_{i_{r+1}}) = α_{i_r}.¹ To see this, notice that v_{i_r}(A^{(k+1)n}_{i_{r+1}}) − v_{i_r}(A^{(k+1)n}_{i_r}) = v_{i_r}(A^{kn}_{i_{r+1}}) − v_{i_r}(A^{kn}_{i_r}) + v_{i_r}(h_{i_{r+1}}) − v_{i_r}(h_{i_r}) ...
bundle can violate (even exact) EF1 from i's perspective. Besides this, we also claim that, if h_i, h_j are the goods agents i and j receive, respectively, in round k, then v_i(h_j) = α_i. Indeed, using property (iii) we showed above, even if agent j receives its good first in round k, we have v_i(A^{t_0}_j) − v_i(A^{t_0}_i) ≤ v_i(A^{(k−1)n}_j) + v...
instance at the expense of an additional multiplicative factor of √α_i for each agent i. Proof. The main idea is to use auxiliary valuation functions that aim to approximate the original valuation functions while taking only two values. Given the personalized interval-restricted valuation function v_i of an agent i, such th...
v̂_n. Then, for any i ∈ N, we have

v_i(A^t_i) ≥ (1/√α_i)·v̂_i(A^t_i) ≥ (1/√α_i)·ρ·μ̂^n_i(⋃_{j=1}^n A^t_j) ≥ (ρ/√α_i)·μ^n_i(⋃_{j=1}^n A^t_j),

where, as usual, the first and third inequalities follow from Claim 6.2. Thus, (A^t_1, ..., A^t_n) is ρ/√α_i-MMS with respect to the original functions v_1, ..., v_n. For any personalized interval-restricted instance le...
the European Union – NextGenerationEU (H.F.R.I. Project Number: 15877). This work was partially supported by the NWO Veni project No. VI.Veni.192.153. References Hannaneh Akrami, Bhaskar Ray Chaudhury, Martin Hoefer, Kurt Mehlhorn, Marco Schmalhofer, Golnoosh Shahkarami, Giovanna Varricchio, Quentin Vermande, and Ernes...
D. Procaccia, and Christos-Alexandros Psomas. How to make envy vanish over time. In Proceedings of the 2018 ACM Conference on Economics and Computation (EC), pages 593–610, 2018. Ziyad Benomar and Vianney Perchet. Non-clairvoyant scheduling with partial predictions. In Forty-first International Conference on Machine Le...
division. Math. Oper. Res. , 47 (2):945–968, 2022. Hakuei Yamada, Junpei Komiyama, Kenshi Abe, and Atsushi Iwasaki. Learning fair division from bandit feedback. In International Conference on Artificial Intelligence and Statistics, AISTATS 2024 , volume 238 of Proceedings of Machine Learning Research , pages 3106–3114....
arXiv:2505.22179v1 [cs.CL] 28 May 2025

Speculative Decoding Meets Quantization: Compatibility Evaluation and Hierarchical Framework Design
Yudi Zhang1, Weilin Zhao2, Xu Han2, Tiejun Zhao1*, Wang Xu2*, Hailong Cao1, Conghui Zhu1
1Faculty of Computing, Harbin Institute of Technology, Harbin, China. 2Tsinghua University, Bei...
https://arxiv.org/abs/2505.22179v1
employs the 4-bit version of the target model as the draft and further introduces a smaller 4-bit model to enable a multi-level speculative decoding method. However, self-speculative decoding, which uses the same architecture for both draft and target models, inherently limits speedup. In contrast, speculative de...
and n is the tree size. Let τ(n, d) denote the expected accepted length, defined as the expected number of tokens accepted by the target model after verifying the drafts. T_d and T_t denote the decoding time of the draft model and target model, respectively. T_v(n) denotes the time taken by the target model to verify n tokens. T...
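Under these definitions, a back-of-envelope speedup estimate follows from comparing the cost of one draft-then-verify round against plain autoregressive decoding of the same number of tokens. This is a hedged sketch of one common way to combine the quantities; the paper's exact expression may differ, and the numbers below are assumptions, not measurements.

```python
def estimated_speedup(tau, d, T_d, T_v, T_t):
    # One round: d draft forward passes plus one verification of n tokens,
    # yielding tau accepted tokens on average. Plain decoding would need
    # tau target forward passes for the same tokens.
    round_time = d * T_d + T_v
    baseline_time = tau * T_t
    return baseline_time / round_time

# Toy numbers: cheap drafts (1 ms), verification 12 ms, target pass 10 ms.
s = estimated_speedup(tau=3.5, d=4, T_d=1.0, T_v=12.0, T_t=10.0)
assert s > 2.0
```

The formula makes the quantization trade-off visible: 4-bit weights shrink T_d and T_v (memory-bound decoding) but can lower τ(n, d) if draft quality degrades, so the net speedup depends on both effects.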
et al., 2020) was used for calibration. To evaluate the performance of different quantization precisions, we conduct experiments on Llama-3-8B-Instruct and Llama-3-70B-Instruct models (Grattafiori et al., 2024) quantized with various algorithms. Three benchmarks are evaluated: WikiText2 (Merity et al., 2016) for pe...
a single RTX 3090, representing high-performance and consumer-grade GPUs, respectively.

3.2 Experimental Observation

Figure 1 presents the speedup performance of speculative decoding EAGLE-2, various quantization methods, and their integration. It also includes the relative speedup improvement contributed by EAGLE-...
[Figure 2 panels: (d) Accepted Length (8B, 3090); (e) Verification Ratio (8B, A100); (f) Speedup (8B, 3090); x-axis: draft tree size.] Figure 2: Comparison of average accepted length, verification-to-decoding ratio, and speedup for var...
speedup from fewer draft forward passes also con… [Figure panels: (a) Accepted Length (70B, A100); (b) Verification Ratio (70B, A100); (c) Speedup; legend: W8A8, W4A16, W4A8-QQQ, W4A8-QQQ-g128; x-axis: draft tree size.]
in one forward pass. Compared to directly applying tree-based draft verification, sequential draft verification enables further memory access optimization while fully retaining the memory advantages of 4-bit weight models, achieving orthogonality between speculative decoding and 4-bit weight quantization.

6 Met...
3.3. For HierSpec, the first-level draft length d1 ∈ {3, 4} matches the optimal EAGLE-2 setting for 8B, and the second-level d ∈ {6, 7} follows Vanilla SP. All experiments are conducted on a single NVIDIA 80GB A100 GPU. [Bar charts: drafting time (ms): EAGLE-2 2.7, Vanilla SP 5.2, HierSpec 3.8; verification time (ms): EAGLE-2 45.3, Vanilla SP 29.7, HierSpec 30.0 …]
by quantizing both weights and activations. To alleviate the impact of activation outliers, LLM.int8() (Dettmers et al., 2022) performs mixed-precision decomposition, SmoothQuant (Xiao et al., 2023) and OmniQuant (Shao et al., 2024) adopt per-channel scaling transformations. Further, QuaRot (Ashkboos et al., 2024) an...
Bo Li, Pashmina Cameron, Martin Jaggi, Dan Alistarh, Torsten Hoefler, and James Hensman. 2024. Quarot: Outlier-free 4-bit inference in rotated llms. In Proceedings of NeurIPS, pages 100213–100240. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam,...
ing with pagedattention. In Proceedings of SOSP, pages 611–626. Yaniv Leviathan, Matan Kalman, and Yossi Matias. 2023. Fast inference from transformers via speculative decoding. In Proceedings of ICML, pages 19274–19286. Yuhui Li, Fangyun Wei, Chao Zhang, and Hongyang Zhang. 2024a. Eagle-2: Faster inference of la...
decoding. In Findings of the ACL, pages 7655–7671. Guangxuan Xiao, Ji Lin, Mickael Seznec, Hao Wu, Julien Demouth, and Song Han. 2023. Smoothquant: Accurate and efficient post-training quantization for large language models. In Proceedings of ICML, pages 38087–38099. Ying Zhang, Peng Zhang, Mincong Huang, Jingyang Xi...
arXiv:2505.22184v1 [cs.CL] 28 May 2025

Breaking the Cloak! Unveiling Chinese Cloaked Toxicity with Homophone Graph and Toxic Lexicon
Xuchen Ma1, Jianxiang Yu1, Wenming Shao2, Bo Pang2, Xiang Li1*
1School of Data Science and Engineering, East China Normal University
2Shanghai EastWonder Info-tech Co., Ltd.
{xuchenma, jia...
https://arxiv.org/abs/2505.22184v1
naturally arises: Can we develop a model to unveil Chinese cloaked toxic content? We notice that Chinese Spelling Correction (CSC), which aims to correct misspelled Chinese characters, shares some similarities with our task. However, the two tasks are not entirely equivalent. While CSC deals with unintentional user ...
English. For example, some works [12] leverage contextual information to correct intentionally misspelled words, while others [9] focus on addressing word order disruptions by employing LLMs for robust comprehension of noisy text. However, these methods are designed for English. Unlike English—a word-based language w...
obtain their pinyin regardless of tones. Then characters with identical pinyin are connected. Each node also has a self-loop, as each character is considered to have a homophonic relation with itself. We further consider polyphonic characters and phonetically similar pronunciations based on regional dialects. Polyphoni...
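The graph construction just described can be sketched as follows. In practice pypinyin would supply the tone-less pinyin; here a tiny hand-written pinyin table stands in so the sketch stays self-contained, and the function name is ours.

```python
from collections import defaultdict

# Tiny stand-in pinyin table (tone-less); real code would use pypinyin.
PINYIN = {"他": "ta", "她": "ta", "塔": "ta", "马": "ma", "妈": "ma"}

def build_homophone_graph(chars):
    by_pinyin = defaultdict(list)
    for c in chars:
        by_pinyin[PINYIN[c]].append(c)     # group characters by pinyin
    edges = set()
    for group in by_pinyin.values():
        for a in group:
            edges.add((a, a))              # self-loop: homophone of itself
            for b in group:
                if a != b:
                    edges.add((a, b))      # connect identical-pinyin pairs
    return edges

g = build_homophone_graph("他她塔马妈")
assert ("他", "她") in g and ("他", "他") in g and ("他", "马") not in g
```

Polyphonic characters would simply appear under several pinyin keys (multiple entries per character), and dialect-based near-homophones could be handled by merging pinyin keys before grouping.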
toxic word pairs, denoted as W_p = {(w^(i), l^(i))} for i = 1, ..., M. Here, l^(i) is a probable toxic unveiling of w^(i).

3.3 Filtering Candidate Toxic Words

To further filter out incorrect matches (see Figure 2) in W_p, we leverage the full semantics of the text sequence X. Formally, for each (w, l) pair, we define X_pre and X_ta...
log P_BERT(x | X_pre, X_tail) = (1/N) Σ_{i=1}^{N} log P_BERT(x_i | X^{mask}_i)    (3)

3.4.2 LLMs-based Method

We next explore using LLMs to filter candidate toxic words. Due to their auto-regressive nature, mainstream LLMs compute the occurrence probability of x based on the pre-context X_pre by:

P_LLM(x | X_pre) = ∏_{i=1}^{N} P_LLM(x_i | X_pre, x_1, ..., x_{i−1}).    (4...
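The filtering decision implied by Eqs. (3) and (4), namely replace a candidate w with its unveiling l only if the substituted sequence scores higher under the language model, can be sketched with a toy bigram model standing in for the BERT/LLM probabilities. English tokens and all names here are purely illustrative.

```python
import math

# Toy bigram "LM": P(next | prev). A stand-in for P_BERT / P_LLM.
BIGRAM = {("you", "are"): 0.9, ("are", "stupid"): 0.6, ("are", "stooped"): 0.01}

def seq_logprob(tokens):
    # Sum of log conditional probabilities, the log of Eq. (4)'s product.
    return sum(math.log(BIGRAM.get((a, b), 1e-6))
               for a, b in zip(tokens, tokens[1:]))

def unveil(tokens, i, candidate):
    # Keep the unveiling only if the swapped sequence is more probable.
    swapped = tokens[:i] + [candidate] + tokens[i + 1:]
    return swapped if seq_logprob(swapped) > seq_logprob(tokens) else tokens

assert unveil(["you", "are", "stooped"], 2, "stupid") == ["you", "are", "stupid"]
```

The same comparison works with either scorer: Eq. (3) conditions on both sides of the candidate via masking, while Eq. (4) conditions only on the left context.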
[29] is a training-free and prompt-free LLM-based CSC model that considers character pronunciation and shape similarities. Setup We construct the homophone graph with pypinyin, and utilize lexicons after manual correction and deduplication for toxicity matching. Our filtering model comes in two versions: BERT-based and LLM-ba...