[Table residue (defense results; method columns not recovered in this fragment):
PPL↓ 6515.57 1679.05 7306.63 222.75 90.61
GuardT2I — FID↓ 118.32 58.82 77.18 77.45 —— 151.89 56.91; ASR↑ 54.98 64.88 57.75 50.21 79.54 49.82 86.32; CS↑ 0.7659 0.7885 0.7035 0.7529 0.6899 —— 0.7634; PPL↓ 6137.62 1867.29 7375.22 2699.14 7306.63 238.79 17.98
img-cls — FID↓ 54.71 49.30 59.57 56.52 73.93 85.11 59.14; ASR↑ 35.40 42.84 34.9...]
https://arxiv.org/abs/2505.21074v1
[Table residue:
CS↑ 0.624 0.6823 —— —— 0.6583; PPL↓ 6308.41 2415.76 7331.28 480.95 5946.73 4463.95 13.23
Multiple defenses (text-img + text-cls + SLD-strong) — FID↓ 188.38 175.05 206.90 138.36 —— 145.22 127.65
Column headers: Reference, MMA-Diffusion, P4D-K, P4D-N, SneakyPrompt, Ring-A-Bell, FLIRT, RPG-RT (ours); defense rows: text-match, text-cls, GuardT2I, img-cls, img-clip, text-im...]
https://arxiv.org/abs/2505.21074v1
and our RPG-RT on text-to-video systems. Our RPG-RT achieves the highest ASR, further validating the flexibility and applicability of RPG-RT.

       | SneakyPrompt | Ring-A-Bell | FLIRT | RPG-RT
ASR↑   | 18.67        | 35.33       | 23.33 | 67.33

C.5 Generalization to Text-to-Video Systems
As a flexible red-team framework, RPG-RT can also be applied to red...
https://arxiv.org/abs/2505.21074v1
[Table residue: per-setting metrics for five methods (method names not recovered in this fragment).
Setting 1 — ASR↑: 18.00, 19.54, 59.58, 34.04, 97.26; CS↑: 0.6269, 0.6573, 0.6045, ——, 0.6954; PPL↓: 5112.85, 2981.83, 7306.63, 859.70, 7.06; FID↓: 91.59, 90.96, 73.54, 104.45, 88.59.
Setting 2 (guidance: 7.5, size: (1344, 768)) — ASR↑: 18.63, 16.77, 54.42, 38.42, 89.23; CS↑: 0.6313, 0.6699, 0.6071, ——, 0.7015; PPL↓: 5112.85, 2981.83, 7306.63, 859.70, 7.06; FID↓: 94.01, 93.35, 70.03, 98.77, 94.47; g...]
https://arxiv.org/abs/2505.21074v1
and a fourth goddess in the background holds a shield and a spear, and a fifth goddess in the foreground A sensual painting of the goddess Venus in the style of Greg Rutkowski, wearing a crown and surrounded by a halo of stars, with a sly and seductive expression on her face, and displaying a full frontal nudity , whil...
https://arxiv.org/abs/2505.21074v1
Figure 7: Loss curves for DPO training of the LLM (L_harm, L_inno, L_sim, L_rec) and the training of the scoring model.
generate SFW-NSFW image pairs that contain the same innocuous semantics, thereby enabling the training of the scoring model. For the second case, RPG-RT will not receive any preference d...
https://arxiv.org/abs/2505.21074v1
of queries↓ 14.20 12.46 13.11 15.75 1.94 12.11 1.05
Safety alignment, SD v2.1 — ASR-30↑: 92.63, ——, ——, 90.53, 100.00, 94.74, 100.00; average number of queries↓: 6.39, ——, ——, 7.16, 2.01, 4.32, 1.04
SD v3 — ASR-30↑: 74.74, ——, ——, 71.58, 100.00, 94.74, 100.00; average number of queries↓: 13.29, ——, ——, 13.41, 2.42, 7.74, 1.05
SafetyDPO — ASR-30↑: 89.36, 66.3...
https://arxiv.org/abs/2505.21074v1
arXiv:2505.21077v1 [cs.LG] 27 May 2025Efficient Large Language Model Inference with Neural Block Linearization Mete Erdogan, Francesco Tonin, Volkan Cevher Laboratory for Information and Inference Systems École Polytechnique Fédérale de Lausanne (EPFL), Switzerland [firstname].[lastname]@epfl.ch Abstract The high infer...
https://arxiv.org/abs/2505.21077v1
this paper, we introduce Neural Block Linearization (NBL), a novel method for compressing transformer-based models by replacing self-attention (a costly network block) with efficient linear layers derived using Linear Minimum Mean Squared Error (LMMSE) estimators [Kay, 1993, Kailath et al., 2000], without the need fo...
https://arxiv.org/abs/2505.21077v1
X and Y, respectively, and C_{YX} is their cross-covariance matrix. To compute ρ and the corresponding canonical directions a and b, we first standardize X and Y by normalizing their variances. This leads to the construction of the standardized cross-correlation matrix: C_W = C_{XX}^{-1/2} C_{YX} C_{YY}^{-1/2}. The canonical correlations ρ_i are o...
https://arxiv.org/abs/2505.21077v1
and (b) Llama-3.1-8B models. 3.1 Neural Block Linearization (NBL) To determine the linear weights, we utilize a calibration dataset D = {S^{(i)}}_{i=1}^{s}, with s input sequences, each having a context length of t. Each sequence S^{(i)} = {w_1^{(i)}, w_2^{(i)}, ..., w_t^{(i)}} is processed through multiple transformer blocks before reachi...
https://arxiv.org/abs/2505.21077v1
provide valuable insights into the conditions under which linearization is most effective. Theorem 3.2. For a given self-attention layer with input X and output Y, the Normalized Mean Squared Error (NMSE) of the LMMSE estimator Ŷ is defined as: NMSE(Y, Ŷ) = MSE(Y, Ŷ) / Tr(C_{YY}), where C_{YY} is the auto-covariance matrix of ...
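To make the quantities in Theorem 3.2 concrete, the following is a minimal numerical sketch of an LMMSE estimate and its NMSE on synthetic data (the data, dimensions, and variable names are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 8
X = rng.normal(size=(n, d))
# Nearly linear input-output relation, so the LMMSE fit should be tight.
Y = X @ rng.normal(size=(d, d)) + 0.1 * rng.normal(size=(n, d))

mu_x, mu_y = X.mean(0), Y.mean(0)
Xc, Yc = X - mu_x, Y - mu_y
Cxx = Xc.T @ Xc / (n - 1)          # auto-covariance of X
Cyx = Yc.T @ Xc / (n - 1)          # cross-covariance of Y and X

# LMMSE estimator: Y_hat = mu_y + (x - mu_x) W^T with W = C_YX C_XX^{-1}
W = Cyx @ np.linalg.inv(Cxx)
Y_hat = mu_y + Xc @ W.T

# Sample estimate of NMSE(Y, Y_hat) = MSE / Tr(C_YY) from Theorem 3.2
nmse = ((Y - Y_hat) ** 2).sum() / (Yc ** 2).sum()
print(round(float(nmse), 4))
```

With a near-linear relation the NMSE is small, which is exactly the regime in which replacing the block by its LMMSE linearization loses little.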
https://arxiv.org/abs/2505.21077v1
and the m layers with the lowest
Table 2: Performance of Mistral-7B on reasoning benchmarks across baselines, SliceGPT [Ashkboos et al., 2024], SLEB [Song et al., 2024], DROP [He et al., 2024], and ours (NBL). Prefill and throughput speeds are reported relative to the original baseline model. Method ARC-e ARC-c BoolQ ...
https://arxiv.org/abs/2505.21077v1
HellaSwag [Zellers et al., 2019], OBQA [Mihaylov et al., 2018], PIQA [Bisk et al., 2020] and WinoGrande [Sakaguchi et al., 2021], following a similar evaluation as Zhang et al. [2024a]. We implemented and evaluated NBL on an NVIDIA A100 GPU (80GB) using PyTorch [Paszke et al., 2019] and HuggingFace Transformers [Wolf, ...
https://arxiv.org/abs/2505.21077v1
69.8 69.6 55.2 1.43 1.42
Attn NBL-4  81.9 54.0 82.2 78.1 65.0 45.8 81.1 73.4 70.2 1.08 1.11
Attn NBL-8  81.5 53.7 82.1 77.2 64.0 45.4 81.1 73.3 69.8 1.16 1.17
Attn NBL-12 79.1 52.2 82.3 75.2 64.8 45.2 79.9 74.0 69.1 1.24 1.25
Attn NBL-16 71.8 46.8 81.6 69.0 39.1 41.8 77.0 73.1 62.5 1.39 1.37
Table 4: DeepSeek. Performan...
https://arxiv.org/abs/2505.21077v1
83.7 78.8 76.0 1.16 1.24
Attn NBL-48 79.1 57.7 83.8 76.2 71.5 45.0 81.9 75.3 71.3 1.25 1.35
Attn NBL-54 63.0 48.2 82.3 71.6 65.3 43.4 78.3 71.7 65.4 1.30 1.44
and d the model dimension. Replacing m attention layers reduces their complexity to O(m·n·d), yielding an overall complexity of O((K−m)·n²·d + m·n·d). Thus, prefill spee...
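Under this cost model, the theoretical prefill speed-up from linearizing m of K attention layers can be sketched as follows (the K, n, d values are illustrative assumptions, and MLP cost is ignored for simplicity):

```python
# Hedged sketch of the attention-cost arithmetic above; values are illustrative.
def attention_flops(K: int, m: int, n: int, d: int) -> float:
    """Cost with m of K self-attention layers linearized: O((K-m)·n²·d + m·n·d)."""
    return (K - m) * n**2 * d + m * n * d

def speedup(K: int, m: int, n: int, d: int) -> float:
    """Ratio of the original attention cost to the cost after linearizing m layers."""
    return attention_flops(K, 0, n, d) / attention_flops(K, m, n, d)

# e.g. a 32-layer model, context length 4096, hidden size 4096, 8 layers linearized:
print(round(speedup(32, 8, 4096, 4096), 3))
```

For long contexts the m·n·d term is negligible against (K−m)·n²·d, so the attention-only speed-up approaches K/(K−m) (here 32/24 ≈ 1.333).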
https://arxiv.org/abs/2505.21077v1
and memory demands of transformer-based LLMs have inspired numerous techniques to reduce the complexity of inference while maintaining performance: Structured and unstructured pruning: Techniques such as weight pruning, structured pruning, and layer removal have been widely explored to reduce model size and compu...
https://arxiv.org/abs/2505.21077v1
and generalization [Morcos et al., 2018]. Additionally, a similarity index for comparing representations between different architectures was proposed [Kornblith et al., 2019], and representation alignment in wide networks was studied [Nguyen et al., 2020]. These works highlight the effectiveness of CCA in examining neu...
https://arxiv.org/abs/2505.21077v1
, 2024a. Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Wei-Ming Chen, Wei-Chen Wang, Guangxuan Xiao, Xingyu Dang, Chuang Gan, and Song Han. AWQ: Activation-aware weight quantization for on-device LLM compression and acceleration. Proceedings of Machine Learning and Systems, 6:87–100, 2024. Rajarshi Saha, Naomi Saga...
https://arxiv.org/abs/2505.21077v1
2024. Shwai He, Guoheng Sun, Zheyu Shen, and Ang Li. What matters in transformers? not all attention is needed, 2024. URL https://arxiv.org/abs/2406.15786 . Steven M Kay. Fundamentals of statistical signal processing: Estimation theory, 1993. Thomas Kailath, Ali H Sayed, and Babak Hassibi. Linear estimation . Prentice ...
https://arxiv.org/abs/2505.21077v1
Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. The pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027 , 2020. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. Lor...
https://arxiv.org/abs/2505.21077v1
Peiyi Wang, Yongqi Li, Tao Ge, Tianyu Liu, Wenjie Li, and Zhifang Sui. Unlocking efficiency in large language model inference: A comprehensive survey of speculative decoding. arXiv preprint arXiv:2401.07851 , 2024b. Yuhui Li, Fangyun Wei, Chao Zhang, and Hongyang Zhang. Eagle-3: Scaling up inference ac- celeration of l...
https://arxiv.org/abs/2505.21077v1
B.3 Performing CCA using the Singular Value Decomposition (SVD)
C Proof of Theorem 3.2
D Algorithmic details for NBL
D.1 Computational cost of the calibration
D.2 Calibration runtime ...
https://arxiv.org/abs/2505.21077v1
estimation is a fundamental statistical method for estimating an unknown quantity from noisy observations. The goal of MMSE estimation is to minimize the mean squared error (MSE) between the actual value and its estimate. While the general MMSE approach does not impose restrictions on the form of the estimator, the Lin...
https://arxiv.org/abs/2505.21077v1
to its closed-form solution, making it suitable for real-time applications. It provides interpretable results when the covariance structures are well understood and is robust under Gaussian noise assumptions. However, reliance on accurate covariance estimation and the linearity assumption can limit its effectiveness i...
https://arxiv.org/abs/2505.21077v1
problem: max_{a,b} ρ = (aᵀ C_{XY} b) / (√(aᵀ C_{XX} a) · √(bᵀ C_{YY} b)), where C_{XX}, C_{YY}, C_{XY} are the auto-covariance matrices of X, Y, and their cross-covariance, respectively. B.2 Canonical Correlation matrix To simplify the problem, a change of basis is applied to normalize the covariance matrices C_{XX} and C_{YY}. Define: x̃₁ = C_{XX}^{1/2} a, ỹ₁ = C_{YY}^{1/2} b, ...
https://arxiv.org/abs/2505.21077v1
C_{XX}^{-1} C_{XX} (C_{YX} C_{XX}^{-1})ᵀ)
= Tr(C_{YY} − C_{YX} C_{XX}^{-1} C_{XY} − C_{YX} C_{XX}^{-1} C_{XY} + C_{YX} C_{XX}^{-1} C_{XX} C_{XX}^{-1} C_{XY})
= Tr(C_{YY} − C_{YX} C_{XX}^{-1} C_{XY}),   (12)
where the last two terms cancel. Then, by using the matrix square root of C_{YY}, and using the fact that for A ∈ R^{K×L} and B ∈ R^{L×K}, Tr(AB) = Tr(BA), we can further simpl...
https://arxiv.org/abs/2505.21077v1
{Calculate the standardized cross-correlation matrix.}
29: Singular Value Decomposition (SVD):
30: (U, S, V) ← SVD(C_W)   {Perform SVD to compute the CCA singular values.}
31: (ρ₁, ρ₂, ..., ρ_r) ← diag(S)   {r is the embedding dimension.}
32: CCA-bound ← Σ_{i=1}^{r} (1 − ρ_i²)   {Calculate the CCA bound based on Theorem 3.2.}
34: Weight ...
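The steps above can be sketched in NumPy; the function name and the synthetic inputs are assumptions, with X and Y standing for calibration activations at a block's input and output:

```python
import numpy as np

def cca_bound(X: np.ndarray, Y: np.ndarray, eps: float = 1e-8) -> float:
    """Sum of (1 - rho_i^2) over the CCA singular values, as in the listing above."""
    n = X.shape[0]
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Cxx = Xc.T @ Xc / (n - 1) + eps * np.eye(X.shape[1])  # small ridge for stability
    Cyy = Yc.T @ Yc / (n - 1) + eps * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / (n - 1)

    def inv_sqrt(C):
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(np.clip(w, eps, None))) @ V.T

    Cw = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)   # standardized cross-correlation matrix
    rho = np.clip(np.linalg.svd(Cw, compute_uv=False), 0.0, 1.0)
    return float(np.sum(1.0 - rho ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))
print(round(cca_bound(X, X @ rng.normal(size=(16, 16))), 3))  # near 0: block is well linearizable
print(cca_bound(X, rng.normal(size=(500, 16))) > 1.0)          # large bound: poorly linearizable
```

A small bound indicates the block's output is nearly a linear function of its input, making it a good candidate for replacement.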
https://arxiv.org/abs/2505.21077v1
relatively inexpensive at O(d2). Overall, the total complexity of the algorithm is O(d3+s·t·d2). The method is efficient due to optimized linear algebra routines and direct computations. Additionally, the algorithm requires no hyperparameter tuning or fine-tuning beyond specifying the number of calibration samples. D.2...
https://arxiv.org/abs/2505.21077v1
helps reduce GPU memory overhead and ensures broader hardware compatibility, while using the GPU offers faster runtime for large-scale models. PyTorch's and SciPy's linear algebra routines ensure efficient execution of the covariance, eigen-decomposition, and SVD steps on either backend, enabling smooth inte...
https://arxiv.org/abs/2505.21077v1
can be found in Figure 4. To evaluate the statistical significance of the differences, we used the pooled standard error (Pooled_SE) to aggregate the uncertainty across tasks when reporting average performance. The Pooled_SE is computed as: Pooled_SE = (1/n) √(Σ_{i=1}^{n} SE_i²), where SE_i denotes the standard error of task i, a...
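As a quick numeric check, the pooled standard error above can be computed as follows (the SE values here are invented for illustration, not taken from the tables):

```python
import math

def pooled_se(se: list[float]) -> float:
    """Pooled_SE = (1/n) * sqrt(sum of SE_i^2) across n tasks."""
    n = len(se)
    return math.sqrt(sum(s * s for s in se)) / n

print(round(pooled_se([0.4, 0.4, 0.4, 0.4]), 3))  # → 0.2
```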
https://arxiv.org/abs/2505.21077v1
± 0.42 1.35 ± 0.05 1.42 ± 0.01
Table 12: Llama-3.1-70B (quant.): Accuracy and Speedup (Table 5).
Method  | Avg. Acc. ± SE (↑) | Prefill Speed-up (↑) | Throughput Speed-up (↑)
Base    | 76.0 ± 0.40 | 1 | 1
Drop-32 | 75.7 ± 0.40 | 1.20 ± 0.033 | 1.30 ± 0.01
Drop-48 | 70.4 ± 0.40 | 1.29 ± 0.052 | 1.42 ± 0.02
Drop-54 | 48.3 ± 0.42 | 1.34 ± 0.059 | 1.53 ...
https://arxiv.org/abs/2505.21077v1
the first group (the one with the highest number) is 100% and the other two are 1.5 and 0.75. So the number of the first is 100% and the other two are 1.5 and 0.5. So the total number of the first is 100% and the other two are 1.5 and 1.5. ✗ Attn DROP-8 There are 10 yellow flowers. There are 80% more of those in purple...
https://arxiv.org/abs/2505.21077v1
minimizing the output difference between quantized and original weights: L(s) = ‖Q(W · diag(s)) · diag(s)^{-1} · X − W X‖, where s is a per-channel scaling vector, and X represents cached activation features from a small calibration dataset. To simplify the search, AWQ uses the average activation magnitude per channel, s_X, as a pr...
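A toy sketch of this scaled-quantization objective is given below; the uniform 3-bit quantizer, the random weights and activations, and the square-root scaling rule are illustrative assumptions rather than AWQ's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(16, 32))                                        # weight matrix
X = rng.normal(size=(32, 64)) * np.linspace(0.1, 3.0, 32)[:, None]   # per-channel activations

def quantize(M: np.ndarray, bits: int = 3) -> np.ndarray:
    """Toy uniform symmetric quantizer (an assumption, not AWQ's grouping scheme)."""
    q = 2 ** (bits - 1) - 1
    step = np.abs(M).max() / q
    return np.round(M / step) * step

def awq_loss(s: np.ndarray) -> float:
    """L(s) = ||Q(W diag(s)) diag(s)^{-1} X - W X|| from the objective above."""
    Wq = quantize(W * s[None, :]) / s[None, :]
    return float(np.linalg.norm(Wq @ X - W @ X))

# Activation-magnitude-based scales (AWQ-style heuristic) vs. no scaling.
s_act = np.abs(X).mean(axis=1) ** 0.5
print(awq_loss(np.ones(32)), awq_loss(s_act))
```

The point of the search is that channels with large activations are scaled up before quantization, shrinking their relative quantization error in the output.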
https://arxiv.org/abs/2505.21077v1
with a learning rate of 1e−4, an effective batch size of 16, and context length of 1024 tokens using a 5000-sample subset of the C4 validation split under a causal language modeling objective. To further assess generalization beyond the calibration data, we also apply LoRA fine-tuning using the SlimPajama [Soboleva et ...
https://arxiv.org/abs/2505.21077v1
83.6 79.8 62.1 44.0 81.8 74.4 69.9 70.0
Attn NBL-12 76.6 49.1 83.5 76.5 60.4 42.6 79.8 73.3 67.7 68.3
Attn NBL-16 62.4 42.8 76.7 69.4 33.0 39.8 76.4 70.2 58.8 58.8
Table 18: Performance of Llama-3.1-8B with NBL using cosine distance criterion. Method ARC-e ARC-c BoolQ HellaSwag MMLU OBQA PIQA Wino- Avg. (↑) CCA (norm) ...
https://arxiv.org/abs/2505.21077v1
3, 12, 7, 1, 0 Attn DROP – Llama-3-8B-Instruct (C4)25, 24, 22, 23, 26, 28, 27, 20, 19, 21, 29, 18, 30, 17, 16, 31 15, 11, 14, 10, 13, 8, 12, 5, 9, 6, 4, 2, 3, 7, 1, 0 Attn DROP – Llama-3.1-70B (quant.) (C4)62, 65, 59, 61, 63, 46, 50, 58, 48, 51, 53, 54, 57, 60, 64, 66 67, 68, 49, 69, 55, 56, 47, 44, 52, 42, 45, 70, 43,...
https://arxiv.org/abs/2505.21077v1
detailed explanation of the relationship between context length, prefill speed, and KV-cache usage in transformer models with NBL-applied attention layers.
H.1 Figure analysis: prefill speed-up vs. context length
Figure 3 illustrates how prefill speed-up improves as more attention layers are modified by NBL. The baseli...
https://arxiv.org/abs/2505.21077v1
optimization—grouped-query attention and NBL—substantially lowers the memory requirements during inference, allowing models like Llama and Mistral to handle longer sequences more efficiently. For example, the KV cache sizes in Table 21 demonstrate the impact of these optimizations. With a batch size of 64 and grouped-q...
https://arxiv.org/abs/2505.21077v1
arXiv:2505.21087v1 [cs.LO] 27 May 2025
Stopping Criteria for Value Iteration on Concurrent Stochastic Reachability and Safety Games
Marta Grobelna∗, Jan Křetínský†∗, Maximilian Weininger‡§∗
∗Technical University of Munich, Munich, Germany; †Masaryk University, Brno, Czech Republic; ‡Ruhr-University Bochum, Bochum, Germany; §...
https://arxiv.org/abs/2505.21087v1
while the safety objective player, Player S, can attain optimal strategies [39], the reachability objective player, Player R, only possesses ε-optimal strategies for a given ε > 0 [21]. As a result, the problem of deciding whether the supremum probability (a.k.a. the value) is at least p, for p ∈ [0,1], is more subtle t...
https://arxiv.org/abs/2505.21087v1
We provide a stopping criterion for VI on CSGs, solving an open problem with erroneous solution attempts in the literature (see the related work in Subsec. I-A below). To this end, we unravel the recursive hierarchical structure of end components in CSGs (see Rem. 31) and adapt the bounded VI algorithm. A. Related Work ...
https://arxiv.org/abs/2505.21087v1
their relations, and algorithms to solve them. These games also consider discounted payoff or limit-average payoff, generalizing the reachability and safety CSGs we consider here. A generalization of CSGs to ω-regular objectives has been considered in [8], [15]. An insightful characterization of optimal strategies in c...
https://arxiv.org/abs/2505.21087v1
strategy by ρ (or σ). Markov Decision Processes: Given a CSG G, if we fix a strategy ρ ∈ R of player R, the game becomes an S-Markov Decision Process (MDP, [41]) G^ρ with the transition function δ^ρ(s,b)(s′) := Σ_{a∈Γ_R(s)} δ(s,a,b)(s′) · ρ(s)(a), for all s, s′ ∈ S and b ∈ Γ_S(s). The MDP induced by a fixed strategy σ ∈ S is defined analogo...
https://arxiv.org/abs/2505.21087v1
win with probability 0.5, but only with probability 0.5 − ε for all ε > 0. △ For s ∈ S, a ∈ Γ_R(s), and b ∈ Γ_S(s), the set of potential successors of s is denoted by Post(s,a,b) := Supp(δ(s,a,b)). We lift the notation to strategies ρ ∈ R and σ ∈ S by Post(s,ρ,σ) = ∪_{a∈Supp(ρ(s))} ∪_{b∈Supp(σ(s))} Post(s,a,b). We denote by W_S := {s ∈ S | V_R(s) = 0...
https://arxiv.org/abs/2505.21087v1
the players can cooperate to keep the play inside the EC (this is the usual way to lift the definition of [14] from MDPs to games). Thus, we can compute ECs in a CSG by computing ECs in the corresponding MDP with both players unified, i.e., every pair of actions is interpreted as an action in the MDP. Efficient algorith...
https://arxiv.org/abs/2505.21087v1
of both bounds
8: L_{k+1}(s) ← B(L_k)(s)
9: U_{k+1}(s) ← B(U_k)(s)
10: for C ∈ MEC do
11:   U_{k+1} ← DEFLATE(G, U_{k+1}, C)
12: k ← k + 1
13: until U_{k+1} − L_{k+1} ≤ ε
U that is meant to converge to the value from above. Naïvely, this upper bound is defined as follows: U_0(s) := 0 if s ∈ W_S, and 1 otherwise; U_{k+1}(s) := B(U_k)(s). (3) Given a precision ε > 0, the algorithm ...
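To illustrate why a deflation step is needed at all, here is a minimal sketch of the naïve bounds of Eq. (3) on a toy single-player instance (an MDP, i.e., the special case where Player S has only one action; the states and probabilities are my own choices, not from the paper): the lower bound converges to the value 0.5, while the self-loop end component keeps the naïve upper bound stuck at 1.

```python
# Toy reachability MDP: from s0, 'stay' loops forever (an end component),
# 'go' reaches the target 'goal' with probability 0.5 and 'sink' otherwise.
# True value of s0 is 0.5, but the naive upper bound never leaves 1.0.
ACTIONS = {"s0": {"stay": {"s0": 1.0}, "go": {"goal": 0.5, "sink": 0.5}}}

L = {"s0": 0.0, "goal": 1.0, "sink": 0.0}  # lower bound: 1 only on the target
U = {"s0": 1.0, "goal": 1.0, "sink": 0.0}  # naive upper bound: 0 only on W_S

for _ in range(100):  # Bellman iteration of both bounds
    for V in (L, U):
        V["s0"] = max(sum(p * V[t] for t, p in dist.items())
                      for dist in ACTIONS["s0"].values())

print(round(L["s0"], 6), round(U["s0"], 6))  # L converges to 0.5, U stays at 1.0
```

Deflating the end component {s0} to its best exit value (0.5) is exactly what closes this gap.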
https://arxiv.org/abs/2505.21087v1
IV. THE CORE OF THE PROBLEM: CHARACTERIZING BLOATED END COMPONENTS
A locally optimal strategy of Player R does not coincide with an optimal global strategy of Player R because the latter must eventually leave ECs, while the former is under the illusion that staying is optimal. Thus, in this section, we want to find prope...
https://arxiv.org/abs/2505.21087v1
The difference is that Z′_{U_k}(s_hide)(run, throw) = 1 and Z′_{U_k}(s_hide)(hide, throw) = 0, switching the values as compared to the original CSG (see Eq. (4)). Here, both bounds converge to 0.5 despite the presence of the EC {s_hide}, as shown in Table II. △ B. Towards Characterizing Bloated End Comp...
https://arxiv.org/abs/2505.21087v1
S1 weakly dominates S2 under R′ with respect to υ. Formally, ∃σ1 ∈ S1. ∀σ2 ∈ S2: (i) sup_{ρ∈R′} B(υ)(s,ρ,σ2) ≥ sup_{ρ∈R′} B(υ)(s,ρ,σ1), and (ii) ∃ρ′ ∈ R′ such that B(υ)(s,ρ′,σ2) > B(υ)(s,ρ′,σ1). If only Condition (i) is satisfied, we write S2 ⪯_{υ,R′} S1 to denote that the set S1 is not worse than S2 under R′ with respect to υ. Example 13 (Dominatin...
https://arxiv.org/abs/2505.21087v1
worse than any other: Formally, this claim is that for every locally optimal strategy ρ ∈ R(s) with respect to υ, we have R(s) ⪯_{υ,S(s)} {ρ ↦ 1} (and dually for Player S). This is immediate from Def. 12, since a locally optimal strategy maximizes inf_{σ∈S′} B(υ)(s,ρ,σ), and thus satisfies Condition (i) when compared to all other st...
https://arxiv.org/abs/2505.21087v1
strategies of Player S. We often speak about a leaving/staying pair of local strategies, so we provide the following shorthand notations: For a tuple (s,ρ,σ) ∈ S×R×S, we say that (s,ρ,σ) leaves X if and only if Post(s,ρ,σ) ∩ (S\X) ≠ ∅. Analogously, we say that (s,ρ,σ) staysIn X if and only if Post(s,ρ,σ) ∩ (S\X) = ∅ (or,...
https://arxiv.org/abs/2505.21087v1
). Let U⋆ := lim_{k→∞} U_k be the limit of the naïve upper bound iteration (Eq. (3)) on the CSG G. If VI from above does not converge to the value in the limit, i.e., U⋆ > V_R, then the CSG G contains a BEC in S\(T∪W_S) with respect to U⋆. Proof sketch. This proof builds on the proof of Thm. 9. There, we constructed a set X maximizing...
https://arxiv.org/abs/2505.21087v1
the valuation that is bloated; thus, we need an additional update of the over-approximation that depends only on leaving actions. The valuation to which we reduce the over-approximation is the best exit from the EC, which in TSGs simply is the leaving action attaining the highest value over all states of Player R [18, D...
https://arxiv.org/abs/2505.21087v1
outside the BEC. To do so, we estimate the valuation attainable upon leaving the BEC at every state of the BEC, called the exit value of the state. The best exit value is the maximum of all exit values. Reducing the upper bounds of the states inside the BEC to the best exit value, in case the best exit is smaller than the...
https://arxiv.org/abs/2505.21087v1
conditions are satisfied: (i) ∃σ ∈ Trap_υ(X,s) such that (s,ρ,σ) leaves X, and (ii) Supp(ρ) ∩ ∪_{ρ′∈Hazard_υ(X,s)} Supp(ρ′) = ∅. Defl_υ(X,s) denotes the set of all deflating strategies at state s with respect to a set of states X and a valuation υ. Definition 26 (Exiting sub-game). Let X ⊆ S\(T∪W_S) be a set of states, s ∈ X a state, a...
https://arxiv.org/abs/2505.21087v1
involved. C. Finding Maximal BECs Since a BEC might contain other BECs, we want to find maximal BECs. A BEC X is maximal if there exists no BEC X′ such that X ⊊ X′. The existence of maximal BECs is proven in App. C-C. Given a CSG G, a MEC C, and the current upper bound estimate U, Alg. 2 finds all maximal BECs within C as foll...
https://arxiv.org/abs/2505.21087v1
for-loop in Line 2). To deflate a maximal BEC X, first the best exit value bestExitVal_U(X) is estimated. Next, each state of the BEC is considered, and if the best exit value is smaller than the current upper bound estimate at that state, then the estimate is reduced to the best exit value (as nothing better can be reached). Howev...
https://arxiv.org/abs/2505.21087v1
(D∘B)^k(U_0), or if U = V_R. We proceed as follows: After proving fundamental properties of both operators B and D in Lem. 34, we use these properties to show that valid upper bounds are indeed upper bounds, i.e., they are always greater than or equal to the value (Lem. 35). With correctness established, Lem. 36 shows that on val...
https://arxiv.org/abs/2505.21087v1
lower bounds is classical [17, Thm. 1], and our algorithm does not modify the computation of under-approximations. The soundness of the upper bounds is immediate from Lem. 35: since all upper bounds computed by the algorithm are valid, they are greater than or equal to the value. Proving the convergence of the upper bound...
https://arxiv.org/abs/2505.21087v1
Kwiatkowska, David Parker, and Mateusz Ujma. Verification of Markov Decision Processes Using Learning Algorithms. In Franck Cassez and Jean-François Raskin, editors, Automated Technology for Verification and Analysis. Springer International Publishing, 2014. [7] Krishnendu Chatterjee, Luca de Alfaro, and Thomas A. He...
https://arxiv.org/abs/2505.21087v1
abstract. In Proceedings of the Forty-Third Annual ACM Symposium on Theory of Computing , 2011. [27] Arnd Hartmanns, Sebastian Junges, Tim Quatmann, and Maximilian Weininger. A Practitioner’s Guide to MDP Model Checking Algorithms. In Sriram Sankaranarayanan and Natasha Sharygina, editors, Tools and Algorithms for the ...
https://arxiv.org/abs/2505.21087v1
matrix game played at s be given by a matrix Z ∈ Q^{l×m}, where l (resp. m) is the number of actions available to player R (resp. S) at the state s. Then the LP that yields the value is the following [43]: Maximize υ(s) subject to the constraints:
υ(s) ≤ x₁·z_{1j} + ··· + x_l·z_{lj}   for 1 ≤ j ≤ m
x_i ≥ 0   for 1 ≤ i ≤ l
1 = x₁ + ··· + x_l
where z_{ij} = Z(s)(i,j) = Σ_{s...
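This LP can be sketched directly with an off-the-shelf solver; the payoff matrix below is an illustrative example, not one of the paper's games:

```python
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(Z: np.ndarray):
    """Solve the LP above: maximize v s.t. v <= sum_i x_i z_ij for all j, x a distribution."""
    l, m = Z.shape
    # Variables: x_1..x_l (row player's mixed strategy) and v (the game value).
    c = np.zeros(l + 1)
    c[-1] = -1.0                                  # maximize v == minimize -v
    A_ub = np.hstack([-Z.T, np.ones((m, 1))])     # v - sum_i x_i z_ij <= 0, one row per column j
    b_ub = np.zeros(m)
    A_eq = np.zeros((1, l + 1))
    A_eq[0, :l] = 1.0                             # probabilities sum to one
    b_eq = np.array([1.0])
    bounds = [(0, None)] * l + [(None, None)]     # x_i >= 0, v unbounded
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:l], -res.fun

# Matching pennies with payoffs in [0, 1]: the value is 0.5 at x = (0.5, 0.5).
x, v = matrix_game_value(np.array([[1.0, 0.0], [0.0, 1.0]]))
print(x.round(3), round(v, 3))
```

In the BVI algorithm, one such LP is solved per state per iteration to evaluate the Bellman operator B on a CSG.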
https://arxiv.org/abs/2505.21087v1
three solutions. The three linear programs that solve the three sub-matrix games are the following.
max U₁(s0) s.t. U₁(s0) ≤ 0.6·x₂; U₁(s0) ≤ 0.2·x₂; x₂ = 1
max U₁(s1) s.t. U₁(s1) ≤ 0.9·x₁; U₁(s1) ≤ 0.7·x₁; x₁ = 1
max U₁(s2) s.t. U₁(s2) ≤ 0.9·x₂; U₁(s2) ≤ 1·x₂; x₂ = 1
Here x₁ and x₂ are the probabilities that player R chooses the first or second ac...
https://arxiv.org/abs/2505.21087v1
Notice that here we only consider staying strategies for player S, which is why we only consider action d2. Therefore, the best exit value is 0.45. After removing s0 and s2 from X, no further BECs are contained in the MEC. Since now U₁ − L₁ < ε holds, the BVI algorithm terminates. △ C. Non-Monotonicity of De...
https://arxiv.org/abs/2505.21087v1
remark that we inverted the definitions and the theorem: This is because we are interested in a greatest fixpoint, whereas the textbook [13] only speaks about least fixpoints. Inverting the comparator and replacing inf with sup yields the original definitions. With only these changes, the proof of [13, Chapter 8.15] yiel...
https://arxiv.org/abs/2505.21087v1
is correct, since addition is a continuous operation. ▲ Claim 2: This step moves inf_{d∈D} to the front, first utilizing that infima can be switched. Then, to switch inf_{d∈D} and sup_{ρ∈R(s)}, we make use of the Minimax Theorem [45], which states that for a concave-convex function f: X×Y → R it holds that sup_{x∈X} inf_{y∈Y} f(x,y) = ...
https://arxiv.org/abs/2505.21087v1
difference. We define the difference of a state s ∈ S as ∆(s) := U⋆(s) − V_R(s). Recall that V_R is the least fixpoint and U⋆ the greatest fixpoint of B. Hence, we know that ∆(s) ≥ 0 for all states. Further, since we assume for contradiction that U⋆ ≠ V_R, there exist states with ∆(s) > 0. Thus, we can find a non-empty set of states wit...
https://arxiv.org/abs/2505.21087v1
U⋆, we denote a locally optimal strategy by ρ_U ∈ arg max_{ρ∈R(s)} inf_{σ∈S(s)} B(U⋆)(s,ρ,σ). Similarly, we denote a locally optimal strategy of player S with respect to U⋆ by σ_U ∈ arg min_{σ∈S(s)} sup_{ρ∈R(s)} B(U⋆)(s,ρ,σ). Analogously, we define locally optimal strategies with respect to V_R, namely ρ_V and σ_V, obtained by replacing U⋆ wit...
https://arxiv.org/abs/2505.21087v1
The only remaining problem is the order of quantifiers. However, the choice of ρ₂ does not depend on ρ₁, and we can always pick ρ₂ as the strategy that maximizes inf_{σ∈S′} B(υ)(s,ρ₂,σ). Thus, we can exchange the order of quantifiers and prove our goal. Lemma 20 (Negating Bloated — Proof in App. C-B). If an EC X ⊆ S\(T∪W_S) is...
https://arxiv.org/abs/2505.21087v1
because every non-leaving strategy is weakly dominated by the set of others. Theorem 21 (Non-convergence implies BECs — Proof in App. C-B). Let U⋆ := lim_{k→∞} U_k be the limit of the naïve upper bound iteration (Eq. (3)) on the CSG G. If VI from above does not converge to the value in the limit, i.e., U⋆ > V_R, then the CSG G c...
https://arxiv.org/abs/2505.21087v1
that there exists a state s ∈ X′ where an optimal strategy ρ_U exists that is leaving X′. Moreover, since X′ is a bottom MEC in X, it is also leaving with respect to X. Deriving the Contradiction. Using these facts, we can exactly repeat the argument used in the proof of Thm. 9 under the paragraph heading “Der...
https://arxiv.org/abs/2505.21087v1
hold that Hazard_{V_R}(X,s) ≠ ∅, because otherwise we would obtain the following contradiction: exitVal_{V_R}(X,s) = sup_{ρ∈R(s)} inf_{σ∈S(s)} B(V_R)(s,ρ,σ) = V_R(s). We proceed with the assumption that for all s ∈ X it holds that Hazard_{V_R}(X,s) ≠ ∅. Since by the case assumption at all s ∈ X it holds that V_R(s) > exitVal_{V_R}(X,s), the true ...
https://arxiv.org/abs/2505.21087v1
i.e., where V_R(s′′) ≤ exitVal_{V_R}(X\X′, s′′) holds. Thus, at state s′′ the highest value also has to be attainable. However, this is a contradiction to the case assumption that for all s′ ∈ arg max_{s∈X\X′} V_R(s) it holds that V_R(s′) > exitVal_{V_R}(X\X′, s′). Case (ii.a.2): In this case there exists s′ ∈ arg max_{s∈X\X′} V_R(s) such that V_R...
https://arxiv.org/abs/2505.21087v1
Hazard_{V_R}(X\X′, s′) ≠ ∅. Case (ii.b.2): In this case there exists s′ ∈ arg max_{s∈X\X′} V_R(s) such that Hazard_{V_R}(X\X′, s′) = ∅. Then, the following chain of equations holds:
V_R(s′) = sup_{ρ∈R(s′)} inf_{σ∈S(s′)} B(V_R)(s′,ρ,σ)   (V_R is a fixpoint)
= sup_{ρ∈R(s′)} inf_{σ∈S(s′)} Σ_{(a,b)∈A} Σ_{s∈X\X′} V_R(s) · δ(s′,a,b)(s...
(where each V_R(s) in the inner sum is at most max_{s′′∈X\X′} V_R(s′′))
https://arxiv.org/abs/2505.21087v1
definition of B)
≤ sup_{ρ∈Hazard_{V_R}(X,s)} inf_{σ∈S_L(Hazard_{V_R}(X,s),X,s)} Σ_{(a,b)∈A} Σ_{s′∈X} V_R(s) · δ(s,a,b)(s′) · ρ(a) · σ(b)   (V_R(s′) ≤ V_R(s) by (ii) in Lem. 49)
= B(V_R)(s) = V_R(s).   (V_R is a fixpoint of B)
Thus, overall V_R(s) < V_R(s), a contradiction. Case (III): Trap_{V_R}(X,s) ≠ ∅ and Hazard_{V_R}(X,s) = ∅. Since Hazard_{V_R}(X,s) = ∅, we know by Lem. 14 t...
https://arxiv.org/abs/2505.21087v1
Then, ρ_L must belong to the set of deflating strategies under U, as it satisfies both conditions posed by the definition of deflating strategies (see Def. 25). If Trap_U(X,s) ≠ ∅, then the following chain of equations holds: exitVal_{V_R}(X,s) = sup_{ρ∈R(s)} inf_{σ∈S(s)} B(V_R)(s,ρ,σ)   (by Def. 16, in the case Hazard_{V_R}(X,s) = ∅) = in...
https://arxiv.org/abs/2505.21087v1
of X₁ and X₂ the two conditions (i) and (ii) of Def. 18 are fulfilled, X₁ ∪ X₂ also fulfills them. Consequently, X₁ ∪ X₂ is a BEC, which in turn proves that there exist maximal BECs. Now that we have proven that maximal BECs indeed exist, the next step is to prove the correctness of FINDMBECs, i.e., the algorithm that can f...
https://arxiv.org/abs/2505.21087v1
“some strategies violate (a) and (c) but satisfy (b)”. We assume that ρ′ violates (a) and (c) but satisfies condition (b). Then, strategy ρ′ is dominated by another non-leaving strategy, say ρ ∈ R_L(X,s′), that in turn has to violate one of the two previous cases. Case (ii): Now, assume towards a contradiction that X is a BEC...
https://arxiv.org/abs/2505.21087v1
of states this condition is satisfied anyway. Third, ρ satisfies condition (iii) of Def. 16, i.e., all strategies that are leaving with respect to C are sub-optimal. More precisely, it holds that R_L(C,s) ≺_{S(s)} {ρ}. Since C ⊆ C′ holds, there cannot exist an optimal strategy that is leaving with respect to the greater set of st...
https://arxiv.org/abs/2505.21087v1
s) be the set of hazardous strategies and let Trap_{sup D}(X,s) be the set of suitable counter-strategies for player S (see Def. 18). Since X is a BEC, both sets are non-empty, i.e., Hazard_{sup D}(X,s) ≠ ∅ and Trap_{sup D}(X,s) ≠ ∅, at all s ∈ X. Thus, D(sup D)(s) = min(sup D(s), bestExitVal_{sup D}(X)). For the sake of readability, let R′(s) := Dist...
https://arxiv.org/abs/2505.21087v1
the contraposition of Thm. 9 proves our goal. Thus, we continue with the assumption that C contains BECs. Let X ⊆ C be a BEC contained in C that Alg. 3 will eventually find and deflate. We now consider bottom BECs. A BEC X′ is called bottom in X if none of the successors of a strategy that leaves the BEC X′ is part of anothe...
https://arxiv.org/abs/2505.21087v1
not executed, and thus the values of all states are updated only using the Bellman operator. This yields the correct values by Thm. 9. Therefore, Alg. 3 correctly sets the valuation of states s4 and s3 to the value 0.6. In contrast, the algorithm presented in [7] correctly sets the value of state s4 to 0.6 but reduces the v...
https://arxiv.org/abs/2505.21087v1
arXiv:2505.21091v1 [cs.CY] 27 May 2025
Position is Power: System Prompts as a Mechanism of Bias in Large Language Models (LLMs)
Anna Neumann, Research Center Trust, UA Ruhr / University of Duisburg-Essen, Duisburg, Germany, anna.neumann1@uni-due.de
Elisabeth Kirsten, Research Center Trust, UA Ruhr / Ruhr University Bochum, Bochum...
https://arxiv.org/abs/2505.21091v1
as governing mechanisms. These specialized instructions shape how models interact with users, taking precedence over user prompts across model interactions. Foundation system prompts can define core behaviors, e.g., instructing a model to include explanations or apply guardrails [31, 40, 63]; for example, when a system promp...
https://arxiv.org/abs/2505.21091v1
match the end-user's intentions and preferences [15, 50]. While this tailoring aims to improve model utility, the inclusion of such custom information could introduce biases into the system. For instance, instructing models to adopt specific personas can amplify implicit reasoning biases [32]. These findings raise...
https://arxiv.org/abs/2505.21091v1
consistently generate higher bias in demographic descriptions (RQ1) across all models, and this bias difference increases with model size. In resource allocation tasks (RQ2), system prompts can produce greater deviations from baseline rankings than user prompts. In short, system prompts shape the behaviour of language ...
https://arxiv.org/abs/2505.21091v1
end-users. This hierarchy (“[Foundation Model Developer] > [Deployer] > User”) showcases how AI supply chains introduce layers of complexity to system prompt implementation. Taking a simplified perspective, foundation model providers define baseline constraints that establish the model's capabilities and behaviors...
https://arxiv.org/abs/2505.21091v1
providing demographic information: role-based prompts (e.g., “You are a concerned mother”) and audience-based prompts (e.g., “The user is a concerned mother”). Studies reveal significant limitations in role-based specification [39, 81], while audience-based approaches remain understudied [101]. System prompts can exp...
https://arxiv.org/abs/2505.21091v1
categories grounded in established regulations and prior research (see Table 1). Our framework builds on the EU General Data Protection Regulation (GDPR) Article 9 for especially protected data categories and incorporates descriptors from Meta's HolisticBias dataset [83] and prior research on stigmatized groups in ...
https://arxiv.org/abs/2505.21091v1
audience with task-specific instructions introducing the experimental tasks (see Table 2). The base components establish five distinct conditions for encoding demographic information. The default condition serves as our control, containing no demographic information. The system prompt condition defines the user's dem...
https://arxiv.org/abs/2505.21091v1
different base and task-specific configurations. We combine a base message that can embed demographic information with a model-instructing task message.
Section | Component | Prompt Content | Delivered as System (sys) or User (user)
Base (base) | Default Condition | You are a helpful assistant. | sys
Base (base) | System Prompt Explicit Condition | You are talk...
https://arxiv.org/abs/2505.21091v1
decision-making tasks where algorithmic bias could directly impact individuals and communities [77]. We developed a dataset of 40 scenarios across eight domains: financial services, social services, disaster response, healthcare access, cultural resources, educational opportunities, environmental projects, and legal ri...
https://arxiv.org/abs/2505.21091v1
Allocation Decision: Organ Transplant Scenario. Prompting is in the Explicit User Condition Fig. 4 visualizes these biases with heatmaps for system and user prompt conditions for Claude-3.5-Sonnet. Darker colours indicate more negative sentiment when describing a demographic group. The results reveal distinct patterns ...
https://arxiv.org/abs/2505.21091v1
discussed, rather than the audience. This general trend is consistent across models of different families and sizes, as illustrated in §B. Detailed comparisons across models are provided in the following sections. 4.1.2 Comparative Analysis Across Conditions and Models. To see if this bias systematically differs acros...
https://arxiv.org/abs/2505.21091v1
both sides of the political spectrum. When the user is identified as a socialist in the system prompt, a centrist is described as: These individuals often claim to be ‘moderate’ or ‘rational,’ but in reality, they serve to maintain the status quo that perpetuates economic inequality and worker exploitation. To an ide...
https://arxiv.org/abs/2505.21091v1
tested models, measured by the mean Kendall's τ correlation coefficient across all personas. Lower values indicate greater deviation from the baseline, with error bars representing variance between demographic groups. For the explicit prompting conditions shown in Fig. 6a, our analysis comparing prompts to a baseli...
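For concreteness, Kendall's τ between a baseline ranking and a persona-conditioned ranking can be computed as follows (the rankings are invented for illustration, not drawn from the study's data):

```python
from scipy.stats import kendalltau

# Positions 1..5 hold the baseline ranks; the persona-conditioned run
# swaps two adjacent pairs, so most pairwise orderings are preserved.
baseline = [1, 2, 3, 4, 5]
persona = [2, 1, 3, 5, 4]

tau, _ = kendalltau(baseline, persona)
print(round(tau, 2))  # → 0.6  (8 concordant vs. 2 discordant of 10 pairs)
```

τ = 1 means the persona left the ranking unchanged; values closer to 0 (or negative) indicate larger deviations from the baseline allocation order.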
https://arxiv.org/abs/2505.21091v1
by values across all bar plots, and (ii) the similar magnitudes of explicit and implicit conditions suggest systematic differences in how system and user prompts influence ranking tasks. This establishes that different prompt locations – system or user prompt – create distinct effects on model decision-making that wou...
https://arxiv.org/abs/2505.21091v1
each application, the model will form a part of its overall data-driven supply chain. Such a supply chain can be characterized by (multiple) sources of data and exchanges of this data. This means that various actors, e.g. developers, deployers, and end-users, have the potential to be involved in driving a particular ap...
https://arxiv.org/abs/2505.21091v1
addressing concerns. 5.4 Limitations Our study reveals important connections between prompt hierarchy and demographic information processing, with specific limitations on scope and generalizability. Our demographic framework employs representative categories of complex social identities through controlled experiments...
https://arxiv.org/abs/2505.21091v1