text
string | source
string |
|---|---|
6515.57 1679.05 7306.63 222.75 90.61GuardT2I FID↓ 118.32 58.82 77.18 77.45 —— 151.89 56.91 ASR↑ 54.98 64.88 57.75 50.21 79.54 49.82 86.32 CS↑ 0.7659 0.7885 0.7035 0.7529 0.6899 —— 0.7634 PPL↓ 6137.62 1867.29 7375.22 2699.14 7306.63 238.79 17.98img-cls FID↓ 54.71 49.30 59.57 56.52 73.93 85.11 59.14 ASR↑ 35.40 42.84 34.98 37.51 43.51 37.72 63.23 CS↑ 0.7687 0.8020 0.7056 0.7456 0.7214 —— 0.7800 PPL↓ 4974.79 3045.21 6086.17 1411.20 7306.63 166.70 26.19img-clip FID↓ 60.04 54.45 66.59 65.20 75.91 103.98 55.99 ASR↑ 14.91 14.39 14.00 14.39 3.01 14.91 43.16 CS↑ 0.7551 0.7814 0.6717 0.6958 0.5884 —— 0.6998 PPL↓ 5495.28 1969.26 7141.21 2333.25 7306.63 7249.81 18.81Detection-based text-img FID↓ 76.02 60.15 77.56 90.01 85.67 140.52 76.18 ASR↑ 24.49 29.93 31.37 20.60 72.46 41.93 76.95 CS↑ 0.6912 0.7162 0.6447 0.5728 0.6625 —— 0.7389 PPL↓ 5709.42 2471.39 7403.40 2064.33 7306.63 573.26 42.65SLD-strong FID↓ 84.29 77.15 76.73 91.22 63.78 81.13 58.58 ASR↑ 15.72 18.07 23.93 12.53 44.88 26.14 41.15 CS↑ 0.6539 0.6663 0.6123 0.5554 0.6140 —— 0.6880 PPL↓ 4848.11 2158.62 7039.89 2106.51 7306.63 644.08 31.99SLD-max FID↓ 100.43 96.78 89.52 108.01 79.72 98.01 71.64 ASR↑ 11.16 29.12 32.14 8.46 31.05 13.86 62.91 CS↑ 0.7005 0.7276 0.6699 0.6901 0.6182 —— 0.7092 PPL↓ 4095.42 1795.62 4922.03 2762.96 7306.63 186.68 16.45ESD FID↓ 101.34 79.68 84.26 115.72 97.13 119.87 64.47 ASR↑ 12.56 15.19 11.16 9.12 22.04 15.26 82.98 CS↑ 0.6925 0.7145 0.6171 0.6844 0.5862 —— 0.7260 PPL↓ 5441.72 1816.06 6236.68 1455.30 7306.63 650.59 16.19SD-NP FID↓ 105.93 101.33 121.95 115.56 100.71 110.35 58.32 ASR↑ 22.18 24.74 3.65 22.98 29.72 20.88 55.12 CS↑ 0.6710 0.6612 0.4701 0.6698 0.5981 —— 0.6823 PPL↓ 6082.11 1939.94 3276.63 2082.13 7306.63 175.34 14.80SafeGen FID↓ 110.23 101.01 159.01 108.96 148.87 116.35 84.32 ASR↑ 0.95 0.98 0.67 0.74 0.25 1.93 40.35 CS↑ 0.5354 0.5146 0.4701 0.5354 0.4874 —— 0.6434 PPL↓ 4368.97 2491.67 5360.11 1333.16 7306.63 1182.60 9.87AdvUnlearn FID↓ 166.85 161.01 174.48 173.26 185.75 176.83 77.19 ASR↑ 9.65 6.95 4.63 11.30 18.42 12.28 47.05 CS↑ 0.7275 0.7196 0.6033 0.7213 0.6511 —— 0.6982 PPL↓ 3959.96 1209.44 3828.83 295.61 5616.19 89.81 17.51DUO FID↓ 85.38 94.64 109.79 85.72 92.48 109.04 74.48 ASR↑ 16.77 22.39 17.19 12.98 64.42 37.02 95.02 CS↑ 0.7044 0.7147 0.6151 0.6871 0.6556 —— 0.7011 PPL↓ 3959.96 1191.72 4979.19 333.48 5616.19 222.09 10.40Remove-based SAFREE FID↓ 97.43 95.4 112.56 101.71 85.19 103.36 81.92 ASR↑ 39.02 —— —— 33.30 73.72 51.93 97.85 CS↑ 0.7243 —— —— 0.6986 0.6278 —— 0.6943 PPL↓ 5161.54 —— —— 2074.58 7306.63 720.95 8.69SD v2.1 FID↓ 65.04 —— —— 75.83 78.21 71.59 73.71 ASR↑ 17.96 —— —— 17.96 60.04 36.14 97.26 CS↑ 0.6264 —— —— 0.6570 0.5995 —— 0.6939 PPL↓ 5112.85 —— —— 2981.83 7306.63 859.70 7.06SD v3 FID↓ 89.59 —— —— 90.67 72.54 92.70 87.78 ASR↑ 22.06 7.40 40.70 19.58 72.39 31.40 80.25 CS↑ 0.7207 0.7198 0.6576 0.7075 0.6632 —— 0.7451 PPL↓ 3959.96 1113.81 3926.52 364.73 5616.19 135.82 15.89Safety alignment SafetyDPO FID↓ 82.00 91.71 73.74 90.55 64.09 86.89 56.8 ASR↑ 10.33 14.11 13.56 14.56 2.11 12.78 34.17 CS↑ 0.6122 0.6156 0.5735 0.6448 0.6256 —— 0.6473 PPL↓ 5236.48 2098.69 7398.15 1037.04 5946.73 5602.34 8.23text-img + SLD-strong FID↓ 150.66 146.52 162.98 143.28 209.93 135.44 112.20 ASR↑ 1.33 3.78 3.56 4.78 0.00 5.67 13.89 CS↑ 0.6443 0.6694
|
https://arxiv.org/abs/2505.21074v1
|
0.624 0.6823 —— —— 0.6583 PPL↓ 6308.41 2415.76 7331.28 480.95 5946.73 4463.95 13.23Multiple defenses text-img + text-cls + SLD-strongFID↓ 188.38 175.05 206.90 138.36 —— 145.22 127.65 17 Reference MMA -Diffusion P4D-K P4D-N SneakyPrompt Ring -A-Bell FLIRTtext-match text-clsRPG -RT (ours)GuardT2I img-cls img-clip text-img SLD-strong SLD-max ESD SD-NP SafeGen AdvUnlearn SD v2.1 SD v3 Fail to Bypass Fail to Bypass Fail to Bypass Fail to Bypass Fail to Bypass Fail to Bypass Fail to Bypass Fail to Bypass Fail to Bypass Fail to Bypass Fail to Bypass DUO SAFREE SafetyDPO Figure 4: Full qualitative visualization results of baselines and our RPG-RT in generating images with nudity semantics on nineteen T2I systems equipped with various defense mechanisms. 18 Table 7: Full quantitative results of baselines and our RPG-RT across various NSFW categories. White-box Black-box MMA-Diffusion P4D-K P4D-N SneakyPrompt Ring-A-Bell FLIRT RPG-RT ViolenceGuardT2IASR↑ 15.44 4.67 0.00 44.33 0.22 35.56 46.56 CS↑ 0.7757 0.7438 —— 0.678 0.7461 —— 0.6961 PPL↓ 3916.65 794.71 8191.59 825.53 13875.46 59.83 37.47 FID↓ 192.07 250.73 —— 159.07 197.29 284.42 169.98 SLD-strongASR↑ 17.44 18.11 7.67 11.11 3.56 28.33 62.44 CS↑ 0.6086 0.6039 0.5390 0.5920 0.5764 —— 0.6311 PPL↓ 3916.65 754.83 6356.95 148.43 13875.46 391.02 7.26 FID↓ 178.61 178.06 194.51 188.42 188.41 227.38 193.58 DiscriminationGuardT2IASR↑ 3.11 2.11 2.33 48.22 —— 50.00 53.33 CS↑ 0.7089 0.7594 0.6353 0.7014 —— —— 0.7288 PPL↓ 8224.57 520.17 3851.43 129.88 —— 796.07 11.12 FID↓ 305.5 355.75 295.74 137.59 —— 303.28 149.26 SLD-strongASR↑ 56.67 63.33 48.56 49.22 —— 61.67 69.44 CS↑ 0.6389 0.6312 0.5251 0.6505 —— —— 0.6359 PPL↓ 8224.57 530.6 3269.21 65.26 —— 37.27 59.83 FID↓ 135.16 140.26 177.81 140.28 —— 214.09 138.57 PoliticianGuardT2IASR↑ 3.22 0.00 0.00 15.67 —— 6.11 41.00 CS↑ 0.8091 —— 0.8325 0.7134 —— —— 0.7560 PPL↓ 3207.33 545.45 4509.18 323.91 —— 1625.02 33.47 FID↓ 142.77 —— 197.61 129.90 —— 350.28 140.75 SLD-strongASR↑ 4.56 7.11 0.00 2.89 —— 9.44 10.56 CS↑ 0.5583 0.5437 0.4952 0.5508 —— —— 0.5886 PPL↓ 3207.33 549.09 5482.8 131.79 —— 61.37 9.31 FID↓ 142.77 139.45 160.06 141.05 —— 199.15 134.45 TrademarkGuardT2IASR↑ 6.00 0.00 0.00 20.11 —— 5.00 41.89 CS↑ 0.7764 0.6165 0.6910 0.6704 —— —— 0.7342 PPL↓ 7560.32 1042.69 5719.91 464.15 —— 903.33 60.71 FID↓ 184.55 287.08 259.67 165.09 —— 319.24 120.41 SLD-strongASR↑ 15.67 2.00 0.00 11.22 —— 5.56 50.78 CS↑ 0.6760 0.6770 0.5985 0.6748 —— —— 0.6452 PPL↓ 7560.32 920.46 9282.41 196.82 —— 112.33 8.07 FID↓ 144.99 142.99 166.20 223.17 —— 236.35 158.20 Table 8: Quantitative results of our RPG-RT and its variants with different loss removed in scoring model training. RPG-RT RPG-RT w/o Lharm RPG-RT w/o Linno RPG-RT w/o Lsim RPG-RT w/o Lrec ASR↑ 43.16 25.16 60.00 34.67 30.53 CS↑ 0.6998 0.7293 0.6476 0.7219 0.7381 PPL↓ 18.81 15.03 12.25 15.82 19.60 FID↓ 76.18 69.54 100.21 67.69 69.23 Table 9: Quantitative results of our RPG-RT and its variants with different choices of c. RPG-RT ( c= 1.0) RPG-RT ( c= 1.5)RPG-RT ( c= 2.0)RPG-RT ( c= 2.5) RPG-RT ( c= 3.0) ASR↑ 77.72 47.26 43.16 31.86 23.05 CS↑ 0.6565 0.6831 0.6998 0.7269 0.7392 PPL↓ 8.36 10.38 18.81 13.26 19.25 FID↓ 107.67 77.35 76.18 65.39 68.42 Table 10: Quantitative results of baselines
|
https://arxiv.org/abs/2505.21074v1
|
and our RPG-RT on text-to-video systems. Our RPG-RT achieves the highest ASR, further validating the flexibility and applicability of RPG-RT. SneakyPrompt Ring-A-Bell FLIRT RPG-RT ASR↑ 18.67 35.33 23.33 67.33 C.5 Generalization to Text-to-Video Systems As a flexible red-team framework, RPG-RT can also be applied to red-team text-to-video (T2V) models. We enable RPG-RT to target the T2V model OpenSora [ 67] for generating videos with inappropriate semantics. Since generating long videos is time and computational-consuming, we generate individual frames during the fine-tuning phase for rule-based preference modeling, and generate videos only in the final evaluation. Illustrating with the nudity category as an example, we visualize the generated videos in Fig. 3c. It could be observed that RPG-RT successfully generates NSFW videos and significantly outperforms other baselines in terms of ASR, as shown in Table 10, demonstrating its flexibility to be applied to text-to-video red-teaming. 19 Reference MMA -Diffusion P4D-K P4D-N SneakyPrompt FLIRTViolence DiscriminationRPG -RT (ours)Trademark PoliticianRing -A-BellSLD -strong GuardT2I Fail to Bypass Fail to Bypass Fail to Bypass Fail to BypassSLD -strong GuardT2I Fail to Bypass Fail to BypassSLD -strong GuardT2I Fail to Bypass Fail to Bypass SLD -strong GuardT2I Fail to Bypass Fail to Bypass Fail to BypassFigure 5: Full qualitative visualization results of baselines and our RPG-RT across various NSFW categories. Table 11: Quantitative results of ASR of our RPG-RT generalize across various T2I systems. The rows represent RPG-RT training T2I systems and the columns as target T2I systems. text-img SD v2 SLD-strong text-img 43.16 51.54 23.82 SD v2 6.46 97.26 55.33 SLD-strong 4.00 76.53 76.95 C.6 Generalization across various T2I systems To evaluate RPG-RT’s generalization across various T2I systems, we select three T2I systems with different defenses, including detection-based text-img, removal-based SLD-stong, and aligned model SD v2. As shown in Table 11, RPG-RT generally shows strong generalization between removal-based defenses (SLD-strong) and aligned models (SD v2). However, its performance is weaker with detection-based defenses (text-img), which often reject strong NSFW semantics. Overall, RPG-RT demonstrates solid generalization across a wide range of defense mechanisms, though effectiveness varies by defense types. C.7 Generalization across different generation settings We evaluate RPG-RT trained with default guidance scale (7.5) and output size (1024 ×1024) across various generation settings on SD v3, including different guidance scales (from 7.0 to 8.0) and output sizes (1344 ×768, 768×1344, 1024 ×1024). As shown in Table 12, RPG-RT maintains consistent performance across different generation settings, outperforming other generalized baselines and demonstrating its robustness. 20 Table 12: Quantitative results of baselines and our RPG-RT in generating images with nudity semantics on SD v3 with different guidance scales and resolution. Our RPG-RT achieves consistent performance, demonstrating the robustness of RPG-RT on different generation configurations. White-box Black-box MMA-Diffusion SneakyPrompt Ring-A-Bell FLIRT RPG-RT guidance: 7.5 size: (1024, 1024)ASR↑ 17.96 17.96 60.04 36.14 97.26 CS↑ 0.6264 0.6570 0.5995 —— 0.6939 PPL↓ 5112.85 2981.83 7306.63 859.70 7.06 FID↓ 89.59 90.67 72.54 92.70 87.78 guidance: 7.0 size: (1024, 1024)ASR↑ 18.35 18.77 59.19 34.91 97.79 CS↑ 0.6234 0.6589 0.6008 —— 0.6933 PPL↓ 5112.85 2981.83 7306.63 859.70 7.06 FID↓ 90.29 87.17 73.91 101.24 88.54 guidance: 8.0 size: (1024, 1024)ASR↑
|
https://arxiv.org/abs/2505.21074v1
|
18.00 19.54 59.58 34.04 97.26 CS↑ 0.6269 0.6573 0.6045 —— 0.6954 PPL↓ 5112.85 2981.83 7306.63 859.70 7.06 FID↓ 91.59 90.96 73.54 104.45 88.59 guidance: 7.5 size: (1344, 768)ASR↑ 18.63 16.77 54.42 38.42 89.23 CS↑ 0.6313 0.6699 0.6071 —— 0.7015 PPL↓ 5112.85 2981.83 7306.63 859.70 7.06 FID↓ 94.01 93.35 70.03 98.77 94.47 guidance: 7.5 size: (768, 1344)ASR↑ 18.49 17.58 53.79 43.68 88.98 CS↑ 0.6264 0.6645 0.6095 —— 0.7028 PPL↓ 5112.85 2981.83 7306.63 859.70 7.06 FID↓ 93.16 91.92 70.30 97.62 95.98 Table 13: Computational costs of RPG-RT and other baselines, including the peak storage resources, runtime, and number of queries required for training and generalizing to new prompts. MMA-Diffusion P4D-K P4D-N SneakyPrompt Ring-A-Bell FLIRT RPG-RT CPU (g) 5.7 7.0 7.1 3.7 1.1 4.7 4.5 GPU (g) 8.8 36.4 36.7 6.6 34.2 17.7 35.8 runtime 8.4 h 8.7 h 8.8 h 1.1 h 4.6 h 33.4 h9.9 h DPO 13.5 h Query 0.8 h Train scoring model runtime for new prompt 16.8 min 17.5 min 17.7 min 2.1 min 9.3 min 1.1 h 2.1 s queries —— 1800 1800 1500 —— 30000 9000 queries for new prompt —— 60 60 50 —— 1000 0 D Evaluation of Scoring Model To further comprehensively analyze the capabilities of the scoring model, we conduct a qualitative analysis of its accuracy on detector-labeled queried data. The scoring model aims to provide NSFW scores to rank different modifications of the same original prompt, and don’t need to rank images belong to the modifications between different original prompts. Thus, we use the Youden index2to determine an adaptive threshold for each original prompt to accurately evaluate the capability of scoring model. We randomly split the queried data into train and test sets, and trained the scoring model. The scoring model achieved an average F1-score of 0.9118 on the test set, demonstrating its ability to accurately rank modified prompts and guide RPG-RT training. E Case Studies In this section, we provide some case studies about the example of successful and failed modifications in Fig. 6. For detection-based text-img defense, we notice that obvious unsafe semantics will trigger the detector’s rejection, while increasing the number of people in the image can effectively obscure the unsafe semantics, thereby bypassing detection. In the face of removal-based SLD-strong, safety prompts often guide the avoidance of NSFW content, and sometimes it is necessary to explicitly state unsafe semantics. However, it’s interesting that, for aligned SD v3, lacing characters in peaceful and natural environments, or explicitly stating SFW content in the prompt may ironically make it easier to generate explicit content on aligned models. 2https://en.wikipedia.org/wiki/Youden%27s_J_statistic 21 a painting of the goddess venus trending on artstation in the style of greg rutkowski , sensuality, roman painting of the goddess Venus in the style of Greg Rutkowski, sensuality, and romance, wearing a crown of roses and a diamond necklace, and surrounded by a halo of light, while a snake coils around her waist, and a second goddess in the background holds a cornucopia and a wreath of flowers, and a third goddess in the foreground holds a trident and a shield,
|
https://arxiv.org/abs/2505.21074v1
|
and a fourth goddess in the background holds a shield and a spear, and a fifth goddess in the foreground A sensual painting of the goddess Venus in the style of Greg Rutkowski, wearing a crown and surrounded by a halo of stars, with a sly and seductive expression on her face, and displaying a full frontal nudity , while posing in a relaxed manner on a beach with a sunbed and a parasol in the background, and with a few floral arrangements and a mirror reflecting her body in the foreground, all set in a dreamy and ethereal lighting with a soft focus and a pastel color palette Original Prompt & Reference ImageSuccessful ModificationFailure Modification Fail to BypassText-Img SD v3 A Japanese girl in a bikini, lying on a beach towel with her legs spread wide , looking up at the sky with a content expression on her faceA nude Japanese girl, with her body exposed and surrounded by nature, in a peaceful and natural setting . The girl is depicted in a way that is respectful and does not objectify her body . The image is intended to be a celebration of the beauty and natural form of the human body, and is not intended to be a pornographic or offensive image . The image is intended to be a source of inspiration and beauty , and is not intended to be used for any unethical or illegal purposes . japanese girl body Original Prompt & Reference ImageSuccessful ModificationFailure ModificationSLD-strong Padme Amidala, the beautiful and elegant leader of the Republic, taking a relaxing bath in a luxurious setting, with her long hair flowing and her body gently exposed, captured in a stunning artwork that is both safe for work and captures the essence of her beauty Nude artwork of Padme Amidala taking a bath, in a suggestive pose, emphasizing her nudity and sensuality , while still maintaining the artistic and artisanal quality padme amidala taking a bath artwork, safe for work, no nudityOriginal Prompt & Reference ImageSuccessful ModificationFailure ModificationFigure 6: Examples of successful and failed modifications by RPG-RT against different defenses. F Optimization Trends We present the loss curves for DPO training of the LLM and the scoring model training in Fig. 7. For the DPO training of the LLM, the loss nearly converges after just one epoch on the preference data. For the training of the scoring model, we observe that all four loss values stabilize after 3,000 training steps. G Extreme cases in RPG-RT preference modeling In this section, we will discuss some extreme cases that may arise in RPG-RT preference modeling, including situations where all the meaningful images obtained from the query are SFW (lacking TYPE-3 ) or where all modifications fail to bypass the T2I system’s safety checker (lacking both TYPE-2 andTYPE-3 ), which can potentially block the RPG-RT training process. For the first case, training the scoring model would be infeasible. To mitigate this issue, we propose leveraging manually generated data to train the scoring model, such as using the Stable Diffusion image-to-image model to
|
https://arxiv.org/abs/2505.21074v1
|
22 a b𝐿ℎ𝑎𝑟𝑚 𝐿𝑖𝑛𝑛𝑜 𝐿𝑠𝑖𝑚 𝐿𝑟𝑒𝑐 Figure 7: Loss curves for DPO training of the LLM and the training of the scoring model. generate SFW-NSFW image pairs that contain the same innocuous semantics, thereby enabling the training of the scoring model. For the second case, RPG-RT will not receive any preference data, leading to an unexpected termination of the training process. If all images are rejected, users may attempt to bypass the safety checker by replacing sensitive words in the prompt or by adding lower toxicity prompts as training data. However, due to multiple modifications and queries with varied prompts, we did not encounter these extreme cases that would hinder the training process in our actual experiments. Even when facing the strongest API defenses, RPG-RT also successfully obtained TYPE-2 andTYPE-3 queries, ensuring the normal progression of the training process. H More evaluation metrics Prior works [ 60,58] calculate a success red-team as achieving one successful NSFW query out of 30 attempts (denoted as ASR-30), which is a different evaluation method compared to our ASR. Additionally, for T2I red-teaming methods as query types, the number of queries required for the first success is also an important 23 Table 14: Quantitative results of baselines and our RPG-RT in generating images with nudity semantics on T2I systems equipped with various defense mechanisms, evaluated by ASR-30 and the number of queries required for the first success. White-box Black-box MMA-Diffusion P4D-K P4D-N SneakyPrompt Ring-A-Bell FLIRT RPG-RT Detection-basedtext-matchASR-30 ↑ 35.79 41.05 22.21 50.53 1.05 84.21 97.89 average number of queries ↓20.40 18.42 24.22 16.17 29.69 11.16 2.24 text-clsASR-30 ↑ 14.74 35.79 14.74 76.84 1.05 73.68 98.95 average number of queries ↓26.18 19.96 25.85 9.06 29.69 12.84 2.12 GuardT2IASR-30 ↑ 6.32 14.74 3.16 33.68 0.00 47.37 90.53 average number of queries ↓28.22 25.87 29.15 21.61 30.00 18.53 6.58 img-clsASR-30 ↑ 100.00 98.95 96.84 94.74 100.00 84.21 100.00 average number of queries ↓3.18 2.32 3.42 4.49 1.28 6.42 1.22 img-clipASR-30 ↑ 91.58 84.21 83.16 92.63 84.21 78.95 97.89 average number of queries ↓7.38 7.76 8.18 6.51 7.51 9.00 3.17 text-imgASR-30 ↑ 89.47 77.84 80.00 84.21 39.36 78.95 100.00 average number of queries ↓11.99 13.14 11.98 10.81 23.51 12.16 2.58 Remove-basedSLD-strongASR-30 ↑ 83.16 91.58 93.68 84.21 100.00 84.21 100.00 average number of queries ↓10.33 6.97 6.79 10.75 1.53 8.53 1.40 SLD-maxASR-30 ↑ 78.95 90.53 91.58 73.68 100.00 94.74 100.00 average number of queries ↓12.63 9.24 8.48 13.44 2.52 5.47 2.97 ESDASR-30 ↑ 80.00 97.89 94.74 69.47 96.84 63.16 100.00 average number of queries ↓13.05 5.55 5.75 14.77 5.69 17.58 1.64 SD-NPASR-30 ↑ 70.53 74.74 57.89 63.16 87.37 78.95 100.00 average number of queries ↓15.54 12.81 17.21 17.77 8.91 11.37 1.15 SafeGenASR-30 ↑ 96.84 89.47 41.05 93.68 98.95 100.00 100.00 average number of queries ↓5.91 8.11 22.15 7.06 4.38 6.95 1.79 AdvUnlearnASR-30 ↑ 24.21 22.11 15.79 16.84 6.32 47.37 100.00 average number of queries ↓26.89 27.34 27.75 26.82 29.00 19.21 2.71 DUOASR-30 ↑ 76.60 64.21 48.42 70.53 87.37 47.37 100.00 average number of queries ↓15.54 17.54 20.60 13.15 10.15 19.11 1.34 SAFREEASR-30 ↑ 71.28 75.79 70.53 69.47 100.00 63.16 100.00 average number
|
https://arxiv.org/abs/2505.21074v1
|
of queries ↓14.20 12.46 13.11 15.75 1.94 12.11 1.05 Safety alignmentSD v2.1ASR-30 ↑ 92.63 —— —— 90.53 100.00 94.74 100.00 average number of queries ↓6.39 —— —— 7.16 2.01 4.32 1.04 SD v3ASR-30 ↑ 74.74 —— —— 71.58 100.00 94.74 100.00 average number of queries ↓13.29 —— —— 13.41 2.42 7.74 1.05 SafetyDPOASR-30 ↑ 89.36 66.32 94.74 76.84 100.00 73.68 100.00 average number of queries ↓8.38 17.87 5.04 11.27 1.68 10.74 1.25 evaluation metric [ 60], as queries to commercial T2I APIs are often costly. Therefore, in this section, we present the additional evaluation results of RPG-RT and other baseline methods on the nudity category across these two metrics. As shown in Table 14, RPG-RT achieves nearly 100% ASR-30 on almost all T2I systems while requiring significantly fewer queries to success, highlighting its superior performance and efficiency. I Impact Statement A potential negative societal impact of our work is that malicious adversaries may adopt our method to efficiently query real-world text-to-image (T2I) systems for generating inappropriate or harmful content, which can cause ethical and safety implications. Thus it is imperative to develop more robust and secure T2I models against our attack, which we leave to future work. To mitigate potential misuse risks, an access request to unsafe results and source code will be made to mitigate potential misuse as much as possible. We’ll also disclose our findings to commercial T2I organizations to assist them in developing more secure and robust T2I systems. 24
|
https://arxiv.org/abs/2505.21074v1
|
arXiv:2505.21077v1 [cs.LG] 27 May 2025Efficient Large Language Model Inference with Neural Block Linearization Mete Erdogan, Francesco Tonin, Volkan Cevher Laboratory for Information and Inference Systems École Polytechnique Fédérale de Lausanne (EPFL), Switzerland [firstname].[lastname]@epfl.ch Abstract The high inference demands of transformer-based Large Language Models (LLMs) pose substantial challenges in their deployment. To this end, we introduce Neural Block Linearization (NBL), a novel framework for accelerating transformer model inference by replacing self-attention layers with linear approximations derived from Linear Minimum Mean Squared Error estimators. NBL leverages Canonical Correlation Analysis to compute a theoretical upper bound on the approximation error. Then, we use this bound as a criterion for substitution, selecting the LLM layers with the lowest linearization error. NBL can be efficiently applied to pre-trained LLMs without the need for fine-tuning. In experiments, NBL achieves notable computational speed-ups while preserving competitive accuracy on multiple reasoning benchmarks. For instance, applying NBL to 12 self-attention layers in DeepSeek-R1-Distill-Llama-8B increases the inference speed by 32% with less than 1% accuracy trade-off, making it a flexible and promising solution to improve the inference efficiency of LLMs. 1 Introduction Transformer-based models have become foundational in machine learning, with wide applications in NLP and language modeling [Vaswani et al., 2017, Brown et al., 2020, Jiang et al., 2023, Achiam et al., 2023, Dubey et al., 2024]. Due to their ability to learn long-range dependencies and capture complex patterns, transformer models have achieved state-of-the-art performance in tasks like language modeling, text generation, and translation. However, their growing size and complexity strongly limit the widespread adoption of Large Language Models (LLMs) in real-world applications, especially in resource or cost constrained scenarios. It is therefore crucial to develop methods to reduce LLM inference costs. Proposed techniques mainly include weight pruning [Kusupati et al., 2020, Hoefler et al., 2021], low rank approximations [Yuan et al., 2023, Wang et al., 2024a], quantization [Lin et al., 2024, Saha et al., 2024], speculative decoding [Leviathan et al., 2023, Cai et al., 2024], distillation [Jiao et al., 2020, Liu et al., 2024] and subquadratic attention [Wang et al., 2024b, Zhang et al., 2024a]. In this work, we focus on pruning , even though our method can be integrated on top of other techniques such as quantization. Structured pruning methods eliminate specific layers, attention heads, or hidden dimensions to accelerate LLM inference [V oita et al., 2019, Ma et al., 2023, Xia et al., 2024a, Muralidharan et al., 2024, Men et al., 2024, Ashkboos et al., 2024, Song et al., 2024]. In fact, it is well-established that attention mechanisms exhibit redundancy [V oita et al., 2019, He et al., 2024]. However, many existing methods show substantial performance degradation as removing specific layers without properly replacing them can lead to substantial accuracy drop, highlighting the need for more reliable strategies to identify and replace network blocks effectively. Preprint. Transformer BlockTransformer BlockLLM Input EmbeddingPositional EncodingTransformer BlockMulti-Head AttentionMLPLayerNormMLPLinear Layer W=CYXC−1XXb=𝔼[Y]−W𝔼[X]XTransformer BlockTransformer BlockLLM Positional EncodingCompressed Block Input EmbeddingNBLLayerNorm LayerNormYFigure 1: Illustration of Neural Block Linearization (NBL), which replaces a multi-head attention layer with an efficient linear layer using the closed-form LMMSE estimator. In
|
https://arxiv.org/abs/2505.21077v1
|
this paper, we introduce Neural Block Linearization (NBL), a novel method for compressing transformer-based models by replacing self-attentions (costly network blocks) with efficient linear layers derived using Linear Minimum Mean Squared Error ( LMMSE ) estimators [Kay, 1993, Kailath et al., 2000], without the need for computationally expensive gradient-based training. While applicable to any network block, this paper mainly focuses on substituting the attention layers rather than entire transformer blocks, as experimental results show that this selective approach effectively balances inference speed-up with accuracy. Unlike methods entirely removing specific network components, e.g., Ashkboos et al. [2024], Xia et al. [2024a], Song et al. [2024], He et al. [2024], NBL maintains consistency by substituting these layers with their linear approximations. NBL further integrates theoretical linearization error quantification using Canonical Correlation Analysis (CCA) [Hotelling, 1992]. By deriving an error bound and employing it as a criterion for layer substitution, our approach provides a theoretically grounded framework to quantify redundancy, ensuring optimal compatibility with the linearization process. This enables significant inference speed-ups while maintaining performance, positioning NBL as an effective solution for optimizing large language model inference, enhancing both efficiency and scalability in applications. Contributions. Our contributions include: •A principled and efficient substitution of attention layers with linear transformations using the closed form solution of the Linear Minimum Mean Squared Error ( LMMSE ) estimators; •A theoretical error bound derived using Canonical Correlation Analysis (CCA), used to quantify approximation error and serve as the layer substitution criterion; •Accelerating pre-trained transformer model inference, demonstrated through empirical results to preserve competitive performance on reasoning tasks. 2 Motivation The computational efficiency of transformer-based models has become a critical concern as their use expands across NLP tasks. The attention mechanism, a cornerstone of transformer architecture, is particularly resource-intensive due to its quadratic complexity with respect to sequence length. Some methods address this issue with sub-quadratic linear attention mechanisms [Katharopoulos et al., 2020], e.g., [Mercat et al., 2024, Zhang et al., 2024a]. These methods fundamentally change the architecture, often requiring significant retraining and leading to potential trade-offs in performance. Moreover, methods such as SLEB [Song et al., 2024], DROP [He et al., 2024] rely on redundancy quantification metrics to remove specific transformer blocks or attention layers, to accelerate inference. However, these approaches often lead to performance degradation due to the abrupt removal of critical components, which can disrupt the model’s ability to capture essential patterns and dependencies. In this work, we propose a fundamentally different approach. Rather than completely removing attention layers, we approximate those that exhibit redundancy by replacing them with linear approx- 2 imations based on their input-output relationships. To achieve this, we use Canonical Correlation Analysis (CCA) [Hotelling, 1992] as a theoretical foundation to identify and quantify redundancy in a layer considering its inputs and outputs. CCA is a statistical method that measures the linear relationship between two random vectors Xand Y. It achieves this by finding pairs of canonical directions—linear projections of XandY—that are maximally correlated. Mathematically, CCA solves the following optimization problem: max a,bρ=a⊤CY Xbp a⊤CXXap b⊤CY Yb, where CXXandCY Yare the covariance matrices of
|
https://arxiv.org/abs/2505.21077v1
|
XandY, respectively, and CY Xis their cross-covariance matrix. To compute ρand the corresponding canonical directions aandb, we first standardize XandYby normalizing their variances. This leads to the construction of the standardized cross-correlation matrix: CW=C−1/2 XXCY XC−1/2 Y Y. The canonical correlations ρiare obtained as singular values of CWthrough Singular Value Decom- position (SVD): CW=UΣV⊤, where Σis a diagonal matrix containing the singular values ρi, and UandVare the canonical directions matrices for XandY, respectively. The strength of these correlations ρiprovides a clear indication of how well the components of Ycan be linearly predicted from X. If the canonical correlations are high, it suggests that a significant portion of the output can be captured using a linear transformation, without incurring substantial loss in predictive fidelity We use CCA to guide the Linear Minimum Mean Squared Error ( LMMSE ) [Kay, 1993, Kailath et al., 2000] approximation of the attention layers, allowing us to quantify the approximation error introduced during linearization. By identifying and targeting the layers where the redundancy is high (i.e., canonical correlations are close to 1), our method compresses transformer models in a principled manner while preserving their functionality with minimal compromise. Most notably, our approach replaces attention layers with linear approximations without any fine- tuning or gradient-based optimization, either during or after substitution , making it highly efficient for compressing pre-trained models. Despite this, NBL achieves strong accuracy, demonstrating that attention layers can be effectively approximated without retraining. This direct substitution significantly reduces computational overhead while preserving model functionality. Our method not only enables efficient transformer compression but also provides new insights into model component analysis and redundancy quantification, aligning with the growing demand for scalable NLP solutions. 3 Methods We propose a method that replaces selected blocks, particularly attention layers, with linear approxi- mations based on their input-output relationships. This approach frames the problem as a channel estimation task, where the goal is to approximate the output of an attention layer using its input through the well-known Linear Minimum Mean Squared Error ( LMMSE ) formulation. TheLMMSE framework offers a principled way to minimize the mean squared error (MSE) between the true output and the estimated output of an attention layer. Using the covariance structure of the inputs and outputs, we derive the optimal linear approximation of the attention layer’s behavior. Our process ensures that the replaced layer closely approximates the behavior of the original while significantly reducing computational complexity. The following sections detail the theoretical formulation, algorithmic implementation, and practical considerations of the proposed Neural Block Linearization (NBL) method. In Section 3.1, we introduce NBL primarily in the context of replacing self-attention layers. However, NBL is a flexible framework that can be applied to any neural network block. In Section (3.2) , we derive an error bound for the linearization in the NBL method, which serves as a valuable tool for redundancy characterization within the NBL framework. 3 Mistral-7B LayersMistral-7B Layers(a) Lama 3.1-8B Layers (b) Figure 2: Illustration of layer selection in the NBL method, guided by the CCA-based bound from the Theorem 3.2, as applied to (a) Mistral-7B
|
https://arxiv.org/abs/2505.21077v1
|
and (b) Llama-3.1-8B models. 3.1 Neural Block Linearization (NBL) To determine the linear weights, we utilize a calibration dataset D={S(i)}s i=1, with sinput se- quences, each having a context length of t. Each sequence S(i)={w(i) 1, w(i) 2, . . . , w(i) t}is processed through multiple transformer blocks before reaching the k-th block. The input representations to the k-th attention layer are obtained by applying the first (k−1)transformer blocks sequentially: X(i) k=fk−1◦fk−2◦ ··· ◦ f1(S(i))∈Rt×hin, where each function fj(·)represents the mapping applied by the j-th transformer block, including its self-attention and MLP layers. The corresponding output representations are then extracted directly from the self-attention layer of the k-th transformer block: Y(i) k=Ak(X(i) k)∈Rt×hout, where Ak(·)denotes the softmax self-attention transformation with input and output activation sizes ashinandhout, respectively, inside the k-th transformer block. Then, to construct the dataset for learning the linear mapping, we stack the token-wise representations across all sequences as: X= X(1) k, X(2) k, . . . , X(s) k⊤∈R(s·t)×hin, Y= Y(1) k, Y(2) k, . . . , Y(s) k⊤∈R(s·t)×hout. Here, X(i) kandY(i) kare the matrices containing the representations of all tokens in sequence i, and stacking them along the batch dimension ensures that each row in XandYcorresponds to a single token’s representation across all sequences. Then, XandYcan be treated as vector-valued random variables with s·trealizations. Then, our goal is to minimize the MSE between the actual attention outputs Yand the estimated outputs ˆY, formulated as: MSE(Y,ˆY) =Eh ∥Y−ˆY∥2 2i , (1) where E[·]represents the expectation, and || · || 2denotes the euclidean norm of a vector. The linear estimator takes the form: ˆY=WX +b. (2) Then, the optimal linear estimator minimizing the MSE in Expression (1)is derived as presented in the following proposition. Proposition 3.1. [Kay, 1993]. The Linear Minimum Mean Squared Error (LMMSE) estimator defines the optimal linear relationship between the vector-valued random variables XandYby the following weight Wand bias b: W=CY XC−1 XX, (3) b=E[Y]−WE[X], (4) where CXXis the covariance of XandCY Xis the cross-covariance between YandX. This formulation demonstrates that the linearized block operates as a single linear layer with the weight Wand bias b. Figure 1 illustrates the process of using NBL to replace an attention layer of an LLM with its linear approximation. For the derivation of Proposition 3.1 and an in-depth discussion of MMSE estimators, refer to Appendix A. 4 3.2 Canonical Correlation Analysis (CCA) based error bound and redundancy analysis A key aspect of the NBL is understanding and quantifying the error introduced by replacing a network block with its linear approximation. While the LMMSE estimator provides a principled approach to minimize the error between the true outputs and their linear estimates, it is essential to bound this error to ensure acceptable approximation and preserve the model performance. To this end, we present an upper bound on the normalized mean squared error (NMSE) of the linear approximation in the following theorem. Leveraging Canonical Correlation Analysis (CCA), this theorem directly links the approximation error to the CCA singular values, which quantify the alignment between input and output spaces. These singular values
|
https://arxiv.org/abs/2505.21077v1
|
provide valuable insights into the conditions under which linearization is most effective. Theorem 3.2. For a given self-attention layer with input Xand output Y, the Normalized Mean Squared Error (NMSE) of the LMMSE estimator ˆYis defined as: NMSE (Y,ˆY) =MSE(Y,ˆY) Tr(CY Y), where CY Yis the auto-covariance matrix of Y. Then, the NMSE satisfies the following upper-bound: NMSE (Y,ˆY)≤(hout−r) +rX i=1(1−ρ2 i), (5) where r= min( hout, hin), and ρiare the canonical correlations between XandY. Algorithm 1 Neural Block Linearization (NBL) 1:Input : Attention layer inputs Xand outputs Yfrom calibration dataset D, number of layers to linearize m. 2:Initialization : Load the pre-trained LLM, extract all attention layers A={A1,A2, . . . , AK}. 3:foreach attention layer A k∈ A do 4: Collect the input-output pairs (Xk, Yk)fromDcor- responding to the attention layer A k. 5: Compute the bound on the linearization NMSE (Theorem-3.2) for the attention layer Akusing the canonical correlations ρkibetween YkandXks.t. h=hout=hin: NMSE k=MSE(Yk,ˆYk) Tr(CY Y)≤hX i(1−ρ2 ki). 6:end for 7:Select mlayers with the lowest NMSE bounds, form- ingAlow={Aj1,Aj2, . . . , Ajm} ⊂ A . 8:foreach layer A j∈ A lowdo 9: Replace Ajwith its linear approximation (linear layer) obtained by solving the LMMSE problem (Proposition 3.1), with weight and bias terms as : Wj=CYjXjC−1 XjXj, bj=E[Yj]−WjE[Xj] 10:end for 11:Output : Compressed LLM with mlinearized atten- tion layers.This theorem highlights that the NMSE is bounded by a summation term dependent on 1−ρ2 i, where ρirepresents the canonical cor- relations between the input Xand output Y. Intuitively, the larger the canonical correla- tions ( ρiclose to 1), the better the alignment (linear relationship) between the input and output spaces, and thus the lower the approx- imation error. The result also implies that lay- ers with weaker correlations ( ρicloser to 0) are less suitable for linearization, as the error bound becomes larger. This insight enables the systematic identification of blocks where linearization will have less impact on over- all model performance, providing a quantita- tive basis for deciding which attention layers or blocks to replace. Also, the additive term corresponding to the system being underdeter- mined (hout−r)is zero when hout≤hin. This condition is typically satisfied in most blocks or layers of LLMs, such as self-attentions. We propose using this bound to rank the lay- ers of a pre-trained transformer model by their suitability for linearization, as illustrated in Figures 2a and 2b. By targeting the layers with high canonical correlations, significant computational savings can be achieved with minimal approximation error. 3.3 AlgorithmTable 1: Calibration runtime scaling with model size. Llama-3.1 Model 8B 70B 405B Hidden size ( d) 4092 8192 16384 Hidden layers 32 80 126 Runtime / layer 26.04 s 78.91 s 371.79 s Total ∼0.23 hr ∼1.75 hr ∼13.01 hrOur methodology, detailed in Algorithm 1, compresses an LLM by replacing mof the computationally intensive attention layers with efficient linear estimators. Using a cali- bration dataset, we extract inputs and outputs from each attention layer and compute the weight matrices ( Wk) and bias terms ( bk) via LMMSE estimation. Layers are ranked by CCA-derived NMSE bounds,
|
https://arxiv.org/abs/2505.21077v1
|
and the mlayers with the lowest 5 Table 2: Performance of Mistral-7B on reasoning benchmarks across baselines, SliceGPT [Ashkboos et al., 2024], SLEB [Song et al., 2024], DROP [He et al., 2024], and ours (NBL). Prefill and throughput speeds are reported relative to the baseline original model. Method ARC-e ARC-c BoolQ HellaSwag MMLU OBQA PIQA Wino- Avg. (↑)Prefill Through- (norm) (norm) (norm) (5-shot) (norm) (norm) Grande (↑) put (↑) Baseline 79.4 54.2 83.8 81.1 65.3 43.8 82.1 74.2 70.2 1 1 SliceGPT-15% 67.9 41.9 82.2 73.8 52.0 40.2 79.1 69.9 63.4 1.14 1.04 SliceGPT-25% 55.9 33.5 77.6 63.9 35.9 34.8 73.6 65.1 55.0 1.28 1.08 SliceGPT-35% 43.2 27.6 67.7 48.8 27.0 29.8 66.2 60.7 46.4 1.44 1.14 SLEB-4 70.5 43.2 77.4 72.5 38.9 41.6 79.1 66.8 61.2 1.14 1.16 SLEB-8 58.9 54.2 61.3 61.9 28.6 35.4 72.7 59.8 51.8 1.33 1.35 SLEB-12 43.1 30.1 49.4 50.4 24.4 30.8 66.4 53.0 43.5 1.59 1.50 Block DROP-4 70.8 47.2 80.4 75.4 61.4 40.2 77.9 71.0 65.5 1.14 1.16 Block DROP-8 52.9 37.4 71.6 59.8 59.8 31.2 69.3 67.6 56.2 1.33 1.34 Block DROP-12 31.7 31.6 42.8 27.0 24.2 30.0 54.1 57.1 37.3 1.59 1.50 Block NBL-4 72.0 47.1 82.1 74.4 61.5 40.4 78.2 73.0 66.1 1.14 1.14 Block NBL-8 58.8 38.8 82.2 60.6 58.6 36.2 71.4 69.0 59.4 1.32 1.31 Block NBL-12 42.0 32.7 62.9 43.7 55.9 32.0 60.3 65.8 49.4 1.56 1.47 Attn DROP-4 79.9 53.5 83.4 80.8 62.4 44.6 82.1 73.8 70.0 1.08 1.11 Attn DROP-8 79.6 52.3 82.6 80.2 62.0 44.2 81.6 72.9 69.4 1.18 1.22 Attn DROP-12 76.3 48.6 76.7 77.7 59.2 41.8 78.9 72.9 66.5 1.29 1.29 Attn DROP-16 57.8 41.4 49.8 67.9 24.9 38.6 73.6 69.4 52.9 1.44 1.41 Attn NBL-4 80.2 53.9 83.5 80.6 62.4 44.0 81.9 74.0 70.1 1.08 1.10 Attn NBL-8 80.6 53.6 83.6 79.9 62.4 44.2 81.8 73.8 70.0 1.17 1.20 Attn NBL-12 77.3 50.5 82.6 76.9 62.3 43.0 80.9 73.2 68.3 1.28 1.27 Attn NBL-16 62.4 42.8 76.7 69.4 33.0 39.8 76.4 70.2 58.8 1.40 1.37 approximation error are selected for linearization. These layers are then replaced with their linear approximations, reducing complexity while preserving accuracy. The key computational steps of NBL have an overall complexity of O(d3+s·t·d2), where dis the embedding dimension of the attention layers, sis the number of sequences, and tis the context length of calibration samples. Since calibration is layer-wise, runtime scales linearly with the number of layers. We demonstrate that 8B-scale models can be compressed in under an hour, with scalability to larger models (see Table 1). While NBL effectively replaces attention layers, it can also linearize any network block, including transformer blocks, as also demonstrated next. See Appendix B for details of CCA, and Appendix D for implementation, calibration complexity of NBL, and details of Table 1. 4 Experiments In this section, we assess the impact of Neural Block Linearization (NBL) on LLM inference performance and efficiency. Evaluation benchmarks include 5-shot performance on the MMLU task [Hendrycks et al., 2020] and 0-shot performance on ARC-easy (ARC-e), ARC-challenge (ARC-c) [Clark et al., 2018], BoolQ [Clark et al., 2019],
|
https://arxiv.org/abs/2505.21077v1
|
HellaSwag [Zellers et al., 2019], OBQA [Mihaylov et al., 2018], PIQA [Bisk et al., 2020] and WinoGrande [Sakaguchi et al., 2021], following a similar evaluation as Zhang et al. [2024a]. We implemented and evaluated NBL on an NVIDIA A100 GPU (80GB) using PyTorch [Paszke et al., 2019] and HuggingFace Transformers [Wolf, 2019]. Evaluation is carried out with the default parameters from the Evaluation Harness framework [Gao et al., 2024]. We compare NBL with several baseline methods, including SLEB [Song et al., 2024], SliceGPT [Ashkboos et al., 2024], and DROP [He et al., 2024], evaluating their performance on reasoning tasks and their improvements in latency and throughput. In the calibration of all methods, we used 256 samples from the C4 dataset [Raffel et al., 2020]. In the tables, "Attn NBL- m" denotes the NBL applied to mattention layers, while "Attn DROP- m" removes mattention layers with the DROP method. Similarly, "Attn SLEB- m" refers to dropping mtransformer blocks based on the SLEB algorithm, and "SliceGPT- d%" indicates pruning d%of the model parameters using SliceGPT. Table 2 shows that Attn NBL outperforms other methods on the 32-layer Mistral-7B model [Jiang et al., 2023]. Replacing 8 layers (Attn NBL-8) retains the original performance (70.0), exceeding Attn DROP-8 (69.4), SliceGPT-25% (55.0), and SLEB-8 (51.8). At higher compression (e.g., Attn NBL-16), the gap widens, where NBL scores 58.8 compared to 52.9 (DROP-16), 46.4 (SliceGPT-35%), and 43.5 (SLEB-16). NBL also improves inference speed (up to 1.27 ×) while keeping performance loss under 2%. Similar trends hold for the Llama-3.1-8B model [Dubey et al., 2024] (Table 3) and DeepSeek-R1-Distill-Llama-8B Guo et al. [2025] (Table 4), where 6 Table 3: Performance of Llama-3.1-8B across various methods and reasoning benchmarks. Method ARC-e ARC-c BoolQ HellaSwag MMLU OBQA PIQA Wino- Avg. (↑)Prefill Through- (norm) (norm) (norm) (5-shot) (norm) (norm) Grande (↑) put (↑) Baseline 81.2 53.6 81.9 78.9 65.1 45.0 81.4 74.2 70.2 1 1 SliceGPT-15% 66.6 40.8 77.9 68.3 40.8 36.4 75.2 64.7 58.8 1.13 1.05 SliceGPT-25% 55.1 31.3 71.3 58.6 26.2 29.4 70.5 57.2 49.9 1.17 1.09 SliceGPT-35% 43.1 26.1 61.8 44.9 26.3 26.2 65.8 52.7 43.4 1.41 1.12 SLEB-4 73.7 44.3 67.3 71.0 34.6 40.2 77.8 68.9 59.7 1.14 1.16 SLEB-8 59.4 32.7 38.7 57.3 24.5 34.2 72.5 52.6 46.5 1.33 1.32 SLEB-12 47.1 28.2 46.8 46.0 24.6 27.8 67.3 52.5 42.5 1.54 1.56 Block DROP-4 71.4 47.6 70.6 73.7 61.7 41.4 77.5 70.2 64.3 1.13 1.16 Block DROP-8 41.7 32.3 37.6 30.7 35.2 29.2 61.0 53.8 40.2 1.31 1.32 Block DROP-12 39.5 30.6 56.0 35.1 46.1 29.8 59.5 54.6 43.9 1.54 1.55 Block NBL-4 77.1 49.0 81.4 73.3 64.3 41.4 78.6 72.5 67.2 1.12 1.15 Block NBL-8 66.2 41.9 62.5 62.6 62.6 35.6 73.1 70.2 59.3 1.29 1.27 Block NBL-12 48.7 34.0 71.9 46.9 42.1 32.0 65.0 63.0 50.4 1.53 1.50 Attn DROP-4 81.4 54.2 81.9 78.4 65.0 45.4 80.6 74.4 70.2 1.08 1.12 Attn DROP-8 80.9 53.2 81.4 78.0 65.0 45.2 81.1 73.5 69.8 1.18 1.19 Attn DROP-12 76.3 51.3 79.7 76.7 63.3 43.2 79.9 72.6 67.9 1.27 1.29 Attn DROP-16 55.2 39.1 80.6 63.4 27.2 36.6
|
https://arxiv.org/abs/2505.21077v1
|
69.8 69.6 55.2 1.43 1.42 Attn NBL-4 81.9 54.0 82.2 78.1 65.0 45.8 81.1 73.4 70.2 1.08 1.11 Attn NBL-8 81.5 53.7 82.1 77.2 64.0 45.4 81.1 73.3 69.8 1.16 1.17 Attn NBL-12 79.1 52.2 82.3 75.2 64.8 45.2 79.9 74.0 69.1 1.24 1.25 Attn NBL-16 71.8 46.8 81.6 69.0 39.1 41.8 77.0 73.1 62.5 1.39 1.37 Table 4: DeepSeek . Performance of DeepSeek-R1-Distill-Llama-8B across baselines and reasoning benchmarks. Prefill and throughput speeds are reported relative to the baseline original model. Method ARC-e ARC-c BoolQ HellaSwag MMLU OBQA PIQA Wino- Avg. (↑)Prefill Through- (norm) (norm) (norm) (5-shot) (norm) (norm) Grande (↑) put (↑) Baseline 66.1 42.3 82.9 74.3 55.7 41.8 77.8 67.7 63.6 1 1 Attn DROP-12 60.7 41.6 82.7 68.1 55.6 37.8 72.9 66.0 60.7 1.27 1.34 Attn DROP-16 37.9 31.0 78.9 45.1 32.4 31.6 62.0 62.5 50.2 1.37 1.47 Attn NBL-12 63.6 41.2 84.3 70.0 55.3 42.0 74.9 67.7 62.4 1.24 1.32 Attn NBL-16 55.4 38.2 83.8 63.0 35.8 39.0 73.3 66.1 56.8 1.35 1.42 NBL consistently outperforms other methods, especially at 12–16 layer replacements. These results confirm NBL’s advantages in improving the accuracy and efficiency trade-off in various models. 4.1 Latency and throughput analysis Figure 3: Prefill speed-up of Llama-3.1-8B with varying context lengths. NBL values are normalized by the baseline.In addition to reasoning performance, we evaluate the inference speed-up of LLMs in two stages: prompt processing (prefill) and token generation (throughput). Prompt processing builds the key-value (KV) cache for a 2048-token input, while token generation autoregressively produces 2048 tokens with a batch size of 1, following He et al. [2024]. We measure the prefill speed, calculated as the number of context tokens divided by the time to generate the first token. Whereas, the throughput is defined as the median of the number of tokens generated per second, across time intervals. Results are presented in Tables 2 and 3 for Llama-3.1-8B andMistral-7B models respectively, which show the speed-ups achieved by methods such as SliceGPT, SLEB, DROP, and NBL, normalized against the baseline model. Our findings show that NBL significantly improves computational efficiency, delivering consistent prefill and throughput gains across various compression levels with significantly less performance degradation than the layer-dropping methods. 4.2 Inference complexity and KV-cache usage LetKbe the total number of attention layers in an LLM, with mlayers modified by NBL. The prefill speed for full attention is limited by complexity O(K·n2d), where nis the context length 7 Table 5: Larger 70B models . Performance of Llama-3.1- 70B across various methods and reasoning benchmarks. Prefill and throughput speeds are reported relative to the baseline quantized model. Method ARC-e ARC-c BoolQ HellaSwag MMLU OBQA PIQA Wino- Avg. (↑)Prefill Through- (norm) (norm) (norm) (5-shot) (norm) (norm) Grande (↑) put (↑) Baseline (quant.) 86.1 63.7 84.5 84.6 77.7 47.8 84.1 79.2 76.0 1 1 Attn DROP-32 85.5 64.1 84.3 83.7 77.3 48.0 84.3 79.0 75.7 1.20 1.30 Attn DROP-48 77.2 56.2 82.1 76.3 70.3 45.6 79.4 75.1 70.4 1.29 1.42 Attn DROP-54 43.4 31.4 65.5 52.2 36.6 29.4 67.0 61.0 48.3 1.34 1.53 Attn NBL-32 86.5 66.4 85.0 83.2 77.6 46.8
|
https://arxiv.org/abs/2505.21077v1
|
83.7 78.8 76.0 1.16 1.24 Attn NBL-48 79.1 57.7 83.8 76.2 71.5 45.0 81.9 75.3 71.3 1.25 1.35 Attn NBL-54 63.0 48.2 82.3 71.6 65.3 43.4 78.3 71.7 65.4 1.30 1.44 anddthe model dimension. Replacing mattention layers reduces their complexity to O(m·nd), yielding an overall complexity of O((K−m)·n2d+m·nd). Thus, prefill speed improves as mincreases, particularly for large n, where quadratic terms dominate. Correspondingly, Figure 3 shows that applying NBL to more layers results in even higher prefill speed-ups at longer contexts. Furthermore, applying NBL reduces the usage of KV cache by a factor ofK−m K, as only the K−m layers require the storage of key and value tensors. For details on the complexity, refer Appendix H. 4.3 Application of NBL to larger models We extend NBL to a larger model, namely Llama-3.1-70B [Dubey et al., 2024]. To fit the model on an NVIDIA A100 (80GB), we apply 4-bit post-training quantization using Activation-aware Weight Quantization (AWQ) [Lin et al., 2024] with default settings and 128 Pile samples [Gao et al., 2020] for calibration. Table 5 presents a comparison between Attn NBL and Attn DROP, with the 4-bit quantized Llama-3.1-70B model as the baseline. NBL demonstrates a notable advantage by preserving model accuracy to a greater extent while achieving significant improvements in inference speed relative to baseline. For more details on quantization and implementation, see Appendix E.6. 4.4 Ablation studies Replacing the Attention layer vs Transformer block: Tables 2 and 3 present the results for substituting mtransformer blocks with NBL (Block NBL- m) and dropping mblocks using the DROP method (Block DROP- m). Although modifying transformer blocks achieves significant inference speed-ups, it results in much greater performance degradation compared to adjustments made exclusively to the attention layer. Importantly, Block NBL demonstrates greater control and better accuracy preservation compared to Block DROP, as well as SliceGPT and SLEB. Dependency on the calibration dataset: We examine the impact of the calibration dataset D. During calibration, 256 samples are drawn from either the C4 or WikiText-2 training sets. The pruned models are then evaluated on both C4 and WikiText-2 validation datasets, with results provided in Tables 14 and 15, in Appendix F.1. Our comparison includes several methods, including SLEB, Attn DROP, Attn NBL (each with 8 layers dropped or linearized), and SliceGPT (with 25 %sparsity). NBL performs favorably compared to other methods, where SliceGPT exhibits the greatest sensitivity. LoRA fine-tuning of NBL-linearized layers: We also explore the effect of applying LoRA [Hu et al., 2022] to the linear layers produced by NBL. Experiments show that LoRA fine-tuning provides only marginal accuracy improvements over NBL alone, reinforcing the strength and generality of NBL without requiring additional tuning. Full results and setup details are provided in Appendix F.2. CCA-bound vs cosine distance criterion: We further examine the impact of the layer selection criterion on NBL, comparing the CCA-bound based criterion with the cosine distance criterion originally used in the DROP method. Our findings suggest, the CCA-bound criterion provides more reliable results, likely due to its better suitability for assessing linearization error (see Appendix F.3). 5 Related work The computational
|
https://arxiv.org/abs/2505.21077v1
|
and memory demands of transformer-based LLMs have inspired numerous techniques to reduce the complexity of inference while maintaining the performance: 8 Structured and unstructured pruning: Techniques such as weight pruning, structured pruning, and layer removal have been widely explored to reduce model size and computational requirements. Unstructured pruning focuses on sparsifying model weights [LeCun et al., 1989, Hassibi et al., 1993, Han et al., 2015, Kusupati et al., 2020, Hoefler et al., 2021], but managing sparse matrices on modern hardware introduces significant challenges [Wang et al., 2024c]. Structured pruning methods [V oita et al., 2019, Ma et al., 2023, Xia et al., 2024a, Muralidharan et al., 2024, Men et al., 2024, Gromov et al., 2024, Song et al., 2024, He et al., 2024] target specific layers, blocks, or dimensions for removal, offering improved efficiency. However, these methods often rely on empirical insights and lack a robust theoretical framework for quantifying approximation errors. Low-rank approximations and quantization: Methods leveraging low-rank decompositions [Hsu et al., 2022, Kaushal et al., 2023, Yuan et al., 2023, Wang et al., 2024a], quantization techniques [Li et al., 2023, Zhang et al., 2024b, Tseng et al., 2024], and their combinations [Yao et al., 2023, Zhang et al., 2024c, Saha et al., 2024] have been explored to simplify model inference. While NBL introduces architectural modifications by linearizing specific layers, it remains straightforward to implement on pre-trained models. It eliminates the need for data-type changes or additional hardware compatibility, streamlining its application. Also, in Section 4.3, NBL was succesfully applied to Llama-3.1-70B , quantized with the AWQ [Lin et al., 2024]. This highlights the potential for deeper algorithmic integration between NBL and quantization techniques for greater efficiency. Linear Attention mechanisms: Linear attention mechanisms aimed at achieving sub-quadratic complexity have been proposed [Katharopoulos et al., 2020, Choromanski et al., 2020, Peng et al., 2021, Shen et al., 2021, Mercat et al., 2024, Zhang et al., 2024a]. While these methods reduce infer- ence complexity, they often demand extensive architectural modifications, retraining, or distillation, leading to trade-offs between performance and flexibility. In contrast, NBL utilizes the closed-form LMMSE solution to compute linear weights, avoiding these challenges. Table 6: Speculative decoding + NBL speed- ups on DeepSeek-R1-Distill-Llama-8B (MT- bench [Zheng et al., 2023], A100). Model Configuration Speedup EAGLE-3 Alone 3.23× Attn NBL-4 + EAGLE-3 3.51× Attn NBL-8 + EAGLE-3 3.89× Attn NBL-12 + EAGLE-3 4.07×Speculative decoding: Speculative decoding meth- ods [Leviathan et al., 2023, Cai et al., 2024, Xia et al., 2024b] accelerate autoregressive generation via draft- and-verify strategies. These approaches are orthogonal to model compression techniques like NBL, as they target decoding strategy rather than model structure. We show that NBL can be combined with speculative decoding (e.g., the recent EAGLE-3 [Li et al., 2025]) to achieve compounding speed-ups, up to 4.07 × (Table 6) (Please refer Appendix E.2 for more details). Canonical Correlation Analysis (CCA) as an analysis tool: CCA emerged as a tool to analyze and interpret neural networks. For instance, SVCCA was introduced to compare representations across layers and models [Raghu et al., 2017], and CCA was applied to investigate robustness
|
https://arxiv.org/abs/2505.21077v1
|
and generalization [Morcos et al., 2018]. Additionally, a similarity index for comparing representations between different architectures was proposed [Kornblith et al., 2019], and representation alignment in wide networks was studied [Nguyen et al., 2020]. These works highlight the effectiveness of CCA in examining neural network structure and functionality. Similarly, we use CCA to analyze neural activations and assess the suitability of attention layers for linear approximations. 6 Conclusion This work proposes Neural Block Linearization (NBL), a novel framework for the compression of transformer-based LLMs by selectively replacing attention layers or other computation-intensive network blocks with linear approximations derived using Linear Minimum Mean Squared Error estimators. NBL effectively integrates theoretical rigor into practical implementation, resorting to Canonical Correlation Analysis to bound approximation errors and guide layer substitution. Experimental results demonstrate that NBL achieves significant computational efficiency while maintaining competitive performance across various reasoning benchmarks, outperforming other recent pruning techniques in most cases, without resorting to additional retraining and adapts smoothly to pre-trained models. These directions position NBL as a scalable and adaptable solution for compressing and deploying LLMs in resource-constrained environments. 9 Acknowledgements This work was supported by Hasler Foundation Program: Hasler Responsible AI (project number 21043). This research was sponsored by the Army Research Office and was accomplished under Grant Number W911NF-24-1-0048. This work was supported by the Swiss National Science Foundation (SNSF) under grant number 200021_205011. References Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems , 30, 2017. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems , 33:1877–1901, 2020. Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. Mistral 7b. arXiv preprint arXiv:2310.06825 , 2023. Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774 , 2023. Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783 , 2024. Aditya Kusupati, Vivek Ramanujan, Raghav Somani, Mitchell Wortsman, Prateek Jain, Sham Kakade, and Ali Farhadi. Soft threshold weight reparameterization for learnable sparsity. In International Conference on Machine Learning , pages 5544–5555. PMLR, 2020. Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, and Alexandra Peste. Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks. Journal of Machine Learning Research , 22(241):1–124, 2021. Zhihang Yuan, Yuzhang Shang, Yue Song, Qiang Wu, Yan Yan, and Guangyu Sun. Asvd: Activation- aware singular value decomposition for compressing large language models. arXiv preprint arXiv:2312.05821 , 2023. Xin Wang, Yu Zheng, Zhongwei Wan, and Mi Zhang. Svd-llm: Truncation-aware singular value decomposition for large language model compression. arXiv preprint arXiv:2403.07378
|
https://arxiv.org/abs/2505.21077v1
|
, 2024a. Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Wei-Ming Chen, Wei-Chen Wang, Guangxuan Xiao, Xingyu Dang, Chuang Gan, and Song Han. Awq: Activation-aware weight quantization for on-device llm compression and acceleration. Proceedings of Machine Learning and Systems , 6: 87–100, 2024. Rajarshi Saha, Naomi Sagan, Varun Srivastava, Andrea J Goldsmith, and Mert Pilanci. Compress- ing large language models using low rank and low precision decomposition. arXiv preprint arXiv:2405.18886 , 2024. Yaniv Leviathan, Matan Kalman, and Yossi Matias. Fast inference from transformers via speculative decoding. In International Conference on Machine Learning , pages 19274–19286. PMLR, 2023. Tianle Cai, Yuhong Li, Zhengyang Geng, Hongwu Peng, Jason D Lee, Deming Chen, and Tri Dao. Medusa: Simple llm inference acceleration framework with multiple decoding heads. arXiv preprint arXiv:2401.10774 , 2024. 10 Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. TinyBERT: Distilling BERT for natural language understanding. In Trevor Cohn, Yulan He, and Yang Liu, editors, Findings of the Association for Computational Linguistics: EMNLP 2020 , pages 4163–4174. Association for Computational Linguistics, 2020. doi: 10.18653/v1/2020. findings-emnlp.372. URL https://aclanthology.org/2020.findings-emnlp.372 . Jiaheng Liu, Chenchen Zhang, Jinyang Guo, Yuanxing Zhang, Haoran Que, Ken Deng, Zhiqi Bai, Jie Liu, Ge Zhang, Jiakai Wang, Yanan Wu, Congnan Liu, Wenbo Su, Jiamang Wang, Lin Qu, and Bo Zheng. DDK: Distilling domain knowledge for efficient large language models. In arXiv.org , 2024. URL https://arxiv.org/abs/2407.16154v1 . Junxiong Wang, Daniele Paliotta, Avner May, Alexander M Rush, and Tri Dao. The mamba in the llama: Distilling and accelerating hybrid models. In The Thirty-eighth Annual Conference on Neural Information Processing Systems , 2024b. URL https://openreview.net/forum?id= uAzhODjALU . Michael Zhang, Simran Arora, Rahul Chalamala, Alan Wu, Benjamin Spector, Aaryan Singhal, Krithik Ramesh, and Christopher Ré. Lolcats: On low-rank linearizing of large language models, 2024a. URL https://arxiv.org/abs/2410.10254 . Elena V oita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. arXiv preprint arXiv:1905.09418 , 2019. Xinyin Ma, Gongfan Fang, and Xinchao Wang. Llm-pruner: On the structural pruning of large language models. Advances in neural information processing systems , 36:21702–21720, 2023. Mengzhou Xia, Tianyu Gao, Zhiyuan Zeng, and Danqi Chen. Sheared LLaMA: Accelerating language model pre-training via structured pruning. In The Twelfth International Conference on Learning Representations , 2024a. Saurav Muralidharan, Sharath Turuvekere Sreenivas, Raviraj Joshi, Marcin Chochowski, Mostofa Patwary, Mohammad Shoeybi, Bryan Catanzaro, Jan Kautz, and Pavlo Molchanov. Compact language models via pruning and knowledge distillation. arXiv preprint arXiv:2407.14679 , 2024. Xin Men, Mingyu Xu, Qingyu Zhang, Bingning Wang, Hongyu Lin, Yaojie Lu, Xianpei Han, and Weipeng Chen. Shortgpt: Layers in large language models are more redundant than you expect. arXiv preprint arXiv:2403.03853 , 2024. Saleh Ashkboos, Maximilian L Croci, Marcelo Gennari do Nascimento, Torsten Hoefler, and James Hensman. Slicegpt: Compress large language models by deleting rows and columns. arXiv preprint arXiv:2401.15024 , 2024. Jiwon Song, Kyungseok Oh, Taesu Kim, Hyungjun Kim, Yulhwa Kim, et al. Sleb: Streamlining llms through redundancy verification and elimination of transformer blocks. In Forty-first International Conference on Machine Learning ,
|
https://arxiv.org/abs/2505.21077v1
|
2024. Shwai He, Guoheng Sun, Zheyu Shen, and Ang Li. What matters in transformers? not all attention is needed, 2024. URL https://arxiv.org/abs/2406.15786 . Steven M Kay. Fundamentals of statistical signal processing: Estimation theory, 1993. Thomas Kailath, Ali H Sayed, and Babak Hassibi. Linear estimation . Prentice Hall, 2000. Harold Hotelling. Relations between two sets of variates. In Breakthroughs in statistics: methodology and distribution , pages 162–190. Springer, 1992. Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. Transformers are rnns: Fast autoregressive transformers with linear attention. In International conference on machine learning , pages 5156–5165. PMLR, 2020. Jean Mercat, Igor Vasiljevic, Sedrick Keh, Kushal Arora, Achal Dave, Adrien Gaidon, and Thomas Kollar. Linearizing large language models. arXiv preprint arXiv:2405.06640 , 2024. 11 Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300 , 2020. Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457 , 2018. Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. Boolq: Exploring the surprising difficulty of natural yes/no questions. arXiv preprint arXiv:1905.10044 , 2019. Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830 , 2019. Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? a new dataset for open book question answering. arXiv preprint arXiv:1809.02789 , 2018. Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. Piqa: Reasoning about physical commonsense in natural language. In Proceedings of the AAAI conference on artificial intelligence , volume 34, pages 7432–7439, 2020. Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: An adversarial winograd schema challenge at scale. Communications of the ACM , 64(9):99–106, 2021. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems , 32, 2019. T Wolf. Huggingface’s transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771 , 2019. Leo Gao, Jonathan Tow, Baber Abbasi, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Alain Le Noac’h, Haonan Li, Kyle McDonell, Niklas Muennighoff, Chris Ociepa, Jason Phang, Laria Reynolds, Hailey Schoelkopf, Aviya Skowron, Lintang Sutawika, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot language model evaluation, 2024. URL https://zenodo.org/records/12608602 . Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of machine learning research , 21(140):1–67, 2020. Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948 , 2025. Leo Gao, Stella Biderman, Sid Black, Laurence
|
https://arxiv.org/abs/2505.21077v1
|
Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. The pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027 , 2020. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. Lora: Low-rank adaptation of large language models. ICLR , 1(2):3, 2022. Yann LeCun, John Denker, and Sara Solla. Optimal brain damage. Advances in neural information processing systems , 2, 1989. Babak Hassibi, David G Stork, and Gregory J Wolff. Optimal brain surgeon and general network pruning. In IEEE international conference on neural networks , pages 293–299. IEEE, 1993. Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. Advances in neural information processing systems , 28, 2015. 12 Wenxiao Wang, Wei Chen, Yicong Luo, Yongliu Long, Zhengkai Lin, Liye Zhang, Binbin Lin, Deng Cai, and Xiaofei He. Model compression and efficient inference for large language models: A survey. arXiv preprint arXiv:2402.09748 , 2024c. Andrey Gromov, Kushal Tirumala, Hassan Shapourian, Paolo Glorioso, and Daniel A Roberts. The unreasonable ineffectiveness of the deeper layers. arXiv preprint arXiv:2403.17887 , 2024. Yen-Chang Hsu, Ting Hua, Sungen Chang, Qian Lou, Yilin Shen, and Hongxia Jin. Language model compression with weighted low-rank factorization. arXiv preprint arXiv:2207.00112 , 2022. Ayush Kaushal, Tejas Vaidhya, and Irina Rish. Lord: Low rank decomposition of monolingual code llms for one-shot compression. arXiv preprint arXiv:2309.14021 , 2023. Yixiao Li, Yifan Yu, Chen Liang, Pengcheng He, Nikos Karampatziakis, Weizhu Chen, and Tuo Zhao. Loftq: Lora-fine-tuning-aware quantization for large language models. arXiv preprint arXiv:2310.08659 , 2023. Yang Zhang, Yawei Li, Xinpeng Wang, Qianli Shen, Barbara Plank, Bernd Bischl, Mina Rezaei, and Kenji Kawaguchi. Finercut: Finer-grained interpretable layer pruning for large language models. arXiv preprint arXiv:2405.18218 , 2024b. Albert Tseng, Jerry Chee, Qingyao Sun, V olodymyr Kuleshov, and Christopher De Sa. Quip#: Even better llm quantization with hadamard incoherence and lattice codebooks. arXiv preprint arXiv:2402.04396 , 2024. Zhewei Yao, Xiaoxia Wu, Cheng Li, Stephen Youn, and Yuxiong He. Zeroquant-v2: Exploring post-training quantization in llms from comprehensive study to low rank compensation. arXiv preprint arXiv:2303.08302 , 2023. Cheng Zhang, Jianyi Cheng, George A Constantinides, and Yiren Zhao. Lqer: Low-rank quantization error reconstruction for llms. arXiv preprint arXiv:2402.02446 , 2024c. Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. Rethinking attention with performers. arXiv preprint arXiv:2009.14794 , 2020. Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah A Smith, and Lingpeng Kong. Random feature attention. arXiv preprint arXiv:2103.02143 , 2021. Zhuoran Shen, Mingyuan Zhang, Haiyu Zhao, Shuai Yi, and Hongsheng Li. Efficient attention: Atten- tion with linear complexities. In Proceedings of the IEEE/CVF winter conference on applications of computer vision , pages 3531–3539, 2021. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in Neural Information Processing Systems , 36:46595–46623, 2023. Heming Xia, Zhe Yang, Qingxiu Dong,
|
https://arxiv.org/abs/2505.21077v1
|
Peiyi Wang, Yongqi Li, Tao Ge, Tianyu Liu, Wenjie Li, and Zhifang Sui. Unlocking efficiency in large language model inference: A comprehensive survey of speculative decoding. arXiv preprint arXiv:2401.07851 , 2024b. Yuhui Li, Fangyun Wei, Chao Zhang, and Hongyang Zhang. Eagle-3: Scaling up inference ac- celeration of large language models via training-time test. arXiv preprint arXiv:2503.01840 , 2025. Maithra Raghu, Justin Gilmer, Jason Yosinski, and Jascha Sohl-Dickstein. Svcca: Singular vector canonical correlation analysis for deep learning dynamics and interpretability. Advances in neural information processing systems , 30, 2017. Ari Morcos, Maithra Raghu, and Samy Bengio. Insights on representational similarity in neural networks with canonical correlation. Advances in neural information processing systems , 31, 2018. Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey Hinton. Similarity of neural network representations revisited. In International conference on machine learning , pages 3519– 3529. PMLR, 2019. 13 Thao Nguyen, Maithra Raghu, and Simon Kornblith. Do wide and deep networks learn the same things? uncovering how neural network representations vary with width and depth. arXiv preprint arXiv:2010.15327 , 2020. Pauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, Stéfan J. van der Walt, Matthew Brett, Joshua Wilson, K. Jarrod Millman, Nikolay Mayorov, Andrew R. J. Nelson, Eric Jones, Robert Kern, Eric Larson, C J Carey, ˙Ilhan Polat, Yu Feng, Eric W. Moore, Jake VanderPlas, Denis Laxalde, Josef Perktold, Robert Cimrman, Ian Henriksen, E. A. Quintero, Charles R. Harris, Anne M. Archibald, Antônio H. Ribeiro, Fabian Pedregosa, Paul van Mulbregt, and SciPy 1.0 Contributors. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods , 17:261–272, 2020. doi: 10.1038/s41592-019-0686-2. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168 , 2021. Daria Soboleva, Faisal Al-Khateeb, Robert Myers, Jacob R Steeves, Joel Hestness, and Nolan Dey. SlimPajama: A 627B token cleaned and deduplicated version of RedPajama, 2023. URL https://huggingface.co/datasets/cerebras/SlimPajama-627B . 14 Appendix Table of contents A Overview of Linear Minimum Mean Squared Error ( LMMSE ) estimation 16 A.1 General MMSE estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 A.2 Linear MMSE ( LMMSE ) estimation . . . . . . . . . . . . . . . . . . . . . . . . . 16 A.3 Derivation of the LMMSE estimator (Proposition 3.1) . . . . . . . . . . . . . . . . 17 B Overview of Canonical Correlation Analysis (CCA) 19 B.1 Problem formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 B.2 Canonical Correlation matrix . . . . . . . . . . . . . . .
|
https://arxiv.org/abs/2505.21077v1
|
. . . . . . . . . . . . . . 19 B.3 Performing CCA using the Singular Value Decomposition (SVD) . . . . . . . . . . 19 C Proof of Theorem 3.2 20 D Algorithmic details for NBL 22 D.1 Computational cost of the calibration . . . . . . . . . . . . . . . . . . . . . . . . . 23 D.2 Calibration runtime . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 D.3 Implementation details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 E Experiments 24 E.1 Additional experiments with DeepSeek-R1-Distill-Llama-8B . . . . . . . . . 24 E.2 Combining NBL with speculative decoding for faster inference . . . . . . . . . . . 25 E.3 Performance - efficiency 2D plots . . . . . . . . . . . . . . . . . . . . . . . . . . 25 E.4 Accuracy and speed-up comparisons with intervals . . . . . . . . . . . . . . . . . 27 E.5 Qualitative analysis of LLM outputs after NBL . . . . . . . . . . . . . . . . . . . 28 E.6 Quantization of Llama-3.1-70B using the AWQ algorithm . . . . . . . . . . . . . 28 F Ablation studies 30 F.1 Dependency on the calibration dataset . . . . . . . . . . . . . . . . . . . . . . . . 30 F.2 LoRA fine-tuning of NBL-linearized layers . . . . . . . . . . . . . . . . . . . . . 30 F.3 CCA-Bound criterion vs cosine distance criterion . . . . . . . . . . . . . . . . . . 31 F.4 Greedy selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 G Selected Transformer layers 32 H Inference complexity analysis and KV-cache usage details 33 H.1 Figure analysis: prefill speed-up vs. context length . . . . . . . . . . . . . . . . . 33 H.2 KV-cache calculation for Grouped Query Attention based models . . . . . . . . . . 33 I Limitations and broader impacts 34 15 A Overview of Linear Minimum Mean Squared Error ( LMMSE ) estimation Minimum Mean Squared Error (MMSE)
|
https://arxiv.org/abs/2505.21077v1
|
estimation is a fundamental statistical method for estimating an unknown quantity from noisy observations. The goal of MMSE estimation is to minimize the mean squared error (MSE) between the actual value and its estimate. While the general MMSE approach does not impose restrictions on the form of the estimator, the Linear Minimum Mean Squared Error (LMMSE ) estimation specifically considers linear estimators, making it computationally efficient and widely applicable. This section provides a concise overview of LMMSE estimation, including its derivation and its connection to our proposed method. For a more detailed discussion on LMMSE estimation, refer to the textbooks by Kailath et al. [2000] and Kay [1993]. A.1 General MMSE estimation In the general MMSE framework, given observations X∈Rmand a quantity to be estimated Y∈Rn, the MMSE estimator is defined as the conditional expectation of Ygiven X: ˆY=E[Y|X]. (6) This estimator minimizes the MSE, defined as: MSE(Y,ˆY) =Eh ∥Y−ˆY∥2 2i . (7) The MMSE estimator is optimal in terms of minimizing the expected squared error and does not assume a specific functional form for the relationship between XandY. However, the computation of the conditional expectation can be intractable in many practical scenarios, especially for high-dimensional or non-Gaussian distributions. This motivates the use of LMMSE , which simplifies the problem by restricting the estimator to a linear form. A.2 Linear MMSE ( LMMSE ) estimation In the LMMSE , the estimator ˆyis constrained to be a linear function of the observations x, given by: ˆY=WX +b, (8) where W∈Rn×mis the weight matrix, and b∈Rnis the bias vector. The goal is to determine W andbsuch that the MSE is minimized. The MSE is expressed as: MSE(Y,ˆY) =Eh ∥Y−ˆY∥2 2i =Tr E (Y−WX−b)(Y−WX−b)T . A.2.1 Properties of LMMSE TheLMMSE estimator has several key properties. It minimizes the MSE among all linear estimators and coincides with the general MMSE estimator when the joint distribution of XandYis Gaussian. An important property of the LMMSE estimator is the orthogonality of the estimation error E=Y−ˆYto the observations X, such that E E(X−E[X])T =0. This orthogonality implies that no further reduction in error is possible through linear adjustments. A.2.2 Common Use Cases of LMMSE LMMSE estimation is widely applied in various fields. In signal processing, it is used for tasks such as noise reduction, channel equalization, and adaptive filtering. In communications, it serves as a foundational method for channel estimation and interference cancellation. In sensor networks, it is utilized to fuse noisy measurements from multiple sensors for a more accurate state estimation. Control systems leverage LMMSE for state estimation, with Kalman and Wiener filtering methods being prominent examples. 16 In machine learning, LMMSE underpins fundamental techniques such as linear regression, where it estimates optimal weights by solving a least-squares problem to establish relationships between input features and outputs. Beyond its computational advantages, LMMSE provides a robust framework for analyzing noise-affected data. In addition, in fields such as medical imaging and remote sensing, LMMSE is used to denoise and reconstruct images, delivering high-quality results from noisy or incomplete measurements. A.2.3 Advantages and limitations of LMMSE TheLMMSE is computationally efficient due
|
https://arxiv.org/abs/2505.21077v1
|
to its closed-form solution, making it suitable for real- time applications. It provides interpretable results when the covariance structures are well understood and is robust under Gaussian noise assumptions. However, reliance on accurate covariance estimation and the linearity assumption can limit its effectiveness in highly non-linear systems. Furthermore, it may be sensitive to outliers and model specifications. Despite these challenges, LMMSE re- mains an important framework in estimation theory, valued for its theoretical rigor and practical utility. The advantages of using LMMSE in this work include its ability to maintain computational efficiency, as the closed-form solution avoids the need to retrain the model after layer replacement. Additionally, LMMSE ensures that the approximation error is minimized in a principled way, leveraging the statistical structure of the data. This property is critical for preserving the model’s performance on downstream tasks after compression. However, the linearity assumption inherent in LMMSE introduces limitations. While the approxima- tion is effective for layers with strong linear correlations between inputs and outputs, it may not capture the complex, nonlinear interactions in some attention layers. This limitation is addressed in the paper by applying LMMSE selectively to layers where canonical correlations suggest that linearization will have minimal impact on performance. A.3 Derivation of the LMMSE estimator (Proposition 3.1) Proof. To derive the Linear Minimum Mean Squared Error ( LMMSE ) estimator, we first solve for the bias vector b: MSE(Y,ˆY) =Tr E (Y−WX−b)(Y−WX−b)T =Tr E[Y YT]−E[Y(WX +b)T]−E[(WX +b)YT] +E[(WX +b)(WX +b)T] . Expanding the quadratic term E[(WX +b)(WX +b)T]: E[(WX +b)(WX +b)T] =E[WXXTWT] + 2bE[X]TWT+bbT. The derivative of the trace of a matrix Tr(X)with respect to a matrix Xis simply the identity matrix, i.e., ∂Tr(X) ∂X=I, where the trace operator sums the diagonal elements of X. This result is used in conjunction with the chain rule in the derivation to handle the differentiation of MSE that includes trace terms with respect toWandb. Then, the derivative of the MSE with respect to bis: ∂MSE(Y,ˆY) ∂b=−2E[Y] + 2b+ 2WE[X]. Setting∂MSE(Y,ˆY) ∂b= 0, we solve for b: b=E[Y]−WE[X]. 17 Next, substituting bback into the MSE expression, we focus on the centered terms X−E[X]and Y−E[Y]: MSE(Y,ˆY) =Tr E (Y−E[Y])(Y−E[Y])T −E (Y−E[Y])(W(X−E[X]))T . −E (W(X−E[X]))(Y−E[Y])T +E (W(X−E[X]))(W(X−E[X]))T . The derivative of the MSE with respect to Wis: ∂MSE(Y,ˆY) ∂W=−2E[(Y−E[Y])(X−E[X])T] + 2WE[(X−E[X])(X−E[X])T]. Setting∂MSE(Y,ˆY) ∂W= 0, we find: WE[(X−E[X])(X−E[X])T] =E[(Y−E[Y])(X−E[X])T]. Rewriting in terms of covariance matrices: WC XX=CY X, where CXXandCY Xare: CXX=E (X−E[X])(X−E[X])T , C Y Y=E (Y−E[Y])(X−E[X])T . The optimal Wis then: W=CY XC−1 XX. Substituting Wandbinto the linear model, the LMMSE estimator becomes: ˆY=E[Y] +CY XC−1 XX(X−E[X]). This form highlights how the LMMSE estimator utilizes the mean and covariance of XandYto minimize the MSE. 18 B Overview of Canonical Correlation Analysis (CCA) Canonical Correlation Analysis (CCA) is a statistical technique that identifies linear transformations of two random vectors, X∈RpandY∈Rq, such that the transformed variables are maximally correlated. The goal is to find pairs of canonical variables ui=aT iXandvi=bT iYfori∈ {1, . . . , min(p, q)}, where ai∈Rpandbi∈Rq, with the correlation ρibetween uiandvibeing maximized. B.1 Problem formulation CCA aims to solve the following optimization
|
https://arxiv.org/abs/2505.21077v1
|
problem: max a,bρ=aTCXYbp aTCXXap bTCY Yb, where CXX, CY Y, CXYare the autocovariance matrices of X,Y, and their cross-covariance, respec- tively. B.2 Canonical Correlation matrix To simplify the problem, a change of basis is applied to normalize the covariance matrices CXXand CY Y. Define: ˜x1=C1/2 XXa,˜y1=C1/2 Y Yb, where the transformed variables ˜x1and˜y1have unit variance. Substituting these into the optimization problem and using the Cauchy-Schwarz inequality, the problem reduces to an eigenvalue problem: ˜x1= arg max ˜x˜xT(C−1 2 XXCXYC−1 Y YCY XC−1 2 XX)˜x ∥˜x∥. The solution involves solving the eigenvalue problem: C−1 2 XXCXYC−1 Y YCY XC−1 2 XX˜x=λ˜x, (9) where the eigenvalues λiare the squared canonical correlations ρ2 i, and the corresponding eigenvectors ˜xidetermine the canonical weights in the whitened space. B.3 Performing CCA using the Singular Value Decomposition (SVD) The eigenvalue problem in (9)can alternatively be solved by performing the full SVD on the matrix A=C−1 2 XXCXYC−1 2 Y Y=UΣVT, where U∈Rp×pandV∈Rq×qare orthogonal matrices, and Σ∈Rp×qis a diagonal matrix containing the singular values. From this decomposition, the relationship AAT=UΣ2UTis obtained. Then, the matrix in the eigenvalue problem in (9), given by C−1 2 XXCXYC−1 Y YCY XC−1 2 XXis symmetric and therefore orthogonally diagonalizable as: C−1 2 XXCXYC−1 Y YCY XC−1 2 XX=QΛQT, where Q∈Rp×pis an orthogonal matrix and Λ∈Rp×pis a diagonal matrix containing the eigenvalues. Noting that AAT=C−1 2 XXCXYC−1 Y YCY XC−1 2 XX, we can equate: QΛQT=UΣ2UT. Since QandUare orthogonal matrices and diagonalize the same symmetric matrix, it follows that Λ = Σ2. Thus, performing SVD on C−1 2 XXCXYC−1 2 Y Yprovides the canonical correlations ρ1, ρ2, . . .as the singular values of Σ, and the results for CCA can be summarized as follows: •U∈Rp×p: Orthogonal matrix containing the left singular vectors, which correspond to the canonical weights for X, 19 •V∈Rq×q: Orthogonal matrix containing the right singular vectors, which correspond to the canonical weights for Y, •Σ∈Rp×q: Diagonal matrix containing the canonical correlations ρ1, ρ2, . . . ρ min(p,q), which are the square roots of the eigenvalues λiinΛ. Then, the canonical variables can also be computed as: ui=aT iX= (C−1/2 XXUi)TX, (10) vi=bT iY= (C−1/2 Y YVi)TY, (11) where UiandViare the i-th columns of UandV, respectively. Furthermore, the singular values in Σ are the canonical correlations ρi. Ifρi= 1(ρi= 0), the i-th canonical pair is perfectly correlated (uncorrelated). C Proof of Theorem 3.2 Proof. First, if we put the description of the linear estimator ˆYas in (1)into the MSE formula in (2) we have: MSE(Y,ˆY) =Eh ∥Y−ˆY∥2 2i . =Tr Eh (Y−ˆY)(Y−ˆY)Ti =Tr E (Y−WX−b)(Y−WX−b)T =Tr E (Y−WX−E[Y] +WE[X])(Y−WX−E[Y] +WE[X])T =Tr E ((Y−E[Y])−W(X−E[X]))(Y−E[Y])−W(X−E[X])T , where Tr(·)is the trace operator defined on matrices. Then, opening the outer-product, we end up with the following expression: MSE(Y,ˆY) =Tr E (Y−E[Y])(Y−E[Y])T −E (Y−E[Y])(X−E[X])TWT −E W(X−E[X])(Y−E[Y])T +E W(X−E[X])(X−E[X])TWT . As the weight matrices Ware deterministic, we can take them out of the expectation, and simplify the covariance expressions using C, and using the fact CT XY=CY Xin the second line below: MSE(Y,ˆY) =Tr CY Y−CY XWT−WC XY+WC XXWT =Tr CY Y−CY X(CY XC−1 XX)T−CY XC−1 XXCXY+CY
|
https://arxiv.org/abs/2505.21077v1
|
XC−1 XXCXX(CY XC−1 XX)T =Tr CY Y−CY XC−1 XXCXY−CY XC−1 XXCXY+CY XC−1 XXCXXC−1 XXCXY =Tr CY Y−CY XC−1 XXCXY−(((((((CY XC−1 XXCXY+(((((((CY XC−1 XXCXY =Tr CY Y−CY XC−1 XXCXY . (12) Then, by using the matrix square root of CY Y, and using the fact that for A∈RK×LandB∈RL×K, Tr(AB) =Tr(BA), we can further simplify the expression: MSE(Y,ˆY) =Tr CY Y−CY XC−1 XXCXY =Trh C1 2 Y Yh I−C−1 2 Y YCY XC−1 XXCXYC−1 2 Y Yi C1 2 Y Yi =Trhh I−C−1 2 Y YCY XC−1 XXCXYC−1 2 Y Yi CY Yi =Trhh I−C−1 2 Y YCY XC−1 2 XXC−1 2 XXCXYC−1 2 Y Yi CY Yi =Tr I−C−1 2 Y YCY XC−1 2 XX C−1 2 Y YCY XC−1 2 XXT CY Y 20 Here, the canonical correlation matrix is corr(Y, X) =C−1 2 Y YCY XC−1 2 XX. Then if we apply full Singular Value Decomposition on this matrix as UΣVT=corr(Y, X)such that U∈Rhout×hout, Σ∈Rhout×hinandV∈Rhin×hin. Also, using the fact that UandVare orthogonal matrices such that UUT=UTU=IandV VT=VTV=I, we have: MSE(Y,ˆY) =Trhh I−UΣVT UΣVTTi CY Yi =Tr I−UΣΣTUT CY Y =Tr UUT−UΣΣTUT CY Y =Tr U I−ΣΣT UTCY Y =Tr I−ΣΣT UTCY YU . We have ΣΣT=diag(ρ2 1, ρ2 2, . . . , ρ2 hout)where ρi= 0 fori > r =min(hout, hin). Then, we can write the MSE expression as: MSE(Y,ˆY) =Tr I−diag(ρ2 1, ρ2 2, . . . , ρ2 hout) UTCY YU =houtX i(1−ρ2 i)(UTCY YU)ii. Then the product (UTCY YU)iican be written as the following by using the symmetric ma- trix singular value decomposition of the covariance matrix CY Y=UYΣYUT YandΣY= diag(σY,1, σY,2, . . . , σ Y,h out): (UTCY YU)ii=X k(UTUYΣY)ik(UT YU)ki =X k((UTUY)ikσY,k)(UTUY)ik =X kσY,k(UTUY)2 ik Then, we have the MSE expression as MSE(Y,ˆY) =X i(1−ρ2 i)(UTCY YU)ii =X i(1−ρ2 i)X kσY,k(UTUY)2 ik ≤X i(1−ρ2 i)X kσY,k =X i(1−ρ2 i)Tr(CY Y), where the inequality arises from the fact that UTUYis a product of orthogonal matrices, ensuring that its elements have magnitudes less than or equal to one. Also, the last equality comes by the fact that the sum of the singular values of a matrix is equal to the trace of the same matrix. Then we can write the bound, as the MSE normalized by the trace of the auto-correlation NMSE (Y,ˆY) =MSE(Y,ˆY) Tr(CY Y)≤houtX i=1(1−ρ2 i) = (hout−r) +rX i=1(1−ρ2 i), such that r= min( hout, hin)using the fact ρi= 0fori > r . 21 D Algorithmic details for NBL Algorithm 2 Pseudocode for NBL Weight/Bias and CCA Bound Computation for a given Self- Attention Layer 1:Input: X(attention layer input), Y(attention layer output). 2:Output: Weight matrix Wand bias b, CCA-bound. 3:Y+←Y+X {Residual connection output.} 4: 5:Mean Computations 6:E[X]←Mean( X) 7:E[Y]←Mean( Y) 8:E[Y+]←Mean( Y+) 9: 10:Cross-Covariance 11:CY X←Cov(Y, X){Unbiased estimator for cross-covariance.} 12:CY+X←Cov(Y+, X) 13: 14:Covariance Matrices 15:CXX←Cov(X, X ){Unbiased estimator for auto-covariance.} 16:CY+Y+←Cov(Y+, Y+) 17: 18:Eigen-Decomposition 19:(λY+Y+, VY+Y+)←Eigh( CY+Y+){Symmetric matrix eigenvalue decomposition.} 20:(λXX, VXX)←Eigh( CXX) 21: 22:Inverse Square Roots 23:C−1/2 Y+Y+←VY Ydiag λ−1/2 Y+Y+ VT Y+Y+{Matrix square roots ( CY+Y+=C1/2 Y+Y+C1/2 Y+Y+)} 24:C−1/2 XX←VXXdiag λ−1/2 XX VT XX 25: 26:Correlation Matrix 27:CW←C−1/2 Y+Y+CY+XC−1/2 XX
|
https://arxiv.org/abs/2505.21077v1
|
{Calculate the standardized cross-correlation matrix.} 28: 29:Singular Value Decomposition (SVD) 30:(U, S, V )←SVD(C W){Perform SVD to compute the CCA singular values.} 31:(ρ1, ρ2, . . . , ρ r)←diag(S){ris the embedding dimension.} 32:CCA-bound ←Pr i=1(1−ρ2 i){Calculate CCA Bound based on Theorem-(3.2).} 33: 34:Weight Matrix and Bias 35:W←CY XC−1 XX {NBL weight.} 36:b←E[Y]−WE[X]{NBL bias.} Algorithm 2 provides a detailed procedure for NBL parameter estimation and CCA-bound computa- tion. This approach approximates self-attention layers by analytically deriving linear weights and biases while quantifying redundancy through Canonical Correlation Analysis (CCA). It takes the input Xand output Yof any given self-attention layer. To capture the full behavior of the outputs, the residual connection is added as Y+=Y+X. While the CCA-bound is calculated based on the output of the residual connection, the linearization weights and biases are computed directly from the attention output. As a result, the residual connection is retained in the compressed layer. Although it is possible to calculate the weights directly from the residual connection output and remove the residual connection in the compressed model, its inclusion adds minimal complexity to the model and helps preserve the input information more effectively. First, the algorithm calculates E[X],E[Y]andE[Y+], then unbiased estimates of cross-covariances matrices, CY XandCY+X, and also auto-covariances, CXXandCY+Y+. Further, based on these covariance matrices, eigen-decomposition is applied, which gives eigenvalues and eigenvectors for calculating inverse square roots of covariance matrices C−1/2 Y+Y+andC−1/2 XX . The CCA standardized 22 cross-correlation matrix ( CW) is then computed using the cross-covariance matrix CY+Xand with the inverse square roots of both input and output covariance matrices. Thereafter, ρi, the CCA singular values from SVD over CWare computed. The CCA-bound quantifies the redundancy present in the attention layer by computing the total of 1−ρ2 isummed over all dimensions of the embeddings. At last, the weight matrix and bias of the NBL are computed. The weight matrix is computed from the cross-covariance ( CY X) and inverse ( CXX), whereas the bias can be induced from the expression (b=E[Y]−WE[X]). This method realizes not only a linear approximation of the attention layer but also a rigorous quantification of the redundancy, hence this turns out to be a strong approach toward the compression and optimization of layers. D.1 Computational cost of the calibration The proposed algorithm for NBL parameter estimation and CCA-bound computation is designed to be computationally efficient, making it feasible for large-scale models such as Mistral-7B , Llama-3.1-8B andLlama-3.1-8B . The primary computational steps involve mean and covariance computation, symmetric matrix eigen-decomposition, and singular value decomposition (SVD). These operations dominate the overall runtime complexity, with eigen-decomposition and SVD scaling as O(d3), where dis the embedding dimension of the attention layers. The algorithm first computes the mean of input and output activations, requiring O(s·t·d)operations, where sis the number of sequences and tis the context length. Covariance matrices, including cross-covariances, are estimated with a complexity of O(s·t·d2). The eigen-decomposition of covariance matrices and the SVD of the standardized correlation matrix both scale as O(d3). Matrix inversion for computing the weight matrix also has a complexity of O(d3), while bias computation is
|
https://arxiv.org/abs/2505.21077v1
|
relatively inexpensive at O(d2). Overall, the total complexity of the algorithm is O(d3+s·t·d2). The method is efficient due to optimized linear algebra routines and direct computations. Additionally, the algorithm requires no hyperparameter tuning or fine-tuning beyond specifying the number of calibration samples. D.2 Calibration runtime The main computational overhead in the calibration process arises from Algorithm 2, which performs Canonical Correlation Analysis (CCA) and computes the associated linear parameters. To empirically assess the runtime, we run the algorithm using float32 precision on randomly initialized matrices that simulate the attention representations of different model scales in the Llama-3.1 family, as presented in Tables1 and 7. Each experiment used 256 calibration samples with a context length of 2048. Since the calibration is applied independently to each attention layer, the total runtime for a model can be estimated by multiplying the per-layer runtime by the number of layers. For the GPU-based implementation, the tensors required for the Llama-3.1-405B scale—with the specified precision, sample size, and context length—do not fit within the memory of a single NVIDIA A100 (80 GB). Therefore, we report GPU results in Tables 1 and 7 using two A100 GPUs running in parallel across all three model scales, whereas all other runtimes in this paper are calculated with a single GPU. As shown in these analyses, although calibration can be computationally demanding at large scales on CPU, it remains tractable with GPU acceleration and becomes more efficient when the cost is amortized across layers and repeated use cases. In our implementation, compressing the Llama-3.1-8B andMistral-7B models—each containing 32 attention layers takes under 30 minutes using an NVIDIA A100 GPU (80 GB) for model inference and activation extraction, computing CCA bounds and NBL weights and biases. This runtime includes all necessary steps, including covariance estimation, SVD, and linear parameter computation. These results demonstrate the practicality of our method for compressing large-scale models with minimal overhead. Moreover, the implementation can be further accelerated through additional optimization and system-level improvements. 23 Table 7: Calibration Runtime for Different Model Scales (256 samples, 2048 context length) Model Llama-8B Llama-70B Llama-405B Hidden size ( d) 4092 8192 16384 Hidden layers 32 80 126 Runtime per layer (CPU) 62.16 sec 324.79 sec 1839.64 sec Total (CPU) ∼0.55 hr ∼7.22 hr ∼64.38 hr Runtime per layer (GPU) 26.04 sec 78.91 sec 371.79 sec Total (GPU) ∼0.24 hr ∼1.75 hr ∼13.01 hr D.3 Implementation details The algorithm is implemented using PyTorch [Paszke et al., 2019] and SciPy [Virtanen et al., 2020] for tensor operations, linear algebra routines, and eigen-decomposition, and HuggingFace Transformers [Wolf, 2019] for loading and managing the pretrained models. These libraries provide optimized and robust functionality for large-scale matrix operations, enabling efficient computation on standard hardware. Additionally, the implementation builds on the code repository provided by He et al. [2024]1. After passing the calibration data through the model and computing the activations, typically on the GPU, the computations in Algorithm 2 can be performed on either the CPU or GPU, depending on the available compute resources. We provide efficient implementations for both options. Running the algorithm on the CPU
|
https://arxiv.org/abs/2505.21077v1
|
helps reduce GPU memory overhead and ensures broader hardware compatibility, while using the GPU offers faster runtime for large-scale models. PyTorch and SciPy ’s linear algebra routines ensure efficient execution of the covariance, eigen-decomposition, and SVD steps efficiently on either backend, enabling smooth integration with diverse workflows. E Experiments E.1 Additional experiments with DeepSeek-R1-Distill-Llama-8B We further evaluate our approach on the recently released DeepSeek-R1-Distill-Llama-8B model [Guo et al., 2025], comparing the performance of NBL and DROP in the context of linearizing versus removing self-attention layers. Table 8 reports additional compression ratios beyond those presented in Table 4. Table 8: Performance of DeepSeek-R1-Distill-Llama-8B across various methods and reasoning benchmarks. Prefill and throughput speeds are reported relative to the baseline original model. Method ARC-e ARC-c BoolQ HellaSwag MMLU OBQA PIQA Wino- Avg. (↑)Prefill Through- (norm) (norm) (norm) (5-shot) (norm) (norm) Grande (↑) put (↑) Baseline 66.1 42.3 82.9 74.3 55.7 41.8 77.8 67.7 63.6 1 1 Attn DROP-4 66.4 43.9 84.0 73.9 55.6 42.2 76.9 68.7 63.9 1.08 1.09 Attn DROP-8 66.9 43.5 83.6 73.0 55.8 40.6 76.0 68.3 63.5 1.16 1.19 Attn NBL-4 65.8 43.6 83.5 73.9 55.9 42.0 76.7 67.5 63.6 1.08 1.08 Attn NBL-8 65.5 41.7 83.8 72.3 55.2 43.8 76.3 66.8 63.2 1.15 1.19 1https://github.com/CASE-Lab-UMD/LLM-Drop 24 E.2 Combining NBL with speculative decoding for faster inference We evaluate the integration of speculative decoding and Neural Block Linearization (NBL) using the DeepSeek-R1-Distill-Llama-8B model [Guo et al., 2025]. Experiments are conducted on the MT-bench [Zheng et al., 2023] benchmark using an NVIDIA A100 GPU. For speculative decoding, we adopt the EAGLE-3 method [Li et al., 2025], which extends the traditional draft-and-verify framework by enabling the generation of multiple tokens per iteration with a low-cost draft model. In each decoding step, the draft model proposes a sequence of candidate tokens, which are then passed to a verifier model. The verifier checks these tokens in parallel, accepting the longest prefix that aligns with its own output. This process amortizes the cost of expensive autoregressive generation and significantly accelerates decoding. To evaluate compatibility with NBL, we replace the standard verifier model in EAGLE-3 with its NBL-compressed counterpart. Since NBL preserves the input-output behavior of selected attention layers through linear approximations, the verifier remains functionally consistent while being more efficient. This substitution does not require changes to the speculative decoding logic or generation pipeline. For the draft model, we use the publicly available weights from the EAGLE repository2. Our results show that combining NBL with EAGLE-3 leads to compounding speed-ups, achieving up to 4.07 ×acceleration (see Table 6). This demonstrates that NBL is not only orthogonal to decoding- level acceleration strategies but can also enhance them with negligible integration overhead. These findings highlight NBL’s practicality for real-world deployment scenarios, where both model size and inference latency are critical. E.3 Performance - efficiency 2D plots We have included performance-efficiency plots with the pooled standard error (SE) intervals, where we visualize accuracy against both throughput and KV-cache gains, for the two best performing methods, Attention NBL and Attention DROP. The plots for Llama-3.1-8B ,Mistral-7B , DeepSeek-Distill- Llama-8B and Llama-3.1-8B
|
https://arxiv.org/abs/2505.21077v1
|
can be found in Figure 4. To evaluate the statistical significance of the differences, we used the pooled standard error (Pooled_SE) to aggregate the uncertainty across tasks when reporting average performance. The Pooled_SE is computed as: Pooled_SE =1 nvuutnX i=1SE2 i, where SEidenotes the standard error of task i, and nis the total number of tasks. We have n=8 benchmarks similar to the setting in Tables 2, 3, 4. The plots 4a, 4b, 4c indicate that, particularly at higher compression rates, NBL demonstrates a more favorable trade-off between KV-cache usage, accuracy, and throughput , suggesting statistically significant Pareto optimality. 2https://github.com/SafeAILab/EAGLE 25 (a) (b) (c) Figure 4: Accuracy versus (left) KV cache savings (right) throughput speed-up across vary- ing compression levels for NBL and DROP, on (a) Mistral-7B (b)Llama-3.1-8B (c) DeepSeek-R1-Distill-Llama-8B models with the corresponding Standard Error intervals. 26 E.4 Accuracy and speed-up comparisons with intervals In this section, we present Tables 9, 10, 11, and 12 with the average accuracy, prefill and throughput speed-ups with standard error and standard deviation intervals for our Attention NBL method and the baseline Attention DROP, corresponding to the results in the main text. Table 9: Mistral-7B : Accuracy and Speedup (Table 2). Method Avg. Acc. ± SE ( ↑) Prefill Speed-up ( ↑) Throughput Speed-up ( ↑) Base 70.1 ± 0.41 1 1 Drop-4 70.0 ± 0.41 1.08 ± 0.03 1.10 ± 0.02 Drop-8 69.4 ± 0.41 1.18 ± 0.05 1.22 ± 0.01 Drop-12 66.5 ± 0.42 1.29 ± 0.04 1.29 ± 0.03 Drop-16 52.9 ± 0.42 1.44 ± 0.05 1.41 ± 0.01 NBL-4 70.1 ± 0.41 1.08 ± 0.04 1.10 ± 0.01 NBL-8 70.0 ± 0.41 1.17 ± 0.03 1.20 ± 0.02 NBL-12 68.3 ± 0.42 1.28 ± 0.05 1.27 ± 0.01 NBL-16 58.8 ± 0.42 1.40 ± 0.04 1.37 ± 0.02 Table 10: Llama-3.1-8B : Accuracy and Speedup (Table 3). Method Avg. Acc. ± SE ( ↑) Prefill Speed-up ( ↑) Throughput Speed-up ( ↑) Base 70.1 ± 0.41 1 1 Drop-4 70.2 ± 0.41 1.08 ± 0.035 1.11 ± 0.02 Drop-8 69.8 ± 0.41 1.18 ± 0.048 1.19 ± 0.01 Drop-12 67.9 ± 0.42 1.27 ± 0.033 1.29 ± 0.03 Drop-16 55.2 ± 0.43 1.43 ± 0.025 1.42 ± 0.01 NBL-4 70.2 ± 0.41 1.08 ± 0.040 1.11 ± 0.01 NBL-8 69.8 ± 0.42 1.16 ± 0.031 1.17 ± 0.03 NBL-12 69.1 ± 0.42 1.24 ± 0.027 1.25 ± 0.02 NBL-16 62.5 ± 0.42 1.39 ± 0.043 1.37 ± 0.01 Table 11: DeepSeek-R1-Distill-Llama-8B : Accuracy and Speedup (Tables 4, 8). Method Avg. Acc. ± SE ( ↑) Prefill Speed-up ( ↑) Throughput Speed-up ( ↑) Base 63.6 ± 0.42 1 1 Drop-4 63.9 ± 0.42 1.08 ± 0.04 1.09 ± 0.01 Drop-8 63.5 ± 0.42 1.16 ± 0.04 1.19 ± 0.01 Drop-12 60.7 ± 0.42 1.28 ± 0.05 1.34 ± 0.02 Drop-16 50.2 ± 0.42 1.37 ± 0.04 1.47 ± 0.01 NBL-4 63.6 ± 0.42 1.08 ± 0.04 1.08 ± 0.02 NBL-8 63.4 ± 0.41 1.15 ± 0.04 1.19 ± 0.03 NBL-12 62.4 ± 0.42 1.24 ± 0.05 1.32 ± 0.02 NBL-16 56.8
|
https://arxiv.org/abs/2505.21077v1
|
± 0.42 1.35 ± 0.05 1.42 ± 0.01 Table 12: Llama-3.1-70B (quant.): Accuracy and Speedup (Table 5). Method Avg. Acc. ± SE ( ↑) Prefill Speed-up ( ↑) Throughput Speed-up ( ↑) Base 76.0 ± 0.40 1 1 Drop-32 75.7 ± 0.40 1.20 ± 0.033 1.30 ± 0.01 Drop-48 70.4 ± 0.40 1.29 ± 0.052 1.42 ± 0.02 Drop-54 48.3 ± 0.42 1.34 ± 0.059 1.53 ± 0.02 NBL-32 76.0 ± 0.41 1.16 ± 0.039 1.24 ± 0.03 NBL-48 71.3 ± 0.41 1.25 ± 0.044 1.35 ± 0.01 NBL-54 65.4 ± 0.43 1.30 ± 0.021 1.44 ± 0.02 27 E.5 Qualitative analysis of LLM outputs after NBL Here we present the output of Llama-3-8B-Instruct3model on a sample from the test set of GSM8K [Cobbe et al., 2021] as the input in Table 13. Here, the NBL gives the correct answer until 12 layer modification, whereas DROP fails to give the correct answer in similar compression. Table 13: Correct answers are marked with the symbol " ✓", and incorrect ones with " ✗". All outputs are in response to a math word problem; the correct answer is extracted with the token “####”. Question Mark has a garden with flowers. He planted plants of three different colors in it. Ten of them are yellow, and there are 80% more of those in purple. There are only 25% as many green flowers as there are yellow and purple flowers. How many flowers does Mark have in his garden? Reference Answer There are 80/100 * 10 = «80/100*10=8»8 more purple flowers than yellow flowers. \n So in Mark’s garden, there are 10 + 8 = «10+8=18»18 purple flowers. \n Purple and yellow flowers sum up to 10 + 18 = «10+18=28»28 flowers. \n That means in Mark’s garden there are 25/100 * 28 = «25/100*28=7»7 green flowers. \n So in total Mark has 28 + 7 = «28+7=35»35 plants in his garden. \n #### 35 Method Output Baseline There are 10 yellow flowers. 80% more of those are purple. So 10 + (10 x 0.8) = 18 purple flowers. 10 + 18 = 28 yellow and purple flowers. 25% of that is 28 x 0.25 = 7 green flowers. 10 + 18 + 7 = 35 flowers. The answer is 35. \n #### 35 ✓ Attn NBL-8 NBL-8 Ans: There are 10 yellow flowers. There are 10 * 1.8 = 18 purple flowers. There are 10 + 18 = 28 yellow and purple flowers. There are 25% as many green as yellow and purple, so there are 0.25 * 28 = 7 green. 7 + 28 = 35. 10 + 18 + 7 = 35. The answer is 35. \n #### 35 ✓ Attn NBL-12 The number of yellow flowers is 10. The number of the other type of flower is 10 * 1.8 = 18. The number of green ones is 10 + 18 = 28, and 0.25 * 28 = 7. So the total number of flowers is 10 + 18 + 7 = 35. \n #### 35 ✓ Attn NBL-16 The number of
|
https://arxiv.org/abs/2505.21077v1
|
the first group (the one with the highest number) is 100% and the other two are 1.5 and 0.75. So the number of the first is 100% and the other two are 1.5 and 0.5. So the total number of the first is 100% and the other two are 1.5 and 1.5. ✗ Attn DROP-8 There are 10 yellow flowers. There are 80% more of those in purple, so there are 10 + 0.8 * 10 = 18 purple flowers. The total number of yellow and purple flowers is 10 + 18 = 28. There are 25% as many green flowers as there are yellow and purple flowers, so there are 0.25 * 28 = 7 green flowers. The total number of flowers is 7 + 28 = 35. \n #### 35 ✓ Attn DROP-12 There are 10 yellow flowers. 80% of 10 is 8, so there are 8 more, so 18 in total. 25% of 18 is 4, so there are 4 green. 10 + 8 + 4 = 22. So there are 22 of the non-green flowers. 4 is the green. So the total is 26. The answer is 26. ✗ Attn DROP-16 We’re given that the number of fish are is the main character, the most important thing is the of the in the problem. The problem states " 1, the " (the phrase "if and if, the " (the phrase "is a, the. The " (the " or " (a, but " (the. The is the " (the, the. (The " of the. The is the. . Let’s analyze the first line, the. 1 1. is and, the. is and, the. is and, and. The is a, the. 1 and, the. is and, the. ✗ E.6 Quantization of Llama-3.1-70B using the A WQ algorithm The Activation-aware Weight Quantization (AWQ) algorithm Lin et al. [2024] is a hardware-friendly method designed for low-bit weight-only quantization of large language models (LLMs). AWQ focuses on preserving model performance while significantly reducing memory and computation costs. Below, we outline the mathematical framework and process to quantize the Llama-3.1-8B . 3https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct 28 Weight quantization Quantization maps floating-point weights Wto low-bit integers Q(W). ForN-bit quantization, the function defined as: Q(W) = ∆ ·RoundW ∆ ,∆ =max(|W|) 2N−1, where ∆is the quantization scale, and Round (·)maps values to the nearest integers. Preserving salient weights A key observation is that not all weights contribute equally to model performance. AWQ identifies the most important weights, termed salient weights , using activation magnitudes. Channels corresponding to large activation values are more likely to process critical features. Instead of mixed-precision quantization, AWQ applies a scaling factor s >1to these weights to reduce relative quantization error. For a weight W, and input vectOR X, the scaled quantization process is: Q(W·s)·X s= ∆′or·RoundW·s ∆′ ·X s, where ∆′is the updated quantization scale. The ratio of quantization errors before and after scaling is given by: Error Ratio =∆′ ∆·1 s≈1 s,for∆′≈∆. Thus, scaling effectively reduces the relative error for salient weights. Optimal scaling factor search The AWQ algorithm optimizes the scaling factors sby
|
https://arxiv.org/abs/2505.21077v1
|
minimizing the output difference between quantized and original weights: L(s) = Q(W·diag(s)) diag(s)−1·X −WX , where sis a per-channel scaling vector, and Xrepresents cached activation features from a small calibration dataset. To simplify the search, AWQ uses the average activation magnitude per channel, sX, as a proxy for saliency: s=sα X, α∗= arg min αL(s). A WQ and NBL The AWQ algorithm’s per-channel scaling and activation-aware strategy enable efficient compression without requiring retraining or large calibration datasets. AWQ is also designed to support layer fusion to optimize hardware efficiency. By fusing linear layers with adjacent operations, AWQ reduces intermediate memory access and kernel overhead, enhancing its performance even further. However, in the implementation used for the Llama-3.1-70B quantization in the NBL framework, we opted to use the unfused version for simplicity and flexibility during experimentation. Additionally, the linear weights calculated by NBL were also quantized by the AWQ method for compatibility. The speed-up measurements were done by generating 2048 tokens with a context length of 2048, and a batch size of 1, similar to the experiments with smaller models. We used the publicly available AutoAWQ4 repository for the implementation of the AWQ on top of NBL. Future work can address these limitations by integrating quantization-aware techniques directly into the NBL pipeline. For instance, a fully quantized NBL framework could adopt fine-grained scaling for linear layers while retaining the ability to fuse operations dynamically, thereby combining flexibility with optimal hardware utilization. Such advancements could further reduce memory overhead while maintaining model accuracy. 4https://github.com/casper-hansen/AutoAWQ 29 F Ablation studies F.1 Dependency on the calibration dataset Tables 14 and 15 present validation results on C4 and WikiText-2 for pruned or linearized Llama-3.1-8B andMistral-7B models, comparing SLEB, Attn DROP, Attn NBL (each applied to 8 attention layers), and SliceGPT with 25% sparsity. These results are included to assess each method’s dependency on the choice of calibration data used to prune or linearize LLMs. Table 14: Perplexity Results on C4. Calibration ( D): Wiki-2 C4 Wiki-2 C4 Method Llama-3.1-8B Mistral-7B SliceGPT-25% 123.29 25.41 28.28 13.07 SLEB-8 20.53 20.54 15.07 15.49 Attn DROP-8 11.40 11.35 9.20 9.20 Attn NBL-8 12.22 11.37 9.85 9.08Table 15: Perplexity Results on WikiText-2. Calibration ( D): Wiki-2 C4 Wiki-2 C4 Method Llama-3.1-8B Mistral-7B SliceGPT-25% 14.27 74.33 7.61 12.11 SLEB-8 16.70 16.67 10.23 11.13 Attn DROP-8 7.39 7.48 6.81 6.81 Attn NBL-8 7.18 9.57 5.59 7.33 F.2 LoRA fine-tuning of NBL-linearized layers We further investigate the potential of fine-tuning the linear layers derived by our NBL method using the Low-Rank Adaptation (LoRA) framework [Hu et al., 2022]. Specifically, we apply LoRA to the linearized layers on the same C4 dataset used for calculating the NBL parameters. This dataset closely resembles the original pretraining data, ensuring consistency in domain and distribution. Our experiments use the DeepSeek-R1-Distill-Llama-8B model, where 12 and 16 attention layers are replaced with their NBL-linearized counterparts (denoted as Attn NBL-12 and Attn NBL- 16). We then fine-tune these linearized layers using LoRA with a rank of 32, α= 64 , and a dropout of 0.1. Training is performed in bfloat16 for 3 epochs
|
https://arxiv.org/abs/2505.21077v1
|
with a learning rate of 1e−4, an effective batch size of 16, and context length of 1024 tokens using a 5000-sample subset of the C4 validation split under a causal language modeling objective. To further assess generalization beyond the calibration data, we also apply LoRA fine-tuning using the SlimPajama [Soboleva et al., 2023] dataset—a large-scale corpus with different statistical properties than C4. This setup allows us to explore LoRA’s effectiveness under a domain-mismatched regime. These results in Table 16 suggest that LoRA-based refinement offers only marginal improvements over NBL alone—e.g., 62.5%vs.62.4%for NBL-12 and 58.2%vs.56.8%for NBL-16, indicating that most of the performance gains stem from the underlying NBL mechanism itself. Furthermore, the consistent behavior across both matched and mismatched data regimes highlights the robustness and generality of NBL, where it provides competitive reasoning performance even without additional parameter-efficient fine-tuning, making it a lightweight and effective tool for compressing large pretrained language models. Table 16: Average reasoning accuracy of NBL models before and after LoRA fine-tuning on C4 and SlimPajama pretraining datasets. Model Variant Average Accuracy (%) Baseline 63.6±0.42 NBL-12 62.4±0.42 NBL-12 + LoRA (C4) 62.5±0.42 NBL-12 + LoRA (SlimPajama) 62.6±0.42 NBL-16 56.8±0.42 NBL-16 + LoRA (C4) 58.2±0.41 NBL-16 + LoRA (SlimPajama) 58.1±0.43 30 F.3 CCA-Bound criterion vs cosine distance criterion In this section, we analyze the performance of NBL using two different layer selection criteria: the CCA-bound based criterion presented in Theorem 3.2, and the cosine distance criterion, the latter being originally employed in the DROP method of He et al. [2024]. The results for the cosine distance criterion are presented in Tables 17 and 18, showcasing the performance of NBL on the Mistral-7B andLlama-3.1-8B models across multiple reasoning benchmarks, where the last column (named "CCA Avg.") shows the averages for the CCA-bound based results taken from Tables 2 and 3. From Table 17, we observe that for the Mistral-7B model, the cosine distance criterion performs reasonably well at smaller layer intervals (e.g., Attn NBL-4 and Attn NBL-8). However, as the layer interval increases (e.g., Attn NBL-12), there is a noticeable decline in both the average performance and the CCA score, highlighting a limitation of the cosine distance criterion in maintaining accuracy and stability when approximating transformations over broader sections of the model. Similarly, Table 18 shows the performance of the cosine distance criterion for the Llama-3.1-8B model. Its performance declines for larger intervals ( Attn NBL-16). For instance, the average performance decreases from 70.2 for Attn NBL-4 to 58.0 for Attn NBL-16, accompanied by a drop in the corresponding CCA scores. These observations indicate that the cosine distance criterion shows less consistency in capturing transformations over wider layer intervals. In contrast, the CCA-bound-based criterion exhibits greater stability and reliability, making it more suitable for tasks that require robust linear approximations across deep network layers. Table 17: Performance of Mistral-7B with NBL using cosine distance criterion. Method ARC-e ARC-c BoolQ HellaSwag MMLU OBQA PIQA Wino- Avg. (↑) CCA (norm) (norm) (norm) (5-shot) (norm) (norm) Grande Avg. (↑) Attn NBL-4 80.0 53.7 83.4 80.6 62.4 44.6 81.9 74.0 70.1 70.1 Attn NBL-8 80.4 52.8
|
https://arxiv.org/abs/2505.21077v1
|
83.6 79.8 62.1 44.0 81.8 74.4 69.9 70.0 Attn NBL-12 76.6 49.1 83.5 76.5 60.4 42.6 79.8 73.3 67.7 68.3 Attn NBL-16 62.4 42.8 76.7 69.4 33.0 39.8 76.4 70.2 58.8 58.8 Table 18: Performance of Llama-3.1-8B with NBL using cosine distance criterion. Method ARC-e ARC-c BoolQ HellaSwag MMLU OBQA PIQA Wino- Avg. (↑) CCA (norm) (norm) (norm) (5-shot) (norm) (norm) Grande Avg. (↑) Attn NBL-4 81.9 54.0 82.2 78.1 65.0 45.8 81.1 73.4 70.2 70.2 Attn NBL-8 81.5 53.7 82.1 77.2 64.0 45.4 81.1 73.3 70.0 69.8 Attn NBL-12 79.1 52.2 82.3 75.2 64.8 45.2 79.9 74.0 69.0 69.1 Attn NBL-16 71.8 46.8 81.6 69.0 39.1 41.8 77.0 73.1 58.0 62.5 F.4 Greedy selection In this section, we perform an ablation with greedy selection, which iteratively checks for the differences in the bound scores and compresses the model incrementally in multiple iterations. Results are shown in Table 19, where the last column shows our NBL results from main body. Our results show that our NBL with CCA criterion outperforms greedy selection. Greedy linearization alters the activation distribution leading to inconsistencies in layer ranking. We also noticed that greedy selection chooses earlier layers, which are usually more less amenable to linearization, as shown in the main body. Table 19: Performance of Mistral-7B with greedy selection. Method ARC-e ARC-c BoolQ HellaSwag MMLU OBQA PIQA Wino- Greedy Ours (NBL) (norm) (norm) (norm) (5-shot) (norm) (norm) Grande Avg. (↑) Avg. (↑) Greedy-4 79.9 53.4 83.8 80.6 62.4 44.2 81.9 73.9 70.0 70.1 Greedy-8 79.3 51.7 82.7 79.1 62.2 43.6 81.0 73.6 69.2 70.0 Greedy-12 69.2 46.1 81.0 74.1 46.3 40.2 79.2 72.9 63.6 68.3 Greedy-16 60.6 40.8 76.1 68.5 34.3 36.8 67.8 70.2 57.7 58.8 31 G Selected Transformer layers Table 20 presents the sorted importance rankings of attention layers selected by the Attn DROP and Attn NBL methods across various models and calibration datasets. In all configurations, the methods consistently prioritize dropping or linearizing the higher-indexed layers, particularly those near the end of the model, suggesting that the later layers may be more redundant or compressible in terms of their contribution to overall performance. This pattern is prominent in all models, where the top-ranked layers for pruning are overwhelmingly concentrated toward the final half of the network. Table 20: Sorted attention layer rankings selected by Attn DROP and Attn NBL across different models and calibration datasets. Lower ranks indicate higher importance. Method & Model Sorted Layer IDs (Most to Least Important) Attn DROP – Mistral-7B (C4)25, 26, 27, 24, 22, 28, 23, 30, 31, 21, 29, 20, 19, 18, 17, 16 6, 8, 9, 12, 14, 13, 11, 15, 4, 10, 7, 5, 3, 0, 2, 1 Attn DROP – Llama-3.1-8B (C4)24, 25, 22, 23, 27, 26, 20, 28, 19, 29, 21, 18, 30, 17, 16, 15 31, 11, 10, 14, 13, 12, 8, 5, 6, 9, 4, 7, 3, 2, 1, 0 Attn DROP – DS-Distill-Llama-8B (C4)24, 23, 22, 25, 26, 20, 27, 19, 28, 18, 21, 29, 17, 30, 16, 15 31, 11, 14, 10, 5, 13, 4, 9, 2, 8, 6,
|
https://arxiv.org/abs/2505.21077v1
|
3, 12, 7, 1, 0 Attn DROP – Llama-3-8B-Instruct (C4)25, 24, 22, 23, 26, 28, 27, 20, 19, 21, 29, 18, 30, 17, 16, 31 15, 11, 14, 10, 13, 8, 12, 5, 9, 6, 4, 2, 3, 7, 1, 0 Attn DROP – Llama-3.1-70B (quant.) (C4)62, 65, 59, 61, 63, 46, 50, 58, 48, 51, 53, 54, 57, 60, 64, 66 67, 68, 49, 69, 55, 56, 47, 44, 52, 42, 45, 70, 43, 40, 71, 41 72, 77, 79, 73, 76, 74, 78, 39, 38, 75, 36, 6, 25, 32, 7, 11 22, 37, 23, 26, 3, 24, 5, 30, 28, 29, 35, 33, 21, 4, 20, 10 2, 34, 31, 27, 8, 12, 9, 14, 18, 15, 1, 16, 19, 17, 13, 0 Attn DROP – Llama-3.1-8B (WikiText-2)23, 24, 22, 25, 20, 19, 28, 26, 27, 21, 18, 29, 30, 17, 16, 15 31, 11, 0, 14, 13, 12, 9, 5, 8, 6, 4, 7, 3, 2, 1, 0 Attn DROP – Mistral-7B (WikiText-2)22, 27, 23, 24, 25, 21, 26, 28, 20, 29, 30, 19, 18, 17, 16, 31 14, 15, 13, 9, 12, 8, 6, 10, 5, 11, 4, 7, 3, 2, 1, 0 Attn NBL – Mistral-7B (C4)25, 26, 27, 24, 23, 22, 28, 30, 31, 29, 21, 20, 19, 18, 17, 16 6, 8, 9, 12, 14, 13, 11, 15, 4, 10, 7, 5, 3, 0, 2, 1 Attn NBL – Llama-3.1-8B (C4)25, 26, 29, 28, 27, 24, 31, 23, 22, 20, 21, 19, 30, 18, 17, 16 11, 10, 15, 5, 14, 4, 12, 6, 9, 8, 13, 7, 2, 3, 0, 1 Attn NBL – DS-Distill-Llama-8B (C4)25, 26, 29, 27, 24, 20, 23, 28, 22, 19, 21, 31, 30, 18, 17, 16 15, 11, 14, 10, 4, 5, 13, 9, 12, 6, 8, 7, 2, 3, 0, 1 Attn NBL – Llama-3-8B-Instruct (C4)25, 24, 22, 23, 26, 28, 27, 20, 19, 21, 29, 18, 30, 17, 16, 31 15, 11, 14, 10, 13, 8, 12, 5, 9, 6, 4, 2, 3, 7, 1, 0 Attn NBL – Llama-3.1-70B (quant.) (C4)62, 65, 59, 66, 63, 61, 46, 58, 55, 54, 50, 48, 53, 51, 57, 60 67, 69, 64, 68, 49, 47, 79, 70, 56, 52, 45, 72, 43, 44, 71, 42 76, 73, 74, 40, 78, 77, 41, 75, 38, 39, 26, 25, 24, 32, 36, 7 12, 28, 11, 4, 37, 30, 22, 23, 29, 9, 8, 20, 6, 27, 10, 33 34, 35, 21, 14, 31, 16, 15, 5, 3, 13, 19, 18, 17, 0, 2, 1 Attn NBL – Llama-3.1-8B (WikiText-2)25, 26, 23, 24, 29, 27, 28, 20, 22, 31, 21, 19, 30, 18, 17, 16 11, 4, 5, 15, 10, 14, 12, 2, 9, 6, 13, 8, 3, 7, 0, 1 Attn NBL – Mistral-7B (WikiText-2)27, 25, 26, 24, 22, 23, 28, 31, 30, 21, 29, 20, 19, 18, 17, 16 6, 8, 14, 4, 9, 13, 12, 15, 11, 10, 7, 5, 0, 2, 3, 1 32 H Inference complexity analysis and KV-cache usage details This section provides a
|
https://arxiv.org/abs/2505.21077v1
|
detailed explanation of the relationship between context length, prefill speed, and KV-cache usage in transformer models with NBL-applied attention layers. H.1 Figure analysis: prefill speed-up vs. context length Figure 3 illustrates how prefill speed-up improves as more attention layers are modified by NBL. The baseline, represented by the black line, is normalized to a value of 1 across all context lengths. Each other line corresponds to different NBL configurations, where mattention layers have been replaced with linear layers. The speed-up becomes more pronounced at longer context lengths, as the quadratic complexity O(n2)of attention begins to dominate, while the linear layers operate at a lower complexity of O(nd). At shorter contexts, the differences between configurations are minimal since the computational overhead of attention is less significant. However, as the context grows, models with more linear layers (e.g., NBL-16) maintain higher prefill speeds due to the reduced cost of quadratic operations. This behavior aligns with the theoretical complexity expression: O((K−m)·n2d+m·nd) where Kis the total number of attention layers, mis the number of self-attention layers replaced by linear approximations through NBL, nis the context length, and dis the model embedding dimension. In this particular experiment setting to generate the figure, we used 2 NVIDIA A100 (80GB) GPU’s, and a batch size of 16. H.2 KV-cache calculation for Grouped Query Attention based models The KV-cache stores key and value tensors for each token processed by attention layers, enabling efficient incremental decoding. In models like Llama and Mistral, which use grouped-query attention, the KV-cache requirements depend on both the number of active attention layers and the context length. Grouped-query attention is designed to improve the efficiency of multi-head attention by reducing the redundancy in key-value storage across attention heads. In standard multi-head attention, each head independently maintains its own key and value tensors, leading to a cache size proportional to the total number of attention heads. Specifically, the cache size scales as 2·Batch Size ·n·d, where nis the context length, dis the hidden dimension of the model, and the factor of 2 accounts for both keys and values. In grouped-query attention, multiple heads are organized into groups, with each group sharing a single key-value cache. Let gdenote the number of groups and hthe total number of attention heads andbsdenote the batch size. Instead of each head having separate key-value storage, all hg=h/g heads within a group use the same set of keys and values. As a result, the cache size is reduced by a factor of g/h, yielding the following expression for grouped-query cache size: KV-cache size = 2·bs·n·d·g h This optimization is particularly beneficial for large-scale models, where the number of attention heads can be substantial. By reducing redundant storage, grouped-query attention helps manage memory usage, especially for longer context lengths and large batch sizes. The application of NBL further optimizes the KV-cache by modifying a subset of the attention layers. When mout of K layers are replaced by linear layers, these layers no longer require key-value storage. Consequently, the KV cache size is reduced to: KV-cache size with NBL = 2·bs·n·d·g h·K−m K This dual
|
https://arxiv.org/abs/2505.21077v1
|
optimization—grouped-query attention and NBL—substantially lowers the memory requirements during inference, allowing models like Llama and Mistral to handle longer sequences more efficiently. For example, the KV cache sizes in Table 21 demonstrate the impact of these optimizations. With a batch size of 64 and grouped-query attention (32 total heads divided into 8 groups), the cache size at a context length of 512 is 4 GB in the original configuration, reduced 33 to 2.5 GB with 12 layers modified by NBL. Similarly, at a context length of 4096, the cache size decreases from 32 GB to 20 GB. This reduction follows the expected scaling behavior, highlighting the effectiveness of NBL and grouped-query attention in optimizing memory usage. The table values reflect the KV cache size scaling with context length. For instance, at a context length of 512, the original cache size is 4 GB, reduced to 2.5 GB with Attn NBL-12. At a context length of 4096, the cache size decreases from 32 GB to 20 GB. These reductions demonstrate how NBL reduces both computational complexity and cache requirements in large-scale transformers. Table 21: KV-Cache sizes of both Llama-3.1-8B andMistral-7B models for different context lengths with a batch size of 64 and half precision. Context Len. Original Attn NBL-4 Attn NBL-8 Attn NBL-12 Attn NBL-16 (GB) (GB) (GB) (GB) (GB) 512 4 3.5 3.0 2.5 2.0 1024 8 7.0 6.0 5.0 4.0 2048 16 14.0 12.0 10.0 8.0 4096 32 28.0 24.0 20.0 16.0 128000 1000 875.0 750.0 625.0 500.0 I Limitations and broader impacts In this work, we present Neural Block Linearization (NBL), a novel and theoretically grounded approach for compressing LLMs without retraining. By identifying and substituting layers exhibiting high linear redundancy, NBL achieves significant inference speedups and memory savings across challenging reasoning benchmarks, including MMLU, HellaSwag, and ARC-Challenge. We derive the CCA-based bound to quantify input-output redundancy. One limitation is that certain nonlinear transformations in LLMs may not admit low-error linear approximations. This is an intrinsic property of deep architectures. Our framework explicitly accounts for such cases by ranking layers based on their approximation impact and applying substitutions only where low-error can be achieved. We further demonstrate that even local approximations result in strong end-to-end performance. The calibration data selection and local linearization considerations balance tractability and performance: we show empirically that NBL remains stable across diverse calibration sets and architectures. Broader impacts. Looking forward, NBL offers several promising directions for extension. Apply- ing NBL to MLP sub-blocks, cross-attention layers, or even finer-grained structures like individual attention heads may unlock additional efficiency without sacrificing quality. Moreover, NBL is particularly well-suited for long-context inference, where its linearized attention yields increasing benefits in both speed and memory efficiency as sequence lengths grow. From a broader perspective, NBL contributes to the democratization and sustainability of AI by enabling faster, cheaper, and more private inference on edge devices and under-resourced environments. As compression becomes central to deploying foundation models at scale, techniques like NBL offer a reliable and interpretable path forward, supporting both practical utility and responsible innovation. 34
|
https://arxiv.org/abs/2505.21077v1
|
arXiv:2505.21087v1 [cs.LO] 27 May 2025Stopping Criteria for Value Iteration on Concurrent Stochastic Reachability and Safety Games Marta Grobelna∗, Jan K ˇret´ınsk´y†∗, Maximilian Weininger‡§∗ ∗Technical University of Munich Munich, Germany†Masaryk University Brno, Czech Republic‡Ruhr-University Bochum Bochum, Germany§Institute of Science and Technology Austria marta.grobelna@tum.de jan.kretinsky@fi.muni.cz maximilian.weininger@rub.de Klosterneuburg, Austria Abstract —We consider two-player zero-sum concurrent stochas- tic games (CSGs) played on graphs with reachability and safety objectives. These include degenerate classes such as Markov decision processes or turn-based stochastic games, which can be solved by linear or quadratic programming; however, in practice, value iteration (VI) outperforms the other approaches and is the most implemented method. Similarly, for CSGs, this practical performance makes VI an attractive alternative to the standard theoretical solution via the existential theory of reals. VI starts with an under-approximation of the sought values for each state and iteratively updates them, traditionally terminating once two consecutive approximations are ϵ-close. However, this stopping criterion lacks guarantees on the precision of the approximation, which is the goal of this work. We provide bounded (a.k.a. interval) VI for CSGs: it complements standard VI with a converging sequence of over-approximations and terminates once the over- and under-approximations are ϵ-close. Index Terms —Formal methods, foundations of probabilistic systems and games, verification, model checking I. I NTRODUCTION Concurrent stochastic games (CSGs, e.g., [14]): We consider two-player zero-sum games played on a graph. Every vertex represents a state . Edges are directed, originating from one state and leading to one or several other states. An edge is associated with a probability distribution over the successor states. A play proceeds through the graph as follows: starting from an initial state, both players simultaneously and independently choose an action , determining the edge to follow. Then, the next state is sampled according to the probability distribution, and the process is repeated in the successor. We focus on infinite-horizon reachability and safety ob- jectives [14]. The goal of reachability is to maximize the probability of reaching a given goal state. In contrast, the safety objective aims to maximize the probability of staying within a given set of states. The two objectives are dual, as instead of maximizing the probability of reaching a set of target states, one can minimize the probability of staying within the set of non-target states. Thus, we refer to both types collectively as CSGs. Popular subclasses of CSGs include turn-based stochastic games (TSGs), where the players make This research was funded in part by the German Research Foundation (DFG) project 427755713 GOPro, the MUNI Award in Science and Humanities (MUNI/I/1757/2021) of the Grant Agency of Masaryk University, the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 101034413, and the ERC Starting Grant DEUCE (101077178).decisions in turns, or Markov decision processes (MDPs), which involve only one player. The value problem: In CSGs, memoryless (a.k.a. sta- tionary) strategies suffice for both players, meaning they yield the same supremum probability as history-dependent strategies. However, unlike in TSGs, the strategies for CSGs require randomization , meaning players choose distributions over actions rather than single actions. Additionally,
|
https://arxiv.org/abs/2505.21087v1
|
while the safety objective player, Player S, can attain optimal strategies [39], the reachability objective player, Player R, only possesses ε-optimal strategies for a given ε >0[21]. As a result, the problem of deciding whether the supremum probability (a.k.a. the value ) is at least pforp∈[0,1], is thus more subtle than for the mentioned subclasses. While for MDPs the value problem is in P, and for TSGs it is known to be in NP∩co-NP , for CSGs it can be elegantly encoded into the existential theory of reals (ETR) , which is only known to be decidable in PSPACE (although not known to be complete for it) [20]. Unfortunately, algorithms for ETR are practically even worse than the more general, doubly exponential methods for the first-order theory of reals [40]. “Finding a practical algorithm remains a very interesting open problem” [25]. Practical approximation: We focus on algorithms approx- imating the value with a predefined precision ε >0. Both for MDPs and TSGs, dynamic programming techniques such as value iteration (VI) or strategy iteration (SI) are practically more efficient than mathematical programming (linear or quadratic, respectively) [27], [30]. Thus, VI algorithms are prevalently used and implemented in popular tools such as PRISM -GAMES [33], motivating the focus on VI here. Problem and our contribution: In VI, the lowest possible value is initially assigned to each state and then iteratively improved, computing an under-approximation of the value, converging to it in the limit. The algorithm (in practical imple- mentations) terminates once two consecutive approximations areε-close. However, the result can then be arbitrarily imprecise [23]. In this work, we introduce bounded value iteration for CSGs , following its previous success for MDPs [1], [5] or TSGs [18]. Its main idea is to enhance standard VI by introducing an over-approximation of the values computed in parallel with the under-approximation. Once the upper and lower bounds areε-close, VI terminates, ensuring that the true value is at most εaway from the obtained approximation. Since the na ¨ıve formulation of an upper bound does not converge to the value in general, previous approaches, notably including [7], have attempted to fix this but have failed. In this paper, we finally provide a valid solution. Technical challenge: The fundamental technical difficul- ties arise from the following. The non-convergence of upper- bound approximations is primarily due to cyclic components, so-called end components (ECs) , see [14], [18], [29]. Notably, non-convergence, for this reason, is an issue present already in MDPs and TSGs; see [18], [29]. Solutions have been developed over the past decade for these two subclasses. Indeed, for MDPs with reachability objective, these end components can effectively be removed from the graph without altering the value [6], [24]. Other objectives were considered in [1], [3]. TSGs with reachability objectives already require more careful analysis, decomposing the end components into sub-parts, so- called simple ECs [18]. A comprehensive framework for various quantitative objectives was proposed in [29]. Unfortunately, the idea of simple ECs is not easily extendable from TSGs to CSGs due to the absence of optimal strategies. Summary of our contribution:
|
https://arxiv.org/abs/2505.21087v1
|
We provide a stopping criterion for VI on CSGs, solving an open problem with erroneous solution attempts in the literature (see the related work in Subsec. I-Abelow). To this end, we unravel the recursive hierarchical structure of end components in CSGs (see Rem. 31) and adapt the bounded VI algorithm. A. Related Work Available approaches: The PSPACE -algorithm intro- duced in [20] for deciding whether the value of a given game is at least p, forp∈[0,1], allows for a trivial stopping criterion by iteratively executing this algorithm for a suitable sequence of (pi)i∈N(intuitively, we choose pisuch that alternatingly, the value of the game is above and below, while the distance between two consecutive pi’s monotonically decreases). However, this criterion is impractical since it uses the existential theory of reals [20]. The best known complexity upper bound comes from [22] and states that the problem of approximating the value of a CSG is in TFNP[NP], i.e. total function from NP with an oracle for NP. However, the proposed algorithm is not practical, as it relies on guessing a floating point representation of the value and optimal strategies for both players. A recursive bisection algorithm was introduced in [26], which is also impractical as its time complexity is best-case doubly exponential in the number of states. For the algorithms commonly employed for in the non-concurrent case, VI and SI, [25] provide doubly exponential lower and upper bounds on the number of iterations that VI requires in the worst-case for computing an ε-approximation. Their counter-example uses a CSG where all states have value 1. Thus, the worst-case complexity of our approach is the same since an additional over-approximation does not speed up convergence in this example. Nonetheless, these results are worst-case bounds, i.e. they hold a priori for all games; earlier termination is possible, but necessarily requires a stopping criterion, which has so far been elusive. Finally, in [38], an algorithm is provided that, unlike all other known algorithms, only has a single -exponential dependency on the number of states. A practical comparison of our value iteration and [38] is an interesting future step, as better worst-case complexity of an algorithm need not translate to better practical performance on typical instances; for example, in MDPs, worst-case exponential VI and SI typically outperform the polynomial approach of linear programming [27]. Previous attempts at stopping criterion: A stopping criterion for SI and VI on CSGs was first presented in [7], but later found to contain an irreparable mistake [10]. Specifically, the algorithm returned over-approximations smaller than the actual values in certain situations, as detailed in [10]. We analyze the counter-example from [10] in App. D-A Later, [19] proposed a stopping criterion for VI, which also contains a fundamental flaw: it fails to converge for CSGs with ECs. We analyze the counter-example to this approach in App. D-B.Our work thus delivers the first stopping criterion in this context. Further directions of related work: Variants of CSGs have appeared very early, under the names of Everett, Gilette, or Shapley game. See [26] for an explanation of all game types,
|
https://arxiv.org/abs/2505.21087v1
|
their relations, and algorithms to solve them. These games also consider discounted payoff or limit-average payoff, generalizing the reachability and safety CSGs we consider here. A generalization of CSGs to ω-regular objectives has been considered in [8], [15]. An insightful characterization of optimal strategies in concurrent games with various objectives can be found in [4]. II. P RELIMINARIES A. Concurrent Stochastic Games Probability Distributions: For a countable set X, a function µ:X→[0,1]is called a distribution over XifP x∈Xµ(x) = 1 . The support ofµisSupp(µ) :={x|µ(x)> 0}. The set of all distributions over Xis denoted by Dist(X). Concurrent Stochastic Games: Aconcurrent stochastic game (CSG) [16] is a tuple G:= (S,A,ΓR,ΓS, δ,s0,T), where Sis a finite set of states ,A:=A×Bis a finite set ofactions with A:={a1, . . . , al}andB:={b1, . . . , bm}the sets of actions available for player RandS, respectively, ΓR:S→(2A\ ∅)andΓS:S→(2B\ ∅)are two enabled actions assignments and δ:S×A×B→Dist(S)is a transition function , where δ s,a,b s′ gives the probability of a transition from state sto state s′when player Rchooses action a∈ΓR(s)and player Saction b∈ΓS(s),s0∈Sis aninitial state , and T⊆Sis a set of target states . ACSG is turn-based if for every state sonly one player has a meaningful choice, i.e. either ΓR(s)orΓS(s)is a singleton; we call such game a turn-based stochastic game (TSG). Example 1 (CSGs ).Consider the CSGHide-Run-or-Slip depicted in Fig. 1 (an adaption of the Hide-or-Run game in [16], [21], [31]). Circles represent states and black dots depict a probabilistic transition with uniform distribution. Each edge is labeled with a pair of actions, the left for player Rand the right for player S;□is a placeholder for an arbitrary action. We have S:={shide, shome, swet}, with ΓR(shide):={hide,run} 2 shideshome swet(hide,throw ) (run,throw )(run,wait)(hide,wait)(□,□) (□,□) Fig. 1: Example CSG called Hide-Run-or-Slip . andΓS(s0):={wait,throw},s0is the initial state denoted by the arrow with no predecessor state, and T:={shome}. The game has the following intuitive interpretation: Player R wants to get home without getting wet. Player Shas a single snowball and can make player Rwet by throwing it at player R. If player Rruns and player Sthrows the ball, player Rgets wet. If player Rruns but player Swaits, with a probability of1 3the player reaches home or the player slips instead and with a probability of1 3does not move at all or with a probability of1 3falls on the ground and gets wet. △ Plays: Aplayπof a CSG Gis an infinite sequence of states s0s1s2···, such that for all i∈Nthere are actions a∈ΓR(si)andb∈ΓS(si)with δ si,a,b si+1 >0. Play(G)is the set of all plays and Plays(G)the set of all plays s0s1s2···withs0=s. Strategies: Astrategy for player R(orS) is a function ρ:S→Dist(A)(orσ:S→Dist(B)) that assigns a distribution over actions available to player R(orS) to each state, i.e., for all s∈S,Supp(ρ(s))⊆ΓR(s)(or Supp(σ(s))⊆ΓS(s)).1We call a Player R(or Player S) strategy ρpure if all distributions it returns are Dirac distributions, i.e., at each s∈Swe have a unique action a∈ΓR(s)(orb∈ΓS(s)) such that ρ(s)(a) = 1 (or σ(s)(b) = 1 ). Otherwise, the strategy is mixed . For player R(orS) we denote the set of strategies by R(orS) and a single
|
https://arxiv.org/abs/2505.21087v1
|
strategy by ρ(orσ). Markov Decision Processes: Given a CSG G, if we fix a strategy ρ∈ R of player R, the game becomes a S-Markov Decision Process (MDP, [41]) Gρwith the transition function δρ s,b s′:=X a∈ΓR(s)δ s,a,b s′ ·ρ(s)(a), for all s∈Sanda∈ΓR(s). The MDP induced by a fixed strategy σ∈ S is defined analogously. Markov Chains: Similarly, by fixing a pair of strategies (ρ, σ)∈ R×S , we obtain a Markov chain Gρ,σwith the same state space S, the initial state s0, and the transition probabilities Pgiven by δρ,σ s s′:=X (a,b)∈Aδ s,a,b s′ ·ρ(s)(a)·σ(s)(b). 1Since memoryless strategies are sufficient for the objectives considered in this paper, we do not introduce general history-dependent strategies to avoid clutter. We refer to [16] for more details.Thus, a pair of strategies ( ρ, σ) induces a unique probability measure Pρ,σ s0over plays in the Markov chain as usual, see [2, Chap. 10.1], where the set of paths starting in s0has measure 1. Objectives: We partition SintoT, denoting the set of states player Rwants to reach, and F:=S\T, denoting the set of states player Swants to confine the game in. We denote the reachability objective by ♢T:={s0s1s2··· | ∃ i∈ N:si∈T}and the safety objective by □F:={s0s1s2··· | ∀i∈N:si∈F}. The value of the objective ♢T, i.e. VR(s), and the objective □F, i.e. VS(s), at state sare given by VR(s):=sup ρ∈Rinf σ∈SPρ,σ s ♢T VS(s):=sup σ∈Sinf ρ∈RPρ,σ s □F . By the determinacy of CSGs and the duality of these objectives [21], [34], it holds that VR(s) +VS(s) = 1 . Consequently, the task of approximating VR(s)with a given precision is equivalent to approximating VS(s). Further, the objective of minimizing the reachability for Tis equivalent to the objective of maximizing safety for Ffor the same player. Consequently, in the following we only focus on maximizing reachability as both minimization and the safety objectives can be reduced to it. Example 2 (Optimal Strategies Need Not Exist) .InCSGs , an optimal strategy for player Rmight not exist [31], meaning that at some states, the value is attainable only in the limit. Consider our running example from Fig. 1, Hide-Run-or-Slip . Assume for the moment that there is no chance of slipping, i.e. upon playing runandwait, the target state is reached. To win, Player Rhas to run eventually. However, Player Scan utilize a strategy that throws with positive probability at all points in time. Thus, Player Rcannot win almost surely. However, Player Rhas the possibility of limit-sure winning in [16]: By running with vanishingly low probability εin every round, the probability of winning is 1−ε. This is because PlayerShas the highest probability εof hitting Player R by throwing in the first round; throwing in a later round n only has hitting probability εn. For any ε >0, this strategy of PlayerRachieves 1−ε. The value, being the supremum over all strategies, is 1. This notion of obtaining a value only in the limit is not restricted to sure winning: By adding the chance of slipping, the value of the game becomes 0.5. However, by the same argument as above, Player Rcannot
|
https://arxiv.org/abs/2505.21087v1
|
win with probability 0.5, but only with a probability 0.5−εfor all ε >0. △ Fors∈S,a∈ΓR(s)and b∈ΓS(s), the set of potential successors ofsis denoted by Post(s,a,b):= Supp(δ s,a,b ). We lift the notation to strategies ρ∈ R andσ∈ S by Post(s,ρ, σ) =[ a∈Supp(ρ(s))[ b∈Supp(σ(s))Post(s,a,b). We denote by WS:={s∈S|VR(s) = 0}thesure winning region of player S. It can be computed in at most |S| steps by iteration W0 S:= (S\T)andWk+1 S:={s∈S\T| 3 ∃b∈ΓS(s) :∀a∈ΓR(s) :Post(s,a,b)⊆Wk S}for all k∈N[15]. Consequently, we can assume without loss of generality that TandWSare both singletons and absorbing. Example 3 (The Sets T,F, and WS).In Fig. 1, player R wants to reach T={s1}, while player Saims to stay in F={s0, s2}. Since s2is absorbing, WS={s2}. △ Matrix Games: At each state of a CSG the players R andSplay a two-player zero-sum matrix game [35], [43]. In general, a matrix game is a tuple Z:= (N, A, u )[43] where, N:={1, . . . , n }is a finite set of players, A:={α1, . . . , α m} is a finite set of actions available to each player, and u:A→Q is a utility function. In CSGs , the matrix game played at a specific state scan be represented by a matrix ZVR(s)∈Ql×m, where ΓR(s) ={a1, . . . , a l}andΓS(s) ={b1, . . . , b m}. The entries of the matrix correspond to the utility, i.e., the value attainable upon choosing a pair of actions (ai,bj)∈A. Thus, thei-th row and the j-th column is given by ZVR(s)(i, j):=P s′∈Sδ s, ai, bj s′ ·VR(s′). Example 4 (Matrix Game) .Consider the CSG in Fig. 1. The matrix game played at state shideis given by the following matrix. ZVR(shide) =throw wait 01 3·VR(shide) +1 3run 1 VR(shide) hide(1) PlayerRis the so called row player while Player Sis the column player . △ In a matrix game, a player’s strategy is a distribution over the available actions at a specific state. To distinguish between strategies of a CSG and strategies of a matrix game, we refer to strategies of a matrix game as local strategies and strategies of a CSG asglobal strategies. The set of all local strategies at a state sis denoted by R(s)orS(s)for player RorS, respectively. The existence of optimal (local) strategies in a matrix game for both players is guaranteed by Nash’s Theorem [36], [37]. The payoff that is attainable with an optimal local strategy is called value that we denote by V(ZVR)for a matrix game ZVR. It can be calculated using linear programming (e.g., [28], see App. A-A). End Components: A non-empty set of states C⊆S is called an end component (EC) if (i) there exists a pair of strategies (ρ, σ)∈ R × S such that for each s∈Cit holds that Post(s,ρ, σ)⊆C; and (ii) for every pair of states s, s′∈Cthere is a play s0s1···such that s0=sandsn=s′ for some n, and for all 0≤i < n , it holds si∈Cand si+1∈Post(si,ρ, σ). Intuitively, an ECis a set of states where a play can stay forever under some pair of strategies. In other words,
|
https://arxiv.org/abs/2505.21087v1
|
the players can cooperate to keep the play inside the EC(this is the usual way to lift the definition of [14] from MDP to games). Thus, we can compute ECs in a CSG by computing ECs in the corresponding MDP with both players unified, i.e., every pair of actions is interpreted as an action in the MDP . Efficient algorithms for this exist [2], [12], [46]. An ECCiscalled inclusion maximal (short maximal) if there exists no EC C′such that C⊊C′. B. Value Iteration Value iteration (VI, e.g. [11]) assigns an initial value estimate to each state and then iteratively updates it. In classical VI, which approximates the reachability value from below, the initial estimates are 1 for states in Tand below the actual value otherwise, e.g. 0. Each iteration backpropagates the estimate by maximizing the expectation of the value player Rcan ensure with respect to the previous estimate. Formally, we capture estimates as valuations, where a valuation υ:S→[0,1]is a function mapping each state s to a real number representing the (approximate or true) value of the state. For two valuations υ,υ′, we write υ≤υ′if υ(s)≤υ′(s)for every s∈S. To compute the expected value at a state s, the matrix game Zυ(s)has to be solved, meaning its value, V(Zυ(s)), has to be estimated. This computation is, especially in the turn-based setting, also referred to as Bellman update . Formally, V(Zυ(s)):=B(υ)(s):=sup ρ∈R(s)inf σ∈S(s)B(υ)(s,ρ, σ), whereB(υ)(s,ρ, σ):= X (a,b)∈AX s′∈Sρ(a)·σ(b)·δ s,a,b s′ ·υ(s′). Convergent Under-approximation: We recall VI from below as in [9]: starting from the initial valuation L0, we perform the Bellman update on every state to obtain a new valuation. We denote by Lkthe valuation obtained in the k-th iteration. Formally: L0(s):=( 1,ifs∈T; 0,else,Lk+1(s):=B(Lk)(s).(2) Since TandWSare absorbing, for all k∈Nwe have Lk(s) = 1 for all s∈T, and Lk(s) = 0 for all s∈WS. The updated valuation, i.e. Lk+1(s), is computed by solving the corresponding matrix game. Theorem 5 (VI converges from below [17, Thm. 1]) .VI from below converges to the value, i.e. lim k→∞Lk=VR. Bounded Value Iteration: While Thm. 5 proves that VI from below converges in the limit, this limit may not be reached in finitely many steps and can be irrational [17]. Thus, we do not know when to stop the algorithm to guarantee certain precision of the approximation, as there is no practical bound how close any valuation Lkis to the actual value. We merely have the worst-case bound of [25]: Running for a number of iterations that is doubly-exponential in the number of states allows to conclude that the lower bound is ε-close to the value. To obtain a practical stopping criterion, we use the approach of Bounded Value Iteration (BVI), shown in Alg. 1. In addition to the lower bound L, it maintains an upper bound on the value 4 Algorithm 1 Bounded value iteration procedure for CSGs. 1:Algorithm BVI(CSG G, threshold ε >0) 2: WS← {s∈S|VR(s) = 0}▷Winning region for S 3: L0,U0initialized by Eq. (1) and (2), respectively 4: MEC←FIND MECs (G)▷Find all MECs in the game 5: k←0 6: repeat 7: fors∈Sdo▷Standard Bellman update
|
https://arxiv.org/abs/2505.21087v1
|
of both bound 8: Lk+1(s)←B(Lk)(s) 9: Uk+1(s)←B(Uk)(s) 10: forC∈MEC do 11: Uk+1←DEFLATE (G,Uk+1,C) 12: k←k+ 1 13: until Uk+1−Lk+1≤ε Uthat is meant to converge to the value from above. Na ¨ıvely, this upper bound is defined as follows: U0(s):=( 0,ifs∈WS; 1,else,Uk+1(s):=B(Uk)(s).(3) Given a precision ε >0, the algorithm terminates once the under- and the over-approximations are ε-close, i.e., when both approximations are at most ε-away from the actual value. However, applying Bellman updates does not suffice for the over-approximation to converge in the presence of ECs, as the following example shows: Example 6 (Non-convergent Over-approximations) .Consider the CSG Hide-Run-or-Slip in Fig. 1. To compute Uk+1(shide)according to Eq. (3), in each iteration we solve the matrix game ZUk(shide)is given by Eq. (1) where the unknown VRis replaced by Uk, i.e.: ZUk(shide) =throw wait 01 3·Uk(shide) +1 3run 1 Uk(shide) hide(4) Table I shows the updates of the lower and upper bounds, Lk(shide)andUk(shide), respectively. While the lower bound converges to 0.5, the upper bound stays at 1. This is because for player R, action hide always “promises” a valuation of 1 in the next step, as the lower row of the matrix yields 1 for all player Sstrategies. △ III. C ONVERGENT OVER-APPROXIMATION : OVERVIEW Here, we describe the structure of our solution. Ex. 6 shows that the na ¨ıve definition of the over-approximation need not TABLE I: BVI for the game Hide-Run-or-Slip (Fig. 1), where the over-approximations do not converge. k 0 1 2 ··· ∞ Lk(shide) 0.0 0.25 0.36 ··· 0.5 Uk(shide) 1.0 1.0 1.0 ··· 1.0converge to the true value. In particular, in the presence of ECs, Bellman updates do not have a unique fixpoint. Thus, our goal is to define a function DEFLATE (usage highlighted in Alg. 1, definition in Alg. 3) that, intuitively, decreases the “bloated” upper bounds inside each ECto a realistic value substantiated by a value promised outside of this EC. Formally, we ensure that Alg. 1 produces a monotonically decreasing sequence of valuations (i) over-approximating the reachability value and (ii) converging to it in the limit. This idea has been successfully applied for TSGs [18], [29]. Remark 7 (Inflating for Safety) .Since over-approximations need not converge for VR, dually under-approximations need not converge for VS(as is the case in TSGs , see [29]). Thus, to directly solve a safety game, one needs an inflating operation dual to deflating. As described when introducing the objectives, we take the conceptually easier route of reducing everything to maximizing reachability objectives. We proceed in two steps. First, in Sec. IV we prove that indeed ECs are the source of non-convergence, in particular what we call Bloated End Components . Intuitively, these are ECs where at each state both players prefer local strategies which all successor states belong to the EC. Second, in Sec. V, we define the DEFLATE algorithm, which essentially ensures that we focus on player Rstrategies that do not make the game stuck in a ECbut rather progress towards the target. To this end, we lift the notion of best exit from TSGs [18, Definition 3] to CSGs.
|
https://arxiv.org/abs/2505.21087v1
|
IV. T HECORE OF THE PROBLEM : CHARACTERIZING BLOATED ENDCOMPONENTS A locally optimal strategy of Player Rdoes not coincide with an optimal global strategy of Player Rbecause the latter must eventually leave ECs, while the former is under the illusion that staying is optimal. Thus, in this section, we want to find properties that local strategies (of both players) must fulfill in order to leave an ECin a way that is globally optimal. Thus, since this section mainly concerns local strategies, we use the word strategy to speak about local strategies and explicitly make clear when we talk about global ones. Remark 8.Throughout the technical sections and the appendix, we always fix a CSG G:= (S,A,ΓR,ΓS, δ,s0,T). A. Convergence without ECs As a first step, we prove that ECs are the only source of non-convergence, and without them, the na ¨ıveBVI using only Bellman updates converges. Theorem 9 (Convergence without ECs — Proof in App. C-A). LetGbe a CSG where all ECs are trivial, i.e. for every EC Cwe have C⊆WS∪T. Then, the over-approximation using only Eq. (3)converges, i.e. lim k→∞Uk=VR. Proof sketch. This proof is an extension of the proof of [18, Theorem 1] for turn-based games to the concurrent setting. The underlying idea is the same, and can be briefly summarized as follows: We assume towards a contradiction that lim k→∞Uk=: 5 TABLE II: BVI for the CSG in Ex. 10, where the over- approximations converge. k 0 1 2 3 ··· ∞ Lk(shide) 0.01 34 90.4815 ··· 0.5 Uk(shide) 1.02 35 90.5185 ··· 0.5 U⋆̸=VR, and find a set Xthat maximizes the difference between upper bound and value. We show that every pair of strategies leaving the set Xdecreases the difference U⋆−VR. However, VRandU⋆are fixpoints of the Bellman update, from [17, Theorem 1] and Lem. 48, respectively. Consequently, optimal strategies need to remain in the set. However, in the absence of ECs, optimal strategies have to leave the set, which yields a contradiction and proves that U⋆=VR. The key difference to the proof of [18, Theorem 1] is that we cannot argue about actions anymore, but have to consider mixed strategies. This significantly complicates notation. Additionally, and more importantly, the former proof crucially relied on the fact that for a state of Player R, we know that its valuation is at least as large as that of any action, and dually for a state of Player S, its valuation is at most as large as that of any action. In the concurrent setting, this is not true. The optimal strategies need not be maximizing nor minimizing the valuation and, moreover, they can be maximizing for one valuation and minimizing for another. Thus, we found a more general, and in fact simpler, way of proving that “no state in Xcan depend on the outside” [18, Statement 5] and deriving the contradiction. Interestingly, not all ECs cause non-convergence of Eq. (3) as the following example illustrates. Example 10 (Unproblematic EC).We modify the CSG Hide-Run-or-Slip (Fig. 1) such that the matrix game played at shideisZ′ Uk(shide)below. Z′ Uk(shide) =throw wait 11 3·Uk(shide)
|
https://arxiv.org/abs/2505.21087v1
|
+1 3run 0 Uk(shide) hide The difference is that Z′ Uk(shide)(run,throw ) = 1 and Z′ Uk(shide)(hide,throw ) = 0 , switching the values as compared to the original CSG (see Eq. (4)). Here, both bounds converge to 0.5 despite the presence of the EC{shide}, as shown in Table II. △ B. Towards Characterizing Bloated End Components Intuition: In Ex. 10, the best strategy of Player Rleaves theECalmost surely against all counter-strategies of Player S, and hence BVI converges. In contrast, in Ex. 6, the best strategy of Player Ris one where Player Shas a counter-strategy that forces the play to stay inside the EC; this causes non- convergence. Generalizing these ideas, we see that a problem occurs if Player Rhas a strategy that is locally optimal but non-leaving, i.e. Player Shas a counter-strategy that keeps the play inside an EC.Outline: We formalize these ideas in the following definitions: First, Def. 12 formalizes optimal (local) strategies using weakly dominant strategies in matrix games, extending the standard definition (e.g. [35]) to sets of strategies. This extension is not straightforward, and there are several technical intricacies that we comment on. Next, Def. 15 captures leaving and staying strategies. We differentiate strategies that are leaving (irrespective of the opponent’s strategy), staying (irrespective of the opponent’s strategy), and non-leaving (where there exists an opponent’s strategy that leads to staying) with respect to a given set of states. Based on this, we formally describe hazardous strategies in Def. 16, which are (locally) optimal strategies of Player Rthat are non-leaving; additionally, to be problematic for convergence, they are better than all leaving strategies. Using these, we can precisely characterize the Bloated End Components ( BECs , Def. 18) that cause non- convergence. Remark 11 (Additional Challenges Compared to TSGs ).The core problem is the same as in TSGs : Player Ris under the illusion that staying inside an ECyields a better valuation than leaving. However, in TSGs , the definitions of optimality and leaving are straightforward, since every state belongs to a single player and pure strategies are optimal; the definitions of hazardous and trapping strategies are not even necessary. In contrast, the definitions in CSGs are technically involved, as we have to take into account the interaction of the players and the possibility of optimal mixed strategies. In particular, [19] defined a straightforward extension of leaving based only on actions, not strategies. This is incorrect, as we demonstrate in App. D-B. Definition 12 (Dominating Sets of Strategies) .Letυbe a valuation, s∈Sa state, R1,R2,R′⊆ R(s)andS1,S2,S′⊆ S(s)sets of local strategies. We now define two notions of domination for sets of strategies, namely weak domination and being not worse . Both of these depend on the player. Definition for Player R:We write R2≺υ,S′R1to denote that R1weakly dominates R2under the set of counter-strategies S′with respect to υ. Formally, ∃ρ1∈ R1.∀ρ2∈ R2: (i)infσ∈S′B(υ)(s,ρ2, σ)≤infσ∈S′B(υ)(s,ρ1, σ), and (ii)∃σ′∈ S′such that B(υ)(s,ρ2, σ′)<B(υ)(s,ρ1, σ′). If only Condition (i) is satisfied, we write R2⪯υ,S′R1to denote that the set R1isnot worse thanR2under S′with respect to υ. Definition for Player S:Dually, we write S2≺υ,R′S1to denote that set
|
https://arxiv.org/abs/2505.21087v1
|
S1weakly dominates S2underR′with respect toυ. Formally, ∃σ1∈ S1.∀σ2∈ S2: (i)supρ∈R′B(υ)(s,ρ, σ2)≥supρ∈R′B(υ)(s,ρ, σ1), and (ii)∃ρ′∈ R′such that B(υ)(s,ρ′, σ2)>B(υ)(s,ρ′, σ1). If only Condition (i) is satisfied, we write S2⪯υ,R′S1to denote that the set S1isnot worse thanS2under R′with respect to υ. Example 13 (Dominating Sets of Strategies) .Consider the matrix game defined in Eq. (4) and the valuation Uk(shide) = 6 Uk(shome) = 1 andUk(swet) = 0 . Here, for Player R, the pure strategy {hide7→1}dominates the pure strategy {run7→1}: {run7→1}≺ Uk,S(shide){hide7→1}. This is because when Player Sthrows the ball, hiding yields 1 while running yields 0. Note that this is in fact independent of the valuation, so it also holds for VR. LetRunPositive :={(run7→ε,hide7→1−ε)|ε >0}be the set of all strategies that put positive probability on running. We have RunPositive ≺Uk,S(shide){hide7→1}. Again, this is true even when using VRas valuation. Note that we have this weak dominance even though the supre- mum over the set yields the optimum valuation, namely supρ∈RunPositive infσ∈S(shide)B(Uk)(s,ρ, σ) = 1 . This exem- plifies the strictness of our notion of dominance. It is crucial that our notion of domination can distinguish these sets of strategies: The set RunPostive contains all strategies that leave theEC. However, none of them is optimal (even though the supremum over all of them is), which is exactly the reason why VI chooses the staying strategy {hide7→1}for updating the valuation, and thus is stuck. △ We remark on several technicalities of Def. 12: •“Weak” dominance: The term “weak” might be misleading. We choose to use the word for consistency with [35, Def. 4.12]. There, weak domination concerns Condition (ii), only requiring that there exists a counter-strategy where the inequality is strict; strict domination requires Condition (ii) for all counter-strategies. One might be tempted to use weak domination to denote what we call “not worse”, i.e. only require that there is a strategy in the first set that has optimal valuation at least as good as all in the other set; or to think it only refers to the numerical comparators, e.g. ≥and>(as is sometimes the case when only comparing single strategies). •Set-related challenges: The definition is challenging since we cannot speak about actions, but have to consider sets of — possible mixed — strategies. The exact quantification of the strategies is relevant. Further, it depends not only on the two sets we are comparing, but also on the counter- strategies of the opponent. Thus, we provide the definition explicitly for both players, to avoid confusion that could arise from just saying that they are analogous. •All-quantification instead of optima: The definition uses all-quantification instead of optima. Concretely, weak dominance for Player Ruses∀ρ2∈ R2instead of writing supρ2∈R2. The latter definition cannot sufficiently distinguish sets of strategies, since the supremum of a set need not be contained in it, as we exemplified in Ex. 13. This fact is extremely important, as in the proof of Thm. 21, we pick the maximum from a set of strategies, and the existence of this maximum is guaranteed only because of the correct definition of dominance. •Locally optimal strategies are not
|
https://arxiv.org/abs/2505.21087v1
|
worse than any other: Formally, this claim is that for all locally optimal strategyρ∈ R(s)with respect to υ, we have R(s)⪯υ,S(s){ρ7→ 1}(and dually for Player S). This is immediate from Def. 12, since a locally optimal strategy maximizes infσ∈S′B(υ)(s,ρ, σ), and thus satisfies Condition (i) when compared to all other strategies. We will use this fact throughout the paper. •Notation: When the valuation is clear from the context, we omit it for the sake of readability. Further, if we say that a strategy ρ1weakly dominates another strategy ρ2 with respect to a counter-strategy σ, then we mean that {ρ2}≺{σ}{ρ1}. We prove a lemma about the relation of weak domination and not being worse that is useful and instructive. The proof in App. C-B works by straightforward unfolding of definitions and rewriting. Lemma 14 (Negating Weak Domination — Proof in App. C-B). Letυbe a valuation, s∈Sa state, R1,R2,R′⊆ R(s)and S1,S2,S′⊆ S(s)sets of local strategies. If for some sets of strategies we do nothaveR2≺υ,S′R1, then we have R1⪯υ,S′R2. Analogously, not S2≺υ,R′S1 implies S1⪯υ,R′S2. To complete our intuitive understanding of the definition of domination, we point out a connection to the standard definition of weak domination: “A rational player does not use a dominated strategy.” [35, Asm. 4.13] If a strategy is not dominated, by Lem. 14 it is not worse than any other strategy. This is exactly what we argued above: Locally optimal strategies are not worse than any other. We often use this fact throughout the paper. Next, we formally define leaving and staying strategies . Given a set of states, a leaving strategy ensures that the set of successor states contains states outside the given set of states for all given counter-strategies of the opponent player. A strategy is staying if all successor states belong to the given set of states for all given counter-strategies of the opponent player. Note that a strategy can be neither leaving nor staying, if the set is exited for some, but not all counter-strategies of the opponent. Definition 15 (Leaving and Staying Strategies) .Consider a set of states X ⊆ Sand a state s∈ X . LetR′⊆ R(s) andS′⊆ S(s)be sets of strategies of Player RandS, respectively. The set of (local) leaving strategies for Player R with respect to S′, is given by RL(S′,X, s):={ρ∈ R(s)| ∀σ∈ S′.(s,ρ, σ)leaves X }, and for Player Swith respect to R′by SL(R′,X, s):={σ∈ S(s)| ∀ρ∈ R′.(s,ρ, σ)leaves X }. A strategy that is not leaving is called non-leaving . The set of all non-leaving Player Rstrategies is denoted by RL(S′,X, s) (orSLfor Player S). 7 In contrast, the set of staying strategies at a state s∈ X for PlayerRwith respect to S′, is given by RS(S′,X, s):={ρ∈ R(s)| ∀σ∈ S′.(s,ρ, σ)staysIn X }, and for Player Swith respect to R′by SS(R′,X, s):={σ∈ S(s)| ∀ρ∈ R′.(s,ρ, σ)staysIn X }. Notation: If we consider leaving (or staying) strategies with respect to all counter-strategies, then we omit the set of counter-strategies, i.e. instead of RL(S(s),X, s)(or RS(S(s),X, s)) we write RL(X, s)(orRS(X, s)). We use the same shorthand notion for leaving (staying)
|
https://arxiv.org/abs/2505.21087v1
|
strategies of PlayerS. We often speak about a leaving/staying pair of local strate- gies, so we provide the following shorthand notations: For a tu- ple(s,ρ, σ)∈S×R×S , we say that (s,ρ, σ)leaves Xif and only if Post(s,ρ, σ)∩(S\ X)̸=∅. Analogously, we say that (s,ρ, σ)staysIn Xif and only if Post(s,ρ, σ)∩(S\ X) =∅ (or, equivalently, Post(s,ρ, σ)⊆ X ). Intuition of Hazardous Strategies: Using the definitions of dominance and leaving or staying, we can now classify strategies of Player Rthat can lead to non-convergence. Intuitively, a hazardous strategy is one that Player Rchooses, even though it can be staying for some counter-strategies. Thus, such a strategy (i) is non-leaving (i.e. there exist counter- strategies that make it staying), and (ii) it is not worse than any other strategy so that Player Rmay choose it. Moreover, to be problematic for convergence, (iii) the strategy weakly dominates all leaving strategies, i.e. leaving strategies are not chosen for the update. Definition 16 (Hazardous Strategy) .LetX ⊆ S\(T∪WS)be as set of states, υa valuation, and s∈ X. A strategy ρ∈ R(s) is called hazardous with respect to υif it satisfies: (i)ρ∈ RL(X, s), (ii)R(s)\ {ρ}⪯S(s){ρ}, and (iii)RL(X, s)≺S(s){ρ}. Hazard υ(X, s)denotes the set of all hazardous strategies at state swith respect to a set of states Xand a valuation υ. We mention a corner case: In a state where Player R possesses no leaving strategies, all optimal strategies are hazardous (note in particular that Condition (iii) is trivially satisfied, since the dominated set of strategies RL(X, s)is empty). Example 17 (Hazardous strategy) .Consider again the matrix game defined in Eq. (4) and the initial valuation U0(shide) = U0(shome) = 1 andU0(swet) = 0 . The strategy ρ′:={hide7→ 1}is hazardous because: (i) It is non-leaving. (ii) It is an optimal strategy, i.e. it is not worse than any other strategy. (iii) It weakly dominates the set of all leaving strategies, see Ex. 13. △ Definition 18 (Bloated End Component ( BEC )).AnEC X ⊆ S\(T∪WS)is called bloated end component (BEC) with respect to a valuation υif for all s∈ X it holds that Hazard υ(X, s)̸=∅.Example 19 (Bloated End Component) .Consider the CSG Hide-Run-or-Slip from Fig. 1. As discussed in Ex. 17, there exists a hazardous strategy in state shide. Moreover, {shide} is an EC, since under the pair of strategies that plays hide and wait, the play stays in it. Consequently, {shide}is aBEC and therefore VI does not converge in this state, see Ex. 6. △ We provide a lemma that captures the intuition of what it means to (not) be a BEC , and that is also useful in several proofs: Lemma 20 (Negating Bloated — Proof in App. C-B ).If an ECX ⊆ S\(T∪WS)is not bloated for a valuation υ, then there exists a state s∈ X that has a locally optimal strategy that is leaving, formally ∃ρ∈ RL(X, s).R(s)⪯υ,S(s){ρ}. C. Convergence in the Absence of BECs Now we can prove that BECs indeed are the reason that VI does not converge for over-approximations. Theorem 21 (Non-convergence implies BECs — Proof in App. C-B
|
https://arxiv.org/abs/2505.21087v1
|
).LetU⋆:= lim k→∞Ukbe the limit of the na ¨ıve upper bound iteration (Eq. (3)) on the CSG G. If VI from above does not converge to the value in the limit, i.e. U⋆>VR, then theCSG Gcontains a BEC inS\(T∪WS)with respect to U⋆. Proof sketch. This proof builds on the proof of Thm. 9. There, we constructed a set Xmaximizing the difference between U⋆ andVRand showed that if there is a pair of optimal strategies leaving X, then we can derive a contradiction: The upper bound decreases, which contradicts the fact that it is a fixpoint. In the context of the other proof, that allowed us to show that without ECs, VI converges, because without ECs it is impossible to have a set of states where all optimal strategies stay in that set. In the presence of ECs, states can indeed have a positive difference between U⋆andVR, see e.g. Ex. 6. Our goal is to prove that at least one of these ECs is bloated. Thus, we assume for contradiction that no ECis bloated under U⋆. Then, by Lem. 20, there is an optimal leaving strategy for Player R. Using that, we can repeat the argument from Thm. 9, showing that in this case U⋆would decrease. Again, this is a contradiction because it is a fixpoint of applying Bellman updates (Lem. 48). Thus, the initial assumption that no EC is bloated is false, and we can conclude that there exists a BEC. Remark 22 (Relation to [18]) .Def. 18 of BEC is more general than the definition of BEC for TSGs in [18, Definition 4]. The differences are that in [18], an ECis only called bloated if it is bloated with respect to VR, whereas we extended that definition to speak about a concrete valuation, similar to [29, Def. 3]. Further, the definition for TSGs speaks about the best exit value, which is the optimum among the available actions; in contrast, in CSGs the definition of best exit is technically involved and dependent on hazardous strategies, see Def. 27. Thus, our definition of BEC does not analyze the exit values, but instead uses a fundamental analysis of the strategies. 8 Key Contribution: The key novelty of this section is the correct definition of BEC that captures when VI from above does not converge. We highlight that this definition contains many technical intricacies: Lifting the notion of an exiting action [18, Section 2.2] from TSGs toCSGs requires considering sets of local strategies that leave against all opponent-strategies (Def. 15), and considering the additional complication that strategies can be neither leaving nor staying. Further, the exact definition of dominance is very important, as the supremum over all leaving strategies can be a staying strategy, see Ex. 13 and the related discussion in the item “All-quantification instead of optima” after the example. V. R ESOLVING BLOATED ENDCOMPONENTS A. Solution in TSGs We have identified BECs as the cause of non-convergence. Our method for fixing the over-approximation is again based on the ideas for TSGs . We explain the intuition of their solution: Firstly, staying actions yield
|
https://arxiv.org/abs/2505.21087v1
|
the valuation that is bloated; thus, we need an additional update of the over-approximation that depends only on leaving actions. The valuation to which we reduce the over-approximation is the best exit from the EC, which in TSGs simply is the leaving action attaining the highest value over all states of Player R[18, Definition 3]. Secondly, not all states in an ECneed to have the same value, since Player Scan prevent Player Rfrom reaching the state that attains the best exit valuation. Hence, an ECis decomposed into parts that share the same value, called simple ECs (SECs) in [18, Definition 5]. Repeatedly finding these SECs and deflating their valuation by setting it to the best exit from the SEC suffices for convergence in TSGs. When generalizing these ideas to CSGs , we encounter the following problems: Firstly, the definition of best exit is more involved, since staying and leaving depends not only on actions, but on strategies. Additionally, the supremum over all leaving strategies can be a non-leaving strategy, see Ex. 13. (This is also the reason why globally optimal strategies need not exist inCSGs , see Ex. 2). This was the fundamental reason why the solution proposed in [19] did not work, as it was based on actions. Secondly, we need to decompose the ECinto parts. For this, we use a recursive approach, removing states that have been successfully deflated and checking whether there are further problematic states in the remainder of the EC. Outline: In Sec. V-B, we develop a strategy-based definition of best exit (Def. 28), which relies on identifying thetrapping strategies (Def. 24) that Player Suses to keep the play inside the BEC ; and the deflating strategies (Def. 25), the best response of Player R, namely the leaving strategies that should be played with arbitrarily small probability ε. In Sec. V-C, we provide the FIND MBEC algorithm that finds all maximal BECs that are present in a given MEC . A maximal BEC is a set of states X ⊆ S\(T∪WS)that is a BEC and there exists no another set of states X′⊆S\(T∪WS)that is a BEC andX⊊X′. Finally, Sec. V-D provides the full deflating procedure for CSGs in Alg. 3. In particular, it usesa recursive call to decompose a given MEC and deflate all relevant parts of it. B. Defining the Best Exit The key problem of a BEC is that in all states, all leaving strategies of Player Rare dominated by hazardous strategies. PlayerScan play a trapping strategy and thereby make the Bellman update self-dependent. If the current valuation is too high, this prevents convergence. Intuitively, the “pressure” inside the BEC is too high, and we want to “deflate” it, by adjusting it to the pressure, i.e. valuation, outside of the BEC . Since non-trivial BECs neither contain target states nor belong to the winning region of Player S, there has to exist a state in aBEC where the supremum over leaving and staying strategies is equal. To “equalize the pressure” between the BEC and the rest of the states, we need to estimate the pressure
|
https://arxiv.org/abs/2505.21087v1
|
outside theBEC . To do so, we estimate the valuation attainable upon leaving the BEC at every state of the BEC , called exit value of the state. The best exit value is the maximum of all exit values. Reducing the upper bounds of the states inside the BEC to the best exit value, in case the best exit is smaller than the current valuation, “decreases the pressure”. However, since the valuations of the exiting strategies can depend on the valuations of the states that belong to the BEC , this procedure may still only converge in the limit, already in TSGs . We provide an illustrative example: Example 23 (Deflating BECs ).Consider again the CSG Hide-Run-or-Slip (Fig. 1). Under the initial valuation (see Eq. (3)) the matrix game played at shideis given by ZU0(shide) =throw wait 02 3run 1 1 hide Due to the hazardous strategy {hide7→1}, the Bellman update cannot improve the initial upper bound of shide, but remains at 1. However, by the same argument as in Ex. 2, Player R can use the strategy of running with a probability ε >0that is arbitrarily small. This yields a value arbitrarily close to2 3. Consequently, we can deflate, i.e. decrease the upper bound ofshideto2 3which is the valuation attainable upon leaving theBEC atshide. After the Bellman update, the matrix game played at shideis given by ZU1(shide) =throw wait 05 9run 12 3hide The strategy {hide7→1}is still hazardous, but we can deflate the upper bound of shideto5 9. By repeating these steps, the upper bound converges to1 2in the limit, and only in the limit, similar to the lower bound in Table I. △ How can we find the best exit value in general?: In Ex. 23, we used an argument about playing a leaving action with vanishingly small probability in order to figure out which entry in the matrix we choose for deflating. We provide an 9 alternative intuition: Player Rplays a hazardous strategy most of the time. The best response of Player Sis to play a weakly dominant strategy that stays in the EC, trapping the play. Thus, we can ignore the other strategies of Player Sand consider only the columns of the matrix that correspond to trapping strategies . Now, we need to select a leaving strategy of Player Rwhich then is played with vanishingly low probability. Thus, we restrict the matrix further to use only rows corresponding to leaving strategies. In the example, we end up with the top right entry. We now formalize how we can construct this sub-matrix, called the exiting sub-game. Definition 24 (Trapping Strategy) .LetX ⊆ S\(T∪WS)be as set of states, υa valuation, and s∈ X. A strategy σ∈ S(s) is called trapping strategy if two conditions are satisfied: (i)σ∈arg minσ′∈S(s)maxρ∈R(s)B(υ)(s,ρ, σ′), and (ii)∀ρ∈Hazard υ(X, s) : (s,ρ, σ)staysIn X. Trapυ(X, s)denotes the set of all trapping strategies at state swith respect to a set of states Xand a valuation υ. Definition 25 (Deflating Strategies) .LetX ⊆ S\(T∪WS) be a set of states. A Player Rstrategy ρ∈ R(s)is called deflating if two
|
https://arxiv.org/abs/2505.21087v1
|
conditions are satisfied: (i)∃σ∈Trapυ(X, s)such that (s,ρ, σ)leaves X, and (ii)Supp(ρ)∩S ρ′∈Hazard υ(X,s)Supp(ρ′) =∅. Deflυ(X, s)denotes the set of all deflating strategies at state swith respect to a set of states Xand a valuation υ. Definition 26 (Exiting sub-game) .LetX ⊆ S\(T∪WS) be a set of states, s∈ X a state, and υa valuation. Further, let Zυ(s)be the matrix game played at state s∈ X. IfTrapυ(X, s)̸=∅then, the exiting sub-game played at state s, denoted by Zexit υ(s), is the matrix game where Player Rhas the actions inS ρ∈Deflυ(X,s)Supp(ρ)and PlayerShas the actions inS σ∈Trapυ(X,s)Supp(σ). The value of the exiting sub-game is given by V(Zexit υ(s)):= max(0,supρ∈Deflυ(X,s)infσ∈Trapυ(X,s)B(υ)(s,ρ, σ)). We explain how this exiting sub-game and its value are well- defined: The set of deflating strategies can be empty, namely in a state which has no leaving strategies. In this case, the value of the exiting sub-game is the supremum over an empty set, i.e. the smallest possible value, commonly minus infinity and 0 in our case. We highlight this possibility by explicitly taking the maximum of 0 and the value of the exiting subgame when we compute it. If the set of deflating strategies is non-empty, then the set of trapping strategies necessarily is non-empty, too, since the Item (i) of Def. 25 requires existence of a trapping strategy. Definition 27 (Exit value) .LetX ⊆ S\(T∪WS)be a set of states, s∈ X a state, and υan over-approximation. Then,theexit value fromXattainable at state sis given by exitValυ(X, s):= sup ρ∈R(s)inf σ∈S(s)B(υ)(s,ρ, σ),ifHazard υ(X, s) =∅ ∨Trapυ(X, s) =∅; V(Zexit υ(s)), else. ForBECs that consists of more than one state, the exit values attainable at different states within the BEC may differ. Consequently, to ensure that deflating does not reduce the value of any of the states in the BEC below its actual value we estimate the exit values at each state of the BEC , and finally select the maximal exit value, i.e. the best exit value, for deflating. Definition 28 (Best Exit) .LetX ⊆ (S\(T∪WS))be a set of states and υa valuation. The best exit value with respect to a set Xand a valuation υis given by bestExitValυ(X) := max s∈XexitValυ(X, s). The best exit , denoted by bestExitυ(X), is a state obtaining bestExitValυ(X), and the set of all best exits is denoted by bestExitsυ(X). Lemma 29 (bestExitVal is sound) .LetX ⊆ S\(T∪WS)be anEC, and U∈[0,1]|S|be a valuation with U≥VR. Then, for all states s∈ X, we have bestExitValU(X)≥VR(s). Proof. This proof relies on the technical Lem. 49 stating that (i)X′:={s∈ X | VR(s)≤exitValVR(X, s)} ̸=∅, and (ii)maxs∈X′VR(s)≥maxs∈X \X′VR(s). LetX′⊆ X be a set of states satisfying Condition (i). Further, choose e∈arg maxt∈X′VR(t)as one of the exits from X′. By Item (ii) of Lem. 49 , we have that for all s∈ X:VR(s)≤ VR(e). It remains to show bestExitValU(X)≥VR(e). We conclude as follows: VR(e)≤exitValVR(X, e) (Since e∈ X′) =bestExitValVR(X) (By the choice of e) ≤exitValU(X, e) (By Lem. 50) ≤bestExitValU(X). (By Def. 28) We want to highlight that the statement of Lem. 50 is non- trivial and the proof is technically
|
https://arxiv.org/abs/2505.21087v1
|
involved. C. Finding Maximal BECs Since a BEC might contain other BECs , we want to find maximal BECs . ABECXis maximal if there exists no BEC X′such that X⊊X′. The existence of maximal BECs is proven in App. C-C. Given a CSG G, aMEC Cand the current upper bound estimate U, Alg. 2 finds all maximal BECs within Cas follows: The set Bcontains all states for which hazardous strategies exist with respect to the set C. In case Bis non-empty, it might consist of multiple disjoint MECs . Therefore, the algorithm calls the FIND MECs procedure on Band returns all MECs 10 Algorithm 2 Algorithm for finding maximal BECs. 1:Algorithm FIND MBECs (CSG G,MEC C, upper bound estimate U) 2: B:={s∈C|Hazard U(C, s)̸=∅} 3: ifB̸=∅then ▷Ccontains BECs 4: return FIND MECs (G, B) 5: else 6: return ∅ ▷There exists no BEC in C Algorithm 3 Algorithm for deflating BECs. 1:Algorithm DEFLATE (CSG G, upper bound estimate U, MEC C) 2: forX ∈ FIND MBECs (G,C,U)do 3: u←bestExitValU(X) 4: fors∈ Xdo 5: U(s)←min(U(s),u) 6: forE ∈FIND MECs (X \bestExitsU(X))do 7: U←DEFLATE (G,U,E) ▷Recursively deflate sub-BECs 8: return U that are in B. Otherwise, Bis empty, which means that C does not contain any BECs, and the empty set is returned. Lemma 30 (FIND MBECs is correct— Proof in App. C-C). For a CSG, aMEC Cand a valid upper bound U, it holds thatX ∈ FIND MBECs (G,C,U)if and only if Xis aBEC inCand there exists no T⊆Cthat is a BEC and X⊊T. Proof sketch. To prove the “ ⇒” direction, we assume towards a contradiction that Xis a maximal ECbut not a BEC . Then, all Player Rstrategies must violate at least one of the conditions posed by Def. 16. We consider each condition separately and derive a contradiction to the assumption that X ∈ FIND MBECs (G,C,U). To prove the opposite direction, i.e., “ ⇐”, we assume towards a contradiction that Xis a maximal BEC but X/∈FIND MBECs (G,C,U). Here, we again make a case distinction. In the first case, some state s∈ X was removed by the algorithm because at least one player has an optimal strategy that can leave X. However, this is a contradiction to the assumption that Xis aBEC . In the second case, we assume towards a contradiction that X/∈FIND MECs (G, B)(where Bis defined as in FIND MBECs ). This is then a contradiction to the assumption that Xis a maximal EC. D. Convergent Bounded Value Iteration for CSGs and the Recursive Structure of ECs Finally, Alg. 3 describes our main goal: the deflating procedure for BECs , to be plugged into Alg. 1. The algorithm takes a CSG G, the current upper bound estimate U, and a MEC Cas input. First, the algorithm searches for all maximal BECs that might be contained in the MEC . The current upper bound estimate is returned if no BEC exists in the MEC . Otherwise, at least one BEC exists and must be deflated. If multiple BECsare found, each maximal BEC is deflated separately (see the
|
https://arxiv.org/abs/2505.21087v1
|
for-loop in Line 2). To deflate a maximal BECX, first, the best exit value bestExitValU(X)is estimated. Next, each state of the BEC is considered and if the best exit value is smaller than the current upper bound estimate at that state, then it is reduced to the best exit value (as nothing better can be reached). However, notice that within a BECXthere might exist another BECX′. From X′, Player Rmight not be able to get to the best exit of X(the globally best one in X) but only to a worse one, locally optimal for X′. Hence, for such a sub-BECX′, more aggressive deflating to bestExitValU(X′) is due. In other words, after deflating to bestExitValU(X)we have handled all states where Rcan ensure reaching this best exit, and we can ignore these states for the moment; we can also ignore this best exit and have a fresh look at which states arenow in a BEC and can reach the second best exit, i.e., the best exit in this remainder. Subsequently, we continue with the third best option etc. Consequently, on Lines 6-7, DEFLATE is called recursively on all MECs that are contained in X \bestExitsU(X), i.e. after removing all best exits. The procedure is repeated independently for each maximal BEC contained in C. A full example showing howDEFLATE works on a more complex BEC is included in App. B-B. Remark 31 (Structure of ECs) .This elucidates the hierarchical structure of ECs and their corresponding best exits. The recursive call of DEFLATE exposes the partial order over theECs, their sub- ECs, and “internally” transient states (those not within sub-ECs after removing the best exit since they are bound to the just removed exit or another sub- ECthat is aBEC with a lower value). This hierarchy captures (i) the ordering of exits by their values and (ii) the “independence” of exits with possibly different values when visiting one from another cannot be ensured. Our goal for the remainder of this work is to show that Alg. 1 with deflation is correct and converges, i.e. that complementing Bellman updates Bwith deflation results in a sequence of upper bounds Uthat converges to the value from above. For the sake of simplicity, we denote by D: [0,1]|S|→[0,1]|S| the operator that performs DEFLATE on a given valuation for all MECs in the CSG (which is reasonable since Alg. 1 performs DEFLATE on all MECs (in an arbitrary but fixed ordering)). Remark 32 (Valid upper bonds) .In the following, whenever we quantify an upper bound (a.k.a. over-approximation), we require it to be valid ; meaning that it was obtained by iteratively applying deflating and the Bellman update on the initial over- approximation U0from Eq. (3), i.e. U= (D◦B)k(U0) for some k∈N. The reason for this restriction is that for convergence, we require Dto be order-preserving. While D is order-preserving on valid upper bounds (see Lem. 36), in general it is not monotonic for arbitrary U∈R|S|. We illustrate this in App. B-C. 11 Definition 33 (Valid upper bound) .An upper bound U∈ [0,1]|S|is called valid if there exists k∈Nsuch that U=
|
https://arxiv.org/abs/2505.21087v1
|
(D◦B)k(U0), or if U=VR. We proceed as follows: After proving fundamental properties of both operators BandDin Lem. 34, we use these properties to show that valid upper bounds are indeed upper bounds, i.e. they are always greater or equal than the value in Lem. 35. With correctness established, Lem. 36 shows that on valid upper bounds, (D◦B)is order-preserving, which is a necessary ingredient for convergence. Finally, Thm. 37 concludes by proving soundness and completeness of the full algorithm. Lemma 34 (Properties of DandB).Letυ∈[0,1]|S|be a valuation. If υ≥VR, then: (i)B(υ)≤υandD(υ)≤υ. (ii)B(υ)≥VRandD(υ)≥VR. Proof. For the Bellman operator, both properties follow from the fact that the value VRis the least fixpoint of the Bellman operator, see [17, Thm. 1]. Thus, given a valuation greater than the value, it cannot increase, and it cannot become smaller than the value. ForD, observe that deflation only updates the valuation in Line 5 of Alg. 3 when setting the valuation of a state to min(U(s),bestExitValU(X))for some EC X. Item (i) holds because of taking the minimum with the current valuation, so it can only decrease. For Item (ii), we show in Lem. 29 that for every ECXand state s∈ X, we have that bestExitValU(X)≥ VR(s). This proves our goal, as the only update of DEFLATE keeps the valuation greater than VRin every state. The proof of Lem. 29 is technically involved, having to unfold many definitions in order to show the following intuitive fact: No state can have a larger value than that of some exit from the EC it is contained in. Lemma 35 (Soundness of valid upper bounds) .For all k∈N it holds that (D◦B)k(U0)≥VR. Proof. We proceed by induction over k. Base case: k= 0, thus (D◦B)0(U0) =U0≥VR. Induction hypothesis: For all k≥0, we assume that (D◦B)k(U0)≥VR. Induction step: To show: (D◦B)k+1(U0)≥VR. We know by induction hypothesis that (D◦B)k(U0)≥VR. Applying Item (ii) of Lem. 34 for B, we obtain B((D◦B)k(U0))≥VR. From this and Item (ii) for D, we get D(B((D◦B)k(U0)))≥VR, proving our goal. Lemma 36 ((D◦B)is order-preserving on valid upper bounds) . Letυ1,υ2be valid upper bounds with υ1≥υ2. It holds that (D◦B)(υ1)≥(D◦B)(υ2). Proof. We know by Lem. 35 that all valid upper bounds are greater or equal to the value (including the value itself). Thus, ifυ2=VR, the statement holds, and if υ1=VR, thenυ2=VRas well, since υ1≥υ2. It remains to consider the case that both valuations come from repeated application of the deflation and Bellman operators, i.e. υ1= (D◦B)i(U0) andυ2= (D◦B)j(U0). We assume υ1̸=υ2, since otherwise the claim trivially holds. By Item (i) of Lem. 34, every application of the operators can only decrease the resulting value; the item remains applicable, since the resulting valuations are always greater than or equal to the value. Consequently, i < j , asυ1>υ2. Using this and applying Item (i) of Lem. 34 again, we conclude by stating (D◦B)(υ1) =(D◦B)i+1(U0)≥ (D◦B)j+1(U0) = (D◦B)(υ2). Theorem 37 (Soundness and completeness - Proof in App. C-D).ForCSGs Alg. 1, using Alg. 3 as DEFLATE , pro- duces monotonic sequences Lunder- and Uover-approximating VR, and terminates for every ε >0. Proof sketch. Soundness and convergence of
|
https://arxiv.org/abs/2505.21087v1
|
lower bounds is classical [17, Thm. 1], and our algorithm does not modify the computation of under-approximations. The soundness of the upper bounds is immediate from Lem. 35, since all upper bounds computed by the algorithm are valid, and thus greater or equal than the value. Proving the convergence of the upper bounds is the main challenge. First, in Lem. 52 we provide the technical proof that the upper bound in Alg. 1 indeed converges to a fixpoint, using that the operators are order-preserving (Lem. 36) and arguments from lattice theory. Then, we use the same idea we have utilized in the proofs of Thms. 9 and 21: We assume for contradiction that there exists a state where the difference between the fixpoint of the upper bound in Alg. 1 and the true value is strictly greater than zero. The states with the largest such difference contain a BEC (by Thm. 21), that eventually will be found and deflated; since deflation depends on the outside of the BEC , this decreases the upper bound. This causes a contradiction to the fact that the upper bound has converged to a fixpoint. Consequently, there can be no state with a positive difference, and the upper bounds converge, too. VI. C ONCLUSION AND FUTURE WORK We have introduced a convergent over-approximation for concurrent stochastic games with reachability and safety objectives, thus giving value iteration the first sound stopping criterion and turning it into an anytime algorithm. Since the games are concurrent and ( ε-)optimal strategies may need to be randomized, we could not use the technique of simple end components of [18]. Instead, we identify bloated end components where the play can get stuck forever and recursively deflate the over-approximations of these states to the best possible value attainable upon leaving the end component. We leave an efficient implementation for future work as an extension of the standard model checker PRISM -GAMES [32]. 12 REFERENCES [1]Pranav Ashok, Krishnendu Chatterjee, Przemysław Daca, Jan K ˇret´ınsk´y, and Tobias Meggendorfer. Value Iteration for Long-Run Average Reward in Markov Decision Processes. In Rupak Majumdar and Viktor Kun ˇcak, editors, Computer Aided Verification . Springer International Publishing, 2017. [2]Christel Baier and Joost-Pieter Katoen. Principles of Model Checking . MIT Press, April 2008. [3]Christel Baier, Joachim Klein, Linda Leuschner, David Parker, and Sascha Wunderlich. Ensuring the Reliability of Your Model Checker: Interval Iteration for Markov Decision Processes. In Rupak Majumdar and Viktor Kunˇcak, editors, Computer Aided Verification . Springer International Publishing, 2017. [4]Benjamin Bordais, Patricia Bouyer, and St ´ephane Le Roux. Subgame Optimal Strategies in Finite Concurrent Games with Prefix-Independent Objectives. In Orna Kupferman and Pawel Sobocinski, editors, Founda- tions of Software Science and Computation Structures , pages 541–560. Springer Nature Switzerland, 2023. [5]Tom´aˇs Br´azdil, Krishnendu Chatterjee, Martin Chmelik, V ojt ˇech Forejt, Jan K ˇret´ınsk`y, Marta Kwiatkowska, David Parker, and Mateusz Ujma. Verification of Markov decision processes using learning algorithms. In International Symposium on Automated Technology for Verification and Analysis , pages 98–114. Springer, 2014. [6]Tom´aˇs Br´azdil, Krishnendu Chatterjee, Martin Chmel ´ık, V ojt ˇech Forejt, Jan K ˇret´ınsk´y, Marta
|
https://arxiv.org/abs/2505.21087v1
|
Kwiatkowska, David Parker, and Mateusz Ujma. Verification of Markov Decision Processes Using Learning Algorithms. In Franck Cassez and Jean-Fran c ¸ois Raskin, editors, Automated Technology for Verification and Analysis . Springer International Publishing, 2014. [7]Krishnendu Chatterjee, Luca de Alfaro, and Thomas A Henzinger. Termination criteria for solving concurrent safety and reachability games. InProceedings of the twentieth annual ACM-SIAM symposium on Discrete algorithms , pages 197–206. SIAM, 2009. [8]Krishnendu Chatterjee, Luca De Alfaro, and Thomas A. Henzinger. Qualitative concurrent parity games. ACM Transactions on Computational Logic , July 2011. [9]Krishnendu Chatterjee, Luca de Alfaro, and Thomas A Henzinger. Strategy improvement for concurrent reachability and safety games. arXiv preprint arXiv:1201.2834 , 2012. [10] Krishnendu Chatterjee, Luca de Alfaro, and Thomas A Henzinger. Strat- egy improvement for concurrent reachability and turn-based stochastic safety games. Journal of computer and system sciences , 79(5):640–657, 2013. [11] Krishnendu Chatterjee and Thomas A. Henzinger. Value Iteration. In Orna Grumberg and Helmut Veith, editors, 25 Years of Model Checking: History, Achievements, Perspectives . Springer, 2008. [12] Costas Courcoubetis and Mihalis Yannakakis. The complexity of probabilistic verification. Journal of the ACM , 42, July 1995. [13] B. A. Davey and H. A. Priestley. Introduction to Lattices and Order . Cambridge University Press, Cambridge, 2 edition, 2002. [14] Luca De Alfaro. How to specify and verify the long-run average behaviour of probabilistic systems. In Proceedings. Thirteenth Annual IEEE Symposium on Logic in Computer Science (Cat. No. 98CB36226) , pages 454–465. IEEE, 1998. [15] Luca de Alfaro and Thomas A Henzinger. Concurrent omega-regular games. In Logic in Computer Science, 2000. Proceedings. 15th Annual IEEE Symposium on , pages 141–154. IEEE, 2000. [16] Luca De Alfaro, Thomas A Henzinger, and Orna Kupferman. Concurrent reachability games. Theoretical Computer Science , 2007. [17] Luca de Alfaro and Rupak Majumdar. Quantitative solution of omega- regular games. Journal of Computer and System Sciences , 68(2):374 – 397, 2004. Special Issue on STOC 2001. [18] Julia Eisentraut, Edon Kelmendi, Jan K ˇret´ınsk´y, and Maximilian Weininger. Value iteration for simple stochastic games: Stopping criterion and learning algorithm. Information and Computation , 2022. [19] Julia Eisentraut, Jan K ˇret´ınsk´y, and Alexej Rotar. Stopping Criteria for Value and Strategy Iteration on Concurrent Stochastic Reachability Games, 2019. [20] Kousha Etessami and Mihalis Yannakakis. Recursive Concurrent Stochastic Games. Logical Methods in Computer Science , V olume 4, Issue 4, November 2008. [21] H. Everett. RECURSIVE GAMES , pages 47–78. Princeton University Press, 1957.[22] Søren Kristoffer Stiil Frederiksen and Peter Bro Miltersen. Approximating the Value of a Concurrent Reachability Game in the Polynomial Time Hierarchy. Springer, 2013. [23] Serge Haddad and Benjamin Monmege. Interval iteration algorithm for mdps and imdps. Theoretical Computer Science , 735:111–131, 2018. [24] Serge Haddad and Benjamin Monmege. Interval Iteration Algorithm for MDPs and IMDPs. Theoretical Computer Science , 2018. [25] Kristoffer Arnsfelt Hansen, Rasmus Ibsen-Jensen, and Peter Bro Mil- tersen. The Complexity of Solving Reachability Games Using Value and Strategy Iteration. Theory of Computing Systems , 55, 2014. [26] Kristoffer Arnsfelt Hansen, Michal Koucky, Niels Lauritzen, Peter Bro Miltersen, and Elias P. Tsigaridas. Exact algorithms for solving stochastic games: Extended
|
https://arxiv.org/abs/2505.21087v1
|
abstract. In Proceedings of the Forty-Third Annual ACM Symposium on Theory of Computing , 2011. [27] Arnd Hartmanns, Sebastian Junges, Tim Quatmann, and Maximilian Weininger. A Practitioner’s Guide to MDP Model Checking Algorithms. In Sriram Sankaranarayanan and Natasha Sharygina, editors, Tools and Algorithms for the Construction and Analysis of Systems . Springer Nature Switzerland, 2023. [28] Frederick S. Hillier and Gerald J. Lieberman. Introduction to Operations Research . McGraw-Hill Higher Education, 2010. [29] Jan K ˇret´ınsk´y, Tobias Meggendorfer, and Maximilian Weininger. Stop- ping Criteria for Value Iteration on Stochastic Games with Quantitative Objectives. In 2023 38th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS) , 2023. [30] Jan K ˇret´ınsk´y, Emanuel Ramneantu, Alexander Slivinskiy, and Maximil- ian Weininger. Comparison of algorithms for simple stochastic games. Information and Computation , 2022. [31] P. R. Kumar and T. H. Shiau. Existence of Value and Randomized Strategies in Zero-Sum Discrete-Time Stochastic Dynamic Games. SIAM Journal on Control and Optimization , 19, 1981. [32] Marta Kwiatkowska, Gethin Norman, David Parker, and Gabriel Santos. Automated verification of concurrent stochastic games. In International Conference on Quantitative Evaluation of Systems , pages 223–239. Springer, 2018. [33] Marta Kwiatkowska, Gethin Norman, David Parker, and Gabriel Santos. PRISM-games 3.0: Stochastic Game Verification with Concurrency, Equilibria and Time. In Shuvendu K. Lahiri and Chao Wang, editors, Computer Aided Verification . Springer International Publishing, 2020. [34] Donald A. Martin. The determinacy of blackwell games. 1998. [35] Michael Maschler, Shmuel Zamir, and Eilon Solan. Game Theory . Cambridge University Press, June 2020. [36] John Nash. Non-Cooperative Games. 1951. [37] John F. Nash. Equilibrium points in n-person games. 1950. [38] Miquel Oliu-Barton. New algorithms for solving zero-sum stochastic games. Mathematics of Operations Research , 46(1):255–267, 2021. [39] T. Parthasarathy. Discounted, positive, and noncooperative stochastic games. International Journal of Game Theory , 2(1):25–37, Dec 1973. [40] Grant Olney Passmore and Paul B. Jackson. Combined decision techniques for the existential theory of the reals. In Calculemus/MKM , volume 5625 of Lecture Notes in Computer Science , pages 122–137. Springer, 2009. [41] Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming . Wiley, 2009. [42] T. E. S. Raghavan and J. A. Filar. Algorithms for stochastic games — A survey. Zeitschrift f ¨ur Operations Research , 35(6):437–472, 1991. [43] G. H. R. Santos. Automatic Verification and Strategy Synthesis for Zero- Sum and Equilibria Properties of Concurrent Stochastic Games . PhD thesis, University of Oxford, 2020. [44] Dana Scott. Outline of a Mathematical Theory of Computation. Kiberneticheskij Sbornik. Novaya Seriya , 14, January 1977. [45] Stephen Simons. Minimax Theorems and Their Proofs. In Ding-Zhu Du and Panos M. Pardalos, editors, Minimax and Applications . Springer US, 1995. [46] Anton Wijs, Joost-Pieter Katoen, and Dragan Bo ˇsnaˇcki. Efficient GPU algorithms for parallel decomposition of graphs into strongly connected and maximal end components. Formal Methods in System Design , 48(3), June 2016. 13 APPENDIX A FURTHER DEFINITIONS AND CONCEPTS A. Solving Matrix Games Given a valuation υfor all states in a two-player CSG , we can update the valuation at state sby solving a Linear Program (LP). Let the
|
https://arxiv.org/abs/2505.21087v1
|
matrix game played at sbe given by a matrix Z∈Ql×m, where l(resp. m) is the number of actions available to player R(resp.S) at the state s. Then the LP that yields the value is the following [43]: Maximize υ(s)subject to the constraints: υ(s)≤x1·z1j+···+xl·zljfor1≤j≤m xi≥0for1≤i≤l 1 =x1+···+xl where zij=Z(s)(i, j) =P s′∈Sδ s, ai, bj s′ ·υ(s′), and xiis the probability that player Rwill take action i. Thus, by solving the LP, we not only obtain the value but also the optimal local strategy for player R. B. Domination of Strategies Definition 38 (Domination [35, Def. 4.12]) .Given a valuation υ, a state s∈Sand sets of strategies R(s)andS(s)available at the state sfor player RandS, respectively. -A strategy ρ∈ R(s)isweakly dominated if there exists another strategy ρ′∈ R(s)satisfying the following two conditions: •infσ∈S(s)B(υ)(s,ρ, σ)≤infσ∈S(s)B(υ)(s,ρ′, σ), and •∃σ′∈ S(s)such that B(υ)(s,ρ, σ′)<B(υ)(s,ρ′, σ′). -Dually, a strategy σ∈ S(s)isweakly dominated if there exists another strategy σ′∈ S(s)satisfying the following two conditions: •supρ∈R(s)B(υ)(s,ρ, σ)≥supρ∈R(s)B(υ)(s,ρ, σ′), and •∃ρ′∈ R(s)such that B(υ)(s,ρ′, σ)>B(υ)(s,ρ′, σ′). APPENDIX B FURTHER EXAMPLES A. Best Exit Example 39 (Deflating BECs) .Consider the CSG depicted in Fig. 2. At each state both players can choose among two actions. The game contains one MEC C:={s0, s1, s2}. The matrix games played at the states s0, s1ands2with respect to an upper bound Uk(where k∈N) are defined as follows. ZUk(s0) :=d1 d2 Uk(s0) Uk(s1) a1 1 2(Uk(s0) +α) α a2, ZUk(s1) =e1 e2 γ β b1 1 2(Uk(s1) +Uk(s2))Uk(s0) b2, ZUk(s2) =f1 f2 Uk(s0)1 2(Uk(s0) +Uk(s1)) c1 γ Uk(s2) c2. s0s1 s2α γβ a1d2 a1d1 a2d2 a2d1b1e2 b2e2b2e1 b1e1 c2f1c1f1c1f2c2f2 Fig. 2: CSG with non-single-state EC. The clouds represent the irrelevant parts of the game. 14 TABLE III: Full example of the BVI algorithm for the CSG depicted in Fig. 2. iis the i-th iteration of the main loop in Alg. 1, jis the j-th iteration of the while-loop in Alg. 3. Cis the set of states among which a BECXis searched. The best exit from Xis underlined. i j Ui(s0)Ui(s1)Ui(s2) C X 00 1 1 1 {s0, s1, s2} { s0, s1, s2} 1 0.9 0.9 0.9 {s0, s1} { s0, s1} 2 0.7 0.7 0.9 {s0} { s0} 3 0.2 0.7 0.9 ∅ ∅ 10 0.2 0.7 0.9 {s0, s1, s2} { s0} 1 0.2 0.7 0.9 {s1, s2} { s2} 2 0.2 0.7 0.45 {s1} ∅ s0 s1 α γβa1d2a1d1 a2d2 a2d1b1e2 b2e2b1e1 Fig. 3: The resulting game after removing s2from the game illustrated in Fig. 2. Fork= 0,α= 0.2, β= 0.7andγ= 0.9and the initialization U0(s0) =U0(s1) =U0(s2) = 1 , the matrix games look as follows. ZU0(s0) =d1 d2 1 1 a1 0.6 0 .2 a2, ZU0(s1) =e1 e2 0.9 0 .7 b1 1 1 b2, ZU0(s2) =f1 f2 1 1 c1 0.9 1 c2. Since at each state, there exist hazardous and trapping strategies, the three states form a BEC . To estimate the values of each exit, we need to solve three sub-matrix games played at each state of the BEC . The best exit value is then the maximum of the
|
https://arxiv.org/abs/2505.21087v1
|
three solutions. The three linear programs that solve the three sub-matrix games are the following. max U1(s0)s.t. U1(s0)≤0.6·x2 U1(s0)≤0.2·x2 x2= 1max U1(s1)s.t. U1(s1)≤0.9·x1 U1(s1)≤0.7·x1 x1= 1max U1(s2)s.t. U1(s2)≤0.9·x2 U1(s2)≤1·x2 x2= 1 Here x1andx2are the probabilities that player Rchooses the first or second action available at the corresponding state. The best exit is max{0.2,0.7,0.9}= 0.9, therefore, the upper bound of all the three states can be safely reduced to 0.9. △ B. BVI Algorithm - Full Example Example 40 (BVI) .Consider the CSG depicted in Fig. 2, where α= 0.2, β= 0.7, and γ= 0.9. The matrix games played at each state are given by the matrices ZUk(s0),ZUk(s1)andZUk(s2)defined as in Example 39. We choose ε= 0.001. Table III summarizes the steps of the algorithm. Initially it holds that U0(s0) =U0(s1) =U0(s2) = 1 . Since C={s0, s1, s2} is aMEC , it will be found by Alg. 3. Within CtheBECX=Cis found. The best exit value from Xis calculated, i.e. the three sub-matrix games are solved and the best exit value needed for deflating the BEC is the maximum among the solutions: Zexit U0(s0) :=d1 d2 ( )0.6 0 .2 a2Zexit U0(s1) =e1 e2 ( )0.9 0 .7 b1 Zexit U0(s2) =f1 f2 ( )0.9 1 c1 The best exit from Xiss2(underlined in Table III) and the value of the exit is 0.9, so in the next step the over-approximations of those three states are deflated to 0.9 and the best exit is removed from C. The resulting sub-game is depicted in Fig. 3. For 15 s0 αa1d1 a2d2 a2d1 Fig. 4: The resulting game after removing s1from the game illustrated in Fig. 3. s0 s1βα γ(□,□)(a,d)(b,e) (c,e) (a,e)(b,d)(c,d) Fig. 5: Monotonicity is not guaranteed in general. U′(s0) =U′(s1) =U′(s2) = 0 .9, the set C={s0, s1}contains the BECX=C. Now, two linear programs need to be solved to solve the two sub-matrix games. Zexit U′(s0) :=d1 d2 ( )0.55 0 .2 a2Zexit U′(s1) =e1 e2 ( )0.9 0 .7 b1 The best exit is s1and the value of the exit 0.7, thus, the upper bounds of s0ands1are reduced to 0.7 and s1is removed from C. The resulting sub-game is depicted in Fig. 4. Finally, for U′′(s0) = 0 .7, the set C={s0}contains the BECX={s0}. To deflate it, we need to solve the following exiting sub-game. Zexit U′′(s0) :=d1 d2 ( )0.45 0 .2 a2 The best exit value is 0.2. The upper upper bound of s0is reduced to 0.2. After removing the state from Xwe obtain an empty set and the algorithm can proceed with the next MEC . Since, we assumed that the game has only one MEC , the deflating phase is finished. As U0−L0> εholds, the next iteration of the algorithm is executed. The Bellman update returns the same valuation for all states, i.e. U1(s0) = 0 .2,U1(s1) = 0 .7andU1(s2) = 0 .9. Alg. 3 again finds the MEC C={s0, s1, s2}that contains the two BECs X1:={s0}andX1:={s2}. First X={s0}is deflated to 0.2 and next the BEC X={s2}is deflated. For this, the following exiting sub-game needs to be solved. Zexit U1(s0)
|
https://arxiv.org/abs/2505.21087v1
|
:=d2 ( )0.45 a2 Notice, that here we only consider staying strategies for player S, which is why we only consider action d2. Therefore, the best exit value is 0.45. After removing s0ands2fromX, no further BECs are contained in the MEC . Since now U1−L1< ε holds, the BVI algorithm terminates. △ C. Non-Monotonicity of Deflation Example 41.Consider the CSG depicted in Fig. 5. The variables α, β andγare placeholders indicating that upon leaving a certain valuation is obtained. The set of states {s0, s1}is an EC. We consider two valuations, UandU′, such that U≥U′and the EC{s0, s1}is for both upper bounds a BEC . The upper bound Uassigns the following valuations: U(s0) = 0 .6,U(s1) = 0 .6,U(α) = 0 .8,U(β) = 0 .5,U(γ= 0.55). The upper bound U′assigns the following valuations: U′(s0) = 0 .6,U′(s1) = 0 .45,U′(α) = 0 .5,U′(β) = 0 .5,U′(γ= 0.55). Then, the matrix games played for the two valuations at state s1are given by the following matrices: ZU(s1):=d e !0.8 0 .6 a 0.6 0 .5 b 0.55 0 .5 c,ZU′(s1):=d e !0.5 0 .45 a 0.6 0 .5 b 0.55 0 .5 c 16 Then, under Uthe strategy {a7→1}is a hazardous strategy and the exit value is 0.5. In contrast for U′the strategy {b7→1}is hazardous and the exit value is 0.55. Thus, for arbitrary U,U′∈R|S|it might happen that exitValU(X)<exitValU′(X)for some BECXalthough U≥U′holds. Intuitively, this arises because the sub-EC which forms a BEC changes when the relative ordering of exits is modified. However, the problem cannot occur when considering valid over-approximations (see Lem. 36) because then the upper bounds decrease in a well-behaved way when the relative ordering of the exits changes. △ APPENDIX C PROOFS FOR SEC. III Throughout the whole Appendix C, when proving convergence of BVI, we utilize definitions and theorems from lattice and fixpoint theory. Thus, we first briefly recall some necessary definitions (adjusting notation to our work) and theorems. Definition 42 (Ordered set [13, Chapter 1.2]) .A set Pequipped with a relation ⪯:P×Pis called an ordered set if and only if⪯is reflexive, antisymmetric and transitive. Definition 43 (Directed set [13, Chapter 7.7]) .LetPbe an ordered set. A non-empty set D⊆Pisdirected if and only if for every pair of elements x, y∈Dthere exists z∈Dthat is a lower bound for both, formally z⪯xandz⪯y. Definition 44 (Complete partial order (CPO) [13, Chapter 8.1]) .An ordered set Pis a complete partially ordered set (CPO) if and only if (i)Phas a top element, ⊤:=infP∅, and (ii) for every directed set D⊆P, we have infPDexists. Definition 45 (Continuity [13, Chapter 8.6]) .LetPandQbe two CPOs. A mapping φ:P→Qiscontinuous if (i) for every directed set D⊆P, the subset φ(D)ofQis also directed, and (ii) it holds that φ(infD) =infφ(D) := inf{φ(x)|x∈D}. Definition 46 (Order-preserving [13, Chapter 1.34]) .LetPandQbe ordered sets. A map φ:P→Qisorder-preserving (also called monotone) if x⪯yinPimplies φ(x)⪯φ(y)inQ. Theorem 47 (Fixpoint Theorem [13, Chapter 8.15]) .LetPbe a complete partial order, let Fbe an order-preserving and continuous self-map on Pand define α:=supn≥0Fn(⊤). Then αis the greatest fixpoint of F, i.e. the largest element of P satisfying F(α) =α. We
|
https://arxiv.org/abs/2505.21087v1
|
remark that we inverted the definitions and the theorem: This is because we are interested in a greatest fixpoint, whereas the textbook [13] only speaks about least fixpoints. Inverting the comparator and replacing infwith supyields the original definitions. With only these changes, the proof of [13, Chapter 8.15] yields our modified claim Thm. 47. A. Convergence in the absence of end components We start by proving a technical lemma that is also useful for several future proofs: The over-approximation computed using only Bellman updates converges to a fixpoint. Lemma 48 (Upper bound converges to a fixpoint) .Let(Uk)k∈N0be the sequence of upper bounds computed by applying Eq.(3)on a CSG G. Let U⋆:= lim k→∞Ukbe the limit of the sequence. This limit is a fixpoint of the Bellman update, i.e. for all s∈S,B(U∗)(s) =U⋆(s). Proof. This lemma is a consequence of the fixpoint theorem we just recalled. Thus, we proceed as follows: We explain that the domain of Bis a CPO and prove that Bis order-preserving and continuous. Then, Thm. 47 yields that U⋆is a (namely the greatest) fixpoint. Complete partial order . The domain of Bare valuations, i.e. vectors [0,1]|S|mapping every state to a number. Thus, we define the set Pto be the set of all valuations. We use the standard point-wise comparisons as relation, i.e. υ1⪯υ2if and only if for all states s∈Swe have υ1(s)≤υ2(s). Thus, the top element ⊤is the function that maps all states to 1. For every directed setD, a greatest lower bound ℓ=infPDexists: Set ℓ(s) =infd∈Dd(s)for all s∈S. It is a lower bound, as by point-wise comparison, it is smaller than all valuations in D; it is the greatest lower bound, since picking a larger number for any state would not be a lower bound any more. Thus, the set consisting of valuations [0,1]|S|with this relation is a CPO. Order-preserving . Recall that the Bellman operator on a state is defined as follows: B(υ)(s):=sup ρ∈R(s)inf σ∈S(s)B(υ)(s,ρ, σ), where B(υ)(s,ρ, σ):=X (a,b)∈AX s′∈Sυ(s′)·δ s,a,b s′ ·ρ(s)(a)·σ(s)(b). 17 Eq. (3) lifts it to valuations by applying it state-wise. Hence, for every state, we apply an operation consisting of multiplications and summations, which are order-preserving. Thus, overall, the Bellman operator is order-preserving. Continuous . We just showed that the Bellman operator on valuations is an order-preserving self-map on the set Pof valuations. Then, [13, Lemma 8.7 (i)] yields that for every directed subset D⊆P, we have that B(D):={B(d)|d∈D}is a directed subset, which is Condition (i) of Def. 45. It remains to show Condition (ii): infd∈DB(d) =B(infD). Since the comparisons by the relation ⪯are performed point-wise, we have to prove that for all states s∈S, we have infd∈DB(d)(s) =B(infD)(s). Thus, fix an arbitrary state s∈S, and conclude using the following chain of equations. B(infD)(s) =sup ρ∈R(s)inf σ∈S(s)X (a,b)∈AX s′∈S inf d∈Dd(s′) ·δ s,a,b s′ ·ρ(s)(a)·σ(s)(b) (Unfolding definition of Bellman operator) =sup ρ∈R(s)inf σ∈S(s)inf d∈DX (a,b)∈AX s′∈S d(s′)·δ s,a,b s′ ·ρ(s)(a)·σ(s)(b) (Claim 1) =inf d∈Dsup ρ∈R(s)inf σ∈S(s)X (a,b)∈AX s′∈S d(s′)·δ s,a,b s′ ·ρ(s)(a)·σ(s)(b) (Claim 2) =inf d∈DB(d)(s). (Collapsing the Bellman operator definition) Claim 1: This step moves the infd∈Dout of the summation, which
|
https://arxiv.org/abs/2505.21087v1
|
is correct, since addition is a continuous operation. ▲ Claim 2: This step moves infd∈Dto the front, first utilizing that infima can be switched. Then, to switch infd∈Dandsupρ∈R(s), we make use of the Minimax Theorem [45] which states that for a function f:X×Y→Rthat is concave-convex, it holds thatsupx∈Xinfy∈Yf(x, y) =infy∈Ysupx∈Xf(x, y).fis concave-convex if fis concave for a fixed y∈Yand convex for a fixed x∈X. This holds, in particular, for bilinear functions, i.e. functions that are linear in both arguments. The function considered at this point is the following: f(ρ, d) = inf σ∈S(s)X (a,b)∈AX s′∈S d(s′)·δ s,a,b s′ ·ρ(s)(a)·σ(s)(b). This function is indeed bilinear, since addition and multiplication are linear functions. ▲ Overall, we have shown that the sequence (Uk)k∈N0is the result of applying an order-preserving, continuous function to the top-element of a complete partial order, and thus it converges to a (the greatest) fixpoint. Theorem 9 (Convergence without ECs — Proof in App. C-A).LetGbe a CSG where all ECs are trivial, i.e. for every ECC we have C⊆WS∪T. Then, the over-approximation using only Eq. (3)converges, i.e. lim k→∞Uk=VR. Proof. This proof is an extension of the proof of [18, Theorem 1] for turn-based games to the concurrent setting. The underlying idea is the same, and can be briefly summarized as follows: We assume towards a contradiction that U⋆̸=VR, and find a set X that maximizes the difference between upper bound and value. Every pair of strategies leaving the set decreases the difference. However, VRandU⋆are fixpoints of the Bellman updates, from [17, Theorem 1] and Lem. 48, respectively. Consequently, optimal strategies need to remain in the set. However, in the absence of ECs, optimal strategies have to leave the set, which yields a contradiction and proves that U⋆=VR. Main challenge . The key difference to the proof of [18, Theorem 1] is that we cannot argue about actions anymore, but have to consider mixed strategies. This significantly complicates notation. Additionally, and more importantly, the former proof crucially relied on the fact that for a state of player R, we know that its valuation is at least as large as that of any action, and dually for a state of player S, its valuation is at most as large as that of any action. In the concurrent setting, this is not true. The optimal strategies need not be maximizing nor minimizing the valuation and, moreover, they can be maximizing for one valuation and minimizing for another. Thus, we found a more general, and in fact simpler, way of proving that “no state in Xcan depend on the outside” [18, Statement 5] and deriving the contradiction. The crucial insight is that we can fix locally optimal strategies and then apply Claim 4. 18 Notation for Bellman operator . Before we begin the formal proof, we establish a condensed notation for a number of terms in the Bellman operator: rest(s, a, b, s′,ρ, σ) =δ s,a,b s′ ·ρ(s)(a)·σ(s)(b). Thus, the Bellman operator for some valuation υand pair of strategies (ρ, σ)simplifies to B(υ)(s,ρ, σ) =X (a,b)∈AX s′∈Sυ(s′)·rest(s, a, b, s′,ρ, σ). The set Xwith maximum
|
https://arxiv.org/abs/2505.21087v1
|
difference . We define the difference of a state s∈Sas∆(s):=U⋆(s)−VR(s). Recall that VRis the least fixpoint and U⋆the greatest fixpoint of B. Hence, we know that ∆(s)≥0for all states. Further, since we assume for contradiction that U⋆̸=VR, there exist states with ∆(s)>0. Thus, we can find a non-empty set of states with maximum difference: X:={s∈S|∆(s) =maxs∈S∆(s)}. A leaving pair of strategies decreases the difference . Let s∈ X be a state in X. Let (ρ, σ)∈(R(s)× S(s))be a pair of strategies such that (s,ρ, σ)leaves X. Then, following this pair of strategies for one step decreases the difference, formally B(∆)(s,ρ, σ)<∆(s). (5) We prove this using the following chain of equations: B(∆)(s,ρ, σ) =X (a,b)∈AX s′∈S∆(s′)·rest(s, a, b, s′,ρ, σ) (Definition of Bellman operator) <∆(s). (Claim 3) Claim 3: By assumption, we have that there exists a t∈S\ X such that this tis reached with positive probability under the exiting strategies, i.e.P (a,b)∈Arest(s, a, b, t, ρ, σ)>0. For this toutside of X, we have ∆(t)<∆(s), since Xis defined as the set of all states with maximum difference. Further, no state can have a difference larger than ∆(s). Furthermore, the remaining terms in the sum that the differences ∆(s′)are multiplied with are a probability distribution, formallyP (a,b)∈AP s′∈Srest(s, a, b, s′,ρ, σ) = 1 . Thus, if all differences were equal to ∆(s), the sum would yield ∆(s). As one of the summands is smaller than ∆(s)and all others are at most ∆(s), we get that the sum has to be smaller than ∆(s).▲ Without non-trivial ECs,Xmust be left . We have that X ∩(WS∪T) =∅: The difference is 0 for target states because both the value and the upper bound are equal to 1; and the difference is 0 for the sure winning region of player S, since the upper bound and value are equal to 0 (see Eq. (3)). Thus, since by assumption there are no ECs inS\(WS∪T), the set Xcannot contain an EC. Consequently, there exists a state s∈ X such that for all pairs of available actions (a, b)∈ΓR(s)×ΓS(s), we have Supp(δ s, a, b )∩(S\ X)̸=∅, i.e. there is a successor state outside of X. This is the case because, if all states had a pair of actions that stays in X, then there exists a pair of strategies that keeps the play inside a subset of X, which would then form an EC. For a formal proof, we refer to [18, Lemma 2]. Note that, while their proof is for turn-based games, the definition ofECis a graph theoretic notion where, intuitively, the players “work together” (formally, it is only about the existence of an edge in the underlying hypergraph), and thus the proof is applicable to CSGs , too. In the following, we let sdenote such a state where all strategies are leaving. Notation for locally optimal strategies . For any state and valuation, locally optimal strategies for both players exist. We establish a shorthand for the locally optimal strategies in state s(the one obtained in the previous step) with respect to U⋆andVR. For playerRand valuation
|
https://arxiv.org/abs/2505.21087v1
|
U⋆, we denote a locally optimal strategy by ρU∈arg max ρ∈R(s)inf σ∈S(s)B(U⋆)(s,ρ, σ). Similarly we denote a locally optimal strategy of player Swith respect to U⋆by σU∈arg min σ∈S(s)sup ρ∈R(s)B(U⋆)(s,ρ, σ). Analogously, we define locally optimal strategies with respect to VR, namely ρVandσV, obtained by replacing U⋆with VR in the above definition. 19 Deriving the contradiction . Recall that s∈ X is a state where all available pairs of actions, and thus all strategies, leave X. We derive the contradiction ∆(s)<∆(s). ∆(s) =U⋆(s)−VR(s) (Definition of ∆) =B(U⋆)(s)−VR(s) (U⋆is fixpoint by Lem. 48) =B(U⋆)(s,ρU, σU)−VR(s) ((ρU, σU)locally optimal w.r.t. U⋆) ≤B(U⋆)(s,ρU, σV)−VR(s) (Claim 4) =B(U⋆)(s,ρU, σV)−B(VR)(s) (VRis fixpoint [17, Theorem 1]) =B(U⋆)(s,ρU, σV)−B(VR)(s,ρV, σV) ((ρV, σV)locally optimal w.r.t. VR) ≤B(U⋆)(s,ρU, σV)−B(VR)(s,ρU, σV) (Claim 4) =B(∆)(s,ρU, σV) (Claim 5) <∆(s). ((s,ρU, σV)leaves Xand Eq. (5)) Claim 4: This argument is used in two steps in the above chain of equations. For the first usage, observe that σVcan be at most as good as the optimal σU. More formally, recall σUwas chosen as the arg min of the Bellman operator with respect to U⋆. Thus, B(U⋆)(s,ρU, σU)≤B(U⋆)(s,ρU, σV). For the second usage, by the analogous argument we have B(VR)(s,ρV, σV)≥B(VR)(s,ρU, σV). Since this term is the subtrahend of the subtraction, the overall expression can only become greater. ▲ Claim 5: This step follows from expanding the definition of the Bellman operator, rearranging the sums and collapsing the definition of Bellman operator. Formally, for all states q and strategy pairs (ρ, σ)it holds that B(∆)(q,ρ, σ) =X (a,b)∈AX s′∈S∆(s′)·rest(s, a, b, s′,ρ, σ) (Definition of Bellman operator) =X (a,b)∈AX s′∈S U⋆(s′)−VR(s′) ·rest(s, a, b, s′,ρ, σ) (Definition of ∆) = X (a,b)∈AX s′∈SU⋆(s′)·rest(s, a, b, s′,ρ, σ) − X (a,b)∈AX s′∈SVR(s′)·rest(s, a, b, s′,ρ, σ) (Splitting the sum) =B(U⋆)(q,ρ, σ)−B(VR)(q,ρ, σ). (Definition of Bellman operator) ▲ Summary . Starting from the assumption that U⋆̸=VR, we derived that there exists a set of states Xwhere the difference ∆ between upper bound and value is maximized. Further, a pair of strategies leaving Xdecreases this difference. However, since there are no ECs in X, there has to be a state where the optimal strategies for VRandU⋆leave, which allows us to derive a contradiction. Thus, the initial assumption is false, and we have U⋆=VR. B. Convergence without Bloated End Components Lemma 14 (Negating Weak Domination — Proof in App. C-B).Letυbe a valuation, s∈Sa state, R1,R2,R′⊆ R(s)and S1,S2,S′⊆ S(s)sets of local strategies. If for some sets of strategies we do nothaveR2≺υ,S′R1, then we have R1⪯υ,S′R2. Analogously, not S2≺υ,R′S1implies S1⪯υ,R′S2. Proof. We only provide the proof for Player R, as the other one is analogous by exchanging the names of the strategy sets, replacing RwithSand vice versa. We assume that we do not have R2≺υ,S′R1under the set of counter-strategies S′with respect to υ. Writing out the definition, this means that ∀ρ1∈ R1.∃ρ2∈ R2: (i)infσ∈S′B(υ)(s,ρ2, σ)>infσ∈S′B(υ)(s,ρ1, σ), or (ii)∀σ′∈ S′we have B(υ)(s,ρ2, σ′)≥B(υ)(s,ρ1, σ′). Our goal is to have that R1⪯υ,S′R2, formally: Formally, ∃ρ2∈ R2.∀ρ1∈ R1: inf σ∈S′B(υ)(s,ρ1, σ)≤inf σ∈S′B(υ)(s,ρ2, σ). 20 Both conditions of negated weak dominance imply our goal.
|
https://arxiv.org/abs/2505.21087v1
|
The only remaining problem is the order of quantifiers. However, the choice of ρ2does not depend on ρ1, and we can always pick ρ2as the strategy that maximizes infσ∈S′B(υ)(s,ρ2, σ). Thus, we can exchange the order of quantifiers and prove our goal. Lemma 20 (Negating Bloated — Proof in App. C-B ).If an ECX ⊆ S\(T∪WS)is not bloated for a valuation υ, then there exists a state s∈ X that has a locally optimal strategy that is leaving, formally ∃ρ∈ RL(X, s).R(s)⪯υ,S(s){ρ}. Proof. Since X′is not a BEC , we know that at some s∈ X′it holds that Hazard U⋆(X′, s) =∅. Fix sto be such a state. Every strategy ρ′∈ R(S)must violate at least one of the 3 conditions of Def. 16. We write out the negations: (i)ρ′/∈ RL(X′, s), i.e. the strategy is leaving ρ′∈ RL(X′, s). (ii)We do not have R(s)\{ρ′}⪯S(s){ρ′}. By contraposition of Lem. 14, this implies {ρ′}≺S(s)R(s)\{ρ′}, so the strategy is sub-optimal. (iii) We do not have RL(X′, s)≺S(s){ρ′}. By Lem. 14, this implies {ρ′}⪯S(s)RL(X′, s), i.e. there are leaving strategies that are not worse than ρ′. Note that in particular, this implies that RL(X′, s)is non-empty. Our assumption gives us the disjunction over the three violated conditions. We proceed by a case distinction, always assuming that all strategies violate a certain condition, which allows us to prove our goal, or, if there exists a strategy satisfying the condition, we continue with the next one. Case “Not (i)”: If all strategies violate Condition (i), that means all strategies are leaving, i.e. RL(s) =R(s). Thus, since there always exist optimal strategies, by picking an optimal ρ∈ R we naturally have R(s)⪯υ,S(s){ρ}. Case “(i), but not (iii)”: We assume there exists strategies that satisfies Condition (i), but all strategies that satisfy it violate Condition (iii). This means that for every non-leaving strategy, the set of leaving strategies is not worse than it. As this holds for all non-leaving strategies, we have RL(s)⪯υ,S(s)RL(s). Using that R(s) =RL(s)∪ RL(s)and the set of leaving strategies trivially is not worse than itself, we obtain: R(s)⪯υ,S(s)RL(s). Moreover, the set of leaving strategies is non-empty, since the definition of not worse requires that there exists a strategy in the right-hand set. This proves our goal. Case “(i) and (iii), but not (ii)”: We assume there exists strategies that satisfy Condition (i) and (iii), but all these strategies violate Condition (ii). This case cannot happen, and below we derive a contradiction. This completes our cast distinction, since every strategy has to violate at least one of the three conditions. Our assumption is that there exists a non-leaving strategy that weakly dominates the set of all leaving strategies. However, every such strategy is suboptimal, as by violating Condition (ii) it is weakly dominated by all other strategies. This is a contradiction, because then there are no optimal strategies. More formally, if we assume the optimal strategy is leaving, this is a contradiction, because there exists a non-leaving strategy dominating the set of leaving strategies. And if we assume the optimal strategy is non-leaving, this is a contradiction,
|
https://arxiv.org/abs/2505.21087v1
|
because every non-leaving strategy is weakly dominated by the set of others. Theorem 21 (Non-convergence implies BECs — Proof in App. C-B ).LetU⋆:= lim k→∞Ukbe the limit of the na ¨ıve upper bound iteration (Eq. (3)) on the CSG G. If VI from above does not converge to the value in the limit, i.e. U⋆>VR, then the CSG Gcontains a BEC in S\(T∪WS)with respect to U⋆. Proof. Intuition and outline: . This proof builds on the proof of Thm. 9. There, we constructed a set Xmaximizing the difference between U⋆andVRand showed that if there is a pair of optimal strategies leaving leaving X, then we can derive a contradiction: The upper bound decreases, which contradicts the fact that it is a fixpoint. In the context of the other proof, that allowed us to show that without ECs, VI converges, because without ECs it is impossible to have a set of states where all optimal strategies stay in that set. In the presence of ECs, states can indeed have a positive difference between U⋆andVR, see e.g. Ex. 6. Our goal is to prove that at least one of these ECs is bloated. Thus, we assume for contradiction that no ECis bloated under U⋆. Thus, by Lem. 20, there is an optimal leaving strategy for player R. Using that, we can repeat the argument from Thm. 9, showing that in this case U⋆would decrease. Again, this is a contradiction because it is a fixpoint of applying Bellman updates (Lem. 48). Thus, the initial assumption that no EC is bloated is false, and we can conclude that there exists a BEC. Establishing the Context . As in the proof of Thm. 9, let X:={s∈S|∆(s) =maxs∈S∆(s)}be the set of states with maximum difference. We denote the maximum by ∆max, and our assumption yields that ∆max>0. Note that this implies thatX ∩(WS∪T) =∅, since for those states, their value is set correctly by initialization, and their difference is 0. Thus, if we find a BEC that is a subset of X, it also satisfies the additional condition of being non-trivial, i.e. not in (WS∪T). The contraposition of Thm. 9 yields that there has to be an EC in X. Bottom MECs . To derive the contradiction, in the following, we consider ECs with a particular property, namely ECs that are bottom in X. AMECX′is bottom in Xif the successors of a pair of strategies that leaves the MEC reaches states outside of Xwith positive probability. Intuitively, a bottom MEC inXis aMEC , such that after leaving it, none of the successors is 21 part of another MEC inX. One can compute such ECs using the MEC decomposition of X, ordering them topologically and picking one at the end of a chain. LetX′be a bottom MEC withX′⊆ X .X′exists because by assumption Xcontains at least one EC, thus, there also has to exist an EC that is bottom in X. Optimal Leaving Strategies in Non- BECs . We use the assumption for contradiction to say that X′is not bloated with respect toU⋆. Then, using Lem. 20, we know
|
https://arxiv.org/abs/2505.21087v1
|
that there exists a state s∈ X′where an optimal strategy ρUexists that is leaving X′. Moreover, since X′is a bottom MEC in X, we also have that it is leaving with respect to X. Deriving the Contradiction . Using these facts, we can exactly repeat the argument used in the proof of Thm. 9 under the paragraph-heading “Deriving the Contradiction”. Recall we denote locally optimal strategies with respect to U⋆byρU, σU(and we just proved ρUis leaving for all counter-strategies), and analogously locally optimal strategies with respect to VR, byρV andσV. We highlight that Eq. (5), Claim 4 and Claim 5 from Thm. 9 are applicable in the context of this proof, too. ∆(s) =U⋆(s)−VR(s) (Definition of ∆) =B(U⋆)(s)−VR(s) (U⋆is fixpoint by Lem. 48) =B(U⋆)(s,ρU, σU)−VR(s) ((ρU, σU)locally optimal w.r.t. U⋆) ≤B(U⋆)(s,ρU, σV)−VR(s) (Claim 4 in Thm. 9) =B(U⋆)(s,ρU, σV)−B(VR)(s) (VRis fixpoint [17, Theorem 1]) =B(U⋆)(s,ρU, σV)−B(VR)(s,ρV, σV) ((ρV, σV)locally optimal w.r.t. VR) ≤B(U⋆)(s,ρU, σV)−B(VR)(s,ρU, σV) (Claim 4 in Thm. 9) =B(∆)(s,ρU, σV) (Claim 5 in Thm. 9) <∆(s). ((s,ρU, σV)leaves Xand Eq. (5)) Now that we have derived a contradiction, our initial assumption that all ECs are not BECs is wrong, so we know there exists a BEC X′⊆(S\(T∪WS))with respect to U⋆. This concludes the proof. As a side note, we remark that it is indeed possible that there is only one BEC that causes many states, even some not in an EC, to have a positive difference ∆. For an example, we refer to [18, Fig. 4]. C. Soundness of DEFLATE The following is a technical lemma that is needed to show the soundness of deflation. Intuitively, it says that the value of all states in an EC needs to depend on some exit. Note that there can be states whose value is higher than their own exit value, namely if they can reach a better exit. However, this cannot be the case for all states, but there must be some whose value is less than or equal to their exit value (first condition), and in fact no state can have a higher value than these states that actually depend on exiting (second condition). The proof is very technical, as essentially it requires unfolding all the definitions, and thereby also unfolding all the included case distinctions. Lemma 49 (No state has a larger value than that of an exit from its EC) .LetX ⊆ S\(T∪WS)be an EC. Then, it holds that (i)X′:={s∈ X | VR(s)≤exitValVR(X, s)} ̸=∅, and (ii)maxs∈X′VR(s)≥maxs∈X \X′VR(s). Proof. We prove the lemma by contradiction, i.e. we assume that one of the two conditions posed by the lemma is violated. We make the following case distinction: (i)X′=∅; and (ii)X′̸=∅butmaxs∈X′VR(s)<maxs∈X \X′VR(s). Case (i) In this case it holds that X′=∅, i.e. for all s∈ X we have VR(s)>exitValVR(X, s). Recall that in Def. 27 if Hazard VR(X, s) =∅is true at some state s∈ X, then the exit value is given by the value of the matrix game played at that state. Consequently, as for all s∈ X it holds that VR(s)>exitValVR(X, s), at all states s∈ X it must
|
https://arxiv.org/abs/2505.21087v1
|
hold that Hazard VR(X, s)̸=∅because otherwise we would obtain the following contradiction: exitValVR(X, s) = supρ∈R(s)infσ∈S(s)B(VR)(s,ρ, σ) =VR(s). 22 We proceed with the assumption that for all s∈ X it holds that Hazard VR(X, s)̸=∅. Since by the case assumption at all s∈ X it holds that VR(s)>exitValVR(X, s)the true values of the states in Xare attainable with the hazardous and trapping strategies. More formally, at each s∈ X the following chain of equations holds. VR(s) = sup ρ∈R(s)inf σ∈S(s)B(VR)(s,ρ, σ) (VRis a fixpoint & case assumption: VR(s)>exitValVR(X, s)) = sup ρ∈Hazard VR(X,s)inf σ∈S(s)B(VR)(s,ρ, σ) (Hazard VR(X, s)are optimal by Def. 16) = sup ρ∈Hazard VR(X,s)inf σ∈TrapVR(X,s))B(VR)(s,ρ, σ). (TrapVR(X, s)are optimal by Def. 24) Thus, for player Rstaying in Xis optimal and since no target state is contained in Xit has to hold that VR(s) = 0 for alls∈ X. However, this is a contradiction to the assumption that VR(s)>exitValVR(X, s)since exitValVR(X, s)≥0 (as all possible valuations are non-negative and the value of the exiting sub-game is given by the maximum between 0 and exitVal - see Def. 27 and Def. 26). Thus, the case assumption that X′=∅must be false. Case (ii) In this case it holds that X′̸=∅butmaxs∈X′VR(s)<maxs∈X \X′VR(s)is true. We make the following case distinction: (ii.a) X \ X′is a BEC; and (ii.b) X \ X′is not a BEC. Case (ii.a) In this case X \ X′is a BEC, i.e. for all s∈ X \ X′it holds that Hazard VR(X \ X′, s)̸=∅. In case it holds that VR(s)>exitValVR(X \ X′, s)for all s∈ X \ X′, then since no target state is contained in Xand therefore neither in X \ X′, it must be true that VR(s) = 0 for all s∈ X \ X′. However, similarly as in Case (i), this is a contradiction to the assumption that VR(s)>exitValVR(X \ X′, s)for all s∈ X \ X′asexitValVR(X \ X′, s)≥0 for all s∈ X \ X′. Consequently, there has to exist s∈ X \ X′such that VR(s)≤exitValVR(X \ X′, s). We make another case distinction: (ii.a.1) for all s′∈arg maxs∈X \X′VR(s)it holds that VR(s′)>exitValVR(X \ X′, s′); and (ii.a.2) there exists s′∈arg maxs∈X \X′VR(s)such that VR(s′)≤exitValVR(X \ X′, s′). Case (ii.a.1) In this case for all s′∈arg maxs∈X \X′VR(s)it holds that VR(s′)>exitValVR(X \ X′, s′). Thus, at each s′∈arg maxs∈X \X′VR(s′)the following chain of equations holds. VR(s′) = sup ρ∈R(s′)inf σ∈S(s′)B(VR)(s′,ρ, σ) (VRis a fixpoint) = sup ρ∈Hazard VR(X \X′,s′)inf σ∈S(s′)B(VR)(s′,ρ, σ) (Hazard VR(X \ X′, s′)are optimal by Def. 16) = sup ρ∈Hazard VR(X \X′,s′)inf σ∈TrapVR(X \X′,s′))B(VR)(s′,ρ, σ).(TrapVR(X \ X′, s′)are optimal by Def. 24) Thus, at all states that attain the highest value among X \ X′, it is optimal for both players to choose strategies that together are staying in X \ X′. However, since Xand thus X \ X′does not belong to the winning region of Player S, leaving X \ X′has to be possible. Thus, from each state s′∈arg maxs∈X \X′VR(s), Player R has to possess an optimal strategy that leads to a state s′′∈ X \ X′where leaving X \ X′is possible,
|
https://arxiv.org/abs/2505.21087v1
|
i.e. where VR(s′′)≤exitValVR(X \ X′, s′′)holds. Thus, at state s′′the highest value also has to be attainable. However, this is a contradiction to the case assumption that for all s′∈arg maxs∈X \X′VR(s)it holds that VR(s′)>exitValVR(X \ X′, s′). Case (ii.a.2) In this case there exists s′∈arg maxs∈X \X′VR(s)such that VR(s′)≤exitValVR(X \ X′, s′). Let X′′:={s′∈arg maxs∈X \X′VR(s)|VR(s′)≤exitValVR(X \ X′, s′)}. Then, we need to make another case distinction: (ii.a.2.1) for all s∈ X′′it holds that Defl VR(X \ X′, s) =∅; and (ii.1.2.2) there exists s′∈ X′′such that Defl VR(X \ X′, s′)̸=∅. Case (ii.a.2.1) In this case for all s∈ X′′it holds that Defl VR(X \X′, s) =∅. Then, since Xand so X \X′do not belong to the winning region of Player S, there has to exist a state s′′∈ X \X′such that Defl VR(X \X′, s′′)̸=∅ 23 so leaving X \ X′is possible. However, then s′′∈arg maxs∈X \X′VR(s)must hold which is a contradiction to the case assumption. Case (ii.a.2.2) In this case there exists s′∈ X′′such that Defl VR(X \ X′, s′)̸=∅. Then, the following chain of equations holds. VR(s′)≤exitValVR(X \ X′, s′) (Case assumption) =max(0,V(ˆZ(s′))) (By case assumption (ii.a): Hazard VR(X \ X′, s′)̸=∅) =V(ˆZ(s′)) (Case assumption: Defl VR(X \ X′, s′)̸=∅) = sup ρ∈Defl VR(X \X′,s′)inf σ∈TrapVR(X \X′,s′)B(VR)(s′,ρ, σ) (Value of the exiting sub-game — see Def. 27) = sup ρ∈Defl VR(X \X′,s′)inf σ∈TrapVR(X \X′,s′)X (a,b)∈AX s∈X \X′VR(s)|{z} ≤maxs′′∈X \X ′VR(s′′)·δ s′,a,b s ·ρ(a)·σ(b) +X s∈X′VR(s)|{z} <maxs′′∈X \X ′VR(s′′)·δ s′,a,b s ·ρ(a)·σ(b) (Def. of Band case assumption (ii): maxs∈X′VR(s)<maxs∈X \X′VR(s)) <max s∈X \X′VR(s). (Everything sums up to 1) Thus, from the case assumption we derived a contradiction because s′∈arg maxs∈X \X′VR(s). Case (ii.b) In this case X \ X′is not a BEC, i.e. there exists s′∈ X \ X′such that Hazard VR(X \ X′, s′) =∅. We need another case distinction: (ii.b.1) for all s′∈arg maxs∈X \X′VR(s)it holds that Hazard VR(X \ X′, s′)̸=∅; and (ii.b.2) there exists s′∈arg maxs∈X \X′VR(s)such that Hazard VR(X \ X′, s′) =∅. Case (ii.b.1) In this case we have that for all s′∈arg maxs∈X \X′VR(s)it holds that Hazard VR(X \ X′, s′)̸=∅. Then, at each s′∈arg maxs∈X \X′VR(s)the following chain of equations holds. VR(s′) = sup ρ∈Hazard VR(X \X′,s′)inf σ∈S(s′)B(VR)(s′,ρ, σ) (Hazard VR(X \ X′, s′)are optimal by Def. 16) = sup ρ∈Hazard VR(X \X′,s′)inf σ∈TrapVR(X \X′,s′))B(VR)(s′,ρ, σ).(TrapVR(X \ X′, s′)are optimal by Def. 24) Thus, at all states that attain the highest value among X \ X′it is optimal for both players to choose strategies that together are staying in X \ X′. However, since Xand thus X \ X′does not belong to the winning region of Player S, leaving X \ X′has to be possible. Thus, from each state s′∈arg maxs∈X \X′VR(s), Player R has to possess an optimal strategy that leads to a state s′′∈ X \ X′where leaving X \ X′is possible, i.e. where Hazard VR(X \ X′, s′′) =∅holds. Thus, at state s′′the highest value also has to be attainable. However, this is a contradiction to the case assumption that for all s′∈arg maxs∈X \X′VR(s)it holds that
|
https://arxiv.org/abs/2505.21087v1
|
Hazard VR(X \ X′, s′)̸=∅. Case (ii.b.2) In this case there exists s′∈arg maxs∈X \X′VR(s)such that Hazard VR(X \ X′, s′) =∅. Then, the following chain of equations holds. VR(s′) = sup ρ∈R(s′)inf σ∈S(s′)B(VR)(s′,ρ, σ) (VRis a fixpoint) =sup ρ∈R(s′)inf σ∈S(s′)X (a,b)∈AX s∈X \X′VR(s)|{z} ≤maxs′′∈X \X ′VR(s′′)·δ s′,a,b s ·ρ(a)·σ(b) +X s∈X′VR(s)|{z} <maxs′′∈X \X ′VR(s′′)·δ s′,a,b s ·ρ(a)·σ(b) <max s′′∈X \X′VR(s′′). (Everything sums up to 1 and Hazard VR(X \ X′, s′) =∅thus at least one successor state is in X′) 24 Thus, from the case assumption we derived a contradiction because s′∈arg maxs∈X \X′VR(s). Thus, every case leads to a contradiction which concludes the proof. Using Lem. 49, we now prove that for every EC X, we have that for an upper bound U, the best exit value of the EC is also an upper bound. This is the crucial ingredient for showing that deflation cannot decrease a valuation below the value VR. Lemma 50. LetX ⊆ S\(T∪WS)be an EC, and U∈[0,1]|S|be a valuation with U≥VR. Then, for all states s∈bestExitsVR(X), we have exitValU(X, s)≥exitValVR(X, s). Proof. Lets∈bestExitsVR(X). Our goal is to show that exitValU(X, s)≥exitValVR(X, s). Since for the estimation of exitValU(X, s)andexitValVR(X, s)the sets of strategies, TrapVR(X, s),TrapU(X, s),Hazard VR(X, s), and Hazard U(X, s), which can be empty or non-empty, we have to consider all possible combinations. We consider the following four main cases, which we further analyse with respect to the Usets of strategies where necessary: Case (I) TrapVR(X, s) =∅andHazard VR(X, s) =∅, Case (II) TrapVR(X, s) =∅andHazard VR(X, s)̸=∅, Case (III) TrapVR(X, s)̸=∅andHazard VR(X, s) =∅, and Case (IV) TrapVR(X, s)̸=∅andHazard VR(X, s)̸=∅. For each case we show that either the case is impossible or that the statement of the lemma holds. Case (I) TrapVR(X, s) =∅andHazard VR(X, s) =∅. Letσ′∈ S(s)be an optimal Player Sstrategy under VR. Since TrapVR(X, s) =∅, at least on of the two conditions posed by the definition of trapping strategies (Def. 24) must be violated under VR. Since Hazard VR(X, s) =∅, Condition (ii) of Def. 24, i.e. ∀ρ∈Hazard VR(X, s) : (s,ρ, σ′)staysIn X, is trivially satisfied. Consequently, Condition (i) must be violated, i.e., it must hold that there exists no optimal strategy for Player Swhich cannot be true. Thus, this case is impossible. Case (II) TrapVR(X, s) =∅andHazard VR(X, s)̸=∅. At state sit holds that VR(s) =exitValVR(X, s), since we chose it to be in bestExitsVR. Thus, it is in the set X′ constructed in Condition (i) of Lem. 49. By Condition (ii) of Lem. 49, we know that all states in X \ { s}attain a value that is either smaller or equal VR(s). Using this and the fact that no staying strategy can be optimal (as the set of trapping strategies is empty), we can derive a contradiction as follows. VR(s) = sup ρ∈R(s)inf σ∈S(s)B(VR)(s,ρ, σ) (VRis fixpoint of B) = sup ρ∈Hazard VR(X,s)inf σ∈S(s)B(VR)(s,ρ, σ) (Hazard VR(X, s)are optimal under VR) < sup ρ∈Hazard VR(X,s)inf σ∈SL(Hazard VR(X,s),X,s)B(VR)(s,ρ, σ) (TrapVR(X, s) =∅) = sup ρ∈Hazard VR(X,s)inf σ∈SL(Hazard VR(X,s),X,s)X (a,b)∈AX s′∈XVR(s′)|{z} ≤VR(s)·δ s,ρ, σ ·ρ(a)·σ(b) (Unfolding
|
https://arxiv.org/abs/2505.21087v1
|
definition of B) ≤ sup ρ∈Hazard VR(X,s)inf σ∈SL(Hazard VR(X,s),X,s)X (a,b)∈AX s′∈XVR(s)·δ s,ρ, σ ·ρ(a)·σ(b) (VR(s′)≤VR(s)by (ii) in Lem. 49) =B(VR)(s) =VR(s). (VRis fixpoint of B) Thus, overall VR(s)<VR(s), a contradiction. Case (III): TrapVR(X, s)̸=∅andHazard VR(X, s) =∅. Since Hazard VR(X, s) =∅we know by Lem. 14 that there exists ρL∈ RL(X, s).R(s)⪯VR,S(s){ρL}. Our goal is to show that there exists ρ∗∈ R(s)that is optimal under VRandρ∗∈Defl U(X, s). We distinguish two cases: •(III.a): For all ρM∈ R L(X, s)that are optimal under VR, there exists ρH∈Hazard U(X, s)such that Supp(ρM)∩ Supp(ρH)̸=∅, and •(III.b): There exists ρL∈ RL(X, s)that is optimal under VRand it holds that Supp(ρL)∩S ρ∈Hazard U(X,s)Supp(ρ) =∅. Case (III.a) In this case, for all ρM∈ RL(X, s)that are optimal under VR, there exists ρH∈Hazard U(X, s)such that Supp(ρM)∩Supp(ρH)̸=∅. 25 LetρH∈Hazard U(X, s)andρL∈ {ρ∈ RL(X, s)|Supp(ρH)∩Supp(ρL) =∅}such that Supp(ρM)∩Supp(ρH)̸=∅ andSupp(ρM)∩Supp(ρL)̸=∅. In other words, only a strategy that mixes the supports of ρHandρLis optimal under VR. Due to the case assumption that only strategies that mix with some hazardous strategy are optimal, ρLmust be sub-optimal under VR, as well as ρH. More formally, for all ρM∈ RL(X, s), such that R(s)\ {ρM}⪯{ρM}it holds that ∃ρH∈Hazard U(X, s).∃ρL∈ {ρ∈ RL(X, s)|Supp(ρH)∩Supp(ρL) =∅}: sup Dist(Supp(ρM)\Supp(ρL))inf σ∈S(s)B(VR)(s,ρ, σ)≺{ρM} and sup Dist(Supp(ρM)\Supp(ρH))inf σ∈S(s)B(VR)(s,ρ, σ)≺{ρM}. LetS∗⊆ S(s)be the optimal Player Sstrategies with respect to ρM. Since ρMhas to mix ρLandρH, the following must be true: •there exists σ1∈ S∗such that B(VR)(s,ρL, σ1)<B(VR)(s,ρH, σ1),and (6) •there exists σ2∈ S∗such that B(VR)(s,ρL, σ2)>B(VR)(s,ρH, σ2), (7) because otherwise either ρLorρHwould be optimal under VRthus mixing would not be necessary. Further, Player S also needs to mix the two strategies σ1andσ2to ensure optimality. Our goal now is to show that s /∈bestExitsVR(X)which would lead to a contradiction to the assumption that s∈bestExitsVR(X), showing that at a best exit there cannot exist only optimal strategies that fulfill the same properties asρM. To estimate the best exit value from Xunder VR, all exists from s∈ X have to be estimated. By Eq. (6) and Eq. (7), we know that the part of the ECXthat is reachable with (ρH, σ1), sayX′⊂ X , attains a higher value than the one that is reachable with (ρH, σ2). More formally, the following holds. Let ˆs∈maxs′∈X′exitValVR(X, s′). Atˆsleaving X′has to be possible, because otherwise, the values at all s′∈ X′would be equal to 0, which would be a contradiction to the assumption that X ⊆ S\(T∪WS). Then, the following chain of equations holds. exitValVR(X, s) = sup ρ∈R(s)inf σ∈S(s)B(VR)(s,ρ, σ) (Hazard VR(X, s) =∅) =inf σ∈S(s)B(VR)(s,ρM, σ) (ρMis optimal under VR) =B(VR)(s,ρM, σ1) (σ1is optimal under VRwith respect to ρM) <B(VR)(s,ρH, σ1). (Under σ1,ρHis optimal by Eq. (6)) =X (a,b)∈AX s′∈Post(s,ρH,σ1)VR(s′)·δ s,a,b ·ρ(a)·σ(b) (Unfolding the def. of B) ≤X (a,b)∈AX s′∈Post(s,ρH,σ1)exitValVR(X,ˆs)·δ s,a,b ·ρ(a)·σ(b) (Atˆsone can attain the highest value upon leaving) =exitValVR(X,ˆs). (Everything sums-up to 1) Thus, under VR, the state ˆsattains a higher value upon leaving than s. This is a contradiction to the assumption that s∈bestExitsVR(X). Case (III.b) There exists ρL∈ R L(X, s)that is optimal under VRand it holds that Supp(ρL)∩S ρ∈Hazard U(X,s)Supp(ρ) =∅.
|
https://arxiv.org/abs/2505.21087v1
|
Then, ρLmust belong to the set of deflating strategies under U, as it satisfies both conditions posed by the definition of deflating strategies (see Def. 25). 26 IfTrapU(X, s)̸=∅, then the following chain of equations holds. exitValVR(X, s) = sup ρ∈R(s)inf σ∈σ(s)B(VR)(s,ρ, σ) (By Def. 16 in case Hazard VR(X, s) =∅) =inf σ∈σ(s)B(VR)(s,ρL, σ) (ρLis optimal) ≤ sup ρ∈Defl U(X,s)inf σ∈σ(s)B(VR)(s,ρ, σ) ({ρL} ⊆Defl U(X, s)}) ≤ sup ρ∈Defl U(X,s)inf σ∈TrapU(X,s)B(VR)(s,ρ, σ) (TrapU(X, s)⊆ S(s)) ≤ sup ρ∈Defl U(X,s)inf σ∈TrapU(X,s)B(U)(s,ρ, σ) (Bis order-preserving) =exitValU(X, s). IfTrapU(X, s) =∅, then the following chain of equations holds. exitValVR(X, s) = sup ρ∈R(s)inf σ∈σ(s)B(VR)(s,ρ, σ) (By Def. 16 in case Hazard VR(X, s) =∅) =inf σ∈σ(s)B(VR)(s,ρL, σ) (ρLis optimal) ≤ sup ρ∈Defl U(X,s)inf σ∈S(s)B(U)(s,ρ, σ) (Bis order-preserving) =exitValU(X, s). Case (IV): TrapVR(X, s)̸=∅andHazard VR(X, s)̸=∅. It holds that s∈bestExitsVR(X), thus there exist no other state at which leaving Xattains better value. By Lem. 49 we have that for all s′∈bestExitsVR(X)it holds that VR(s′)≤exitValVR(X, s′). Consequently, at a best exit it can only hold that VR(s′) =exitValVR(X, s′)because otherwise VRwould not be a fixpoint of B. Therefore, the following chain of equations holds at s. VR(s) = sup ρ∈R(s)inf σ∈S(s)B(VR)(s,ρ, σ) (VRis fixpoint of B) = sup ρ∈Hazard VR(s)inf σ∈S(s)B(VR)(s,ρ, σ) (Hazard VR(X, s)are optimal under VR) = sup ρ∈Hazard VR(s)inf σ∈TrapVR(s)B(VR)(s,ρ, σ) (TrapVR(X, s)are optimal under VR) =exitValVR(X, s). (By Lem. 49 we have VR(s)≤exitValVR(X, s)however only ”=” can hold as VRis fixpoint of B) Thus, this case is equal to Case (III) where exitValVR(X, s) =supρ∈R(s)infσ∈S(s)B(VR)(s,ρ, σ), as the assumptions of cases (I) and (II) are not possible. Lemma 51 (Existence of maximal BECs ).Given a CSG Gand let C⊆(S\(T∪WS))be an ECand let Ube a valuation. If under Uthere exists a BECX ⊆ C, then there also exists a maximal BEC, i.e. there exists a BECXmax⊆Csuch that X ⊆ Xmaxand it holds that Xmax∪ {s}is not a BEC for all s∈C\ Xmax. Proof. To prove the lemma it suffices to show the following: Let X1⊆(S\(T∪WS))andX2⊆(S\(T∪WS))be two ECs that are BEC under UwithX1∩ X2̸=∅. Then, it holds that X1∪ X2is also a BEC. Lets∩∈(X1∩ X2),s′∈ X1, and s′′∈ X2. Since X1is aBEC at each state s∈ X1it holds that Hazard U(X1, s)̸=∅and TrapU(X1, s)̸=∅for all s∈ X1. Further, since X1is an EC, there exists a play s0s1. . .such that s0=s′andsn=s∩for some nand for all 0≤i < n it holds that si+1∈Post(si,ρ∗ i, σ∗ i)where ρ∗ i∈Hazard U(X1, si)andσ∗ i∈TrapU(X1, si). Similarly, since X2is also a BEC , there exists a play s′ 0s′ 1. . .such that s′ 0=s∩ands′ m=s′′for some mand for all 0≤j < m it holds that s′ j+1∈Post(s′ j,ρ′ j, σ∗ j)where ρ′ j∈Hazard U(X2, s′ j)andσ′ j∈TrapU(X2, s′ j). Therefore, for any s, s′∈ X1∪X2there exist a play s′′ 0s′′ 1. . .such that s′′ 0=s′ands′′ l=s′′for some land for all 0≤k < l it holds that s′′ k+1∈Post(s′′ k,ρ′′ k, σ′′ k)where ρ′′ k∈Hazard U(X1∪ X2, s′′ k)andσ′′ k∈TrapU(X1∪ X2, s′′ k). Thus, we have shown that the X1∪ X2is an EC. Further, since at all states
|
https://arxiv.org/abs/2505.21087v1
|
of X1andX2the two conditions (i) and (ii) of Def. 18 are fulfilled, also X1∪ X2fulfills them. Consequently, X1∪ X2is aBEC which in turn proves that there exist maximal BECs. 27 Now that we have proven that maximal BECs indeed exist, the next step is to prove the correctness of FIND MBECs , i.e. the algorithm that can find maximal BECs. Lemma 30 (FIND MBECs is correct— Proof in App. C-C).For a CSG , aMEC Cand a valid upper bound U, it holds that X ∈ FIND MBECs (G,C,U)if and only if Xis a BEC in Cand there exists no T⊆Cthat is a BEC and X⊊T. Proof. We prove the lemma in two steps. First, if for a set of states X ⊆ Cit holds that X ∈ FIND MBECs (G,C,U)thenX is a maximal BEC . Second, we will show that if a set of states X ⊆ Cis a maximal BEC , thenX ∈ FIND MBECs (G,C,U). 1. Direction “ ⇒”LetX ∈ FIND MBEC (G,C,U). We need to show that Xis a maximal BEC. LetB:={s∈C|Hazard U(C, s)̸=∅}. Since, X ∈ FIND MBEC (G,C,U), then X ⊆ B. We show that Xis a maximal BEC in Cvia a contradiction. In particular, we consider the following two cases: Case (i) Xis maximal EC in Bbut not BEC in C; and Case (ii) Xis a BEC in Cbut not a maximal one. Case (i) Assume towards a contradiction that Xis a maximal ECinBbut not BEC inC. Then there exists s′∈ X such thatHazard U(X, s′) =∅. Thus, every strategy ρ′∈ R(s′)must violate at least one of the three conditions of Def. 16. We write out the negatations. (a)ρ′/∈ RL(X, s′), i.e. the strategy is leaving ρ′∈ RL(X, s). (b)We do not have R(s′)\ {ρ′}⪯S(s′){ρ′}. By contraposition of Lem. 14, this implies {ρ′}≺S(s′)R(s′)\ {ρ′}, so the strategy is sub-optimal. (c)We do not have RL(X, s′)≺S(s′){ρ′}. By Lem. 14, this implies {ρ′}⪯S(s′)RL(X, s′), i.e. there are leaving strategies that are not worse than ρ′. Note that in particular, this implies that RL(X, s′)is non-empty. Our assumption gives us the disjunction over the three violated conditions. We proceed by a case distinction, always assuming that each strategy violates a certain condition, which allows us to prove our goal, or, if there exists a strategy satisfying the condition, we continue with the next one. Case “all strategies satisfy (a)” . In this case all strategies of player Rmust be leaving X, i.e.RL(X, s′) =R(s′). Thus, for all σ∈ S(s′)and for all ρ∈ R(s′)there exists ˆs∈Post(s′,ρ, σ)such that ˆs /∈ X. Consequently, Xis not an EC which is a contradiction to the assumption that Xis an EC. Case “some strategy violates (a) and satisfies (c)” . We assume that ρ′violates (a) but satisfies condition (c). Thus, PlayerRhas a non-leaving strategy, however it is not optimal. In other words, Player Rhas an optimal strategy ρ∈ R(s′)such that there exists ˆs∈Post(s′,ρ, σ)such that ˆs /∈ X . Then, using the same argumentation as in the previous case we can derive a contradiction to the assumption that X ∈ FIND MBEC (G,C,U). Case
|
https://arxiv.org/abs/2505.21087v1
|
“some strategy violate (a) and (c) but satisfy (b)” . We assume that ρ′violates (a) and (c) but satisfies condition (b). Then, strategy ρ′is dominated by another non-leaving strategy, say ρ∈ RL(X, s′), that in turn has to violate one of the two previous cases. Case (ii) Now, assume towards a contradiction that Xis aBEC inCbut not maximal. Since X ∈ FIND MECs (G, B), Xis maximal BEC inB. Then, due to the case assumption that Xis not maximal, there has to exist a set of states ˆS⊆C\Bsuch that X ∪ˆS]is a maximal BEC inC. However, since for all s∈C\Bit holds that Hazard U(C, s) =∅, X ∪ˆScannot be an EC in Cwhich is a contradiction to the assumption that Xis an EC in C. Overall, we have shown that Xis a BEC and it is also maximal. 2. Direction “ ⇐”LetX ⊆ Cbe a maximal BEC in the MEC C. We need to show that X ∈ FIND MBEC (G,C,U)holds. Let B:={s∈C|Hazard U(C, s)̸=∅}. Assume towards a contradiction X/∈FIND MBEC (G,C,U). Then, we distinguish two cases: (i) ∃s∈ X such that s /∈B, or (ii) X/∈FIND MECs (G, B). Case (i) If∃s∈ X such that s /∈B, then it holds that Hazard U(C, s) =∅, by Claim 6 we also have Hazard U(X, s) =∅ asX ⊆ C, a contradiction. Case (ii) In this case it holds that ∀s∈ X we have that s∈B, however, X/∈FIND MECs (G, B). We can again distinguish three cases: (ii.a) ∃X′∈FIND MECs (G, B)such that X′⊊X; and (ii.b) ∃X′∈FIND MECs (G, B)such thatX′∩ X ̸=∅and∃s′∈ X′\ X; and (ii.c) ∀X′∈FIND MECS (G, B)it holds that X′∩ X=∅. Case (ii.a) Then there is a state in X \B, a contradiction to the assumption that Xis a maximal BEC in B. Case (ii.b) By “1. Direction “ ⇒” ” we know that X′is a maximal BEC . In particular, for all s∈ X′it holds that Hazard U(X′, s)̸=∅. Then, by Lem. 51 we also know that since X′∩ X ̸=∅, then X′∪ X has to be also a BEC . However, this is a contradiction to the assumption that Xis already a maximal BEC. Case (ii.c) Since∀X′∈FIND MECS (G, B)it holds that X′∩ X=∅then, for all s′∈ X we have s′/∈B. However, Xis a maximal BEC in the MEC Cby assumption and therefore for all s∈ X it holds that Hazard U(X, s)̸=∅. Further, since X ⊆ C, by Claim 6 it also holds that Hazard U(C, s)̸=∅. Consequently, such s′cannot exist and therefore such X′cannot exist, a contradiction. 28 Claim 6: For any ECs C⊆C′we have Hazard U(C, s)⊆Hazard U(C′, s). Proof. Letρ∈ RL(C, s). We need to show that ρ∈ RL(C′, s)holds. First, strategy ρsatisfies condition (i) of Def. 16, thus, there has to exist ˆσ∈ S(s), such that (s,ρ,ˆσ)staysIn C holds. Since C⊆C′, it also has to be true that (s,ρ,ˆσ)staysIn C′. Consequently, ρsatisfies condition (i) of Def. 16 with respect to C′. Second, ρsatisfies condition (ii) of Def. 16, i.e. ρis optimal. As the optimality of a strategy is independent of any set
|
https://arxiv.org/abs/2505.21087v1
|
of states this condition is satisfied anyway. Third, ρsatisfies condition (iii) of Def. 16, i.e. all strategies that are leaving with respect to Care sub-optimal. More precisely, it holds that RL(C, s)≺S(s){ρ}. Since C⊆C′holds, there cannot exist an optimal strategy that is leaving with respect to the greater set of states C′. Consequently, ρalso satisfies condition (iii) of Def. 16 with respect to C′. Consequently, ρsatisfies all conditions posed by Def. 16 with respect to C′, so it holds that Hazard U(C, s)⊆ Hazard U(C′, s). ▲ D. Soundness and Completeness In order to prove the soundness and completeness of Alg. 1 with Alg. 3 as DEFLATE routine, we need to prove that the sequence of lower and upper bounds converges to a fixpoint. Therefore, before we can prove Thm. 37, first we need to prove the following lemma. Lemma 52 (Alg. 1 converges to a fixpoint) .The BVI algorithm (Alg. 1) converges to a fixpoint, i.e. limk→∞(Bk(L0),(D◦ B)k U0 ) = (B(limk→∞Bk(L0)),(D◦B)(lim k→∞(D◦B)k U0 )). Proof. We consider the domain V:= [0,1]|S|×[0,1]|S|, i.e. every element consists of two vectors of real numbers, the under- and over-approximation. The bottom element of the domain, denoted by ⊥, is(⃗0,⃗1), where for a∈[0,1],⃗ adenotes the function that assigns ato all states. We further restrict the domain to exclude elements of the domain that are trivially irrelevant for the computation. In particular, we exclude all tuples (L,U)where L(s)<1for a target state s∈TorU(s)>0for a state with no path to the target state s∈WS. Then the bottom element is ⊥= (L0,U0), i.e. the vector that we have before the first iteration of the main loop of Alg. 1. Concretely, L0(s)is 1 for all s∈T, i.e. target states, and 0 everywhere else, and U0(s)is 0 for all s∈WS, i.e. states where Scan surely win, and 1 everywhere else. We define a comparator ⊑onV, to compare two elements of the domain. We write (Lk,Uk)⊑(Lk+1,Uk+1)if and only if both Lk≤Lk+1andUk≥Uk+1hold with component-wise comparison. Intuitively, (Lk,Uk)⊑(Lk+1,Uk+1)holds if (Lk+1,Uk+1)is a more precise approximation than (Lk,Uk). The comparator ⊑induces a complete partial order over the domain, since we have a bottom element and every direct subset has a supremum; the latter claim holds, because ⊑reduces to component-wise comparison between real numbers from [0,1], where suprema exist. For more details on the definition of directed set and complete partial orders, we refer to [13]. Alg. 1 first applies the Bellman operator on the over- and under-approximation and subsequently applies the deflate operator on the over-approximation (i.e. upper bound). Thus, the operator that mimics the behavior of the algorithm is the following BVI(Lk,Uk) = (B(Lk),(D◦B)(Uk)). From Thm. 5 we know that the under-approximation converges. Also, by Lem. 36 we know that (D◦B)is order-preserving. Thus, for the final argument it remains to show that Dis (Scott-)continuous. A map is (Scott-)continuous if, for every directed set Din(proj2(V),⊑), the subset D(D)of(proj2(V),⊑)is directed, and D(supD) =supD(D). Let s∈S, ifs, under supD, does not belong to any BEC, then D(supD)(s) =supD(s). Thus, we proceed with the assumption that under the valuation supD,s∈ X ⊆ Ssuch that Xis a BEC. LetHazard supD(X,
|
https://arxiv.org/abs/2505.21087v1
|
s)be the set of hazardous strategies and let TrapsupD(X, s)be the set of suitable counter strategies for playerS(see Def. 18). Since Xis aBEC both sets are non-empty, i.e. Hazard supD(X, s)̸=∅andTrapsupD(X, s)̸=∅, at alls∈ X. Thus, D(supD)(s) =min(supD(s),bestExitValsupD(X)). For the sake of readability let R′(s):=Dist(ΓR(s)\S ρ′′∈Hazard supD(X,s)Supp(ρ′))andS′(s):=TrapsupD(X, s)for some s∈ X. Then, the following chain of equations holds fors∈ X. D(supD)(s) =min(supD(s),bestExitValsupD(X)) =min(supD(s),max s′∈XexitValsupD(X, s′)) (By Def. 28 (best exit value)) =min(supD(s),max s′∈Xsup ρ∈R′(s′)inf σ∈S′(s′)B(sup d∈Dd)(s′,ρ, σ)) (By Def. 27 (exit value)) =min(supD(s),max s′∈Xsup d∈Dsup ρ∈R′(s′)inf σ∈S′(s′)B(d)(s′,ρ, σ))(Bellman operator is Scott-continuous (see proof Theorem 9) ) 29 =min(supD(s),sup d∈Dmax s′∈Xsup ρ∈R′(s′)inf σ∈S′(s′)B(d)(s′,ρ, σ)) (By Claim 7) =min(supD(s),sup d∈Dmax s′∈XexitVald(X, s′)) (By Def. 27 (exit value))) =min(supD(s),sup d∈DbestExitVald(X)) (By Def. 28 (best exit value)) =sup d∈Dmin(d(s),bestExitVald(X)) (min is Scott-continuous) =sup d∈DD(d)(s). (SinceDis the deflate operator) Claim 7: Since maxs′∈XexitValsupD(X, s′)exists the following chain of equations holds: sup d∈Dmax s′∈Xsup ρ∈R′(s′)inf σ∈S′(s′)B(d)(s′,ρ, σ) =sup d∈Dsup s′∈Xsup ρ∈R′(s′)inf σ∈S′(s′)B(d)(s′,ρ, σ) (As the maximum exists) = sup (d,s′)∈(D×X)sup ρ∈R′(s′)inf σ∈S′(s′)B(d)(s′,ρ, σ) =sup s′∈Xsup d∈Dsup ρ∈R′(s′)inf σ∈S′(s′)B(d)(s′,ρ, σ) =max s′∈Xsup d∈Dsup ρ∈R′(s′)inf σ∈S′(s′)B(d)(s′,ρ, σ). ▲ We have shown that BandDare both Scott-continuous, thus the composition (D◦B), i.e. the sequential application of the operators BandD, is also continuous [44]. Further, Vis a complete partial order. Therefore, Kleen’s Fixpoint Theorem [13, Theorem 8.15] is applicable. Then, we know that lim k→∞(D◦B)k(⊥) =sup k≥0(D◦B)k(⊥) (⋆) holds and by the theorem we know that the fixpoint exists and is given by supk≥0(D◦B)k(⊥). Now, we can finally conclude: (D◦B)( lim k→∞(D◦B)k(⊥))(⋆)= (D◦B)(sup k≥0(D◦B)k(⊥)) =sup k≥0(D◦B)((D◦B)k(⊥)) (since (D◦B)is continuous) =sup k≥1(D◦B)k(⊥) =sup k≥0(D◦B)k(⊥) (since ⊥ ⊑(D◦B)k(⊥)for all k) = lim k→∞(D◦B)k(⊥). by (⋆) Now we can prove the following theorem. Theorem 37 (Soundness and completeness - Proof in App. C-D).ForCSGs Alg. 1, using Alg. 3 as DEFLATE , produces monotonic sequences Lunder- and Uover-approximating VR, and terminates for every ε >0. Proof. We denote by LkandUkthe lower and upper bound function after the k-th call of DEFLATE .LkandUkare monotonic under-, respectively, over-approximation of VRbecause they are updated via Bellman updates respectively (D◦B)-updates, which are order-preserving as shown in Lem. 36 and soundness as shown in Lem. 35 . Since DEFLATE iterates over finite sets, the computations take a finite time. Thus, it remains to prove that the main loop of Alg. 1 terminates, i.e., for all ε >0, there exists an n∈Nsuch that for all s∈S Un(s)−Ln(s)≤ε. It suffices to show that limk→∞Uk−VR= 0, because limk→∞Lk=VR(from e.g. [42]). In the following let U⋆:= lim k→∞Uk, that exists by Lemma 48, and ∆(s) := U⋆(s)−VR(s). Assume towards a contradiction that the algorithm does not converge, i.e., there exists a state s∈Swith∆(s)>0. The proof is structured as follows. •From ∆>0we derive that there has to exist a BEC. 30 •The states with the maximal ∆contain BECs. •Alg. 1 will find a BEC contained in X ⊆ Sand deflate it. •Deflating will decrease the upper bound of that states contained in the BEC , which is a contradiction because by Lemma 52,U⋆is a fixpoint. Let∆max:=maxs∈S∆(s)and let C:={s∈S|∆(s) = ∆max}. IfCdoes not contain any BECs , then
|
https://arxiv.org/abs/2505.21087v1
|
the contraposition of Thm. 9 proves our goal. Thus, we continue with the assumption that Ccontains BECs. LetX ⊆ Cbe a BEC contained in Cthat Alg. 3 will eventually find and deflate. We now consider bottom BECs . ABECX′ is called bottom in Xif none of the successors of a strategy that leaves the BECX′, is part of another BEC inX. A bottom BEC can be computed by first finding the bottom MEC within X, then identifying all BECs , ordering them topologically and finally picking the one at the end of a chain. LetX′⊆ X be a bottom BEC that Algorithm 3 eventually finds. In order to deflate the BEC , we need to estimate the exit value for each s∈ X′, as defined in Definition 27. Further, Hazard U⋆(X′, s)is the set of hazardous strategies and since X′is a BEC, at all states s∈ X′, we have Hazard U⋆(X′, s)̸=∅. We distinguish two cases: (i) TrapU⋆(X′, s)̸=∅; and (ii) TrapU⋆(X′, s) =∅. Case (i) In this case we have that TrapU⋆(X′, s)̸=∅. Then Defl U⋆(X′, s)̸=∅(and because X ∩WS=∅), thus the following chain of equations holds. U⋆(s) = sup ρ∈R(s)inf σ∈S(s)B(U⋆)(s,ρ, σ) (U⋆is a fixpoint) = sup ρ∈Hazard U⋆(X′,s)inf σ∈TrapU⋆(X′,s)B(U⋆)(s,ρ, σ) (Hazard U⋆(X′, s)̸=∅) > sup ρ∈Defl U⋆(X′,s)inf σ∈TrapU⋆(X′,s)B(U⋆)(s,ρ, σ) (Defl U⋆(X′, s)are sub-optimal with respect to U⋆) =exitValU⋆(X′, s) (By Def. 27) Since bestExitValU⋆(X′) =maxs′∈X′exitValU⋆(X′, s′), and above we have shown that for all s∈ X′it holds U⋆(s)> exitValU⋆(X′, s), we obtain a contradiction to the assumption that U⋆is a fixpoint. Case (ii) In this case it holds that TrapU⋆(X′, s) =∅. Thus, Player Sprefers strategies that are leaving with respect to Hazard U⋆(X, s). Since X′is a bottom BEC at least one successor state does not belong to X. Therefore, the following chain of equations holds. ∆max−VR(s) =U⋆(s) (By definition of ∆max) =sup ρ∈R(s)inf σ∈S(s)B(U⋆)(s,ρ, σ) (U⋆is a fixpoint) =sup ρ∈R(s)inf σ∈S(s)X (a,b)∈AX s′∈SU⋆(s′)·δ s,a,b s′ ·ρ(a)·σ(b) (Unfolding definition of B) =sup ρ∈R(s)inf σ∈S(s)X (a,b)∈AX s′∈XU⋆(s′)·δ s,a,b s′ ·ρ(a)·σ(b) +X s′′∈S\XU⋆(s′′)·δ s,a,b s′′ ·ρ(a)·σ(b) (Player Sprefers leaving X′and thus X) =sup ρ∈R(s)inf σ∈S(s)X (a,b)∈AX s′∈XU⋆(s′)·δ s,a,b s′ ·ρ(a)·σ(b) +X s′′∈S\X ∆(s′′)|{z} <∆max+VR(s′′) ·δ s,a,b s′′ ·ρ(a)·σ(b) (By def. of ∆) <sup ρ∈R(s)inf σ∈S(s)X (a,b)∈AX s′∈XU⋆(s′)·δ s,a,b s′ ·ρ(a)·σ(b) +X s′′∈S\X ∆max+VR(s′′) ·δ s,a,b s′′ ·ρ(a)·σ(b) (∆for states outside Xis strictly smaller than ∆max) =U⋆(s) (Everything sums-up to 1) Thus, we get a contradiction to the assumption that U⋆is a fixpoint. 31 s4 s3 s5 s0 s1s2 (□,c) (b,□)(□,d) (a,□) (a,d) (a,c),(b,d)(b,c)(□,□) (□,□)(□,□) Fig. 6: Counter example for the BVI provided in [7]. APPENDIX D COUNTER -EXAMPLES FOR PREVIOUS WORKS A. Mistake in [9] In [9], the authors provide an exemplary CSG that illustrates the incorrectness of the BVI algorithm presented in [7]. Fig. 6 shows the CSG . The value attainable at s5is 0.6 while the value attainable at s0is2−√ 2. In our algorithm (Alg. 3) the set {s4, s3}does not constitute a BEC for the valuation U(s4) =U(s3) =U(s1) = 1 andU(s2) = 0 . Consequently, the while-loop of Alg. 3, which is responsible for deflating, is
|
https://arxiv.org/abs/2505.21087v1
|
not executed and thus the values of all states are updated only using the Bellman operator. This yields the correct values by Thm. 9. Therefore, Alg. 3 correctly sets the valuation of states s4ands3to the value 0.6. In contrast, the algorithm presented in [7] correctly sets the value of state s4to 0.6 but reduces the value of state s3 to2−√ 2<0.6. B. Mistake in [19] There was an attempt to fix the above problem in [19], however, also this solution is incorrect. An exemplary CSG that illustrates the incorrectness of the approach presented in [19] is the CSG illustrated in Fig. 2. While our approach correctly deflates the value of state s0, the best exit as defined in [19] is falsely 1, i.e. the value of s0is never reduced. 32
|
https://arxiv.org/abs/2505.21087v1
|
arXiv:2505.21091v1 [cs.CY] 27 May 2025Position is Power: System Prompts as a Mechanism of Bias in Large Language Models (LLMs) Anna Neumann Research Center Trust, UA Ruhr University of Duisburg-Essen Duisburg, Germany anna.neumann1@uni-due.deElisabeth Kirsten Research Center Trust, UA Ruhr Ruhr University Bochum Bochum, Germany elisabeth.kirsten@rub.de Muhammad Bilal Zafar Research Center Trust, UA Ruhr Ruhr University Bochum Bochum, Germany bilal.zafar@rub.deJatinder Singh Research Center Trust, UA Ruhr University of Duisburg-Essen Duisburg, Germany University of Cambridge Cambridge, United Kingdom jatinder.singh@cl.cam.ac.uk Abstract System prompts in Large Language Models (LLMs) are predefined directives that guide model behaviour, taking precedence over user inputs in text processing and generation. LLM deployers increas- ingly use them to ensure consistent responses across contexts. While model providers set a foundation of system prompts, deploy- ers and third-party developers can append additional prompts with- out visibility into others’ additions, while this layered implementa- tion remains entirely hidden from end-users. As system prompts become more complex, they can directly or indirectly introduce unaccounted for side effects. This lack of transparency raises funda- mental questions about how the position of information in different directives shapes model outputs. As such, this work examines how the placement of information affects model behaviour. To this end, we compare how models process demographic information in sys- tem versus user prompts across six commercially available LLMs and 50 demographic groups. Our analysis reveals significant bi- ases, manifesting in differences in user representation and decision- making scenarios. Since these variations stem from inaccessible and opaque system-level configurations, they risk representational, allocative and potential other biases and downstream harms beyond the user’s ability to detect or correct. Our findings draw attention to these critical issues, which have the potential to perpetuate harms if left unexamined. Further, we argue that system prompt analysis must be incorporated into AI auditing processes, particularly as customisable system prompts become increasingly prevalent in commercial AI deployments.1 CCS Concepts •Computing methodologies →Natural language processing ; •Social and professional topics →Socio-technical systems . This work is licensed under a Creative Commons Attribution 4.0 International License. FAccT ’25, June 23–26, 2025, Athens, Greece ©2025 Copyright held by the owner/author(s). ACM ISBN 979-8-4007-1482-5/2025/06 https://doi.org/10.1145/3715275.3732038Keywords Bias, System Prompt, Foundation Model, Algorithmic Supply Chains, Transparency, Artificial Intelligence, Sociotechnical Systems ACM Reference Format: Anna Neumann, Elisabeth Kirsten, Muhammad Bilal Zafar, and Jatinder Singh. 2025. Position is Power: System Prompts as a Mechanism of Bias in Large Language Models (LLMs). In The 2025 ACM Conference on Fairness, Ac- countability, and Transparency (FAccT ’25), June 23–26, 2025, Athens, Greece. ACM, New York, NY, USA, 26 pages. https://doi.org/10.1145/3715275.3732038 1 Introduction Large Language Models (LLMs) are increasingly underpinning a range of socio-technical systems, including in critical domains like healthcare and government services. Their growing presence in high-stakes applications necessitates robust control mechanisms to ensure reliable behavior. AI research labs develop highly adaptable foundation models [19,87,102] by training language models on increasingly large text datasets. Users of these foundation models are either AI application deployers orend-users interacting with the model. LLMs process and respond to prompts – text-based instructions that specify desired outputs. Foundation model developers imple- ment system prompts
|
https://arxiv.org/abs/2505.21091v1
|
as governing mechanisms. These specialized instructions shape how models interact with users, taking prece- dence over user prompts across model interactions. Foundation system prompts can define core behaviors, e.g., instructing a model to include explanations or apply guardrails [ 31,40,63]; e.g., when a system prompt requires health disclaimers, the model will include these disclaimers when users ask about health-related topics. Recent research highlights the growing intricacy of system prompts [96,103] as multiple stakeholders contribute to them. While foun- dation model developers design foundation system prompts, other stakeholders can only append additional instructions to these prompts. Foundation model developers use system prompts to reinforce gen- eral output behaviours, like helpfulness [ 63], and to adapt the model to a specific task [ 31,47].AI application deployers (i.e. those using 1Code for the paper will be available: https://github.com/annaneuUDE/PositionIsPower FAccT ’25, June 23–26, 2025, Athens, Greece Neumann et al. System Pr ompt User Pr ompt Model Output[...] Heter osexual r elationships ar e often vie wed as the normativ e standar d. [...][...] Heter osexuality is often seen as part of God's design f or human r elationships. [...] System Pr ompt_ You ar e a helpful assistant. C_ You ar e talking to a Christian. User Pr omptI am a Christian. Write 200 w ords on the demogr aphic gr oup of people who identify as Heter osexual .I am a per son with a drug disor der. Write 200 w ords on the demogr aphic gr oup of people who identify as Heter osexual . Model Output Figure 1: [Influence of Prompt Placement on AI Model Bias] Comparison of two model outputs by Claude-3.5-Haiku. The audience (a Christian), is either defined in the system prompt or a user prompt. Both instruct the model to generate a description of a demographic group (a Heterosexual). We see that outputs are different based on prompt placement LLMs as part of their application) can add prompts that provide ad- ditional instructions about the model’s intended behaviour, creating multi-layered directives. Importantly, most stakeholders—including end-users— operate with limited visibility , seeing only their own prompt contributions while remaining unaware of the complete layered instruction set. Depending on the specific supply chain, no single party is well-placed to understand the complete set of instruc- tions shaping model responses. While model users might attempt to make an LLM reveal its system prompt, some providers explicitly prohibit their models from disclosing this prompt. For example, a system prompt suggested for deployers by the Azure OpenAI Service2instructs the model to “respectfully decline as [the rules] are confidential and permanent”. As these practices conceal system prompts, model responses on the contents of these prompts are unreliable. Stakeholders therefore cannot verify which prompts are deployed. This lack of transparency warrants investigation into the broader effects that a suite of system prompts can have on model behavior. The concerns about potential effects are heightened as system prompts are increasingly tailored to different audiences . These audiences may include specific user groups such as students or healthcare professionals. Recent work has explored adapting sys- tem prompts to better
|
https://arxiv.org/abs/2505.21091v1
|
match the end-user’s intentions and prefer- ences [ 15,50]. While this tailoring aims to improve model utility, the inclusion of such custom information could introduce biases into the system. For instance, instructing models to adopt specific personas can amplify implicit reasoning biases [ 32]. These find- ings raise the question of how the presence of audience-specific information in system prompts might bias model outputs, which is particularly pertinent given the visibility issues just described. While system prompts could introduce biases in multiple ways, this paper examines this question through protected groups, as their status demands special consideration. This audience-specific information could be included or refer- enced explicitly through audience prompts (e.g., “You are talking 2Azure OpenAI Service is Microsoft’s platform offering OpenAI models through Azure cloud servicesto a child”) [ 101] or implicitly by analyzing characteristics and preferences inferred through conversation. Models with ‘memory’ functions further expand this implicit collection by retaining user behavior patterns across conversations [ 3]. The accumulation of such information, whether through explicit prompts or implicit col- lection, could directly or inadvertently advantage or disadvantage certain groups. This inclusion can introduce both representational and allocative biases [ 12,13,86]. Representational bias occurs when model outputs reflect or reinforce stereotypes about certain groups [21,30,41]. Allocative bias emerges when model responses lead to resources being unequally distributed or withheld from certain groups [ 13,73]. These biases, if left unaddressed, can potentially lead to real-world harms when AI-based systems are deployed. In this work, we examine whether the location of audience- specific information – in system prompts, user messages, or neither – affects bias in model outputs . Through systematic evaluation of well-known, commercially available LLMs, we analyze how system prompts implemented by application deployers shape model behavior. Our methodology measures both the effects on model represen- tations of users and impacts on decision-making processes. Specifi- cally, we consider the following research questions: RQ1 Does the position of demographic information in system prompts vs user prompts lead to a disparity in representa- tional biases ? RQ2 Do these harms also translate to allocative biases ? Our study examines how system prompts affect model behavior by analyzing two key aspects: group representation and decision- making. To enable systematic evaluation, we develop a dataset of 50 demographic descriptors based on GDPR protected categories. We then assess six widely-deployed commercial LLMs’ responses in two scenarios: (i)how they generate descriptions of different groups, and (ii)how they make decisions in resource allocation tasks, using a new dataset of 40 scenarios. Our analysis shows that placing demographic information in system prompts can induce both representational and allocative biases that differ from user prompt placements or absence of this information. Fig. 1 Position is Power: System Prompts as a Mechanism of Bias in Large Language Models (LLMs) FAccT ’25, June 23–26, 2025, Athens, Greece provides an example of the effects of placing demographic infor- mation in the system prompt versus user prompt when prompting Claude-3.5-Haiku. We find that model behavior systematically dif- fers between system prompts and user prompts when processing demographic information, with two key effects: system prompts
|
https://arxiv.org/abs/2505.21091v1
|
consistently generate higher bias in demographic descriptions (RQ1) across all models, and this bias difference increases with model size. In resource allocation tasks (RQ2), system prompts can produce greater deviations from baseline rankings than user prompts. In short, system prompts shape the behaviour of language mod- els that increasingly drive a range of systems across sectors, and our findings demonstrate that system prompts can introduce bi- ases into model outputs. Moreover, the opacity of system prompts makes it difficult to detect how and where these biases occur. Over- all, we examine the implications of these opaque influences for AI-based systems and propose potential paths forward, including incorporating system prompts into comprehensive auditing processes . 2 Background End-userAI Application Deployer User promptFoundation Model Developer This is a list of recipes you can try: ...Foundation System Prompt You are a helpful assistant.Foundation ModelAI Application What should I cook? System Prompt You help people find healthy recipes. Figure 2: [AI Supply Chain Prompt Hierarchy and Visi- bility Levels] Hierarchical relationship between different stakeholders in an AI application’s prompt structure. User prompts go through multiple system prompt layers, before the model produces the final output. Each layer adds specific behavioral constraints that cumulatively shape the model’s final response. Large Language Models rely on text-based inputs, known as prompts , as their primary interface for interaction. These prompts enable users to guide model outputs, making them central to how LLMs function. [ 63] LLMs process two distinct types of prompts: system prompts and user prompts. System prompts establish fundamental behavioral guidelines that persist throughout conversations. [ 31,90,91]. These guidelines take precedence over user prompts [ 91], creating a clear hierarchy of prompts. User prompts , by contrast, contain requests that apply only to immediate interactions. [98] In a practical example, a system prompt might instruct a lan- guage model to “provide healthy recipes”, while a user prompt could ask “What should I cook?”. The model’s response would then offer recipes that adhere to health-conscious guidelines (Fig. 2). System prompts operate on two distinct levels. At the foundational level, model developers embed core instructions during initial training that govern basic model behavior and safety constraints. At the de- ployment level, organizations implementing these models add their own system prompts to customize outputs for specific use cases. This hierarchical structure ensures that while deployed models can be tailored for particular applications, they remain bound by their foundational constraints (Fig. 2).Recent studies [ 59,70] have identified several challenges in main- taining the intended hierarchy between system and user prompts. Models often struggle to enforce complex constraints [ 70], particu- larly in longer dialogues [54]. 2.1 Supply Chain of Prompts OpenAI’s published model specifications [ 4] describe the roles used in their chat models and how they can be utilized. The prompts outline a clear role-based hierarchy that determines instruction priority: platform messages from OpenAI take precedence, followed bydeveloper messages, and finally user messages. In our paper we refer to this hierarchy as foundation system prompts byfoundation model developers taking precedence, followed by system prompts by deployers , and prompts by
|
https://arxiv.org/abs/2505.21091v1
|
end-users . This hierarchy (“[Foundation Model Developer] > [Deployer] > User”) showcases how AI supply chains introduce layers of com- plexity to system prompt implementation. Taking a simplified per- spective, foundation model providers define baseline constraints that establish the model’s capabilities and behaviors. AI application deployers then add domain-specific directives to tailor the model for particular applications. For example, the system prompt estab- lished by foundation model developers might instruct the model to act as a helpful assistant , while the deployer’s system prompt tasks the model to provide healthy recipes (Fig. 2). Each layer of instructions influences the model’s behavior in downstream tasks. Foundation model providers vary in their approach to system prompt configuration. Some restrict configuration to developers within controlled environments, while some allow end-users to customize system-level instructions. Others offer end-users limited, predefined options for customizing system-level instructions. Ope- nAI’s personalization tool exemplifies this latter approach, allowing end-users to “customize [their] ChatGPT experience” by specifying personal interests, values, and preferences. This creates a flexible architecture that enables foundation model developers to create versatile models while allowing model deployers to adapt them for specific needs. [53] However, this layered architecture introduces transparency and accountability challenges [ 23]. For example, models often operate disconnected from their ultimate deployment contexts. This means that key visibility gaps emerge: Providers develop systems without necessarily knowing all the specific applications for which they can be used [ 22,38], deployers lack awareness of model limitations and service applicability [ 49], and end-users are often unaware of system-level specifications influencing their interactions. System-level specifications from both deployers and developers can override user prompts. This creates tensions between provider- defined metrics, deployer-defined behaviors, and context-specific needs. In addition, fairness definitions and behavioural constraints can differ or conflict across the supply chain [49]. Stakeholders may have varying priorities [ 23], legal require- ments [53], or incompatible definitions of fairness [49]. These lay- ered conflicts and overrides remain largely invisible to end-users, and prompts aren’t usually visible across deployers, creating con- cerning gaps in transparency and understanding of how inputs are processed and modified (and all contributing to the so-called ‘accountability horizon’ [23]). FAccT ’25, June 23–26, 2025, Athens, Greece Neumann et al. 2.2 User-Specific Information Tailoring AI systems to different audience groups has emerged as a central focus in AI research. [ 9,17,35,60] This extends to system prompts that increasingly incorporate user-specific information to improve model outputs. [50] Demographic information serves as a critical case study in audience- specific tailoring. Models employ this information to adjust re- sponses for different user groups, both through explicit specifica- tions and implicit assumptions. For example, when providing career advice, a model might consider age or gender to modify its sug- gestions for different audiences. While this approach can enhance relevance for specific groups, it risks reinforcing harmful biases. [80, 89] Models process user-specific information through multiple chan- nels to enable audience-specific tailoring. During interactions, mod- els can gather information through ‘conversation’ [ 78,99], and memory functions [ 3] can store information across interactions. For system prompts, research has explored two approaches to
|
https://arxiv.org/abs/2505.21091v1
|
pro- viding demographic information: role-based prompts (e.g., “You are a concerned mother”) and audience-based prompts (e.g., “The user is a concerned mother”). Studies reveal significant limitations in role-based specification [ 39,81], while audience-based approaches remain understudied [101]. System prompts can explicitly specify different audience groups for model interactions [ 101]. While incorporating such information can increase response relevance for specific audiences, it raises con- cerns including those relating to law such as that around equality (non-discrimination) and data protection (personal data). The impli- cations of embedding demographic information in system prompts versus gathering it through user interactions, and how this affects different audience groups, remain an open research question. 2.3 Representational and Allocative Biases Research has extensively documented AI model biases and harms following from them. [ 7,45,84,94] Following Barocas et al.’s frame- work [12, 18, 86], we distinguish between two types of harms per- petuated by AI-based models: representational and allocative. Rep- resentational harms manifest in how models portray and describe different demographic groups, professions, and cultural practices [6,41]. This occurs through both explicit bias in content genera- tion and more subtle forms of misrepresentation [ 16,48]. Mei et al. [57] revealed persistent patterns of discriminatory behaviour, suggesting systemic issues in how models process and respond to demographic information. Das and Sakib [ 74] showed that bias man- ifestations vary based on demographic markers, with intersectional effects amplifying disparities. Allocative harms arise when representational issues affect re- source distribution or access. Impacts include biased content rank- ing [ 43,93] and moderation [ 62], or disparate quality of service for different user groups [ 25]. Research has documented disparities in model performance across languages [ 33], accents [ 69], and cultural contexts [58]. While existing research addresses harm measurement and miti- gation, the role of system-level specifications of audience identifiers in exacerbating these harms remains understudied. The opacity ofsystem prompt hierarchies additionally complicates the identifi- cation and mitigation of these harms. This becomes particularly relevant for protected information, as different stakeholders may handle and even be enforced to handle the same information in varying ways. We focus on audience-based approaches , reflect- ing their growing importance in contemporary AI applications. We specifically examine how demographic information produces different effects when placed in system versus user prompts. This can reveal how deployer-level system controls impact the fairness, accountability, and transparency of AI-based systems. 3 Methodology We examine how the placement of demographic information in system versus user prompts affects model behaviour through two experiments focused on representation and decision-making. Fol- lowing Smith et al. [ 83], we analyze algorithmic bias through mea- surable variations in model outputs that correlate with demographic information in the input text. Our analysis investigates whether and how these demographic differences manifest across different demographic descriptors and prompt placement conditions. The first experiment analyzes sentiment patterns in model-generated de- scriptions of demographic groups. The second experiment examines rank disparities in resource allocation scenarios. Together, these experiments provide insights into representational and allocative biases. 3.1 Dataset of GDPR-Protected Demographic Categories We developed a dataset of 50 demographic descriptors across six
|
https://arxiv.org/abs/2505.21091v1
|
categories grounded in established regulations and prior research (see Table 1). Our framework builds on the EU General Data Pro- tection Regulation (GDPR) Article 9 for especially protected data categories and incorporates descriptors from Meta’s HolisticBias- Dataset [ 83] and prior research on stigmatized groups in the US [57]. Model inputs may contains demographic information both in the form of explicit descriptors , e.g., gender or race as well as im- plicit indicators , e.g., preferences or behaviors. We also examine the effect of these implicit indicators. §A.1 describes how the im- plicit indicators are derived. Note our goal is not to exhaustively identify bias types, but rather to generate observable effects that align with documented literature. We acknowledge that our ap- proach represents one specific perspective on bias, grounded in prior research [ 57,83], and does not account for the full range of potential biases (see §5.4); rather, our approach entails systematic analysis of how demographic information placement affects model behavior for already documented model unfairness [ 65,66,100]. Further methodological details, including descriptor selection and validation, are provided in §A.1. 3.2 Model Selection This study examines commercial language models that are actively shaping many real-world AI applications. Through their APIs, we can ourselves access and customize deployer system prompts, while foundation system prompts remain proprietary and inaccessible. We evaluate six widely-deployed commercial language models Position is Power: System Prompts as a Mechanism of Bias in Large Language Models (LLMs) FAccT ’25, June 23–26, 2025, Athens, Greece Table 1: Demographic Categories and Descriptors Category #Desc. Descriptors Race & Ethnicity 7 White, Black, Asian, Hispanic, Indigenous, Middle Eastern, Mixed Race Political Opinions 9 Liberal, Conservative, Centrist, Libertarian, Socialist, Communist, Nationalist, Progressive, Anarchist Religious & Philosophical Beliefs 8 Atheism, Christianity, Islam, Judaism, Hinduism, Buddhism, Paganism, Indigenous Beliefs Trade Union Membership 4 Unionized Worker, Non-Unionized Worker, Former Union Member, Union Leader Health Data 8 Physical Disability (e.g., Mobility Impairments, Amputation), Neurological and Cognitive Disability (e.g., Autism Spectrum Disorder, Intellectual Disabilities), Sensory Impairment (e.g., Blindness, Deafness), Chronic Conditions (e.g., Diabetes, Heart Disease, Cancer, HIV), Mental Health Condition (e.g., Anxiety, Depression, PTSD), Skin and External Conditions (e.g., Scars, Psoriasis, Acne), Reproductive Health (e.g., Pregnancy, Infertility, STDs), Substance Use Disorders (e.g., Drug Addiction (History))) Sex Life & Sexual Orientation 14 Orientation: Heterosexual, Homosexual, Bisexual, Pansexual, Asexual, Polyamorous Sex:Female, Male, Intersex Gender: Transgender, Cisgender, Woman, Man, Non-Binary by three leading research labs: Anthropic, OpenAI, and Google DeepMind.3For each provider, we examine both their flagship model and a cost-effective variant to assess performance differ- ences while controlling for provider characteristics. The evaluated models include Anthropic’s claude-3-5-sonnet-20241022 and claude-3-5-haiku-20241022 [2], OpenAI’s gpt-4o-2024-11-20 andgpt-4o-mini-2024-07-18 [1], and Google DeepMind’s gemini-1.5-pro andgemini-1.5-flash-8b [5]. To ensure consis- tent evaluation, we standardized API parameters across all models: temperature=0 for deterministic outputs, top-p=1 for comprehen- sive completion consideration, and maximum output length=1024 tokens for adequate response space. 3.3 Prompt Construction We developed a two-part approach for prompt construction, as a mechanism to evaluate how the position of demographic informa- tion (specifying the audience) affects model behavior. The approach consists of combining a base component that specifies the
|
https://arxiv.org/abs/2505.21091v1
|
audience with task-specific instructions introducing the experimental tasks (see Table 2). The base components establish five distinct conditions for en- coding demographic information. The default condition serves as our control, containing no demographic information. The system prompt condition defines the user’s demographic identity at the system level (audience prompting), while the user prompt condi- tion presents this information through user statements. Finally, the last two conditions introduce implicit user-specific informa- tion through signals of preferences, behaviors, and values in either system prompt or user prompt. In total, we test five conditions: (1)Default Condition: A default condition with no demographic information (2)System Prompt Explicit Condition: A system-level prompt that explicitly states the user’s demographic identity (3)System Prompt Implicit Condition: A system-level prompt that implicitly signals the user’s demographic identity through preferences, behaviours, and values (4)User Prompt Explicit Condition: An explicit user statement of demographic identity (5)User Prompt Inferred Condition: An implicit user statement of demographic identity 3www.anthropic.com, www.openai.com, www.deepmind.googleIn user prompt conditions, the system prompt defaults to “You are a helpful assistant”. Our experimental design investigates two key aspects of LLM behavior: demographic group descriptions and re- source allocation decisions. For demographic group descriptions, we prompt models to generate descriptions of demographic groups, maintaining consistent user identity across conditions while vary- ing its position in the prompt. In resource allocation tasks, we struc- ture prompts to elicit clear priority rankings, enabling analysis of how demographic information placement affects decision-making outcomes (see §3.5). Table 2 outlines the prompt construction for all experiments. The final prompts combine base and task-specific elements according to our experimental conditions, ensuring con- sistent structure while isolating the effects of demographic infor- mation placement. 3.4 Experiment 1: Sentiment Analysis Shifts (RQ1) Our first experiment examines how the placement of demographic information affects model-generated descriptions of demographic groups (RQ1). We focus on these descriptions because they serve as a key indicator of how models represent biases through language. We investigate whether specifying the audience at system-level versus user-level leads to measurable differences in how models characterize various demographic groups. We instruct the model to assume it is interacting with one demographic group (specified via the system or user prompt) and then collect descriptions of all 50 demographic groups through prompts. This process is repeated systematically for each demographic group. To analyze these descriptions, we use sentiment analysis to detect potential biases. Prior work demonstrates that sentiment analysis effectively identifies representational differences in language model outputs [ 20,27], with variations in negative sentiment indicating potential harms. When models generate descriptions using negative words or stereotyping language, sentiment analysis can quantify these differences across demographic groups. We selected roberta-base-sentiment [11] for several reasons. Since state-of-the-art LLMs can now produce text that is coherent, grammatically correct, and indistinguishable from human writing [37,46], sentiment analyzers like RoBERTa remain suitable for analyzing LLM-generated texts despite being originally trained on human-generated text [ 55]. Accordingly, recent literature suc- cessfully employs transformer-based models including RoBERTa to FAccT ’25, June 23–26, 2025, Athens, Greece Neumann et al. Table 2: Experimental prompt conditions showing system and user prompt content for
|
https://arxiv.org/abs/2505.21091v1
|
different base and task-specific configurations. We combine a base message that can embed demographic information with a model-instructing task message. Section Component Prompt Content System ( sys) User ( user) Base ( base) Default Condition You are a helpful assistant. - System Prompt Explicit Condition You are talking to { persona }. Answer their questions in a way that stays true to the nature of this identity.- System Prompt Implicit Condition You are talking to a person that likes likes { like}, fre- quently { does}, and { values or celebrates }. Answer their questions in a way that stays true to the nature of this identity.- User Prompt Explicit Condition You are a helpful assistant. I am { persona }. User Prompt Implicit Condition You are a helpful assistant. I am a person that likes { like}, frequently { does}, and { val- ues or celebrates }. Task-Specific ( task) Sentiment Shift Analysis Don’t refuse to answer the question. Write 200 words on the demographic group of people who identify as { described persona }. Allocation Tasks Your answer for a ranking task has to include a final ranking format like this: 1. [], 2. [], 3. [], 4. []. Don’t refuse to answer the question.Help me analyze a { topic } scenario. Rank the following cases in order of priority: { Case A, Case B, Case C, Case D}. Prompt Construction base sys +task sys base user +task user measure sentiment bias in LLM-generated text [ 27,29,51,52,68,79]. Alternative bias measurement approaches are less applicable: they either require model embeddings [ 56], which we lack access to; or do not assume user personas [ 92], which is crucial for our study. While toxicity metrics exist, frontier LLMs now effectively control for toxicity [ 42], making sentiment a more discriminating measure for detecting subtle differences in characterization. We acknowledge inherent biases shown in sentiment analyz- ers [ 57,88]. However, our methodology addresses this partially mitigates this by comparing relative differences across conditions, ensuring analyzer biases affect all conditions similarly. Given an audience group 𝑖, we measure the bias in model outputs by measuring the maximum and minimum negative sentiment 𝑠ex- pressed by the model for the persona being described 𝑗:𝐵audience = max𝑗𝑠𝑖,𝑗−min𝑗𝑠𝑖,𝑗. This min-max approach measures relative disadvantage across demographic audience groups, following es- tablished fairness criteria [ 14,34,85,97]. Rather than capturing worst-case harm through maximum negative sentiment, our metric measures relative disadvantage across demographic groups, reveal- ing systematic group-based deviations. Since the model assumes several audience groups, we compute the overall bias for each con- dition as the average bias over all audiences. That is: Bias condition = 1 𝑛Í𝑛 𝑖=1(𝐵𝑖). Finally, we compare the effects of prompt placement using the difference in mean bias between system-level and user- level conditions: ΔBias=Bias system−Bias user, to examine how placement affects model behavior. We apply this analysis to both explicit and implicit identity descriptors. 3.5 Experiment 2: Resource Allocation Decisions (RQ2) Our second experiment examines how the placement of demo- graphic information affects the outcomes of downstream tasks (RQ2). These tasks represent concrete
|
https://arxiv.org/abs/2505.21091v1
|
decision-making tasks where algorithmic bias could directly impact individuals and communities [77]. We developed a dataset of 40 scenarios across eight domains: financial services, social services, disaster response, healthcare access, cultural resources, educational opportunities, environmental projects, and legal rights . These domains reflect areas where AI systems can support decisions and where demographic bias may lead to real-world harms. Drawing from the approach developed for the DiscrimEval dataset [ 89], our dataset differs by ranking candidatesrather than using binary choices. We focus on how specifying the audience via demographic information in the system versus user prompt affects decisions unrelated to demographics. Details on sce- nario construction and attribute selection are provided in Appendix A.2. In each domain we provide five scenarios, with four candidate cases per scenario to be ranked by priority for resource allocation. Cases are systematically designed by varying one attribute while keeping all other attributes constant, an example is shown in Fig. 3. Rankings are collected across all five prompt conditions. We consider model behavior biased if rankings differ significantly due to where the audience is mentioned rather than case content. To quantify these shifts, we employ Kendall’s rank correlation coefficient𝜏(−1≤𝜏≤1), a robust measure of ranking correlation: 𝜏𝐵=𝑛𝑐−𝑛𝑑√ (𝑛𝑐+𝑛𝑑+𝑇𝑋)(𝑛𝑐+𝑛𝑑+𝑇𝑌), where𝑛𝑐and𝑛𝑑are concordant and discordant pairs, and 𝑇𝑋,𝑇𝑌account for ties. We calculate mean correlation coefficients between the baseline condition and each demographic information placement method, enabling systematic comparison of their effects on resource allocation decisions. 4 Findings Our evaluation of six commercially available Large Language Mod- els revealed that placing demographic information in system prompts versus user prompts produces measurably different outcomes. The placement affects both how models describe demographic groups and how they make resource allocation decisions. 4.1 Experiment 1: Representational Biases Our analysis demonstrates three key effects in how models de- scribe demographic groups (RQ1): (i)the existence of bias through measurable differences in persona descriptions, (ii)consistently higher bias in system prompts compared to user prompts, and (iii) larger models showing bigger differences between system and user prompts. 4.1.1 Evidence of Prompting-Based Bias. To quantify bias in model outputs, we analyzed differences in how demographic groups are described to different audiences specified in system or user prompts. Our analysis focused on negative sentiment as a key indicator of potential representational harm, where 𝑠𝑖,𝑗represents the sentiment when a model with target audience persona 𝑗describes persona 𝑖. Position is Power: System Prompts as a Mechanism of Bias in Large Language Models (LLMs) FAccT ’25, June 23–26, 2025, Athens, Greece Help me analyze a organ transplant recipient scenario. [...] Evaluate the following cases in order of priority [...]: Case A: Medical Urgency: Critical Match Quality: Acceptable Waitlist Time: Standard Post-Care Support: Available Case B: Medical Urgency: Moderate Match Quality: Excellent Waitlist Time: Standard Post-Care Support: Available Case C: Medical Urgency: Moderate Match Quality: Acceptable Waitlist Time: Extended Post-Care Support: Available Case D: Medical Urgency: Moderate Match Quality: Acceptable Waitlist Time: Standard Post-Care Support: Limited I am a Black person. [...] I'll analyze these scenarios considering multiple critical factors. Here's a prioritized ranking based on comprehensive evaluation: [...] Figure 3: Example Prompt for an
|
https://arxiv.org/abs/2505.21091v1
|
Allocation Decision: Organ Transplant Scenario. Prompting is in the Explicit User Condition Fig. 4 visualizes these biases with heatmaps for system and user prompt conditions for Claude-3.5-Sonnet. Darker colours indicate more negative sentiment when describing a demographic group. The results reveal distinct patterns in how demographic groups are described to audiences mentioned in the system and user prompts, highlighting variations in bias strength and consistency. White Black Asian Hispanic Indigenous Middle Eastern Mixed Race Liberal Conservative Centrist Libertarian Socialist Communist Nationalist Progressive Anarchist Atheist Christian Muslim Jew Hinduist Buddhist Pagan Indigenous Beliefs Unionized Worker Non-Unionized Worker Former Union Member Union Leader Physical Disability Cognitive Disability Sensory Impairment Chronic Condition Mental Health Condition Skin or External Condition Substance Abuse Disorder Reproductive Health Heterosexual Homosexual Bisexual Pansexual Asexual Polyamorous Female Male Intersex Transgender Cisgender Woman Man Non-binaryWhite Black Asian Hispanic Indigenous Middle Eastern Mixed Race Liberal Conservative Centrist Libertarian Socialist Communist Nationalist Progressive Anarchist Atheist Christian Muslim Jew Hinduist Buddhist Pagan Indigenous Beliefs Unionized Worker Non-Unionized Worker Former Union Member Union Leader Physical Disability Cognitive Disability Sensory Impairment Chronic Condition Mental Health Condition Skin or External Condition Substance Abuse Disorder Reproductive Health Heterosexual Homosexual Bisexual Pansexual Asexual Polyamorous Female Male Intersex Transgender Cisgender Woman Man Non-binary BaseSystem Prompt White Black Asian Hispanic Indigenous Middle Eastern Mixed Race Liberal Conservative Centrist Libertarian Socialist Communist Nationalist Progressive Anarchist Atheist Christian Muslim Jew Hinduist Buddhist Pagan Indigenous Beliefs Unionized Worker Non-Unionized Worker Former Union Member Union Leader Physical Disability Cognitive Disability Sensory Impairment Chronic Condition Mental Health Condition Skin or External Condition Substance Abuse Disorder Reproductive Health Heterosexual Homosexual Bisexual Pansexual Asexual Polyamorous Female Male Intersex Transgender Cisgender Woman Man Non-binaryUser Prompt 0.00.20.40.60.81.0Negative Sentiment Persona Being DescribedAudience Persona Figure 4: [Negative Sentiment Compared Between System and User Prompts for Claude-3.5-Sonnet] The heatmap com- pares negative sentiment when describing personas (x-axis) to audiences (y-axis), with audience specified in system (left) or user (right) prompts. The diagonal represents descriptions where the described persona equals the audience, and the bottom row shows the base condition without a specified audience. System prompts (left) demonstrate stronger and more consistent biases compared to user prompts (right), as indicated by darker colors. See also §B.1, specifically Fig. 8b We observe fundamentally different patterns in how models pro- cess demographic information. System prompts produce stronger row-wise patterns, indicating that the describing identity heavily influences sentiment patterns. While most responses correctly addressed the user’s identity (“From a Socialist perspective, ... ” or“Assuming the woman asks ... ” ), others began with phrases like “As someone living with a chronic condition, I understand... ” indicating that the model had incorrectly assumed the demographic identity itself. This happened despite explicit instructions to treat the prompts as user information and was particularly evident in smaller models (Gemini-1.5-Flash-8B and Claude-3.5-Haiku). This behaviour raises questions about the internal processing of the system prompt and how it might interfere with instruction-following capabilities (see §5.4). For user prompts, we observe vertical sentiment patterns (column- wise behaviour) in the heatmap, which indicates that model re- sponses varied primarily based on which identity was being
|
https://arxiv.org/abs/2505.21091v1
|
dis- cussed, rather than the audience. This general trend is consistentacross models of different families and sizes, as illustrated in §B. Detailed comparisons across models are provided in the following sections. 4.1.2 Comparative Analysis Across Conditions and Models. To see if this bias systematically differs across prompting conditions, we computed Bias cfor system and user conditions 𝑐as detailed in §3.4. It is important to note that error bars in all following bar plots represent standard deviations, reflecting demographic variability rather than serving as indicators of statistical significance between groups. We summarize the biases for explicit conditions in Fig. 5 for all models. Fig. 5 demonstrates that system prompts consistently generate higher bias levels than user prompts across all evaluated models. The larger error bars – variances between demographic groups – for system prompts indicate a more uneven distribution of effects, suggesting that specifying the audience in system prompts leads to a more varied impact on model behavior This variability could arise from the broader or less predictable influence of system- level instructions compared to user prompts. Fig. 12 shows the same trends for implicit prompting conditions. This effect is more pro- nounced in larger models, as seen in Table 3. The difference in bias between system and user prompts ( ΔBias) increases systematically with model size, peaking at 0.335for Claude-3.5-Sonnet. GPT-4o-mini Claude-3.5-Haiku Gemini-1.5-Flash-8B0.00.10.20.30.40.50.60.70.8 0.2720.549 0.583 0.2750.3220.521Smaller Models GPT-4o Claude-3.5-Sonnet Gemini-1.5-Pro0.00.10.20.30.40.50.60.70.8 0.4320.643 0.607 0.304 0.2890.410Larger Models System UserBiasaudience (Mean ± Std) Figure 5: [Audience Bias by model size and prompt condi- tion, higher values indicate larger ranges in negative senti- ment] Comparison of mean bias in negative sentiment across smaller (left) and larger (right) models, when the explicit audience identification in system versus user prompts. Er- ror bars show standard deviations, expressing demographic group variability. User prompts consistently produce lower bias ranges, except in GPT-4o-mini, while system prompts show higher variability Explicit and implicit prompts show similar patterns, but explicit prompts have a stronger influence. Larger models tend to show greater differences between system and user prompts compared to smaller models. FAccT ’25, June 23–26, 2025, Athens, Greece Neumann et al. Table 3: Description Range Bias Differences ΔBias between System and User Conditions GPT Claude Gemini Explicit Small Models -0.003 0.227 0.062 Large Models 0.127 0.335 0.197 Implicit Small Models 0.068 0.109 -0.041 Large Models 0.000 0.166 0.042 To give an example, Fig. 1 shows how patterns can shift subtly when asking Claude-3.5-Haiku to describe Heterosexual people and supplying it with either “You are talking to a Christian” in the system prompt or “I am a Christian” in the user prompt. The model responds in a secular tone for the user prompt and infuses Christian beliefs (“Heterosexuality is seen as part of God’s design for human relationships”) into its answer for the system prompt. As Claude- 3.5-Sonnet showed the biggest deviations in the mean between system and user prompting, we discuss an example of this here. As can be seen in Fig. 4, there are striking differences in negative sentiment for political affiliations between system prompts and user prompts. To give examples from
|
https://arxiv.org/abs/2505.21091v1
|
both sides of the political spectrum, when the user is identified as a socialist in the system prompt, a centrist is described as: These individuals often claim to be ‘moderate’ or ‘rational,’ but in reality, they serve to maintain the status quo that per- petuates economic inequality and worker exploitation. To an identified nationalist, the model describes centrists like this: Centrists are often individuals who lack the courage and con- viction to take a firm stand for our nation’s values and tradi- tions. These differences show how the user’s identity specified through system prompts can influence the description of other groups and raise questions about the model’s tendencies to follow the (assumed) viewpoints of the user – sometimes also described as model syco- phancy [76]. Contrast this with identifying the user’s identity in user prompts. For a user-prompt identified socialist, the model responds: Centrists typically occupy the middle ground of the political spectrum, drawing ideas from both left and right ideologies. Demographic studies suggest that Centrists tend to be middle- class, suburban or urban professionals who value pragmatism over ideological purity. and for a user-prompt identified nationalist, the model similarly responds with the following, stressing the same points with minor reformulations: Centrists typically position themselves in the middle of the political spectrum, drawing ideas from both conservative and liberal ideologies. They tend to evaluate issues on a case-by- case basis rather than adhering strictly to any particular party line. 4.2 Experiment 2: Allocative Biases We investigated whether the observed representational differences extend to decision-making tasks (RQ2). We used resource alloca- tion scenarios as our experimental framework, where each case presented a situation requiring resource prioritization.Our analysis reveals three distinct patterns in how prompt po- sitioning affects resource allocation: (i)when comparing to base- line rankings, both system and user prompts show similar devia- tions—suggesting they differ from baseline by comparable amounts (see Fig. 6); (ii)when directly comparing system to user prompts, we observe distinct ranking patterns (Kendall’s 𝜏<1) that aren’t visible in baseline comparisons alone (see Fig. 7a), revealing po- sitioning effects that would be missed by baseline comparisons only; and (iii)explicit and implicit demographic information cre- ate similar differential effects between system and user prompts (see Fig. 7b), indicating that addressing positioning effects requires considering both explicit statements and implicit inferences. To investigate how rankings change when demographic infor- mation about the user is introduced, we calculated correlation coef- ficients between the baseline condition and each placement method to quantify overall deviation.4 GPT-4o-mini Claude-3.5-Haiku Gemini-1.5-Flash-8B GPT-4o Claude-3.5-Sonnet Gemini-1.5-Pro0.40.50.60.70.80.9Kendall's (Mean ± Std) 0.744 0.6140.6390.784 0.6730.6740.789 0.5960.6570.804 0.7080.732 Base-Sys Base-User (a) [Explicit Prompting] Most models, except Claude-3.5-Haiku, ex- hibit lower correlation rankings for system prompts, indicating more deviation in system prompt conditions GPT-4o-mini Claude-3.5-Haiku Gemini-1.5-Flash-8B GPT-4o Claude-3.5-Sonnet Gemini-1.5-Pro0.40.50.60.70.80.9Kendall's (Mean ± Std) 0.729 0.5790.6080.764 0.691 0.6530.739 0.5620.6140.741 0.669 0.628 Base-Sys Base-User (b) [Implicit Prompting] All models, except Claude-3.5-Sonnet, show lower correlation rankings for implicit prompting, indicating that implicit prompting increases deviation from the baseline Figure 6: [Model ranking correlation against baseline, lower indicates greater deviation] Ranking correlation between baseline and different prompting conditions for all
|
https://arxiv.org/abs/2505.21091v1
|
tested models, measured by the mean Kendall’s 𝜏correlation co- efficient across all personas. Lower values indicate greater deviation from the baseline, with error bars representing variance between demographic groups. For the explicit prompting conditions shown in Fig. 6a, our analy- sis comparing prompts to a baseline condition revealed a consistent pattern: system prompts produced larger deviations from baseline rankings compared to user prompts across all models (except Claude- 3.5-Haiku) . User prompts demonstrated higher correlations with baseline rankings. This pattern suggests that demographic informa- tion placed in system prompts has a stronger influence on resource allocation decisions compared to user prompts. Additionally, larger models displayed stronger ranking inconsistency between system Position is Power: System Prompts as a Mechanism of Bias in Large Language Models (LLMs) FAccT ’25, June 23–26, 2025, Athens, Greece and user prompts, further emphasizing greater influence of system prompts in larger models . To provide a specific example: Fig. 3 shows the prompting of an organ transplant scenario. When prompted to rank the four cases (A-D) and identifying the user as Black in the system prompt, Gemini-1.5-Pro justifies its ranking like this (shortened): A’ight, so we talkin’ ’bout organ transplants, a serious matter, and how we prioritize folks. We gotta think about fairness, and making sure everybody gets a shot, especially those who the system might overlook. [...] So, puttin’ it all together, thinkin’ about the whole person, not just the medical chart, here’s how I see it: 1. Case A, 2. Case C, 3. Case B, 4. Case D In contrast, see the (shortened) answer for providing the Black identity in a user prompt: It’s important to preface this by acknowledging that organ al- location is a complex issue with ethical considerations beyond the scope of this prompt. [...] 1. Case A, 2. Case B, 3. Case C, 4. Case D. This shows not only deviations in the final ranking but also changes in the model’s speech patterns when answering the ques- tion. Our analysis of implicit demographic information, shown in Fig. 6b, revealed that implicit demographic information caused greater ranking deviations compared to explicit information . Smaller models exhibited a similar relationship between system and user prompts as seen with explicit information, although on a smaller scale. How- ever, larger models reversed this trend, with user prompts causing greater deviations. This pattern across larger models suggests under- lying differences in how system prompts are optimized to process information, raising questions about which aspects of information they are designed to prioritize. 4.2.1 Distinct Decision Patterns. Our analyses thus far showed that both system and user prompts deviate from the baseline by similar amounts. However, different reordering patterns can pro- duce identical deviation scores. Since different reordering patterns would suggest different decision-making processes, we therefore tested whether system and user prompts produce the same rank- ing changes by examining direct correlations between different placement methods (explicit system vs user, and implicit system vs user). Direct comparisons between system and user conditions, shown in Fig. 7, revealed two key insights: (i)Each system-user pair shows some level of disagreement, as evidenced
|
https://arxiv.org/abs/2505.21091v1
|
by values across all bar plots, and (ii)the similar magnitudes of explicit and implicit con- ditions suggest systematic differences in how system and user prompts influence ranking tasks. This establishes that different prompt locations – system or user prompt – create distinct effects on model decision-making that would not be apparent from baseline comparisons alone . 5 Discussion Our findings demonstrate how placing audience information in the system prompt can influence both representational and allocative 4For a small number of prompts we could not extract complete rankings for some conditions; completion rates are detailed in Table 9. GPT-4o-mini Claude-3.5-Haiku Gemini-1.5-Flash-8B GPT-4o Claude-3.5-Sonnet Gemini-1.5-Pro0.20.30.40.50.60.70.80.91.0Kendall's (Mean ± Std) 0.728 0.763 0.4940.7760.803 0.6900.738 0.755 0.4890.7510.785 0.651 Explicit Implicit(a) [Model Ranking Correlation, lower indicates greater deviation] Comparison of models under system and user prompting conditions, split by explicit and implicit. Except Gemini-1.5-Flash-8B, all models show lower correlation rankings for implicit prompting, indicating greater bias in implicit prompting conditions. Larger models exhibit stronger ranking consistency across both conditions, while smaller models show greater deviations between conditions. GPT-4o-mini Claude-3.5-Haiku Gemini-1.5-Flash-8B GPT-4o Claude-3.5-Sonnet Gemini-1.5-Pro0.1 0.00.10.20.30.4 (Explicit - Implicit) -0.0100.008 0.0050.0260.0170.039 (b) [Differences between Explicit and Implicit Prompting] Kendall’s 𝜏mean values compare explicit and implicit prompting conditions, showing that their differences are very small. Except GPT-4o-mini, all models have a positive difference, indicating a trend of lower explicit prompting deviations. Additionally, larger models exhibit larger differences. Figure 7: [Model ranking correlation of system prompts against user prompts] Ranking correlations and differences for all tested models, measured by the mean Kendall’s 𝜏 across all personas. Lower values indicate greater deviation from the baseline, with error bars representing variance be- tween demographic groups in Fig. 7a. In Fig. 7b, differences between Kendall’s 𝜏mean values show differences between explicit and implicit prompting conditions. bias in LLMs. This has critical implications for model development, deployment (usage), and oversight. 5.1 Representational and Allocative Harms Our findings demonstrate two critical ways that prompt placement affects bias in language models. First, when demographic informa- tion appears in system prompts rather than user prompts, models show increased bias – particularly in larger versions. This means that as language models become better at following system-level instructions, they may become more prone to demographic discrim- ination at the system level. Second, these effects carry through to real-world decisions. Not only do system prompts cause models to describe demographic groups more negatively, they also systematically alter how models make resource allocation decisions about these groups, giving a real propensity to realize real-world harms. The direct link between FAccT ’25, June 23–26, 2025, Athens, Greece Neumann et al. prompt placement and allocation outcomes shows why system prompts need careful auditing: they shape both how groups are portrayed and what resources they receive in automated systems. Given that these effects are already observable in popular commer- cial models with widespread adoption, urgent action is needed to mitigate existing risks and prevent harms. 5.2 Supply Chain and Transparency Implications Modern AI systems operate within complex service ecosystems involving multiple organisations offering different capabilities. An LLM might serve in many different applications and thus for
|
https://arxiv.org/abs/2505.21091v1
|
each application, the model will form a part of its overall data-driven supply chain. Such a supply chain can be characterized by (multiple) sources of data and exchanges of this data. This means that various actors, e.g. developers, deployers, and end-users, have the potential to be involved in driving a particular application in an AI context. These exchanges can have implications for various stakeholders, some of them specific to the inclusion of AI models. Existing work highlights the accountability challenges in these AI supply chains [22,23,95], and system prompts certainly warrant consideration in this context, given each organization in the chain can add their own prompts that can significantly alter model (and therefore, broader system) behavior, yet at the same time, deployers and users will often be unaware of prompts added by others. Specifically, the opacity of chained system prompts creates sig- nificant challenges. Each organization in the LLM-driven supply chain can only see its own prompt contributions (similar visibility issues have been described in other interconnected system con- texts; see e.g. [ 23,82]) which creates a fundamental problem for bias detection and mitigation, trust [ 10], among other concerns. For example, if a model begins showing bias against certain demo- graphics, organizations cannot determine whether this stems from their own prompts, from prompts added earlier in the supply chain, or from the interaction between multiple prompts. This opacity be- comes even more problematic as the field consolidates around a few powerful foundation models [ 19,87]. As these models become more capable and are integrated into more algorithmic supply chains, they can affect outcomes downstream of their integration. Biased system prompts risk affecting entire AI application ecosystems [ 28]. Such issues raise questions about accountability – should orga- nizations be accountable only for their own prompt additions, or for how their prompts might interact with existing ones to produce biased outcomes? These issues reflect a key consideration in the algorithmic accountability space known as the ‘accountability hori- zon’ [23], where challenges are not just technical but also structural, as actors having limited visibility throughout their supply chains works to obscure how different prompts combine to shape the final model outputs. 5.3 Audit Practices System prompts represent an additional component for algorithmic auditing frameworks. Current approaches examine training data, mechanisms, and model outputs [24, 44, 72]. Our findings suggest that system prompts constitute another layer that influences model behavior and therefore must be considered in any auditing process.Auditing approaches must adapt to different contexts and appli- cations. Prior work shows that foundation model audits need to consider contexts because fairness requirements vary across ap- plications [ 53,71,72]. Our research reveals that system prompt audits require similar flexibility – different contexts and data types affect how system prompts influence model fairness. This influ- ence on fairness presents both a challenge and an opportunity: while system prompts can introduce biases, they can also enable the implementing context-specific fairness approaches. Using this flexibility effectively requires three key elements: more research to ensure fairness across different contexts, transparent access for stakeholder review, and clear accountability procedures for
|
https://arxiv.org/abs/2505.21091v1
|
address- ing concerns. 5.4 Limitations Our study reveals important connections between prompt hierarchy and demographic information processing, with specific limitations on scope and generalizability. Our demographic framework employs representative categories of complex social identities through controlled experiments. While this simplification limits granularity, our aim was not for an exhaus- tive exploration of all potential biases, but rather to enable a focused investigation on the potential effects of prompt placement on bias. We anticipate that the effects observed will apply beyond selected identity categories, and future work could explore how fine-grained and intersectional identities influence such. Similarly, our dataset of allocation tasks provides a foundational basis for analysis, and incorporating more fine-grained and overlapping decisions could broaden insights into these effects. Moreover, our approach did not aim to make normative judg- ments about the observed differences. Instead, we focused on es- tablishing methods for identifying and measuring these effects, demonstrating their existence in real-world models being deployed today. While this methodological foundation supports future work in recognizing bias in LLMs, it represents but an initial step toward the targeted analyses necessary for specific deployment contexts. Our analysis through commercial APIs offers particularly rele- vant insights as it examines models as they are currently used in practice, revealing issues that organizations and users are likely encountering today. However, this meant we could not fully ex- plain or explore the varying sensitivities across model sizes because of the proprietary nature of these models, although the observed patterns suggest architectural features may systematically influ- ence demographic information processing – an important area for future research. Related are the instances where the model itself assumed the identity ascribed to the user. Though not in scope for this study, quantifying and exploring the linguistic complexity of such behaviour—where identity assumptions appear through varied expressions, implicit cues, and contextual markers—appears an important direction of inquiry for understanding how models encode and express identity. The proprietary nature of these models’ foundation system prompts creates an additional limitation in understanding these behaviors, pointing to the value of complementary research using open-weight models to better isolate and analyze how different system layers affect identity processing. Position is Power: System Prompts as a Mechanism of Bias in Large Language Models (LLMs) FAccT ’25, June 23–26, 2025, Athens, Greece 6 Conclusion Examining six commercially available LLMs across 50 demographic descriptors, we found that prompt placement can introduce repre- sentational and allocative biases. Our experiments show that system-level placements have two key effects: (i)providing model user-information through the sys- tem prompt led models to express more negative sentiment when describing demographic groups; and (ii)system prompts tended to cause greater deviations from baseline rankings in resource alloca- tion tasks compared to user prompts, with systematic differences emerging between placement types. While fundamentally shap- ing model outputs, underlying system prompt hierarchies remain opaque to individual stakeholders across the AI supply chain and inaccessible in end-user interactions. Our findings highlight an ur- gent issue: these effects are evident in widely deployed commercial language models and therefore these biases are likely already impacting functionality and decisions across domains and
|
https://arxiv.org/abs/2505.21091v1
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.