up to 1,000 training epochs took several hours. Marking the eye outperformed all other tests; however, it should be noted that while the eye is a meaningful concept, this dataset did not necessarily ensure uniqueness of the looking direction, but always a unique combination of eye coordinate and gaze vector. The interesting part of this experiment was that this eye annotation did not have to be human-level accurate. A marking close to the eye sufficed to raise detection accuracies in ambiguous settings. An overview of those results can be seen in Figure 7.

Figure 7: Accuracy vs. Epochs at LR=0.0001 for ResNet18, ResNet34, ResNet50, VGG13 and VGG16 for direction, no direction, eye annotation and a baseline with random annotations. This clearly shows that the models, despite using the same architecture and hyperparameters, learned better with both the eye concept and the viewing-direction concept over all classes. A larger version of this plot is included in Appendix A.

We evaluated two different approaches for the improvement of the optimization problem $\arg\min_{g \in G}\,[\mathcal{L}(f, g, \pi_{x'}, e, d) + \Omega(g)]$: Our first approach was to optimize the task loss and then concatenate the gaze coordinates before evaluating the last softmax layer, then learn the data using a multilayer perceptron Popescu et al. [2009], essentially learning the following gaze information loss:

$$\mathcal{L}(f, g, \pi_{x'}, e, d) = \mathcal{L}\big(\mathcal{L}_{task}(f, g, \pi_{x'}) \mid (e, d)\big) \quad (1)$$

This particular setup, however, did not lead to a detectable increase in accuracy. We did manage to increase the accuracy by including the gaze vector directly in the image information. We tested the accuracy improvement over several different architectures, using different numbers of epochs and learning rates, showing that the learning process can be significantly improved using the concepts eye and gaze direction. As can be seen in Figure 7, both eye coordinates and gaze direction led to significant improvements over the baselines of no annotation and random annotations. All results for other learning rates can be found in Appendix A, and all show the same general picture. We preprocessed our dataset such that the shape of the images was adjusted to 3Γ—224Γ—224 and normalized using the mean. Furthermore, we took a look at what exactly our network learns if we directly incorporate the direction arrow, using LIME Ribeiro et al. [2016]: As can be seen in Figure 8, the network then highlights normal features but also the gaze vector as a feature, compared to worse features that overlap with the rivaling class and no highlighting of the eyes. We included another example of this as an interesting additional observation for Grad-CAM Selvaraju et al. [2017] as well as for PipNet Nauta et al. [2023] in Appendix A. A minimal sketch of how such direction arrows can be rendered into the training images is shown below.
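The paper does not publish its preprocessing code; the following is a minimal sketch, assuming OpenCV-style drawing, of how a gaze annotation (eye coordinate plus normalized direction vector) could be rendered into an image before training. The function name and parameters are our own illustration, not the authors' implementation.

```python
import cv2
import numpy as np

def draw_gaze_annotation(image: np.ndarray,
                         eye_xy: tuple,
                         direction: tuple,
                         arrow_len: float = 40.0) -> np.ndarray:
    """Render an eye marker and a gaze arrow into a copy of the image.

    eye_xy    : pixel coordinates (ex, ey) of the annotated eye
    direction : normalized gaze vector (dx, dy)
    """
    out = image.copy()
    ex, ey = int(eye_xy[0]), int(eye_xy[1])
    tip = (int(ex + arrow_len * direction[0]),
           int(ey + arrow_len * direction[1]))
    cv2.circle(out, (ex, ey), radius=6, color=(255, 0, 0), thickness=2)
    cv2.arrowedLine(out, (ex, ey), tip, color=(255, 0, 0), thickness=2)
    return out

# The "only eyes" baseline would draw just the circle; the "random"
# baseline would place the marker at a uniformly sampled position.
```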
Figure 8: We see here that the LIME Ribeiro et al. [2016] explanation of the badger, trained exactly the same as the other model with the exception of the direction vector, is able to include fewer features that are shared by badger and pigeon (such as the regions where the fur of pigeon and badger overlap) and more of the area around the pigeon's eyes. The model also directly accounts for the eye gaze, which shows the usefulness of this feature in the learning process.

We also made sure the gaze of the animal varied during training, so the animal cannot be classified simply because the arrow always points in the same direction.

6 Ambivision: Animal Optical Illusions – Our Dataset

In order to underline our problem statement, we include an entirely new self-generated dataset of animal-based optical illusions that was constructed using iterative prompt engineering with ChatGPT-4 and ChatGPT-4o OpenAI [2024]. The dataset will be released as open source upon acceptance of the paper, in combination with our evaluation scripts. Creating meaningful optical illusions is inherently difficult. First of all, note that we included psychological principles for good prompt engineering, such as Gestalt theory Koffka [2013], which we elaborate on together with an example prompt in our discussion of the bias mitigation strategies in Section 6.1. Each image in Ambivision depicts one animal hidden inside the body of another animal, creating an intentionally ambiguous perceptual boundary. The dataset consists of over 200 images annotated with the class label, eye coordinates, gaze vector and bounding boxes for both animals, providing rich, concept-level labels beyond standard object annotations. For every successful image generation, we had to enter around 150 prompts, resulting in roughly 30,000 ChatGPT prompts OpenAI [2024]. Of the resulting dataset, 41 images are RGB, while the majority are black and white. This design choice reflects the increased difficulty of generating convincing color illusions. Ambivision is, to the best of our knowledge, the first dataset of its kind to systematically encode perceptual ambiguity with fine-grained concept annotations, making it a valuable baseline for evaluating both classification performance and explainability in ambiguous visual settings. We provide four different versions of the dataset for the convenience of the user: We provide the dataset with the labels of the animals, followed by the eye coordinates (ex, ey), the normalized direction vector (dx, dy) and the bounding boxes (x1, y1), (x2, y2). We provide the same dataset with direction arrows drawn directly into the image as a baseline for this work. An additional baseline is the dataset with random markings in the image, and one baseline where just the eye is encircled. An illustrative sketch of the annotation format follows.
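The exact file layout of the released annotations is not specified in the text; as a rough illustration only, one record per image might look like the following, assuming a JSON-style encoding. All field names here are hypothetical.

```python
import json

# Hypothetical annotation record for one Ambivision image: class labels
# for both animals, right-eye coordinates, normalized gaze vectors and
# bounding boxes, mirroring the fields described above.
record = {
    "image": "ambivision_0001.png",
    "animals": [
        {"label": "tiger",
         "eye": [412.0, 318.0],             # (ex, ey) in pixels
         "gaze": [0.71, -0.71],             # (dx, dy), unit length
         "bbox": [[120, 80], [760, 690]]},  # (x1, y1), (x2, y2)
        {"label": "falcon",
         "eye": [455.0, 402.0],
         "gaze": [-0.32, 0.95],
         "bbox": [[230, 150], [820, 700]]},
    ],
}
print(json.dumps(record, indent=2))
```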
6.1 Bias mitigation strategies

Because this dataset was generated with ChatGPT OpenAI [2024], we had to make sure to ensure diverse results and mitigate biases. For each animal class, we ensured that the gaze and eye position vary. This applies equally to both animals in the image to avoid introducing unintentional class-specific cues. (In Appendix A, we include one exemplary overview of all eye positions and a plot of the spread of the eye positions for the bird class.) We varied the artistic representation by prompting specifically for more or less realistic art styles. We specifically prompted the animals to be in alternating positions and poses, for example moving, sitting, flying, and eating.

Another principle we applied for prompting is borrowed from the psychological domain. The design rules for optical illusions go back to fundamental problems of psychology, such as Gestalt theory, introduced as early as 1935 Koffka [2013]. These laws of perceptual organization give us an easy overview of which design concepts can trick our minds or make it difficult for our brains to correctly organize and interpret visual data. One such example is the concept of proximity: when something is in close proximity to something else, we are more likely to interpret the two as belonging together than when they are further apart. The Gestalt principles thus provide us with useful guidelines for our prompts. Establishing this baseline dataset allows us to explore perception differences between AI learners and humans. One example of an initial bias that we mitigated was that we had immense trouble making sure illusions from the owl class did not always look straight ahead. This might be due to the fact that owls can turn their necks 270 degrees in real life, which means that owls will, as a matter of fact, face you more often than not. By asking for a flying owl, for example, and asking specifically for different poses, it was, however, possible to get more diverse illusions.

Prompt Example: Generate an artistic black-and-white image featuring a fusion of a tiger and a falcon. The design blends the tiger's powerful stripes and muscular build with the falcon's sharp beak and impressive wingspan. This creates an optical illusion where, from one angle, it appears as a tiger crouching to pounce and, from another, as a falcon swooping down to capture its prey. The image emphasizes the shared features of predatory prowess and agility, highlighting the ferocity and grace of both animals.

7 Limitations and Future Work

First of all, one major limitation is that the dataset is generated using ChatGPT, which is itself vulnerable to internal biases. This is why we explicitly addressed potential biases in Section 6.1. Furthermore, it is very hard to generate any kind of optical illusion, and our approach allowed us to generate a dataset of over 200 images in a feasible manner. We also note that it is very interesting to have a dataset of optical illusions for humans that was generated by a machine. Furthermore, in this dataset, we explicitly focused on two animals distinguishable by their gaze vector and eye coordinates. Although this was outside the scope of our evaluation, during our various and extensive prompt attempts we generated other interesting concepts, such as examples where more than one animal is hidden in the body of another animal, but also humans and animals mixed together. We give access to the images that did not fulfil our criteria for exploratory and research purposes. Examples can be found in Appendix A.
We were also able to generate interesting pictures where neither gaze nor eye coordinates were a distinguishing feature but two animals were still visible; again, we include an example in Appendix A. Ultimately, this work invites a broader perspective: What if pixel-wise saliency is not the most effective approach to explainability in the image domain? What kinds of concepts should models truly be learning? Can we pursue more holistic learning strategies inspired by optical illusions and insights from cognitive psychology? These questions form a foundation for future exploration.

8 Conclusion

In this paper, we challenge the prevailing paradigm in explainable AI (XAI) for visual data, which primarily revolves around pixel-based attributions and saliency maps. While such methods offer useful insights in many domains, they fall short when confronted with perceptual ambiguity: situations in which even human observers struggle to resolve competing interpretations. Inspired by classical optical illusions like the rabbit-duck example, we propose that meaningful explanations in these cases must go beyond pixels and capture abstract, semantic concepts such as gaze direction and eye position. While there have been some efforts in the concept-based domain, automatic generation of concepts again relies on pixel-based methodology and fails to capture concepts such as the viewing direction. To address this research gap, this paper introduces a novel dataset, Ambivision, presenting visually merged animal optical illusions. Each image is annotated with the animal classes, their right eye coordinate (if only one eye is visible, then that eye), the normalized viewing direction and bounding boxes for the animals. Through extensive experimentation across multiple state-of-the-art architectures and training regimes, we demonstrate that including such concept-level annotations, specifically gaze and eye location, leads to significant performance improvements on classification tasks in ambiguous settings. Additionally, we showed the limitations of popular existing XAI algorithms on this particular dataset due to its ambiguous nature. Ambivision represents a step toward rethinking how we evaluate and design explainability in AI. It opens up new directions for building more human-aligned explanations, ones that take into account not just what is seen but how it is perceived. We hope this work sparks new conversations about what it truly means to explain AI, as well as which concepts are best to incorporate into classification problems.

Acknowledgments and Disclosure of Funding

This work was supported by the Research Center Trustworthy Data Science and Security.

A Technical Appendices and Supplementary Material

We promised to deliver additional material for various points in the paper for whoever may be interested. First, we show an exemplary overview of existing types of gaze annotation datasets and their purposes for a literature overview in Table 1. In the following, we show the spread of the eye coordinates of the class β€œbird” and then a scatter plot of all eye coordinates in total.

Figure 9: Scatter plot showing the distribution of bird eye coordinates.

To show that the eye coordinates in general are not placed in one area, we show the distribution of all eye coordinates in one collected plot in Figure 10. A minimal sketch of how such a plot can be produced from the annotations is given below.
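The plotting code is not part of the paper; a minimal matplotlib sketch, assuming the hypothetical record format illustrated earlier and a hypothetical annotations file name, might look as follows:

```python
import json
import matplotlib.pyplot as plt

# Collect the annotated eye coordinates of every animal in every image
# and show them as one scatter plot (cf. Figures 9 and 10).
xs, ys = [], []
with open("ambivision_annotations.json") as f:  # hypothetical file name
    for record in json.load(f):
        for animal in record["animals"]:
            ex, ey = animal["eye"]
            xs.append(ex)
            ys.append(ey)

plt.scatter(xs, ys, s=8)
plt.xlabel("X Coordinate")
plt.ylabel("Y Coordinate")
plt.title("Spread of Eye Coordinates")
plt.show()
```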
The only visible tendency is that more of them lie in the central areas than at the edges, which makes sense because the body of an animal always surrounds the eye.

Figure 10: This scatter plot shows the spread of all eye coordinates from all classes. It is quite diverse, considering the eyes are placed within the body of the animal and can therefore not occur at the immediate edges.

Table 1: An exemplary overview of existing types of gaze annotation datasets and their purposes.

Category: Human Gaze Annotations
1. MPIIGaze Zhang et al. [2015]. Type of data: large-scale dataset of human images of 15 subjects. Type of annotation: focus on different lightings, variable places and times of day, in settings as natural as possible.
2. EyeDiap Funes Mora et al. [2014]. Type of data: human gaze estimation dataset of 16 test subjects. Type of annotation: head pose variations, gaze poses, and changes in ambient and sensing conditions; ground truth given by the 3D poses of the visual target.
3. ETH-XGaze Zhang et al. [2020]. Type of data: human gaze estimation dataset of 110 participants. Type of annotation: head pose variations and different lighting.
4. Gaze360 Kellnhofer et al. [2019]. Type of data: human gaze estimation dataset of 238 subjects. Type of annotation: wide range of lighting conditions.

Category: Animal Gaze Annotations
5. Animal Kingdom Dataset Ng et al. [2022]. Type of data: video dataset of animals. Type of annotation: annotated for relevant animal behavior; no eye gaze annotation but pose estimation.
6. PET Gilani et al. [2015]. Type of data: classes bird, cat, cow, dog, horse, and sheep; human eye movements of 40 users recorded for a free-vision task and visual search. Type of annotation: tracking eye position for visual points of importance.
7. AnimalWeb Khan et al. [2020]. Type of data: animal faces collected from 350 species. Type of annotation: annotated with 9 landmarks on key facial features.

Our Dataset: Ambivision. Type of data: drawn animal images (optical illusions) generated by ChatGPT-4 and ChatGPT-4o, always consisting of two animals, one merged/hidden in the other. Type of annotation: gaze vector in 2D format and animal labels for both animals; the animals are distinguishable by their right eye coordinate and normalized direction vector (if an animal stares straight ahead, the gaze vector is assigned as (0.0, 0.0)); annotated bounding boxes.

In the following, we also include more example images of the dataset to demonstrate the quality and diversity of the illusions in Figure 11.

Figure 11: An overview of example instances from the β€œbird” class to demonstrate the diversity of the illusions.

We continue by including all experimental results for different learning rates and architectures in larger format in this Appendix to demonstrate the validity of our results.

Figure 12: Accuracy vs. Epochs at LR=0.0001 for ResNet18, ResNet34, ResNet50, VGG13 and VGG16 across different annotation conditions (direction, no direction, only eyes, random).

Figure 13: Accuracy vs. Epochs at learning rate 1Γ—10⁻⁡ for ResNet and VGG models under various annotation strategies.
Figure 14: Accuracy vs. Epochs at learning rate 5Γ—10⁻⁢ for ResNet and VGG models under various annotation strategies.

Furthermore, we discussed that the explanations generated when including direction as a concept showed more useful features for recognition. Due to page restrictions, we only included one example of this phenomenon in the main paper, for the algorithm LIME Ribeiro et al. [2016]. However, we have further such examples, for example for Grad-CAM Selvaraju et al. [2017] and PipNet Nauta et al. [2023], which we show and elaborate on in the following:

Figure 15: This image shows the top prototypes marked by PipNet for the class "bird". Again, PipNet marks areas in the image that are important to the classification process. We can see that one of the boxes could be either the beak or the eye with the arrow, and one is focused on the feather/wing patterns. Both are good features for the recognition of the animal. We also show some of the example patterns that PipNet gives us for the feather pattern by which it classifies the image as a bird. Even though this works well, it is yet another XAI method that focuses on areas instead of abstract concepts like viewing direction.

Figure 16: In this image, we can see with Grad-CAM on an example image how the learner was able to extract much more useful features for the model trained with the direction vector. The whiteness is removed, and the fox explanations focus less on areas that are actually part of the bear. Regarding the bear explanation, fewer parts of the fox are highlighted.

To go on, we noted in our limitations that we were able to generate some images that did not strictly fall into our scheme: these include examples where more than one animal is hidden in the body of another animal, which can be seen in Figure 17.

Figure 17: This shows how easily optical illusions can be extended to more than just two animals within each other. We include several of these images, generated accidentally while trying to generate as many different optical illusions as possible.

We must also consider what happens when the gaze is not the distinguishing feature, such as in Figure 18. Overall, this paper aims to broaden our perspective: What if highlighting pixels was the wrong approach for explainable AI in the image domain? What concepts should we really learn? Can we tackle more comprehensive learning strategies with the use of optical illusions and knowledge from psychological domains? These extra images will be included in the open-source dataset in a separate folder.

Figure 18: This is an example of a generated image where the gazes are completely shared and do not help in distinguishing which is the correct feature.
Is there a more prominent answer as to which one is seen with a higher likelihood, and what does it depend on? The outer one? Do we prefer the color black? Is it the animal whose head β€œlooks complete”? Does this change when we turn the angle of the picture, creating another kind of β€œgaze direction”?

References

Javier AntorΓ‘n, Umang Bhatt, Tameem Adel, Adrian Weller, and JosΓ© Miguel HernΓ‘ndez-Lobato. Getting a clue: A method for explaining uncertainty estimates. arXiv preprint arXiv:2006.06848, 2020.

Mihai BΓ’ce, Philippe Schlattner, Vincent Becker, and GΓ‘bor SΓΆrΓΆs. Facilitating object detection and recognition through eye gaze. In 19th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2017). ETH Zurich, 2017.

Andrea Bontempelli, Stefano Teso, Katya Tentori, Fausto Giunchiglia, and Andrea Passerini. Concept-level debugging of part-prototype networks. arXiv preprint arXiv:2205.15769, 2022.

Chaofan Chen, Oscar Li, Alina Barnett, Jonathan Su, and Cynthia Rudin. This looks like that: Deep learning for interpretable image recognition. CoRR, abs/1806.10574, 2018. URL http://arxiv.org/abs/1806.10574.

D. Choi. On empirical comparisons of optimizers for deep learning. arXiv preprint arXiv:1910.05446, 2019.

Thomas Fel, Victor Boutin, Louis BΓ©thune, RΓ©mi CadΓ¨ne, Mazda Moayeri, LΓ©o AndΓ©ol, Mathieu Chalvidal, and Thomas Serre. A holistic approach to unifying automatic concept extraction and concept importance estimation. Advances in Neural Information Processing Systems, 36:54805–54818, 2023a.

Thomas Fel, Agustin Picard, Louis Bethune, Thibaut Boissin, David Vigouroux, Julien Colin, RΓ©mi CadΓ¨ne, and Thomas Serre. CRAFT: Concept recursive activation factorization for explainability. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2711–2721, 2023b.

Kenneth Alberto Funes Mora, Florent Monay, and Jean-Marc Odobez. EyeDiap: A database for the development and evaluation of gaze estimation algorithms from RGB and RGB-D cameras. In Proceedings of the Symposium on Eye Tracking Research and Applications, pages 255–258, 2014.

Amirata Ghorbani, James Wexler, James Y. Zou, and Been Kim. Towards automatic concept-based explanations. Advances in Neural Information Processing Systems, 32, 2019.

Syed Omer Gilani, Ramanathan Subramanian, Yan Yan, David Melcher, Nicu Sebe, and Stefan Winkler. PET: An eye-tracking dataset for animal-centric Pascal object classes. In 2015 IEEE International Conference on Multimedia and Expo (ICME), pages 1–6. IEEE, 2015.

Yash Goyal, Amir Feder, Uri Shalit, and Been Kim. Explaining classifiers with causal concept effect (CaCE). arXiv preprint arXiv:1907.07165, 2019.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition, 2015. URL https://arxiv.org/abs/1512.03385.

Petr Kellnhofer, Adria Recasens, Simon Stent, Wojciech Matusik, and Antonio Torralba. Gaze360: Physically unconstrained gaze estimation in the wild. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6912–6921, 2019.

Muhammad Haris Khan, John McDonagh, Salman Khan, Muhammad Shahabuddin, Aditya Arora, Fahad Shahbaz Khan, Ling Shao, and Georgios Tzimiropoulos. AnimalWeb: A large-scale hierarchical dataset of annotated animal faces. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6939–6948, 2020.
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, et al. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV). In International Conference on Machine Learning, pages 2668–2677. PMLR, 2018.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization, 2017. URL https://arxiv.org/abs/1412.6980.

Kurt Koffka. Principles of Gestalt Psychology. Routledge, 2013.

Gang Liu, Yu Yu, Kenneth A. Funes Mora, and Jean-Marc Odobez. A differential approach for gaze estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(3):1092–1099, 2019.

Scott Lundberg and Su-In Lee. A unified approach to interpreting model predictions, 2017a. URL https://arxiv.org/abs/1705.07874.

Scott M. Lundberg and Su-In Lee. A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30, 2017b.

Andrea Morichetta, Pedro Casas, and Marco Mellia. Explain-it: Towards explainable AI for unsupervised network traffic analysis. In Proceedings of the 3rd ACM CoNEXT Workshop on Big DAta, Machine Learning and Artificial Intelligence for Data Communication Networks, pages 22–28, 2019.

Ramaravind K. Mothilal, Amit Sharma, and Chenhao Tan. Explaining machine learning classifiers through diverse counterfactual explanations. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 607–617, 2020.

Sankha S. Mukherjee and Neil Martin Robertson. Deep head pose: Gaze-direction estimation in multimodal video. IEEE Transactions on Multimedia, 17(11):2094–2107, 2015.

Meike Nauta, JΓΆrg SchlΓΆtterer, Maurice van Keulen, and Christin Seifert. PIP-Net: Patch-based intuitive prototypes for interpretable image classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2744–2753, 2023.

Carina Newen and Emmanuel MΓΌller. Unsupervised DeepView: Global explainability of uncertainties for high dimensional data. In 2022 IEEE International Conference on Knowledge Graph (ICKG), pages 196–202. IEEE, 2022.

Xun Long Ng, Kian Eng Ong, Qichen Zheng, Yun Ni, Si Yong Yeo, and Jun Liu. Animal Kingdom: A large and diverse dataset for animal behavior understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19023–19034, 2022.

OpenAI. ChatGPT, 2024. URL https://www.openai.com/chatgpt. Retrieved from OpenAI.

Marius-Constantin Popescu, Valentina E. Balas, Liliana Perescu-Popescu, and Nikos Mastorakis. Multilayer perceptron and neural networks. WSEAS Transactions on Circuits and Systems, 8(7):579–588, 2009.

Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "Why should I trust you?": Explaining the predictions of any classifier, 2016.

Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. Anchors: High-precision model-agnostic explanations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.

Khaled Saab, Jared Dunnmon, Alexander Ratner, Daniel Rubin, and Christopher RΓ©. Improving sample complexity with observational supervision. 2019.

Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pages 618–626, 2017.

Mattia Setzu, Riccardo Guidotti, Anna Monreale, Franco Turini, Dino Pedreschi, and Fosca Giannotti. GLocalX - from local to global explanations of black box AI models. Artificial Intelligence, 294:103457, 2021.

Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In International Conference on Machine Learning, pages 3319–3328. PMLR, 2017.
Echo Wen Wan and Rocky Peng Chen. Anthropomorphism and object attachment. Current Opinion in Psychology, 39:88–93, 2021.
Xin Wang, Nicolas Thome, and Matthieu Cord. Gaze latent support vector machine for image classification improved by weakly supervised region selection. Pattern Recognition, 72:59–71, 2017.

Xinming Wang, Jianhua Zhang, Hanlin Zhang, Shuwen Zhao, and Honghai Liu. Vision-based gaze estimation: A review. IEEE Transactions on Cognitive and Developmental Systems, 14(2):316–332, 2021.

Feiyu Xu, Hans Uszkoreit, Yangzhou Du, Wei Fan, Dongyan Zhao, and Jun Zhu. Explainable AI: A brief survey on history, research areas, approaches and challenges. In Natural Language Processing and Chinese Computing: 8th CCF International Conference, NLPCC 2019, Dunhuang, China, October 9–14, 2019, Proceedings, Part II, pages 563–574. Springer, 2019.

Chih-Kuan Yeh, Been Kim, Sercan Arik, Chun-Liang Li, Tomas Pfister, and Pradeep Ravikumar. On completeness-aware concept-based explanations in deep neural networks. Advances in Neural Information Processing Systems, 33:20554–20565, 2020.

Ruihan Zhang, Prashan Madumal, Tim Miller, Krista A. Ehinger, and Benjamin I. P. Rubinstein. Invertible concept-based explanations for CNN models with non-negative concept activation vectors. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 11682–11690, 2021.

Xucong Zhang, Yusuke Sugano, Mario Fritz, and Andreas Bulling. Appearance-based gaze estimation in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4511–4520, 2015.

Xucong Zhang, Seonwook Park, Thabo Beeler, Derek Bradley, Siyu Tang, and Otmar Hilliges. ETH-XGaze: A large scale dataset for gaze estimation under extreme head pose and gaze variation. In Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part V, pages 365–381. Springer, 2020.
Pioneering 4-Bit FP Quantization for Diffusion Models: Mixup-Sign Quantization and Timestep-Aware Fine-Tuning

Maosen Zhao1*, Pengtao Chen1*, Chong Yu2, Yan Wen1, Xudong Tan1, Tao Chen1†
1School of Information Science and Technology, Fudan University
2Academy for Engineering and Technology, Fudan University
20307130202@fudan.edu.cn, eetchen@fudan.edu.cn
*Equal contribution. †Corresponding author.

Abstract

Model quantization reduces the bit-width of weights and activations, improving memory efficiency and inference speed in diffusion models. However, achieving 4-bit quantization remains challenging. Existing methods, primarily based on integer quantization and post-training quantization fine-tuning, struggle with inconsistent performance. Inspired by the success of floating-point (FP) quantization in large language models, we explore low-bit FP quantization for diffusion models and identify key challenges: the failure of signed FP quantization to handle asymmetric activation distributions, the insufficient consideration of the temporal complexity of the denoising process during fine-tuning, and the misalignment between the fine-tuning loss and the quantization error. To address these challenges, we propose the mixup-sign floating-point quantization (MSFP) framework, introducing unsigned FP quantization to model quantization for the first time, along with timestep-aware LoRA (TALoRA) and denoising-factor loss alignment (DFA), which ensure precise and stable fine-tuning. Extensive experiments show that we are the first to achieve superior performance in 4-bit FP quantization for diffusion models, outperforming existing PTQ fine-tuning methods in 4-bit INT quantization.

1. Introduction

Despite the impressive performance of diffusion models (DMs) in image generation [33, 44], their computational and memory demands, particularly for high-resolution outputs, pose significant challenges for deployment on resource-constrained edge devices, highlighting the need for model compression to address these limitations. Model quantization, a key technique within model compression, reduces the bit-width of model weights and activations, typically stored in 32-bit format, to lower precision. This reduction lowers memory usage and accelerates inference speed. By decreasing the bit-width, quantization improves both temporal and memory efficiency in mainstream models while maintaining robust performance, particularly in resource-constrained environments [13, 18, 21].

The current quantization methods for diffusion models can be broadly categorized into two primary approaches. The first, post-training quantization (PTQ), optimizes the quantization parameters after the model has been trained, typically by minimizing the quantization error [19, 35]. While PTQ is effective for 4-bit quantization of weights, it is limited by its reliance on 8-bit quantization for activations, as further reduction of the activation bit-width leads to significant performance degradation. In contrast, quantization-aware training (QAT) integrates quantization into the training process, enabling the model to learn with 4-bit precision from scratch [20]. Although QAT can achieve high-performance 4-bit models, it incurs substantial computational overhead, rendering it less practical for many real-world applications [6, 15]. To achieve fully quantized 4-bit diffusion models with minimal overhead, fine-tuning has emerged as a promising solution.
This approach leverages pre-trained models and adjusts a small subset of parameters to narrow the performance gap between the quantized model and its full-precision counterpart. While some studies have explored fine-tuning for 4-bit quantization in diffusion models [8, 42], these methods often fail to achieve consistent performance under standard configurations (e.g., quantizing all layers rather than only some).
Consequently, developing a universally effective and scalable fine-tuning method for 4-bit quantization in diffusion models remains an open challenge.

Notably, existing quantization methods for diffusion models primarily rely on integer (INT) quantization, which has long been the dominant approach. However, recent developments have demonstrated the considerable potential of floating-point (FP) quantization. Compared to INT quantization, FP quantization offers greater flexibility in modeling complex weight and activation distributions [30], leading to improved performance in visual tasks at 8-bit precision [17, 39] and remarkable results in large language models under 4-bit precision [25, 43]. Moreover, FP quantization provides significant advantages in inference acceleration, with the NVIDIA H100 achieving a 1.45Γ— speedup with FP8 quantization, outperforming INT8 [32, 45]. Despite these advantages, the application of low-bit FP quantization in diffusion models remains largely unexplored, presenting a promising avenue for future research.

In summary, to address the challenges in achieving 4-bit diffusion models, we propose a baseline method using a search-based signed FP quantization framework [2, 25] combined with single-LoRA fine-tuning, aiming to realize 4-bit FP quantized diffusion models. Through this exploration, we uncover several findings that are significant for both diffusion model quantization and FP quantization: (1) Many layers exhibit asymmetric distributions due to the nonlinear activation function SiLU. Applying traditional signed FP quantization with a symmetric distribution leads to significant precision loss in the sub-zero area, causing substantial performance degradation after quantization. (2) Fine-tuning is conducted on the denoising process, which is a complex task involving the restoration from outlines to details [40]. However, current methods typically apply a single LoRA to fine-tune severely degraded models across all timesteps, which leads to suboptimal learning at certain timesteps. (3) The predicted noise plays a varying role at different timesteps during denoising, which is key to diffusion models. Because this variation across timesteps is neglected, we identify a mismatch between the impact of quantization and the loss function in current methods, which undermines the effectiveness of fine-tuning.

Facing the challenges in achieving 4-bit FP quantization, we propose constructive strategies: (1) To handle the different distributions effectively, we propose a mixup-sign floating-point quantization (MSFP) framework, where unsigned FP quantization with an added zero point yields a distribution of discrete points more compatible with the anomalous activation distributions, while signed FP quantization continues to exhibit strong representation capacity on the other distributions. (2) Realizing that the current single-LoRA fine-tuning approach lacks flexibility across timesteps, we introduce timestep-aware LoRA (TALoRA), incorporating multiple LoRAs and a timestep-aware router to dynamically select the appropriate LoRA for each timestep in the denoising process. (3) To further improve the effectiveness of fine-tuning, we introduce a denoising-factor loss alignment (DFA), ensuring the loss function, and thereby the guidance of the fine-tuning, is consistent with the actual quantization deterioration across timesteps.
In summary, our contributions are as follows:
(i) We are the first to identify that signed FP quantization struggles with asymmetric activations, which arise from the nonlinear behavior of activation functions. To address this, we introduce the MSFP framework, which is also the first effective application of unsigned FP quantization in model quantization, offering a novel approach for achieving low-bit quantization.

(ii) For the fine-tuning of low-bit diffusion models, which is based on the denoising process, we precisely define it as a multi-task process across timesteps and introduce an efficient TALoRA module. Furthermore, we improve the alignment of the loss function with the quantization error via the DFA strategy, enabling stable and reliable fine-tuning of low-bit diffusion models.

(iii) We focus on identifying and eliminating three major barriers to effective low-bit FP quantization in diffusion models. Extensive experiments on DDIM and LDM demonstrate that our work achieves SOTA results for 4-bit quantization. The proposed MSFP, TALoRA and DFA have greatly advanced the progress of low-bit quantization in diffusion models.

2. Related Work

2.1. Diffusion Model Quantization

There are two main approaches to the quantization of diffusion models: QAT [6, 14] and PTQ [31]. QAT is particularly effective for low-bit quantization, but it retrains the model from scratch, consuming extensive computational resources and time [20]. In contrast, PTQ offers greater time efficiency, making it more practical for large models in real-world applications. Recent advancements in PTQ have concentrated on optimizing calibration datasets to enhance reconstruction accuracy [9, 19, 27, 35] and on mitigating quantization errors stemming from the temporal and structural properties of diffusion models [3, 19, 36, 38]. To further achieve fully 4-bit quantization, fine-tuning techniques have been introduced into PTQ-based quantization. EfficientDM [8] develops a LoRA-based fine-tuning framework, while QuEST [42] focuses on optimizing quantization-unfriendly activations. Both works stagnate in addressing the data distribution during fine-tuning, failing to account for the specific challenges of fine-tuning in the context of diffusion model quantization. Meanwhile, recent advancements in FP quantization have made significant strides in model quantization [17, 25], highlighting its potential for diffusion models. While existing low-bit quantization methods perform well for linear models, applying FP quantization to convolutional diffusion models remains more challenging. To date, only one study has explored 8-bit activation quantization in diffusion models using a basic search-based approach [2], with no research addressing lower-bit quantization. This underscores both the challenges and the untapped potential of achieving 4-bit FP quantization for diffusion models.

Figure 1. The activation distributions in NALs and AALs; results on the CelebA dataset. (a) The paradigm of NALs with symmetric activations (down2.attn1.q). (b) The typical paradigm of AALs with asymmetric activations compressed above -0.278 (up0.block1.Res_conv2), where unsigned FP quantization is more suitable. (c) The infrequent paradigm of AALs with relatively symmetric activations (mid.block1.Res_conv2), where either signed or unsigned FP quantization could be applicable.
2.2. Parameter-Efficient Fine-Tuning
Parameter-efficient fine-tuning (PEFT) has emerged as an effective alternative to full-model fine-tuning, focusing on adjusting only a subset of parameters while keeping the majority frozen, thereby reducing storage overhead. Low-rank adapters (LoRA) [12], originally developed for large language models, have become one of the most widely used PEFT methods. Leveraging LoRA's strong transferability, QLoRA [5] can be effectively applied to fine-tune low-bit diffusion models. However, previous LoRA-based fine-tuning is suboptimal in practice. In this paper, we explore this further and incorporate timestep-level adaptation inspired by MoELoRA [7], enhancing the performance of low-bit quantized DMs.

3. Challenges & Exploration

3.1. Preliminary

Diffusion Models. The diffusion model is a generative framework that learns by adding noise and generates in a denoising manner. In the forward process, noise is injected into ground-truth images $x_0$ at random timesteps. This enables the DMs to learn the distribution of the noise through noisy images $x_t$, which can be obtained as follows [11]:

$$x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1 - \bar{\alpha}_t}\, \varepsilon. \tag{1}$$

Here $\varepsilon$ represents standard Gaussian noise and $\bar{\alpha}_t$ is the accumulated noise intensity, calculated by:

$$\bar{\alpha}_t = \prod_{i=1}^{t} \alpha_i, \tag{2}$$

where $\alpha_i$ governs the noise intensity at each timestep. Once the DMs are well trained, a Gaussian noise image is fed to the DMs and undergoes the iterative denoising process. Specifically, the noise is predicted by the DMs and used, together with the image $x_t$ at timestep $t$ (which ranges from $T$ to $0$), to obtain an image $x_{t-1}$ of superior quality:

$$x_{t-1} = \frac{1}{\sqrt{\alpha_t}} \left( x_t - \frac{1 - \alpha_t}{\sqrt{1 - \bar{\alpha}_t}} \cdot \varepsilon_\theta(x_t, t) \right) + \sigma_t \delta, \tag{3}$$

where $\varepsilon_\theta(x_t, t)$ is the predicted noise at timestep $t$ and $\delta$ is newly added noise with factor $\sigma_t$ to ensure diverse results. Here we define a denoising factor $\gamma_t$, formulated as:

$$\gamma_t = \frac{1}{\sqrt{\alpha_t}} \cdot \frac{1 - \alpha_t}{\sqrt{1 - \bar{\alpha}_t}}. \tag{4}$$

According to Equation 3, $\gamma_t$ indicates the impact of the predicted noise at timestep $t$: the greater the factor $\gamma_t$, the stronger the effect of the predicted noise in the denoising.

Model Quantization. The fundamental paradigm of model quantization is the transformation of a continuous data distribution into a finite set of discrete points. Consequently, the quality of the quantized model is inextricably related to the distribution of those discrete points. According to the discrete point type, quantization falls into two categories: widely used INT quantization and emerging FP quantization. In INT quantization, a floating-point vector $x$ is quantized as follows:

$$\hat{x} = \mathrm{Clip}\!\left( \left\lfloor \frac{x}{s} \right\rceil + z,\; l,\; u \right) \cdot s. \tag{5}$$

Here $\lfloor \cdot \rceil$ is the rounding operation, $l$ and $u$ are the minimum and maximum quantization thresholds, and the scaling factor $s$ and zero point $z$ together constitute the quantization parameters. As shown in Equation 5, INT quantization results in an evenly spaced distribution of discrete points. While INT quantization is straightforward to implement, it may be too simplistic for continuous distributions that have significant variations in density, where the evenly spaced intervals of INT quantization are not effective. A minimal sketch of this round-to-nearest scheme is given below.
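To make Equation 5 concrete, here is a minimal NumPy sketch of uniform (INT) fake quantization; it is our illustration, not the authors' released code.

```python
import numpy as np

def int_quantize(x: np.ndarray, s: float, z: int, n_bits: int = 4) -> np.ndarray:
    """Fake-quantize x per Eq. (5): round x/s, shift by the zero point, clip, rescale."""
    l, u = 0, 2 ** n_bits - 1  # thresholds of an n-bit grid
    q = np.clip(np.round(x / s) + z, l, u)
    # Eq. (5) as printed rescales without subtracting z; asymmetric INT
    # quantization usually dequantizes as (q - z) * s, and with z = 0 the
    # two coincide (symmetric quantization).
    return q * s

x = np.random.randn(8).astype(np.float32)
print(int_quantize(x, s=0.1, z=0))  # evenly spaced discrete values
```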
Figure 2. Effect of bit-width reduction on activation representation capacity in AALs (blue; model.up[0].block[1].conv2) and NALs (orange; model.down[2].attn[1].q) under signed FP quantization, evaluated on the CelebA dataset.

Figure 3. The two losses, and the performance degradation between the quantized and full-precision models, across denoising steps. Compared with the metric, the original loss shows an inverse trend, while the aligned loss remains consistent.

Figure 4. The MSE of activations before and after quantization across all AALs under four different strategies, normalized against the baseline of signed FP quantization without zero point (purple).

On the contrary, the discrete points under FP quantization are not uniformly spaced in distribution, as shown below [23]:

$$f = (-1)^s \, 2^{p-b} \left( 1 + \frac{d_1}{2} + \frac{d_2}{2^2} + \dots + \frac{d_m}{2^m} \right), \tag{6}$$

where $s$ is the sign bit, $d_i$ is the $m$-bit mantissa, $p$ is the $e$-bit exponent and $b$ is the bias that serves the roles of both scaling factor and threshold in INT quantization. Previous studies in FP quantization rely on signed FP quantization, where $s$ is set to 1. For $n$-bit FP quantization, the bit-width is distributed across the mantissa, exponent, and sign bit, with the condition $m + e + s = n$. The different combinations of $m$ and $e$ allow FP quantization to represent data in multiple formats with a fixed number of bits, denoted as $E_iM_j$, where an $i$-bit exponent and a $j$-bit mantissa are specified. A larger $j$ results in higher precision within each interval, while a larger $i$ expands the range of covered intervals.

There appears to be an inherent suitability of FP quantization for DMs, as the majority of weights and activations follow a normal distribution symmetric around zero. This aligns well with the unevenly spaced distribution of FP quantization, where discrete points are dense in the small-value region and sparse in the large-value region [45]. Additionally, the flexible formatting mechanism allows FP quantization to better accommodate complex distribution scenarios. A sketch enumerating such an FP grid is shown below.
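For intuition, the following sketch (ours, not from the paper) enumerates the discrete values of Equation 6 for a given $E_iM_j$ format, showing the uneven spacing of the FP grid; the fixed bias used here is an arbitrary illustrative choice.

```python
import numpy as np

def fp_grid(e_bits: int, m_bits: int, bias: int, signed: bool = True) -> np.ndarray:
    """Enumerate the discrete values of an EiMj floating-point grid (Eq. 6)."""
    values = []
    for p in range(2 ** e_bits):           # exponent field
        for mant in range(2 ** m_bits):    # mantissa field
            mag = 2.0 ** (p - bias) * (1 + mant / 2 ** m_bits)
            values.append(mag)
            if signed:
                values.append(-mag)
    return np.sort(np.unique(values))

# A 4-bit signed E2M1 format: points are dense near zero and sparse for
# large magnitudes, matching roughly normal weight/activation distributions.
print(fp_grid(e_bits=2, m_bits=1, bias=2, signed=True))
```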
3.2. Barriers to Effective Low-Bit FP Quantization in Diffusion Models

To implement high-performing 4-bit diffusion models, we applied FP quantization, which is more compatible with the data distribution of diffusion models, using the search-based strategy of previous work [25]. Building on this, we deployed a LoRA-based fine-tuning strategy to address the performance degradation caused by 4-bit quantization. Despite this, we find that the performance of the 4-bit FP diffusion model remains unsatisfactory, and we identify two main issues: (1) For FP quantization: although it works well at 8 bits, it experiences a sharp performance degradation at 4 bits. How can we improve the ability of FP quantization to represent data under low-bit quantization? (2) For standard post-training LoRA-based fine-tuning: it exhibits instability and suboptimal results when applied to low-bit quantized diffusion models. How can we make LoRA more efficient and accurate in learning the loss information at different denoising timesteps? In the following parts, we explore the underlying causes of these two issues and feasible strategies to address them.

Observation 1: Previous signed FP quantization fails to achieve effective low-bit quantization in Activation-Anomalous Layers.

In diffusion models, we observed that the nonlinear activation layer SiLU, defined as $\mathrm{SiLU}(x) = \frac{x}{1 + e^{-x}}$, is commonly situated between layers. SiLU causes abnormal activations in the subsequent layer. As depicted in Panel (b) of Figure 1, all values below 0 are compressed into the range $[-0.278, 0)$. In this paper, we refer to layers with such asymmetric activations as Anomalous-Activation-Distribution Layers (AALs) and to the other layers as Normal-Activation-Distribution Layers (NALs). Mainstream signed FP quantization typically sets the maximum threshold of the quantizer high to accurately represent the normal positive activations. However, this approach results in a significant precision loss when dealing with values below 0. Figure 2 illustrates the representation capacity of signed FP quantization in both NALs and AALs under different bit-widths. When the bit-width drops below 6 bits, AALs suffer far more severe performance degradation than NALs, which ultimately leads to the failure of low-bit FP quantization. This phenomenon suggests that mitigating the performance decline in AALs is a critical step towards improving low-bit FP quantization in diffusion models.

Observation 2: The single-LoRA-based strategy is overly simplistic for fine-tuning quantized diffusion models across different timesteps.

Previous work has focused on adapting LoRA to the quantization of diffusion models but has not fully explored LoRA's performance in the context of denoising. Considering that the denoising process of diffusion models starts with recovering outlines and progresses to restoring details, we question whether the single-LoRA fine-tuning strategy can handle this complexity. In Table 1, we compare the baseline model, which uses a single-LoRA strategy for fine-tuning, with two alternative strategies. The second strategy assigns a separate LoRA to the first and last 50 timesteps, resulting in a significant improvement over the baseline. The third strategy also introduces a dual-LoRA setup, but randomly selects one of the two LoRAs for each timestep, resulting in much worse performance. These results suggest that applying multiple LoRAs, allocated in a structured manner across timesteps, enhances model performance, while a disordered selection of LoRAs leads to suboptimal results. This motivates us to approach the fine-tuning of quantized diffusion models as a multi-task process and to assign multiple LoRAs across different timesteps with a rational allocation approach.

Observation 3: The MSE of the predicted noise of the full-precision and quantized models does not reflect the actual impact of quantization at different timesteps.

In fine-tuning, a commonly used loss function calculates the MSE between the noise predictions of the full-precision and quantized diffusion models, using denoised images from the full-precision model at the previous timestep as inputs:

$$\mathcal{L}^t_{\varepsilon_\theta} = \left\| \varepsilon_\theta(x_t, t) - \hat{\varepsilon}_\theta(x_t, t) \right\|^2. \tag{7}$$

By observing the variation of this loss during single-LoRA fine-tuning (see Figure 3), we identify an unexpected trend: the loss increases progressively faster as denoising advances. This contradicts the principle of denoising, where image quality should improve with each step and the impact of the predicted noise should diminish over time. By the final step, the impact of quantization on model performance should be negligible, as the predicted noise no longer affects the input.
However, the loss is at its maximum, indicating that the quantization error appears most significant at this stage, contradicting the expectation that its influence should diminish over time.

To highlight the discrepancy between the loss and the actual quantization errors, we define the performance gap at each step as the difference in denoising quality between the quantized and full-precision models, as the ultimate goal of denoising is to yield high-quality images. We input the previous output image $x_t$ from the full-precision model and calculate the performance gap between the denoised image $x_{t-1}$ from the full-precision model and the denoised image $\hat{x}_{t-1}$ from the quantized model, measured by $\mathrm{MSE}(x_{t-1}, \hat{x}_{t-1})$. As shown in Figure 3, this misalignment between the loss and the actual performance gap leads to deviations in LoRA's learning. This necessitates aligning our loss function with the denoising process during fine-tuning.

Table 1. The impact of the number of LoRAs and their allocation across timesteps on the performance of the fine-tuning. The results are evaluated for 4-bit quantization on the CelebA dataset.

Method | Bits (W/A) | FID ↓
FP | 32/32 | 6.49
Single-LoRA | 4/4 | 19.41
Dual-LoRA (Split Steps in Half) | 4/4 | 17.07
Dual-LoRA (Random Allocation) | 4/4 | 41.96

4. Methodology

In this section, we explore the issues identified in Section 3.2 and propose corresponding solutions. As illustrated in Figure 5, we introduce a mixup-sign FP quantization framework to address the diverse activation distributions in the first stage. During fine-tuning, the timestep-aware routing mechanism and the denoising-factor loss alignment work in tandem to enable high-quality learning, ultimately enabling the realization of optimized 4-bit FP diffusion models.

4.1. Mixup-Sign Floating Point Quantization

To address the failure of low-bit FP quantization in AALs, we leverage FP quantization by allocating more discrete points to areas with high data concentration. Motivated by the half-normal distribution of activations in AALs, we introduce unsigned floating-point quantization. However, as shown in Equation 6, when using unsigned FP quantization with $s$ set to 0, all data in the sub-zero range are rounded to zero, losing important negative information. To address this, we introduce a zero point in the range $[-0.278, 0)$ to recover most sub-zero activations. The updated quantization formula becomes:

$$f_{\mathrm{unsign}} = (-1)^s \, 2^{p-b} \left( 1 + \frac{d_1}{2} + \frac{d_2}{2^2} + \dots + \frac{d_m}{2^m} \right) + z, \tag{8}$$

where $s$ is set to 0 and $z$ is the newly added zero point. By freeing the 1-bit sign bit, which is ineffective here in signed FP quantization, and using it as additional exponent/mantissa bit-width, we fully utilize the representation capacity. As illustrated in Figure 4, unsigned FP quantization with a zero point significantly improves representation in over 95% of AALs compared to traditional signed FP quantization. A minimal sketch of this unsigned grid is given below.
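As a rough illustration (ours, with an arbitrary bias), the unsigned grid of Equation 8 reuses the freed sign bit for extra magnitude precision and shifts all points by a zero point so that the densest region covers the compressed sub-zero activations:

```python
import numpy as np

def unsigned_fp_grid(e_bits: int, m_bits: int, bias: int, z: float) -> np.ndarray:
    """Discrete values of unsigned FP quantization with a zero point (Eq. 8)."""
    values = []
    for p in range(2 ** e_bits):
        for mant in range(2 ** m_bits):
            values.append(2.0 ** (p - bias) * (1 + mant / 2 ** m_bits) + z)
    return np.sort(np.array(values))

# With the sign bit freed, a 4-bit unsigned format can spend E2M2 (or E3M1)
# on magnitude; z is chosen in [-0.278, 0) to recover sub-zero activations.
print(unsigned_fp_grid(e_bits=2, m_bits=2, bias=3, z=-0.278))
```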
βŒΓ—LayerStep 1 …Denoised Sample 1 Add Flow MSFP Quantization ActivationAAL00NormalDistributionAnomalousDistribution SignedQuantizationNAL Mixup-SignQuantizationBest(,) DF Alignment 1𝛼!$1βˆ’π›Ό!1βˆ’π›Ό&!tLoRA HubFigure 5. The pipeline of our proposed method. UNets are applied to the Mixup-Sign Floating-Point Quantization (MSFP), where dis- tinct floating-point quantization schemes are employed for Anomalous-Activation-Distribution Layers (AALs) and Normal-Activation- Distribution Layers (NALs). During the fine-tuning stage, multiple LoRA modules are introduced, and a timestep-aware routing mecha- nism is used for dynamic LoRA allocation across different timesteps. Additionally, a denoising-factor alignment technique is employed to align the loss function with quantization-induced performance degradation. However, there are rare cases where performance slightly goes worse due to the diversity of anomalous distributions. Panel (c) of Figure 1 shows that in such cases, the activa- tion distribution may resemble a normal distribution, where signed FP quantization might perform better. Figure 4 fur- ther indicates that introducing a zero point into signed FP quantization is unnecessary, offering minimal improvement in a few cases. Given the strong performance of unsigned FP quantiza- tion with a zero point and the diversity of AALs, we pro- pose a mixup-sign FP quantization framework. During the search-based initialization phase, we use signed FP quanti- zation for NALs and introduce both unsigned FP quantiza- tion with a zero point and signed FP quantization for AALs. This approach addresses AAL challenges in low-bit quanti- zation while minimizing computational overhead by adding the zero point only to unsigned FP quantization. 4.2. Timestep-Aware Router for LoRA Allocation In Section 3.2, we observe that a single LoRA cannot fully capture all the information across timesteps due to the diverse generative characteristics at different timesteps. We also find that a reasonable allocation of different LoRAs for timesteps will be beneficial to fine-tuning. In this section, we introduce a timestep-aware LoRA allocation method that dynamically assigns optimal LoRA to each timestep, maximizing fine-tuning effectiveness.Our method relies on a learnable router (illustrated in Figure 5), a module shared across all timesteps. It takes the timestep as input and outputs selection probabilities for each LoRA across UNet layers. For each timestep, the LoRA with the highest probability corresponding to the router’s output is inserted into the quantized model for fine- tuning or inference. In a router network, the main compo- nents are a time embedding layer and an MLP layer. The time embedding layer, derived from a pre-trained diffusion model, converts the scalar into a d-dimensional embedding. The MLP layer then maps the embedding to a LoRA al- location distribution, where yis the number of quantized layers and his the size of the LoRA Hub. Using an STE method [1], this distribution is converted into 0/1 proba- bilities to allocate suitable LoRAs to the diffusion model’s quantized layers. 4.3. Denoising Factor Aligned Loss To address the mismatch between the actual performance gap and the loss used during quantization, we review the denoising principle depicted in Equation 3 and pinpoint the cause of the mismatch: the time-step-dependent constraint on the predicted noise is not sufficiently accounted for dur- ing the denoising process. 
4.3. Denoising Factor Aligned Loss
To address the mismatch between the actual performance gap and the loss used during quantization, we review the denoising principle depicted in Equation 3 and pinpoint the cause of the mismatch: the timestep-dependent constraint on the predicted noise is not sufficiently accounted for during the denoising process. Therefore, we implement a modification to the loss function based on the predicted
noise:

$$L_t = \gamma_t \cdot L_t^{\varepsilon_\theta}. \quad (9)$$

By introducing $\gamma_t$, which accurately reflects the utilization of the predicted noise at each time step, we achieve a preliminary alignment between the loss and the actual quantization error, as shown in Figure 3. This facilitates more accurate fine-tuning, leading to better performance recovery in the low-bit diffusion model.
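As a sketch, the alignment of Equation 9 amounts to reweighting the per-timestep noise-prediction loss; the helper below assumes the gamma_t weights are precomputed from the sampler's schedule, and its names are illustrative.

```python
import torch
import torch.nn.functional as F

def df_aligned_loss(eps_fp: torch.Tensor, eps_q: torch.Tensor,
                    gamma_t: torch.Tensor) -> torch.Tensor:
    """Per-timestep reweighting of the noise-prediction MSE (Eq. 9).
    eps_fp:  noise predicted by the frozen full-precision model,
    eps_q:   noise predicted by the quantized model,
    gamma_t: [B] weights reflecting how strongly step t's predicted
             noise enters the denoising update."""
    per_sample = F.mse_loss(eps_q, eps_fp, reduction="none")
    per_sample = per_sample.flatten(1).mean(dim=1)   # [B]
    return (gamma_t * per_sample).mean()
```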
5. Experiment
5.1. Experimental Setup
Models and Metrics. To verify the effectiveness of the proposed method, we evaluate it with two widely adopted diffusion paradigms: DDIM [37] and LDM [33]. For DDIM experiments, we evaluate on CIFAR-10 [16] and CelebA [28]. For LDM, we test unconditional generation on LSUN-Bedroom [44] and LSUN-Church [44] and conditional generation on ImageNet [4]. The performance of the diffusion models is evaluated with Inception Score (IS) [34], Fréchet Inception Distance (FID) [10] and Sliding FID (sFID) [34]. All metrics are evaluated based on 50k samples generated by the DDIM solver [37].
Quantization Detail. We employ standard layer-wise quantization for both weights and activations. Except for the input and output layers, which are typically set to 8-bit, all other convolution and linear layers are quantized to the target bit-width. Furthermore, we generate 256 samples for the calibration set based on Q-Diffusion [19] for bias initialization and use the method from [2] to obtain the optimal quantization parameters.
Baseline. We compare two main types of quantization methods: PTQ methods (Q-Diffusion [19] and EDA-DM [41]) and fine-tuning methods (EfficientDM [8] and QuEST [42]). Since these fine-tuning methods involve special settings like non-full-layer quantization, we standardize the settings of EfficientDM in a consistent manner and procure other standardized results from DilateQuant [26]. A comparison under the special settings is provided in the Appendix.

5.2. Quantization Performance
Unconditional Generation. Table 2 presents the results of unconditional image generation across multiple datasets. With 6-bit quantization, our method achieves nearly identical performance to full precision. Our 4-bit quantized models achieve SOTA results in all tasks, significantly outperforming previous baseline methods. Notably, on CIFAR-10, our 4-bit quantized model results in an FID score that is only 1.84 worse than full precision, an almost negligible degradation, while previous methods struggle with 4-bit quantization. Compared to the fine-tuning method [8], our 4-bit quantized models improve FID by 32.38, 24.15, and 9.63 on the three datasets, respectively, and still maintain remarkable IS. Additionally, we provide the performance of 4-bit and 6-bit quantized models on CelebA in the Appendix.

Task: CIFAR-10 32x32, DDIM, steps = 100
Method        Prec. (W/A)   FID↓     IS↑
FP            32/32         4.26     9.03
Q-Diffusion   6/6           9.19     8.76
EDA-DM        6/6           26.68    9.35
EfficientDM   6/6           25.03    8.08
Ours (h=2)    6/6           4.26     9.04
Ours (h=4)    6/6           4.23     9.06
Q-Diffusion   4/4           N/A      N/A
EDA-DM        4/4           120.24   4.42
EfficientDM   4/4           38.40    7.32
Ours (h=2)    4/4           6.02     8.79
Ours (h=4)    4/4           6.10     8.90

Task: LSUN (Bedroom) 256x256, LDM-4, steps = 100, eta = 1.0
Method        Prec. (W/A)   FID↓     IS↑
FP            32/32         3.02     2.29
Q-Diffusion   6/6           10.10    2.11
EDA-DM        6/6           10.56    2.12
QuEST         6/6           10.10    2.20
EfficientDM   6/6           12.95    2.57
Ours (h=2)    6/6           8.42     2.49
Ours (h=4)    6/6           8.40     2.49
Q-Diffusion   4/4           N/A      N/A
EDA-DM        4/4           N/A      N/A
QuEST         4/4           N/A      N/A
EfficientDM   4/4           36.36    2.69
Ours (h=2)    4/4           12.21    2.47
Ours (h=4)    4/4           12.34    2.48

Task: LSUN (Church) 256x256, LDM-8, steps = 100, eta = 0.0
Method        Prec. (W/A)   FID↓     IS↑
FP            32/32         4.06     2.70
Q-Diffusion   6/6           10.90    2.47
EDA-DM        6/6           10.76    2.43
QuEST         6/6           6.83     2.65
EfficientDM   6/6           7.45     2.80
Ours (h=2)    6/6           6.24     2.73
Ours (h=4)    6/6           6.38     2.73
Q-Diffusion   4/4           N/A      N/A
EDA-DM        4/4           N/A      N/A
QuEST         4/4           13.03    2.63
EfficientDM   4/4           18.40    2.97
Ours (h=2)    4/4           8.81     2.70
Ours (h=4)    4/4           8.77     2.71

Table 2. Quantization performance of unconditional generation. 'Prec. (W/A)' denotes the quantization bit-width. 'N/A' denotes failed image generation. h denotes the size of the LoRA Hub.

Conditional Generation. Table 3 presents the results of our conditional generation experiments on ImageNet. We observe that FID is not always a reliable metric in this context, as an unexpected trend emerges: as the bit-width of the quantized model decreases, the FID score improves, which contradicts the expected trend. Therefore, our discussion of the ImageNet results focuses on other evaluation metrics. As shown by the IS and sFID, our method achieves performance comparable to that of the full-precision model with 6-bit quantization. Even with 4-bit quantization, we achieve SOTA results in terms of sFID, improving by 7.00 over the previous best method, EfficientDM [8]. Furthermore, our method significantly outperforms the two other fine-tuning methods in the IS metric. Visual evaluations further confirm that the generated images maintain high quality, exhibiting clear and coherent content, as shown in Figure 6. More visualization results are presented in the Appendix.

Figure 6. A visual comparison of generation results (Full Precision, W6A6, W4A4) using our method across different quantization bit-widths, with the LSUN-Church dataset as an example.

Method        Prec. (W/A)   sFID↓   FID↓    IS↑
FP            32/32         7.67    11.69   364.72
EDA-DM        6/6           8.02    11.52   360.77
QuEST         6/6           9.36    8.45    310.12
EfficientDM   6/6           6.88    9.54    351.79
Ours (h=2)    6/6           7.43    10.10   349.91
Ours (h=4)    6/6           6.65    10.10   351.79
EDA-DM        4/4           36.66   20.02   204.93
QuEST         4/4           29.27   38.43   69.58
EfficientDM   4/4           14.42   12.73   139.45
Ours (h=2)    4/4           7.42    6.50    190.74
Ours (h=4)    4/4           8.23    7.43    177.40

Table 3. Quantization performance of conditional generation for fully-quantized LDM-4 models on ImageNet 256×256 with 20 steps. 'Prec. (W/A)' denotes the quantization bit-width. h denotes the size of the LoRA Hub.

5.3. Ablation Study
The ablation experiments are conducted with 4-bit quantization on the CelebA dataset, which is challenging for low-bit quantization, further demonstrating the effectiveness of our approach. The baseline uses signed FP quantization combined with single-LoRA fine-tuning. As shown in Table 4, all three proposed modules lead to significant performance improvements, with their combination yielding a synergistic effect.

MSFP   TALoRA   DFA   Prec. (W/A)   FID↓
✗      ✗        ✗     4/4           16.02
✓      ✗        ✗     4/4           9.60
✗      ✓        ✗     4/4           10.66
✓      ✗        ✓     4/4           8.39
✓      ✓        ✗     4/4           8.79
✓      ✓        ✓     4/4           7.69

Table 4. Ablation study on the different modules we propose. Testing on the CelebA dataset with LoRA Hub size h = 2.
The baseline FID score is 9.53 higher than that of the full-precision model (6.49). By applying our technique, we reduce the FID by 8.18 compared to the baseline. More interestingly, we visualize the LoRA allocation distribution learned by the router, as shown in Figure 7. We find that the distribution of the router-learned allocation over timesteps is consistent with the finding that the diffusion model focuses on contour generation early and on detail generation later [40].

Figure 7. Distribution of LoRA allocations over timesteps obtained after router training on different datasets (CIFAR-10, CelebA-HQ, ImageNet, LSUN-Bedroom, CelebA), when h = 2.

6. Conclusion
In this paper, we focus on exploring low-bit FP quantization for diffusion models. For model initialization, we innovatively introduce unsigned FP quantization with a zero point to address AALs. For the fine-tuning based on the denoising process, we formulate it as a multi-task procedure. We introduce multiple LoRAs along with a router for their allocation at different timesteps, and further align the loss function, originally based on estimated noise, with the actual quantization error. We introduce unsigned FP quantization and achieve 4-bit FP quantized diffusion models. Our FP PTQ-based fine-tuning method sets a new precedent for 4-bit diffusion models, offering insight into the deployment of low-bit diffusion models in the future.

7. Acknowledgments
This work is supported by the Shanghai Science and Technology Commission Explorer Program (Project 24TS1401300) and the National Key Research and Development Program of China (No. 2022ZD0160101). The computations in this research were performed using the CFFF platform of Fudan University.

References
[1] Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
[2] Cheng Chen, Christina Giannoula, and Andreas Moshovos. Low-bitwidth floating point quantization for efficient high-quality diffusion models. arXiv preprint arXiv:2408.06995, 2024.
[3] Huanpeng Chu, Wei Wu, Chengjie Zang, and Kun Yuan. Qncd: Quantization noise correction for diffusion models. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 10995–11003, 2024.
[4] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.
[5] Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. Qlora: Efficient finetuning of quantized llms. arXiv preprint arXiv:2305.14314, 2023.
[6] Steven K Esser, Jeffrey L McKinstry, Deepika Bablani, Rathinakumar Appuswamy, and Dharmendra S Modha. Learned step size quantization. In International Conference on Learning Representations, 2019.
[7] Wenfeng Feng, Chuzhan Hao, Yuewei Zhang, Yu Han, and Hao Wang. Mixture-of-loras: An efficient multitask tuning method for large language models. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 11371–11380, 2024.
[8] Yefei He, Jing Liu, Weijia Wu, Hong Zhou, and Bohan Zhuang. Efficientdm: Efficient quantization-aware fine-tuning of low-bit diffusion models. arXiv preprint arXiv:2310.03270, 2023.
[9] Yefei He, Luping Liu, Jing Liu, Weijia Wu, Hong Zhou, and Bohan Zhuang. Ptqd: Accurate post-training quantization for diffusion models. Advances in Neural Information Processing Systems, 36, 2024.
[10] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in Neural Information Processing Systems, 30, 2017.
[11] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840–6851, 2020.
[12] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021.
[13] Wei Huang, Xingyu Zheng, Xudong Ma, Haotong Qin, Chengtao Lv, Hong Chen, Jie Luo, Xiaojuan Qi, Xianglong Liu, and Michele Magno. An empirical study of llama3 quantization: From llms to mllms. Visual Intelligence, 2(1):36, 2024.
[14] Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, and Dmitry Kalenichenko. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2704–2713, 2018.
[15] Raghuraman Krishnamoorthi. Quantizing deep convolutional networks for efficient inference: A whitepaper. arXiv preprint arXiv:1806.08342, 2018.
[16] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
[17] Andrey Kuzmin, Mart Van Baalen, Yuwei Ren, Markus Nagel, Jorn Peters, and Tijmen Blankevoort. Fp8 quantization: The power of the exponent. Advances in Neural Information Processing Systems, 35:14651–14662, 2022.
[18] Min Li, Zihao Huang, Lin Chen, Junxing Ren, Miao Jiang, Fengfa Li, Jitao Fu, and Chenghua Gao. Contemporary advances in neural network quantization: A survey. In 2024 International Joint Conference on Neural Networks (IJCNN), pages 1–10. IEEE, 2024.
[19] Xiuyu Li, Yijiang Liu, Long Lian, Huanrui Yang, Zhen Dong, Daniel Kang, Shanghang Zhang, and Kurt Keutzer. Q-diffusion: Quantizing diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 17535–17545, 2023.
[20] Yanjing Li, Sheng Xu, Xianbin Cao, Xiao Sun, and Baochang Zhang. Q-dm: An efficient low-bit quantized diffusion model. Advances in Neural Information Processing Systems, 36, 2024.
[21] Tailin Liang, John Glossner, Lei Wang, Shaobo Shi, and Xiaotong Zhang. Pruning and quantization for deep neural network acceleration: A survey. Neurocomputing, 461:370–403, 2021.
[22] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740–755. Springer, 2014.
[23] Fangxin Liu, Wenbo Zhao, Zhezhi He, Yanzhi Wang, Zongwu Wang, Changzhi Dai, Xiaoyao Liang, and Li Jiang. Improving neural network efficiency via post-training quantization with adaptive floating-point. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5281–5290, 2021.
[24] Luping Liu, Yi Ren, Zhijie Lin, and Zhou Zhao. Pseudo numerical methods for diffusion models on manifolds. arXiv preprint arXiv:2202.09778, 2022.
[25] Shih-yang Liu, Zechun Liu, Xijie Huang, Pingcheng Dong, and Kwang-Ting Cheng. Llm-fp4: 4-bit floating-point quantized transformers. arXiv preprint arXiv:2310.16836, 2023.
[26] Xuewen Liu, Zhikai Li, and Qingyi Gu. Dilatequant: Accurate and efficient diffusion quantization via weight dilation. arXiv preprint arXiv:2409.14307, 2024.
[27] Xuewen Liu, Zhikai Li, Junrui Xiao, and Qingyi Gu. Enhanced distribution alignment for post-training quantization of diffusion models. arXiv preprint arXiv:2401.04585, 2024.
[28] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pages 3730–3738, 2015.
[29] Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps. Advances in Neural Information Processing Systems, 35:5775–5787, 2022.
[30] Paulius Micikevicius, Dusan Stosic, Neil Burgess, Marius Cornea, Pradeep Dubey, Richard Grisenthwaite, Sangwon Ha, Alexander Heinecke, Patrick Judd, John Kamalu, et al. Fp8 formats for deep learning. arXiv preprint arXiv:2209.05433, 2022.
[31] Markus Nagel, Rana Ali Amjad, Mart Van Baalen, Christos Louizos, and Tijmen Blankevoort. Up or down? adaptive rounding for post-training quantization. In International Conference on Machine Learning, pages 7197–7206. PMLR, 2020.
[32] NVIDIA. Blackwell platform sets new llm inference records in mlperf inference v4.1, 2024. Available at: https://developer.nvidia.com/blog/nvidia-blackwell-platform-sets-new-llm-inference-records-in-mlperf-inference-v4-1, Accessed: 2024-11-14.
[33] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684–10695, 2022.
[34] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. Advances in Neural Information Processing Systems, 29, 2016.
[35] Yuzhang Shang, Zhihang Yuan, Bin Xie, Bingzhe Wu, and Yan Yan. Post-training quantization on diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1972–1981, 2023.
[36] Junhyuk So, Jungwon Lee, Daehyun Ahn, Hyungjun Kim, and Eunhyeok Park. Temporal dynamic quantization for diffusion models. Advances in Neural Information Processing Systems, 36, 2024.
[37] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020.
[38] Haojun Sun, Chen Tang, Zhi Wang, Yuan Meng, Xinzhu Ma, Wenwu Zhu, et al. Tmpq-dm: Joint timestep reduction and quantization precision selection for efficient diffusion models. arXiv preprint arXiv:2404.09532, 2024.
[39] Mart van Baalen, Andrey Kuzmin, Suparna S Nair, Yuwei Ren, Eric Mahurin, Chirag Patel, Sundar Subramanian, Sanghyuk Lee, Markus Nagel, Joseph Soriaga, et al. Fp8 versus int8 for efficient deep learning inference. arXiv preprint arXiv:2303.17951, 2023.
[40] Binxu Wang and John J Vastola. Diffusion models generate images like painters: an analytical theory of outline first, details later. arXiv preprint arXiv:2303.02490, 2023.
[41] Changyuan Wang, Ziwei Wang, Xiuwei Xu, Yansong Tang, Jie Zhou, and Jiwen Lu. Towards accurate data-free quantization for diffusion models. arXiv preprint arXiv:2305.18723, 2(5), 2023.
[42] Haoxuan Wang, Yuzhang Shang, Zhihang Yuan, Junyi Wu, Junchi Yan, and Yan Yan. Quest: Low-bit diffusion model quantization via efficient selective finetuning. arXiv preprint arXiv:2402.03666, 2024.
[43] Jie Wang, Huanxi Liu, Dawei Feng, Jie Ding, and Bo Ding. Fp4-quantization: Lossless 4bit quantization for large language models. In 2024 IEEE International Conference on Joint Cloud Computing (JCC), pages 61–67. IEEE, 2024.
[44] Fisher Yu, Ari Seff, Yinda Zhang, Shuran Song, Thomas Funkhouser, and Jianxiong Xiao. Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.
[45] Yijia Zhang, Lingran Zhao, Shijie Cao, Sicheng Zhang, Wenqiang Wang, Ting Cao, Fan Yang, Mao Yang, Shanghang Zhang, and Ningyi Xu. Integer or floating point? new outlooks for low-bit quantization on large language models. In 2024 IEEE International Conference on Multimedia and Expo (ICME), pages 1–6. IEEE, 2024.

Pioneering 4-Bit FP Quantization for Diffusion Models: Mixup-Sign Quantization and Timestep-Aware Fine-Tuning
Supplementary Material

A. Supplementary Material Overview
In this supplementary material, we provide additional explanations and experimental results referenced in the main paper. The content is organized as follows:
• Methodology of Mixup-Sign Quantization in Appendix B.
• More Implementation Details in Appendix C.
• FP vs. INT in Post-Training Quantization in Appendix D.
• Comprehensive Analysis of TALoRA Performance in Appendix E.
• Supplementary Performance Evaluation in Appendix F.
• Extensive Comparison with EfficientDM and QuEST in Appendix G.
• Additional Visualization Results in Appendix H.

B. Methodology of Mixup-Sign Quantization
We implement the proposed MSFP strategy using a search-based method [2, 25], wherein the quantization parameters are determined by minimizing the MSE between the distributions before and after quantization. To clarify, the quantization parameters for signed FP quantization include the format, the bias b, and the sign bit s set to 1, whereas the quantization parameters for unsigned FP quantization include the format, the bias b, the sign bit s set to 0, and the zero point zp. All quantization parameters are assigned a search space during initialization. As mentioned in the main text, the bias b serves as a threshold in FP quantization:

$$\mathrm{maxval} = 2^{2^x - 1 - b} \cdot \left(1 - \frac{1}{2^y}\right) \quad (10)$$

The maximum value, denoted as maxval, is determined by the format (e.g., ExMy) and the bias b, and represents the maximum discrete value achievable in FP quantization. Notably, maxval and b are directly correlated, and for convenience, we will refer to maxval in subsequent discussions as equivalent to the bias b.
In the MSFP strategy, initialization is divided into two parts: weight initialization and activation initialization. During initialization, we determine the optimal quantization parameter settings, and the process is outlined in Algorithm 1: In the first stage, the search for signed FP quantization parameters is applicable to all cases. In
the second stage, the search for unsigned FP quantization parameters is specifically applied to the activation initialization of the Anomalous-Activation-Distribution Layers (AALs) mentioned in the main text.

Algorithm 1 Initialization of Quantization Parameters
1:  Input: format_options, maxval_options, (zp_options), (unsigned_format_options)
2:  Output: format, maxval, (zp)
3:
4:  # 10000 is huge enough
5:  min_mse = 10000
6:  s = 1
7:  for f in format_options do
8:    for prev_m in maxval_options do
9:      prev_mse = calculate_mse(f, prev_m, s)
10:     if prev_mse < min_mse then
11:       min_mse = prev_mse
12:       format = f
13:       maxval = prev_m
14:     end if
15:   end for
16: end for
17:
18: # only for unsigned FP quantization
19: s = 0
20: for f in unsigned_format_options do
21:   for prev_m in maxval_options do
22:     for prev_zp in zp_options do
23:       prev_mse = calculate_mse(f, prev_m, prev_zp, s)
24:       if prev_mse < min_mse then
25:         min_mse = prev_mse
26:         format = f
27:         maxval = prev_m
28:         zp = prev_zp
29:       end if
30:     end for
31:   end for
32: end for

Additionally, there is significant variability in the search space for maxval, which depends on the differing distributions of the data. Therefore, prior to initiating the search for quantizer parameters, the first step involves performing several random forward passes to capture the maximum value observed for each quantizer. This value is then used as the initial maxval_0.
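A runnable Python rendering of this search, under the simplifying assumptions that tensors are flattened to 1-D, that the grid is normalized-only, and that the bias b is searched directly (the paper searches the equivalent maxval); calculate_mse here is our stand-in for the paper's quantizer.

```python
import numpy as np

def fp_grid(x_bits, y_bits, bias):
    """Values of an ExMy format with bias b: 2^(p-b) * (1 + m/2^y).
    Eq. 10's maxval is simply the largest entry of this grid."""
    p = np.arange(2 ** x_bits)[:, None]
    m = np.arange(2 ** y_bits)[None, :]
    return np.unique(2.0 ** (p - bias) * (1.0 + m / 2.0 ** y_bits))

def calculate_mse(x, fmt, bias, zp=0.0, signed=True):
    """MSE after round-to-nearest projection of 1-D x onto the grid."""
    grid = fp_grid(*fmt, bias)
    grid = np.concatenate((-grid, grid)) if signed else grid + zp
    q = grid[np.abs(x[:, None] - grid).argmin(axis=1)]
    return float(np.mean((x - q) ** 2))

def init_quant_params(x, signed_formats, unsigned_formats, bias_opts, zp_opts):
    """Two-stage grid search mirroring Algorithm 1: signed first, then
    unsigned with a zero point, keeping the configuration with minimal MSE."""
    best = (np.inf, None, None, None)
    for fmt in signed_formats:
        for b in bias_opts:
            mse = calculate_mse(x, fmt, b, signed=True)
            if mse < best[0]:
                best = (mse, fmt, b, None)
    for fmt in unsigned_formats:
        for b in bias_opts:
            for zp in zp_opts:
                mse = calculate_mse(x, fmt, b, zp, signed=False)
                if mse < best[0]:
                    best = (mse, fmt, b, zp)
    return best
```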
the main text, we employ signed FP quantization for NALs with distribution approximately following a normal distribution, and adopt a mixup-sign FP quantization strategy for AALs with asymmetric distribu- tions. Unlike weight initialization, where weights remain static, activation initialization needs to account for potential activation distributions. To ensure that the activations used for initialization are representative, we introduce a calibra- tion dataset [19, 35], as is common in INT quantization. Given the increased complexity and randomness of acti- vation distributions, we include all possible formats for dif- ferent bit-widths within the search space for format . No- tably, for n-bit unsigned FP quantization with the ExMy format , the condition x+y+s=napplies, where s= 0, distinguishing its format from that of signed FP quantiza- tion, which includes sset to 1 under the same bit-width. Accordingly, the search range for maxval is adjusted to linspace (0, maxval 0,100) , preventing excessive compu- tational overhead. Lastly, for the zero point zpintroduced in unsigned FP quantization, since the minimum value ofthe distribution is constrained by SiLU to approximately - 0.278, assigning zpa search space of linsapce (βˆ’0.3,0,6) is sufficient. C. More Implementation Details FP PTQ Configuration. Following the procedure out- lined in Appendix B, we deploy our MSFP strategy for both weights and activations. The initialization of maxval 0is achieved by generating 2000 images through random for- ward passes. Subsequently, a calibration dataset is con- structed based on the output of the full-precision model, fol- lowing the approach of Q-Diffusion [19]. Specifically, 256 samples are used for the DDIM model, while 128 samples are used for the LDM model. For weight initialization, the search spaces for maxval andformat are presented in Table 6. For activation initial- ization, the search spaces for maxval ,fomrat andzpare thoroughly discussed and provided in Appendix B. BitSearch Space (maxval )Search Space (format ) 4 [0.8maxval 0,2maxval 0] [E3M0,E2M1,E1M2,E0M3] 6 [0.9maxval 0,2maxval 0] [E4M1,E3M2,E2M3,E1M4] 8 [0.9maxval 0,2maxval 0] [E5M2,E4M3,E3M4,E2M5] Table 6. Search spaces for different quantization parameters under different bit-widths in weight initialization. Fine-tuning Configuration. For the noise estimation U-Net, all quantized layers, except for the input and out- put layers, are quantized and equipped with QLoRA-based TALoRAs [5]. Each TALoRA is initialized with a rank of 32. The selection of different TALoRAs at each timestep is managed by a router, which is implemented as a linear layer. The input channels of the router match the channel count of the timestep embedding in the diffusion model. Adam optimizers are assigned to both the TALoRAs and the router, with a learning rate of 1e-4 for both components. Fine-tuning is performed for 160 epochs with a batch size of 16 on DDIM models and 320 epochs with a batch size of 8 on LDM models. Notably, the batch size for the ImageNet dataset is reduced to 4. D. FP vs. INT in Post-Training Quantization Table 7 presents a performance comparison between the 6-bit model initialized with MSFP and several 6-bit models based on traditional INT quantization [6, 19, 35, 41]. As shown, even without fine-tuning, our approach significantly outperforms existing SOTA methods in handling
6-bit quan- tization for diffusion models. This highlights that FP quan- tization is a more effective choice for handling low-bit ac- tivation quantization in diffusion models, a task that is both 2 0.6 0.4 0.2 0.0 0.2 0.4 0.6 Weight Value01234log(Frequency) (a)down.2.attn.1.q.weight 4 2 0 2 4 Weight Value012345log(Frequency) (b)up.0.block.1.conv2.weight 0.3 0.2 0.1 0.0 0.1 0.2 0.3 Weight Value012345log(Frequency) (c)mid.block_1.conv2.weight 0.4 0.3 0.2 0.1 0.0 0.1 0.2 0.3 0.4 Weight Value0123log(Frequency) (d)up.2.attn.1.q.weight 0.4 0.2 0.0 0.2 0.4 Weight Value01234log(Frequency) (e)up.2.block.0.nin_shortcut.weight 0.3 0.2 0.1 0.0 0.1 0.2 0.3 Weight Value01234log(Frequency) (f)up.3.upsample.conv.weightFigure 8. The weight distribution of certain layers in the DDIM model on CelebA dataset. Task MethodPrec. (W/A)FID↓IS↑ CelebA 64x64 DDIM steps = 100FP 32/32 6.49 2.61 LSQ 6/6 78.37 1.94 PTQ4DM 6/6 24.96 2.13 Q-Diffusion 6/6 23.37 2.16 ADP-DM 6/6 16.86 2.30 Ours(MSFP) 6/6 9.51 2.78 Table 7. Quantization performance of unconditional generation. In this case, ’Ours’ refers to the method that deploys only the MSFP strategy without any fine-tuning. β€˜Prec. (W/A)’ denotes the quan- tization bit-width. challenging and crucial, compared to INT-based methods. E. Comprehensive Analysis of TALoRA Per- formance E.1. TALoRA Outperforms Rank-Scaled LoRA In our approach, we introduce multiple TALoRAs for the majority of quantized layers, which leads to an increase in the model size. Some may question whether the observed performance improvement is simply due to the larger mem- ory footprint of the LoRAs. However, in practice, onlyone TALoRA is active at each timestep, which differs fun- damentally from using a larger-rank LoRA, as the latter would result in higher training and inference costs. Further- more, Table 8 presents the results of fine-tuning with two TALoRAs (rank=32) and a single QLoRA (rank=64). Our method achieves even better performance, demonstrating that our timestep-aware fine-tuning strategy effectively re- covers the performance lost during quantization in diffusion models, with lower overhead and enhanced performance. Method Rank Bits(W/A) FID ↓ FP / 32/32 6.49 single-LoRA 64 4/4 7.75 TALoRA( h=2) 32 4/4 7.69 Table 8. Comparison between TALoRA and rank-scaled LoRA in fine-tuning 4-bit DDIM models on CelebA dataset. ’Rank’ refers to the LoRA rank. E.2. Impact of TALoRA Quantity As illustrated in Figure 9, when deploying four TALo- RAs, the distributions of LoRA allocation across different timesteps exhibits a strong regularity: in most cases, regard- less of the dataset, the majority of timesteps utilize only two LoRAs. This suggests that fine-tuning low-bit diffu- 3 T 1Denoising ProcessLoRA 1 LoRA 2 LoRA 3 LoRA 4 CIFAR-10 CelebA ImageNet LSUN-BedroomFigure 9. Distribution of LoRA allocations over timesteps ob- tained after router training on different datasets, when h= 4. sion models predominantly follows a two-stage task pattern, which aligns with the motivation behind introducing TALo- RAsβ€”viewing the denoising process as a progression from restoring coarse structures to refining intricate details [40]. Experimental results in the main text further demon- strates that deploying four TALoRAs does not yield better results compared to deploying two TALoRAs. In fact, in most cases, the latter achieves superior results on 4-bit dif- fusion models. This aligns with our earlier analysis: two TALoRAs are sufficient to handle the fine-tuning task
effec- tively, while the introduction of additional TALoRAs could reduce the training opportunities for the most impactful Lo- RAs, ultimately compromising fine-tuning performance. F. Supplementary Performance Evaluation To further validate the effectiveness of our approach, we conduct supplementary experiments. For the DDIM model, where prior methods have struggled, our approach is eval- uated on the CelebA dataset [28]β€”a more complex dataset with higher image resolutions corresponding to a more in- tricate DDIM model. As shown in Table 9, our method achieves cutting-edge performance under both 4-bit and 6- bit settings. Notably, our 4-bit diffusion model exhibits per- formance on FID and IS metrics comparable to full preci- sion, and our method even outperforms the full-precision model under the 6-bit setting. For the LDM model, we further evaluate it on the Im- ageNet dataset [4] using two advanced sampling methods, PLMS [24] and DPM-Solver [29], which are more sophisti- cated and computationally demanding during fine-tuning. Table 10 demonstrates that our method maintains robust performance under both 4-bit and 6-bit quantization set- tings, achieving SOTA results on the more reliable sFID and IS metrics in ImageNet. Furthermore, we apply our method to the task of quantiz- ing text-to-image diffusion models, specifically deploying it on Stable Diffusion with the MS-COCO dataset [22]. Our approach also delivers highly satisfactory results, with de- tailed visualizations provided in Appendix H.Task MethodPrec. (W/A)FID↓IS↑ CelebA 64x64 DDIM steps = 100FP 32/32 6.49 2.61 Q-Diffusion 6/6 23.37 2.16 ADP-DM 6/6 16.86 2.30 Ours(h=2) 6/6 5.38 2.67 Ours(h=4) 6/6 5.36 2.66 Q-Diffusion 4/4 N/A N/A ADP-DM 4/4 N/A N/A Ours(h=2) 4/4 7.69 2.59 Ours(h=4) 4/4 7.84 2.60 Table 9. Quantization performance of unconditional generation. β€˜Prec. (W/A)’ denotes the quantization bit-width. β€˜N/A’ denotes failed image generation. hdenotes the size of LoRA Hub. Task MethodPrec. (W/A)sFID↓FID↓ IS↑ LDM-4 PLMS steps = 20FP 32/32 7.08 11.71 379.19 EDA-DM 6/6 6.59 11.27 363.00 EfficientDM 6/6 9.36 9.85 325.13 Ours(h=2) 6/6 5.63 10.35 363.79 Ours(h=4) 6/6 5.33 10.25 364.27 EDA-DM 4/4 32.63 17.56 203.15 EfficientDM 4/4 9.89 14.78 103.34 Ours(h=2) 4/4 7.39 7.27 196.32 Ours(h=4) 4/4 7.83 7.83 193.11 LDM-4 DPM- Solver steps = 20FP 32/32 6.85 11.44 373.12 EDA-DM 6/6 7.95 11.14 357.16 EfficientDM 6/6 9.30 8.54 336.11 Ours(h=2) 6/6 6.86 9.61 363.71 Ours(h=4) 6/6 6.88 9.59 364.30 EDA-DM 4/4 39.40 30.86 138.01 EfficientDM 4/4 13.82 14.36 109.52 Ours(h=2) 4/4 12.61 8.46 257.33 Ours(h=4) 4/4 14.56 9.64 238.07 Table 10. Quantization performance of conditional generation for fully-quantized LDM-4 models on ImageNet 256Γ—256 with 20 steps, using PLMS and DPM-Solver as sampling methods. β€˜Prec. (W/A)’ denotes the quantization bit-width. hdenotes the size of LoRA Hub. G. Extensive Comparison with EfficientDM and QUEST As mentioned in the main text, prior fine-tuning- based methods, such as EfficientDM [8] and Quest [42], adopt specialized experimental setups. EfficientDM re- tains all skip connection layers and the oplayers within Upsample blocks in full precision. These layers consti- 4 Task Settings MethodPrec. (W/A)FID↓ LSUN- Church 256Γ—256 LDM-8 steps = 100 eta = 0.0- FP 32/32 4.06 Partial QuantizationEfficientDM 4/4 13.68 Ours(h=2) 4/4 7.95 Full QuantizationEfficientDM 4/4 18.40 Ours(h=2) 4/4 8.81 Channel-wise for ActivationQuEST
4/4 11.76 Ours(h=2) 4/4 - Layer-wise for ActivationQuEST 4/4 13.03 Ours(h=2) 4/4 8.81 Table 11. Comparison with EfficientDM and QuEST under spe- cific settings. β€˜Prec. (W/A)’ denotes the quantization bit-width. h denotes the size of LoRA Hub. tute a significant portion of the model, and their quanti- zation significantly affects performance. Therefore, in our comparative experiments, we apply standard quantization to these layers. In contrast, Quest adopts a different strategy by modifying the quantization granularity for activations. Specifically, in low-bit quantization, channel-wise quanti- zation of weights is a common approach. However, Quest extends this to activations, introducing substantial compu- tational overhead compared to the mainstream layer-wise quantization. To ensure a fair comparison, we employ con- ventional layer-wise quantization for activations. For a comprehensive evaluation, we align our method with the specific settings of EfficientDM and consider the implications of Quest’s setup. As shown in Table 11, under EfficientDM’s configuration, our 4-bit LDM model achieves significantly better results on the Church dataset, with an FID score that is 6.39 lower than EfficientDM’s. However, we choose not to replicate Quest’s specific set- tings for two key reasons. First, despite using the more ef- ficient layer-wise quantization for both weights and activa- tions, our method already surpasses Quest’s performance. Specifically, under the 4-bit setting, our method achieves an FID of 8.81, compared to Quest’s 11.76, which re- lies on computationally expensive channel-wise quantiza- tion for both. Second, our approach relies on FP quantiza- tion, and incorporating channel-wise quantization necessi- tates search-based initialization for every channel, which is computationally infeasible. H. Additional Visualization Results 5 Full-precision(W32A32) Ours( h=2) Ours( h=4) Figure 10. Visualization of random samples from 4-bit LDM-4 on LSUN-Bedroom across different LoRA Hub sizes h. Full-precision(W32A32) Ours(W6A6) Ours(W4A4) Figure 11. Visualization of random samples from quantized LDM-4 on ImageNet. The size of LoRA Hub is 2. 6 Closeup of a brown bear sitting in a grassy area.A group of three stuffed animal teddy bears.A kitchen with a refrigerator, stove and oven with cabinets. A stop sign put upside down on a metal pole. Full-precision Ours (h=2)Figure 12. Comparison of text-to-image outputs from 6-bit quantized and full-precision Stable Diffusion models. hdenotes the size of LoRA Hub. 7
arXiv:2505.21593v1 [cs.CV] 27 May 2025

Any-to-Bokeh: One-Step Video Bokeh via Multi-Plane Image Guided Diffusion
Yang Yang1,2∗ Siming Zheng2∗ Jinwei Chen2 Boxi Wu1† Xiaofei He1 Deng Cai1 Bo Li2 Peng-Tao Jiang2†
1Zhejiang University 2vivo Mobile Communication Co., Ltd
Project Page: https://vivocameraresearch.github.io/any2bokeh/

Figure 1 (panels: (a) Bokeh Rendering from Real Videos; (b) Bokeh Rendering from Synthetic Videos; (c) Video Bokeh Rendering of Any Duration): Any-to-Bokeh can process videos of any length, including both real-world and synthetic videos generated by video generation models, for bokeh rendering. The upper section demonstrates how Any-to-Bokeh enables users to customize the focal plane (FP) and adjust the blur strength (BS). The lower section highlights Any-to-Bokeh's powerful temporal coherence in long videos. The yellow cross indicates the focal plane. Please zoom in to view the image details.

Abstract
Recent advances in diffusion-based editing models have enabled realistic camera simulation and image-based bokeh, but video bokeh remains largely unexplored. Existing video editing models cannot explicitly control focus planes or adjust bokeh intensity, limiting their applicability for controllable optical effects. Moreover, naively extending image-based bokeh methods to video often results in temporal flickering and unsatisfactory edge blur transitions due to the lack of temporal modeling and generalization capability. To address these challenges, we propose a novel one-step video bokeh framework that converts arbitrary input videos into temporally coherent, depth-aware bokeh effects. Our method leverages a multi-plane image (MPI) representation constructed through a progressively widening depth sampling function, providing explicit geometric guidance for depth-dependent blur synthesis. By conditioning a single-step video diffusion model on MPI layers and utilizing the strong 3D priors from pre-trained models such as Stable Video Diffusion, our approach achieves realistic and consistent bokeh effects across diverse scenes. Additionally, we introduce a progressive training strategy to enhance temporal consistency, depth robustness, and detail preservation. Extensive experiments demonstrate that our method produces high-quality, controllable bokeh effects and achieves state-of-the-art performance on multiple evaluation benchmarks.

∗Equal contribution. Intern at vivo Mobile Communication Co., Ltd. †Corresponding author.
Preprint. Under review.

1 Introduction
Recent advances in diffusion models have significantly improved camera simulation tasks, particularly in controlling geometric transformations such as lens movement, zooming, and panning [1, 2, 3, 4]. Beyond geometric aspects, recent studies have also begun to explore the rendering of synthetic bokeh in a reference image [5], aiming to simulate depth-of-field effects directly from 2D content. However, the extension to video bokeh remains largely unexplored, with no existing methods addressing the challenges of maintaining temporal coherence and structure consistency across frames. Furthermore, while current video generation models [6, 7] can occasionally produce bokeh-like effects implicitly, they lack explicit control over the focus plane. Additionally, these models are unable to freely adjust bokeh intensity, which limits their applicability in video generation scenarios that require flexible and realistic optical focus manipulation.
This motivates us to develop a framework that enables controllable, temporally coherent bokeh for arbitrary input videos, filling a critical gap in the video editing research area. While bokeh rendering has gained attention in recent years, most existing work has focused on the image-based bokeh task, where the goal is to generate
shallow depth-of-field effects from a single image [8, 9, 10, 11]. In contrast, video bokeh remains in its early stages. Naively extending image-based methods [12, 13, 14, 15] to video often leads to undesirable artifacts, such as temporal flickering and inconsistent blur, due to the lack of temporal modeling and robust scene understanding. Furthermore, most current models [16, 17] are trained from scratch, without utilizing strong pre-trained priors or generalizable representations. This results in models that are overfitted to specific data distributions and overly sensitive to depth estimation errors, leading to suboptimal blur quality, especially around object boundaries.
To address these challenges, we propose a novel one-step video bokeh diffusion network that enables efficient and temporally coherent bokeh from arbitrary input videos. Our method is built upon a multi-plane image (MPI) representation [18], which provides an explicit yet compact encoding of scene geometry. Specifically, we generate MPI layers by partitioning the scene along a set of disparity intervals, constructed using a progressively widening disparity range function. This allows the model to focus on fine details in the foreground while allocating coarser attention to out-of-focus regions, enabling accurate blur transition at object boundaries. By conditioning a single-step video diffusion model on this MPI representation, the system learns to synthesize depth-aware blur effects that naturally align with subject contours, even in complex or cluttered scenes. Unlike existing methods trained from scratch, our framework leverages the strong 3D perception capabilities of large-scale pre-trained video diffusion models such as Stable Video Diffusion (SVD) [19], which are trained on diverse video data and demonstrate superior generalization and structural understanding.
To further enhance temporal consistency, depth robustness, and visual detail, we adopt a three-stage progressive training strategy. In the first stage, we train the MPI spatial block and temporal block to learn bokeh rendering with accurate geometric guidance. In the second stage, we include more frames and introduce data perturbations to improve the model's robustness and enforce temporal coherence, building longer temporal sequences that better handle real-world variations. In the final stage, we incorporate a VAE-based refinement module to further enhance the fidelity of subject details and texture preservation. Together, these components form a unified framework for general-purpose video bokeh.
Extensive experiments on real-world and synthetic video benchmarks demonstrate that our method achieves superior visual quality and temporal stability compared to prior work. Our framework supports rendering from arbitrary video inputs, making high-quality bokeh video accessible and practical for a wide range of applications, including content creation, cinematic editing, and mobile post-processing.
In summary, our contributions are as follows:
• We propose a one-step video bokeh framework that leverages the 3D-aware priors of large-scale pre-trained video diffusion models, departing from previous from-scratch approaches. This enables our model to generalize well to diverse videos and produce temporally coherent bokeh effects without domain-specific training.
2 β€’We introduce an MPI-guided conditioning mechanism, using a disparity-interval sampling function to construct layered scene geometry and guide spatially accurate bokeh rendering. β€’We develop a progressive training strategy that significantly improves temporal consistency, depth robustness, and detail preservation, ultimately producing more
realistic bokeh outputs and achieving state-of-the-art performance across multiple evaluation benchmarks.

2 Related work
2.1 Camera Simulation Diffusion Models
Reference Guidance Models. A line of work [20, 21] encodes motion cues from reference videos via LoRA [22], enabling the diffusion model to replicate specific camera behaviors observed in the training set. In contrast, MotionClone [23] proposes a training-free approach, using spatial and temporal attention modules to directly extract motion patterns from reference videos. Some works [1, 2, 3, 4] require users to draw reference points to guide the lens adjustment.
Optical Effects with Diffusion. While most prior diffusion-based camera simulation work focuses on motion modeling, recent studies have expanded to include optical effects like depth-of-field and bokeh. For instance, BokehDiffusion [5] and Generative Photography [24] propose methods to synthesize high-quality bokeh from 2D images, capturing realistic defocus effects. These works show the potential of diffusion models for simulating non-geometric camera behavior. However, they are limited to static images and lack temporal modeling, making them unsuitable for video tasks. In contrast, our work integrates optical effect simulation with pre-trained video diffusion models, enabling controllable, temporally consistent bokeh effects for arbitrary video inputs.

2.2 Computational Bokeh
Training-Free Methods. Early computational bokeh methods are model-free and rely on physically-based or image-based heuristics. Ray tracing [25, 26, 27] produces realistic defocus effects, but it requires full 3D geometry and is computationally expensive, which limits its practical applicability. Depth-based methods [28, 29] apply scattering or gathering kernels to create spatially varying blur using estimated depth. Matting-based methods [30, 31] blur the background based on foreground masks. While these methods are simple and do not require training, they often suffer from artifacts due to inaccurate depth estimation or segmentation errors, leading to unnatural blur transitions.
Learning-based Methods. Deep learning approaches improve efficiency and realism. For example, BokehMe [14] refines depth-based blur using classical methods [28] in combination with neural networks, while Multiplane Image (MPI) [32] based models [13, 18, 12] decompose the scene into layered representations for depth-aware rendering. Others adopt adaptive kernels or light field approximations [33, 34]. DeepLens [15] trained a depth estimation network using depth estimation and foreground segmentation data in order to enhance the perception of foreground edges. End-to-end models trained on paired all-in-focus and bokeh datasets [35, 10, 36] show promise but are limited by dataset biases and fixed-parameter simulation. They generally lack the flexibility to control focal planes or simulate custom bokeh intensities.
While image-based bokeh synthesis is well-studied, extending it to videos is challenging. Naive frame-wise methods often introduce temporal flickering and inconsistency due to the lack of temporal modeling. Few works integrate strong pre-trained priors for video bokeh. Our work addresses this with a novel MPI-guided conditioning mechanism, leveraging pre-trained video diffusion models for consistent spatial and temporal bokeh rendering.

3 Method
An overview of our pipeline is provided in Fig. 2 (a).
We propose a one-step framework for video bokeh that leverages pre-trained video priors to achieve both efficiency and enhanced visual quality. The central innovation of our approach is the MPI construction module presented in
Fig. 2 (b), which effectively separates depth-aware regions and facilitates an improved bokeh effect. Additionally, we introduce a progressive training strategy in Fig. 3 designed to enhance temporal consistency, robustness, and detail preservation. Each component of the framework is discussed in detail in the following subsections.

Figure 2: Two key components of Any-to-Bokeh. a) One-step video bokeh model architecture: receives any input video and the disparity relative to the focal plane to perform the bokeh effect. b) MPI spatial block: uses the MPI mask M to prompt MPI attention to focus on areas at different depths from the focal plane, guiding bokeh rendering. Additionally, high-level semantic information is injected via cross-attention to preserve more semantic structures. The user-defined blur strength K is injected through an embedding.

3.1 Preliminary
Circle of Confusion. Recent computational bokeh methods often use the circle of confusion (CoC) to estimate a per-pixel blur radius [12, 13, 14]. The CoC models the size of the projected image of a point outside the focal plane. As shown in [28], the blur radius r for a pixel is defined as

$$r = K \left| \frac{1}{z} - \frac{1}{z_f} \right| = K \, |d - d_f|, \quad (1)$$

where K controls the overall blur strength, z denotes the depth of the pixel, and z_f represents the depth corresponding to the focus plane. We replace the depth z with the disparity (inverse depth) d to simplify the formula. In practical implementations, the CoC value determines the spatial extent over which pixel intensity is diffused. The blur kernel is typically modeled as a disk [37, 38], with its radius proportional to the CoC, enabling depth-aware rendering with soft transitions around object boundaries. Instead of manually defining the blur kernel, we feed the CoC as a condition into our model, allowing it to adaptively generate a better bokeh effect.
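For reference, Eq. (1) reduces to a one-line map from a disparity frame to per-pixel blur radii; the function name is ours.

```python
import numpy as np

def coc_radius(disparity: np.ndarray, focal_disparity: float,
               blur_strength: float) -> np.ndarray:
    """Per-pixel circle-of-confusion radius r = K * |d - d_f| (Eq. 1).
    disparity: H x W map of inverse depth; focal_disparity: d_f;
    blur_strength: the user-controlled K."""
    return blur_strength * np.abs(disparity - focal_disparity)
```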
3.2 One-Step Video Bokeh Diffusion Model
Recent studies [39] have shown that reducing the number of diffusion steps can improve both efficiency and visual quality, especially when the model is directly conditioned on strong structural priors, such as 3D geometry [39], reference images [40, 41], or semantic masks [42]. Inspired by these advances and the demonstrated effectiveness of diffusion models in 3D perception [43, 19], we build upon the pre-trained Stable Video Diffusion (SVD) [19] framework to develop a one-step video bokeh model. Our method adopts a single-step design conditioned on MPI guidance, enabling depth-aware, temporally coherent, and controllable bokeh synthesis in video.
Given a sequence of input frames and corresponding disparity maps, we encode each frame using a VAE encoder and inject MPI-derived attention masks into the diffusion process. The base model follows a U-Net architecture initialized from SVD and is conditioned on three explicit control signals: (1) a normalized disparity difference between the video disparity map and the focal plane disparity V_D, (2) a scalar blur strength parameter K, and (3) a focal plane-adapted MPI mask M. These signals allow flexible control over focus positioning and bokeh intensity across the sequence.

3.3 MPI Spatial Block
As shown in Fig. 2 (b), we first construct a multi-plane image (MPI) representation for each input frame and generate a focal plane-adapted MPI mask. Unlike prior works [13, 18] that discretize the scene using front-to-back layers with fixed depth values, our MPI is defined relative to the focal plane, enabling more flexible and focus-aware layer sampling. The resulting MPI mask is injected into the attention module as a geometry-aware prior to guide bokeh rendering. In parallel, high-level semantics extracted by a pre-trained CLIP image encoder are fused via cross-attention, allowing the model to preserve content structure while generating defocus effects.

Figure 3: Progressive Training Strategy. Stage 1: Train the whole U-Net and adapters. Stage 2: Refine the temporal block with disturbance. Stage 3: Fine-tune the VAE decoder. We desaturated the colors in the same areas.

Focal Plane-Adapted MPI Mask. To determine the disparity intervals for the MPI layers, we design a disparity-interval sampling function that gradually increases the interval width along the disparity axis. This provides finer granularity for regions near the subject's edge and coarser sampling for background areas. As a result, the network can focus more on content near the focus plane, generating better bokeh details. We define a set of N discrete regions using an MPI area function:

$$h_i = \frac{i}{N} \cdot \frac{1}{d_f}, \quad i = 1, 2, \cdots, N-1. \quad (2)$$

Here, $d_f \in (0, 1]$ denotes the normalized disparity of the focal plane. As illustrated in Eq. (1), disparity changes more slowly at greater depths, and defocus effects diminish for distant regions. Based on this theory, we use $1/d_f$ as a sharpness factor, allocating finer sampling near shallow focus planes, where bokeh is more perceptible. These regions are then used to construct a focal plane-adapted MPI mask, identifying pixels whose disparity is close to the focal plane. Specifically, given a video disparity sequence d predicted by a pre-trained depth estimation network [44], we define the mask with the MPI threshold as:

$$\mathcal{M} = \{\, m_i \;\big|\; |d(m_i) - d_f| < h_i \,\}.$$
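A short sketch of this mask construction, assuming disparity maps normalized to (0, 1]; the names are illustrative.

```python
import numpy as np

def mpi_masks(disparity: np.ndarray, d_f: float, n_regions: int) -> np.ndarray:
    """Focal plane-adapted MPI masks (Eq. 2): region i keeps pixels whose
    disparity lies within h_i = (i / N) * (1 / d_f) of the focal plane."""
    thresholds = np.arange(1, n_regions) / n_regions / d_f  # h_1 .. h_{N-1}
    dist = np.abs(disparity - d_f)                          # |d - d_f|
    return np.stack([dist < h for h in thresholds])         # [N-1, H, W]
```

Because the thresholds widen with i, the first mask isolates a narrow in-focus band while later masks progressively admit more of the scene.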
MPI Attention. Inspired by the gated self-attention mechanism [45], we extend it by introducing our focal plane-adapted MPI mask, referred to as MPI Attention, and inject it into the spatial attention blocks of the U-Net. Formally:

$$\hat{Q} = Q + \tanh(\gamma) \cdot \mathrm{TS}\big(\mathrm{Attn}([\,Q + \Phi_M(E(K)),\; \Phi_A(V_A)\,], \bar{\mathcal{M}})\big), \quad (3)$$

where $Q = \{Q_1, \cdots, Q_i\}$ denotes the feature tokens from the current U-Net block, $V_A$ represents the visual tokens from the input video, and $\gamma$ is a learnable gating parameter initialized to zero. TS is a token selection operator applied over the attended output, and $[\cdot, \cdot]$ denotes concatenation.
To introduce controllable bokeh strength, we encode a user-specified control variable K representing the desired blur strength. The scalar K is first transformed via a Fourier embedding [46] E(K), followed by an MLP $\Phi_M(\cdot)$, yielding a modulation term $\Phi_M(E(K))$. This term is added to each query token, enabling the attention module to condition its response on the desired bokeh intensity. In parallel, the visual tokens $V_A$ are projected via an adapter network $\Phi_A(\cdot)$, which consists of a lightweight MLP and maps $V_A$ to the same dimensionality as the query tokens.
To modulate attention for focus proximity, we construct the augmented mask $\bar{\mathcal{M}} = [\mathbf{1}, \mathcal{M}]$, where $\mathbf{1}$ assigns full attention weight to Q. This encourages the model to attend more to focus-relevant regions. Focal plane-adapted MPI masks are injected at multiple U-Net stages: shallow layers receive the near-focal-plane mask to focus on areas with sensitive changes, while deeper layers receive masks for larger depth intervals, allowing for a broader receptive field. Bilinear interpolation is used to align mask resolutions with the U-Net block input.
Semantic Injection. We extract global semantic features from the visual tokens $V_A$ using the CLIP image encoder [47]. The resulting embeddings are injected into the U-Net via cross-attention, where CLIP embeddings serve as queries and visual tokens as keys and values. This enhances the network's ability to preserve semantic structure and produce content-aware bokeh effects.
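A minimal PyTorch sketch of the gated, MPI-masked attention of Eq. (3), assuming the mask excludes out-of-focus visual tokens via key padding and approximating the token-selection operator TS by taking back the query positions; class and argument names are ours, not the paper's.

```python
import torch
import torch.nn as nn

class GatedMPIAttention(nn.Module):
    """Gated attention over [queries + blur embedding, adapted visual
    tokens], restricted to in-focus visual tokens via the MPI mask."""
    def __init__(self, dim: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))  # gamma, gate starts closed

    def forward(self, q, v_a, k_emb, mpi_mask):
        # q: [B, Nq, D] U-Net tokens; v_a: [B, Nv, D] adapted visual tokens
        # k_emb: [B, 1, D] blur-strength modulation; mpi_mask: [B, Nv] bool
        tokens = torch.cat([q + k_emb, v_a], dim=1)
        keep_q = torch.ones(q.shape[:2], dtype=torch.bool, device=q.device)
        pad = ~torch.cat([keep_q, mpi_mask], dim=1)  # bar(M) = [1, M]
        out, _ = self.attn(tokens, tokens, tokens, key_padding_mask=pad)
        # Residual gating: tanh(gamma) is 0 at init, so training starts stably.
        return q + torch.tanh(self.gate) * out[:, : q.shape[1]]
```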
By injecting controlled disparity noise, the model learns to be less dependent on precise depth values and becomes more resilient to real-world variations in depth estimation. Additionally, training with longer temporal sequences allows the model to leverage longer temporal memory, reducing bokeh flickering caused by depth noise.

Stage 3: VAE Decoder Fine-tuning. In the final stage, we fine-tune only the VAE decoder and its adapter to recover high-frequency details that are often lost during encoding. Clean data is used in this phase to promote high-fidelity reconstruction. Inspired by prior work [50], we introduce a skip connection from the VAE encoder to the decoder by applying a simple convolution to the encoder features, enabling the decoder to reuse spatially rich representations. To preserve fine details, we use a combination of image-space L1 loss and texture loss, $\mathcal{L} = \mathcal{L}_1(\hat{V}_B, V_B) + \mathcal{L}_t$, where $\hat{V}_B$ is the prediction and $V_B$ is the ground truth. The texture loss is computed by comparing the gradients of the predicted and ground truth images along the horizontal and vertical directions using the Sobel operator. Specifically, the loss is defined as the squared sum of the gradient differences:
$$\mathcal{L}_t = \sum_{x,y}\Big[\big(\nabla_x \hat{V}_B(x, y) - \nabla_x V_B(x, y)\big)^2 + \big(\nabla_y \hat{V}_B(x, y) - \nabla_y V_B(x, y)\big)^2\Big], \tag{4}$$
which helps the model better capture fine details like edges and textures, improving the bokeh effect.
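A small PyTorch sketch of this objective is given below; the reduction over batch and channels is our choice, as the paper only specifies the per-pixel squared gradient differences.

```python
import torch
import torch.nn.functional as F

def texture_loss(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Sketch of the Sobel-gradient texture loss L_t (Eq. 4) for (B, C, H, W) images."""
    sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                           device=pred.device, dtype=pred.dtype)
    sobel_y = sobel_x.t()
    c = pred.shape[1]
    kx = sobel_x.expand(c, 1, 3, 3).contiguous()   # depthwise kernels, one per channel
    ky = sobel_y.expand(c, 1, 3, 3).contiguous()

    def grads(img):
        return (F.conv2d(img, kx, padding=1, groups=c),
                F.conv2d(img, ky, padding=1, groups=c))

    px, py = grads(pred)
    gx, gy = grads(gt)
    return ((px - gx) ** 2 + (py - gy) ** 2).sum(dim=(-2, -1)).mean()

def total_loss(pred, gt):
    # L = L1(pred, gt) + L_t, as in the paper's Stage 3 objective
    return F.l1_loss(pred, gt) + texture_loss(pred, gt)
```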
3.5 Weighted Overlap Inference Strategy

To enhance the model's capability for inference on long videos and to ensure seamless temporal consistency across video segments, we propose a weighted overlapping inference strategy (WOIS). This method effectively tackles the critical issue of boundary inconsistencies commonly introduced by naive sequence splitting, which typically degrades visual quality. Our strategy involves dividing the input video into $P$ overlapping segments denoted as $\hat{V}^0_B, \hat{V}^1_B, \ldots, \hat{V}^P_B$, each consisting of $2L$ frames, with adjacent segments overlapping by exactly $L$ frames. For the $j$-th frame in the $i$-th overlapping segment, denoted by $\hat{V}^i_B[j]$, we obtain the fused result $\tilde{V}^i_B[j]$ through a weighted combination:
$$\tilde{V}^i_B[j] = \gamma_j \hat{V}^i_B[j] + (1 - \gamma_j)\hat{V}^{i+1}_B[j], \quad i \in \{1, 2, \ldots, P\}, \tag{5}$$
where the weighting factor $\gamma_j$ employs a cosine-based function, $\gamma_j = \frac{1}{2}\left(1 + \cos\frac{\pi j}{L}\right)$, to ensure smooth and visually coherent transitions between consecutive video segments.
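The fusion of Eq. (5) for one pair of overlapping segments could look like the following sketch (the tensor layout is assumed):

```python
import math
import torch

def wois_blend(seg_a: torch.Tensor, seg_b: torch.Tensor) -> torch.Tensor:
    """Sketch of the weighted overlap fusion (Eq. 5) for two consecutive segments.

    seg_a : last L frames of segment i,    shape (L, C, H, W)
    seg_b : first L frames of segment i+1, shape (L, C, H, W), same timestamps
    """
    L = seg_a.shape[0]
    j = torch.arange(L, dtype=seg_a.dtype, device=seg_a.device)
    gamma = 0.5 * (1.0 + torch.cos(math.pi * j / L))   # cosine weights: ~1 at the start, ~0 at the end
    return gamma.view(L, 1, 1, 1) * seg_a + (1.0 - gamma.view(L, 1, 1, 1)) * seg_b
```

The cosine schedule hands weight smoothly from segment $i$ to segment $i+1$ across the $L$ shared frames, which avoids visible seams at segment boundaries.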
4 Experiments

4.1 Implementation Details

Dataset. Currently, no paired datasets of all-in-focus and bokeh videos exist. Existing computational bokeh datasets [35, 10, 36] primarily consist of image pairs but lack temporal consistency, and previous video-based works [16] often omit foreground motion. To address this, we build upon prior work [37] and adopt a synthetic approach to generate paired all-in-focus and bokeh video sequences. For accurate foreground extraction, we use objects from the video matting dataset [51], isolating them using the alpha channel for precise segmentation. To augment the dataset, we also incorporate the image matting dataset [52] and collect 1,300 background images from the intern dataset and background dataset [51]. In each video, we randomly select background and foreground clips, simulating real-world camera adjustments such as focal plane and aperture changes. The foreground objects are moved along random 3D trajectories, each containing 25 frames. All videos in the dataset have a resolution of 1024x576 pixels. Using a ray-tracing-based method [13], we generate accurate bokeh effects, ensuring temporal coherence across frames. To evaluate model performance, we synthesize a test set of 200 videos with varying bokeh strength and focus planes.

Table 1: Quantitative comparison of Any-to-Bokeh. The best metric scores in each column are marked in bold for clarity; "(down)" or "(up)" indicates whether lower or higher values are better.

Method        | FD (down) | RM (down) | VFID-I (down) | FVD (down) | SSIM (up) | PSNR (up) | Time (down)
DeepLens [15] | 1.162     | 0.030     | 16.042        | 125.338    | 0.819     | 24.574    | 0.226
BokehMe [14]  | 0.536     | 0.013     | 8.633         | 39.102     | 0.936     | 27.992    | 0.103
Dr.Bokeh [12] | 0.522     | 0.011     | 6.097         | 32.710     | 0.950     | 31.273    | 2.729
MPIB [13]     | 0.481     | 0.011     | 5.444         | 35.766     | 0.950     | 31.390    | 0.521
Any-to-Bokeh  | 0.431     | 0.007     | 1.479         | 9.005      | 0.974     | 38.899    | 0.363

[Figure 4: Qualitative results on the synthetic test dataset. The focal plane is marked by a yellow cross on the disparity map. To highlight the differences, we zoom in on the red and green regions.]

Training and Inference. In this work, we use SVD [19] as our base model. During training, the process is divided into three stages. Stage 1: We train using 4-frame video sequences with the Adam optimizer at a learning rate of 1e-5. Stage 2: We train with 8-frame video sequences, using a learning rate of 5e-6 and a depth perturbation probability of 0.5. Stage 3: We fine-tune the VAE decoder using the same learning rate as in Stage 2. For all stages, the video resolution is set to 1024x576 with a batch size of 1, and training is performed across 4 Nvidia H800 GPUs. During inference, we use the weighted overlap inference strategy, where long videos are divided into clips of 8 frames, with each clip having a 4-frame overlap.

Metrics. For evaluation, we report the following metrics: PSNR and SSIM [53] for image fidelity. For video quality, we use VFID [54] with features from I3D [55] (denoted VFID-I) and FVD [56]. To assess temporal consistency, we use the relation metric [16] (denoted RM) to compute the pixel-wise difference between adjacent frames, and the flow difference between predicted and ground truth frames, both estimated with RAFT [57] (referred to as FD).
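For reference, the fidelity metrics can be sketched directly from their definitions; the relation-metric function below is only our reading of the textual description, since the official RM and FD rely on RAFT flow and the protocol of [16].

```python
import torch

def psnr(pred: torch.Tensor, gt: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    """PSNR for videos of shape (T, C, H, W) with values in [0, max_val]."""
    mse = torch.mean((pred - gt) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)

def adjacent_frame_consistency(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Temporal-consistency proxy: compare adjacent-frame differences of the
    prediction against those of the ground truth (our reading of RM)."""
    return torch.mean(torch.abs((pred[1:] - pred[:-1]) - (gt[1:] - gt[:-1])))
```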
4.2 Results on Test Dataset

To verify the performance of our proposed method, we compare it with four existing computational bokeh methods: DeepLens [15], BokehMe [14], Dr.Bokeh [12], and MPIB [13]. Specifically, to compare with traditional MPI-based bokeh methods, MPIB and Dr.Bokeh use the conventional approach, which differs from the one introduced in Sec. 3.3. To our knowledge, there is only one related work [16] on video bokeh, and since it is not open-source, we cannot compare against it.

Quantitative Results. As shown in Tab. 1, our method consistently outperforms all other approaches across all metrics. Specifically, Any-to-Bokeh achieves the lowest flow difference (FD) and the best relation metric (RM), demonstrating superior temporal consistency. This improvement underscores the effectiveness of the SVD pre-trained prior in reducing flickering and inconsistent blur, resulting in a temporally consistent bokeh effect. Additionally, our method significantly outperforms the baselines in video quality, achieving the lowest VFID-I and FVD, highlighting its ability to generate high-quality, temporally coherent bokeh effects. In terms of image fidelity, Any-to-Bokeh also achieves the highest SSIM and PSNR, further confirming its superior performance in bokeh effects. Furthermore, compared with the traditional MPI methods, we measured the time required for inference on a single frame and found that our approach has an efficiency advantage.

[Figure 5: Qualitative results on a real video. The focal plane is located on the football. For each method, we only present the middle frame. Please zoom in to view the image details.]

Qualitative Results. To further highlight the visual advantages of our approach, we present examples in Fig. 4. Despite the guidance of the precise disparity map in the synthetic test set, the baseline methods exhibit varying degrees of bleeding (highlighted in red boxes), and the girl's hair shows unsatisfactory deformation (highlighted in green boxes). Specifically, MPIB and Dr.Bokeh, both based on traditional MPI methods, produce discordant edges (highlighted in the red box). This further demonstrates the effectiveness of our approach. Additionally, to verify the effectiveness of our model in real-life scenarios, we tested our approach on a real video dataset [58]. As shown in Fig. 5, our method preserves better details and generates more natural bokeh effects (highlighted in the red box). Furthermore, the effect in the green box shows that our method maintains smoother object edges and reduces jaggedness at the edges during bokeh rendering. More results can be found in the appendix.

Table 2: Results on human preference.

Baseline             | Preference
Ours vs. DeepLens    | 90.1% / 9.9%
Ours vs. BokehMe     | 77.8% / 22.2%
Ours vs. MPIB        | 67.5% / 32.5%
Ours vs. Dr.Bokeh    | 60.3% / 39.7%

Bokeh Rendering Results on Real Video. Generating high-quality, temporally consistent bokeh on real-world videos is more challenging due to complex motion and camera shake. We test on the DAVIS [58] dataset, using Video Depth Anything [44] for disparity prediction. By leveraging foreground masks, we obtain the focal plane. As shown in Fig. 6, our method generates bokeh effects that follow optical principles and maintain strong temporal consistency. We also conducted a user study on real-world videos to better evaluate different methods from a subjective perspective. We randomly selected 20 videos from the DAVIS dataset and rendered them using various methods. Each participant viewed two videos at a time: one generated by our method and one from a randomly selected baseline (DeepLens [15], MPIB [13], BokehMe [14], or Dr.Bokeh [12]). The videos were presented in random order. Participants were asked to choose the method that produced the most aesthetically pleasing bokeh effect based on their personal preference. If they found it difficult to decide, they were allowed to skip making a selection. The user study involved 24 participants and provided 480 ratings across these video sets.
The results in Tab. 2 indicate that our method was preferred over the others, demonstrating a higher human preference for the bokeh effects generated by our approach.

4.3 Ablation Studies

We evaluate the effectiveness of each component in the Any-to-Bokeh framework in Tab. 3 and assess the VAE's contribution to video detail quality in Tab. 4.

Table 3: Ablation study of Any-to-Bokeh. The last two lines compare the contribution of TR to the robustness against inaccurate disparity. The preceding lines show the enhancements of each module. "MPI": MPI spatial block. "PR": SVD pre-trained prior. "WOIS": weighted overlap inference strategy. "TR": temporal block refinement.

MPI | PR | WOIS | TR | FD (down) | RM (down) | VFID-I (down) | FVD (down) | SSIM (up) | PSNR (up)
 x  | x  |  x   | x  | 0.517     | 0.013     | 3.865         | 18.922     | 0.907     | 32.250
 x  | x  |  x   |    | 0.540     | 0.013     | 4.209         | 20.743     | 0.905     | 32.035
 x  | x  |      |    | 0.551     | 0.013     | 4.521         | 21.941     | 0.905     | 31.936
 x  |    |      |    | 0.568     | 0.014     | 4.714         | 22.556     | 0.901     | 31.575
    | x  |      |    | 0.586     | 0.014     | 4.537         | 27.743     | 0.893     | 30.988
 x  | x  |  x   | x  | 0.531     | 0.013     | 3.880         | 19.648     | 0.906     | 32.182
 x  | x  |  x   |    | 0.566     | 0.014     | 4.442         | 22.452     | 0.904     | 31.890

MPI Block Ablation. The first five rows show the ablation results of the MPI block. First, we remove the MPI module and use the original attention block, adding the blur strength ($K$) embedding directly to the original SVD embedding. From the VFID and FVD metrics, we observe that our model benefits significantly from the MPI blocks, leading to better video quality. Additionally, the MPI block greatly enhances temporal consistency and single-frame quality, which are crucial for the bokeh rendering task. We also investigate the impact of the SVD pre-trained prior (PR) on the model. As mentioned in Sec. 3.2, SVD provides strong 3D prior knowledge, and its incorporation leads to improvements across all metrics. This is especially evident in the FVD score, where the contribution to video quality is substantial.

[Figure 6: Visualization of generated bokeh effects on real videos. Please zoom in to see the details.]

Effectiveness of Weighted Overlap Inference Strategy (WOIS). By incorporating WOIS, our model achieves improved temporal consistency, with the flow difference (FD) decreasing from 0.551 to 0.540. Additionally, improved FVD and VFID-I scores indicate enhanced video quality. Furthermore, the PSNR also shows an increase, reflecting improved single-frame fidelity.

Table 4: Ablation study on VAE fine-tuning (VF). VAE fine-tuning effectively enhances video quality.

VF | FVD (down) | SSIM (up) | PSNR (up)
 x | 9.005      | 0.974     | 38.899
   | 18.922     | 0.907     | 32.250

Effectiveness of Progressive Training Strategy. We begin the ablation study by evaluating the full model. In the second row of Tab. 3, we remove the temporal block refinement (TR) from stage 2, resulting in a decrease in temporal consistency (FD: 0.517 vs. 0.540). This highlights the importance of temporal block refinement in maintaining temporal coherence across video frames. As shown in Tab. 4, we alleviate the loss of high-frequency information by fine-tuning the VAE, which effectively improves both single-frame consistency and overall video quality.
Ablation Study on Robustness. To test the contribution of TR to robustness, we introduce perturbations to the disparity in the test dataset using elastic transform [48], Gaussian blur, and morphological transformations. As shown in the last two rows of Tab. 3, TR leads to improvements across all metrics, with particularly noticeable gains in temporal consistency (FD and RM) and video quality (VFID-I and FVD). These results demonstrate that training the temporal blocks with noisy data during the TR stage effectively enhances the model's robustness.

5 Conclusions

In this work, we propose a novel one-step video bokeh framework that enables controllable and temporally consistent bokeh effects from arbitrary video inputs. By leveraging the 3D-aware priors of pre-trained video diffusion models and incorporating an MPI-guided conditioning mechanism, our method achieves higher-quality and more generalized bokeh effects. Additionally, we introduce a progressive training strategy that enhances robustness and detail preservation, significantly improving the quality of bokeh effects. We hope our findings inspire further exploration of optical phenomena in editing models, driving advancements in their application to content creation and visual effects. By better understanding these phenomena, we aim to enhance the realism and flexibility of future editing models, enabling more creative possibilities for the industry.

References

[1] Yao Teng, Enze Xie, Yue Wu, Haoyu Han, Zhenguo Li, and Xihui Liu. Drag-a-video: Non-rigid video editing with point-based interaction. arXiv preprint arXiv:2312.02936, 2023.
[2] Yujun Shi, Chuhui Xue, Jun Hao Liew, Jiachun Pan, Hanshu Yan, Wenqing Zhang, Vincent YF Tan, and Song Bai. Dragdiffusion: Harnessing diffusion models for interactive point-based image editing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8839-8849, 2024.
[3] Weijia Wu, Zhuang Li, Yuchao Gu, Rui Zhao, Yefei He, David Junhao Zhang, Mike Zheng Shou, Yan Li, Tingting Gao, and Di Zhang. Draganything: Motion control for anything using entity representation. In European Conference on Computer Vision, pages 331-348. Springer, 2024.
[4] Shengming Yin, Chenfei Wu, Jian Liang, Jie Shi, Houqiang Li, Gong Ming, and Nan Duan. Dragnuwa: Fine-grained control in video generation by integrating text, image, and trajectory. arXiv preprint arXiv:2308.08089, 2023.
[5] Armando Fortes, Tianyi Wei, Shangchen Zhou, and Xingang Pan. Bokeh diffusion: Defocus blur control in text-to-image diffusion models. arXiv preprint arXiv:2503.08434, 2025.
[6] Ang Wang, Baole Ai, Bin Wen, Chaojie Mao, Chen-Wei Xie, Di Chen, Feiwu Yu, Haiming Zhao, Jianxiao Yang, Jianyuan Zeng, Jiayu Wang, Jingfeng Zhang, Jingren Zhou, Jinkai Wang, Jixuan Chen, Kai Zhu, Kang Zhao, Keyu Yan, Lianghua Huang, Mengyang Feng, Ningyi Zhang, Pandeng Li, Pingyu Wu, Ruihang Chu, Ruili Feng, Shiwei Zhang, Siyang Sun, Tao Fang, Tianxing Wang, Tianyi Gui, Tingyu Weng, Tong Shen, Wei Lin, Wei Wang, Wei Wang, Wenmeng Zhou, Wente Wang, Wenting Shen, Wenyuan Yu, Xianzhong Shi, Xiaoming Huang, Xin Xu, Yan Kou, Yangyu Lv, Yifei Li, Yijing Liu, Yiming Wang, Yingya Zhang, Yitong Huang, Yong Li, You Wu, Yu Liu, Yulin Pan, Yun Zheng, Yuntao Hong, Yupeng Shi, Yutong Feng, Zeyinzi Jiang, Zhen Han, Zhi-Fan Wu, and Ziyu Liu. Wan: Open and advanced large-scale video generative models. arXiv preprint arXiv:2503.20314, 2025.
[7] Zhuoyi Yang, Jiayan Teng, Wendi Zheng, Ming Ding, Shiyu Huang, Jiazheng Xu, Yuanming Yang, Wenyi Hong, Xiaohan Zhang, Guanyu Feng, et al. Cogvideox: Text-to-video diffusion models with
an expert transformer. arXiv preprint arXiv:2408.06072 , 2024. [8]Pratul P Srinivasan, Rahul Garg, Neal Wadhwa, Ren Ng, and Jonathan T Barron. Aperture supervision for monocular depth estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition , pages 6393–6401, 2018. [9]Lei Xiao, Anton Kaplanyan, Alexander Fix, Matt Chapman, and Douglas Lanman. Deepfocus: Learned image synthesis for computational display. In ACM SIGGRAPH 2018 Talks , pages 1–2. 2018. [10] Tim Seizinger, Florin-Alexandru Vasluianu, Marcos V Conde, and Radu Timofte. Bokehlicious: Photorealistic bokeh rendering with controllable apertures. arXiv preprint arXiv:2503.16067 , 2025. [11] Tim Seizinger, Marcos V Conde, Manuel Kolmet, Tom E Bishop, and Radu Timofte. Effi- cient multi-lens bokeh effect rendering and transformation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 1633–1642, 2023. 10 [12] Yichen Sheng, Zixun Yu, Lu Ling, Zhiwen Cao, Xuaner Zhang, Xin Lu, Ke Xian, Haiting Lin, and Bedrich Benes. Dr. bokeh: differentiable occlusion-aware bokeh rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 4515–4525, 2024. [13] Juewen Peng, Jianming Zhang, Xianrui Luo, Hao Lu, Ke Xian, and Zhiguo Cao. Mpib: An mpi-based bokeh rendering framework for realistic partial occlusion effects. In European Conference on Computer Vision , pages 590–607. Springer, 2022. [14] Juewen Peng, Zhiguo Cao, Xianrui Luo, Hao Lu, Ke Xian, and Jianming Zhang. Bokehme: When neural rendering meets classical rendering. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages 16283–16292, 2022. [15] Wang Lijun, Shen Xiaohui, Zhang Jianming, Wang Oliver, Lin Zhe, Hsieh Chih-Yao, Kong Sarah, and Lu Huchuan. Deeplens: Shallow depth of field from a single image. ACM Trans. Graph. (Proc. SIGGRAPH Asia) , 37(6):6:1–6:11, 2018. [16] Yawen Luo, Min Shi, Liao Shen, Yachuan Huang, Zixuan Ye, Juewen Peng, and Zhiguo Cao. Video bokeh rendering: Make casual videography cinematic. In ACM Multimedia 2024 , 2024. [17] Xuaner Zhang, Kevin Matzen, Vivien Nguyen, Dillon Yao, You Zhang, and Ren Ng. Synthetic defocus and look-ahead autofocus for casual videography. ACM Transactions on Graphics (TOG) , 38(4):1–16, 2019. [18] Benjamin Busam, Matthieu Hog, Steven McDonagh, and Gregory Slabaugh. Sterefo: Efficient image refocusing with stereo vision. In Proceedings of the IEEE/CVF international conference on computer vision workshops , pages 0–0, 2019. [19] Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Do- minik Lorenz, Yam Levi, Zion English, Vikram V oleti, Adam Letts, et al. Stable video diffusion: Scaling latent video diffusion models to large datasets. arXiv preprint arXiv:2311.15127 , 2023. [20] Rui Zhao, Yuchao Gu, Jay Zhangjie Wu, David Junhao Zhang, Jia-Wei Liu, Weijia Wu, Jussi Keppo, and Mike Zheng Shou. Motiondirector: Motion customization of text-to-video diffusion models. In European Conference on Computer Vision , pages 273–290. Springer, 2024. [21] Yuwei Guo, Ceyuan Yang, Anyi Rao, Zhengyang Liang, Yaohui Wang, Yu Qiao, Maneesh Agrawala, Dahua Lin, and Bo Dai. Animatediff: Animate your personalized text-to-image diffusion models without specific tuning. International Conference on Learning Representations , 2024. [22] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank
adaptation of large language models. In International Conference on Learning Representations , 2022. [23] Pengyang Ling, Jiazi Bu, Pan Zhang, Xiaoyi Dong, Yuhang Zang, Tong Wu, Huaian Chen, Jiaqi Wang, and Yi Jin. Motionclone: Training-free motion cloning for controllable video generation. InThe Thirteenth International Conference on Learning Representations , 2025. [24] Yu Yuan, Xijun Wang, Yichen Sheng, Prateek Chennuri, Xingguang Zhang, and Stanley Chan. Generative photography: Scene-consistent camera control for realistic text-to-image synthesis. CVPR , 2025. [25] Matt Pharr, Wenzel Jakob, and Greg Humphreys. Physically based rendering: From theory to implementation . MIT Press, 2023. [26] Michael Potmesil and Indranil Chakravarty. A lens and aperture camera model for synthetic image generation. ACM SIGGRAPH Computer Graphics , 15(3):297–305, 1981. [27] Tomas Akenine-Moller, Eric Haines, and Naty Hoffman. Real-time rendering . AK Peters/crc Press, 2019. 11 [28] Neal Wadhwa, Rahul Garg, David E Jacobs, Bryan E Feldman, Nori Kanazawa, Robert Carroll, Yair Movshovitz-Attias, Jonathan T Barron, Yael Pritch, and Marc Levoy. Synthetic depth-of- field with a single-camera mobile phone. ACM Transactions on Graphics (ToG) , 37(4):1–13, 2018. [29] Yang Yang, Haiting Lin, Zhan Yu, Sylvain Paris, and Jingyi Yu. Virtual dslr: High quality dynamic depth-of-field synthesis on mobile platforms. Electronic Imaging , 28:1–9, 2016. [30] Xiaoyong Shen, Aaron Hertzmann, Jiaya Jia, Sylvain Paris, Brian Price, Eli Shechtman, and Ian Sachs. Automatic portrait segmentation for image stylization. In Computer Graphics Forum , volume 35, pages 93–102. Wiley Online Library, 2016. [31] Xiaoyong Shen, Xin Tao, Hongyun Gao, Chao Zhou, and Jiaya Jia. Deep automatic portrait matting. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part I 14 , pages 92–107. Springer, 2016. [32] Tinghui Zhou, Richard Tucker, John Flynn, Graham Fyffe, and Noah Snavely. Stereo magnifi- cation: Learning view synthesis using multiplane images. arXiv preprint arXiv:1805.09817 , 2018. [33] Pratul P. Srinivasan, Rahul Garg, Neal Wadhwa, Ren Ng, and Jonathan T. Barron. Aperture supervision for monocular depth estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) , June 2018. [34] Takuhiro Kaneko. Unsupervised learning of depth and depth-of-field effect from natural images with aperture rendering generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 15679–15688, 2021. [35] Andrey Ignatov, Jagruti Patel, and Radu Timofte. Rendering natural camera bokeh effect with deep learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops , pages 418–419, 2020. [36] Saikat Dutta, Sourya Dipta Das, Nisarg A Shah, and Anil Kumar Tiwari. Stacked deep multi- scale hierarchical network for fast bokeh effect rendering from a single image. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 2398–2407, 2021. [37] Juewen Peng, Zhiguo Cao, Xianrui Luo, Ke Xian, Wenfeng Tang, Jianming Zhang, and Guosheng Lin. Bokehme++: Harmonious fusion of classical and neural rendering for versatile bokeh creation. IEEE Transactions on Pattern Analysis and Machine Intelligence , 47(3):1530– 1547, 2025. [38] Xianrui Luo, Juewen Peng, Ke Xian, Zijin Wu, and Zhiguo Cao. Bokeh rendering from defocus estimation. 
In European Conference on Computer Vision
, pages 245–261. Springer, 2020. [39] Guangkai Xu, Yongtao Ge, Mingyu Liu, Chengxiang Fan, Kangyang Xie, Zhiyue Zhao, Hao Chen, and Chunhua Shen. What matters when repurposing diffusion models for general dense perception tasks? arXiv preprint arXiv:2403.06090 , 2024. [40] Rongyuan Wu, Lingchen Sun, Zhiyuan Ma, and Lei Zhang. One-step effective diffusion network for real-world image super-resolution. In The Thirty-eighth Annual Conference on Neural Information Processing Systems , 2024. [41] Shiyue Yan, Xiaoshi Qiu, Qingmin Liao, Jing-Hao Xue, and Shaojun Liu. Reschedule diffusion- based bokeh rendering. In IJCAI International Joint Conference on Artificial Intelligence , pages 1543–1551. IJCAI, 2024. [42] Muzhi Zhu, Yang Liu, Zekai Luo, Chenchen Jing, Hao Chen, Guangkai Xu, Xinlong Wang, and Chunhua Shen. Unleashing the potential of the diffusion model in few-shot semantic segmentation. arXiv preprint arXiv:2410.02369 , 2024. [43] Vikram V oleti, Chun-Han Yao, Mark Boss, Adam Letts, David Pankratz, Dmitry Tochilkin, Christian Laforte, Robin Rombach, and Varun Jampani. Sv3d: Novel multi-view synthesis and 3d generation from a single image using latent video diffusion. In European Conference on Computer Vision , pages 439–457. Springer, 2024. 12 [44] Sili Chen, Hengkai Guo, Shengnan Zhu, Feihu Zhang, Zilong Huang, Jiashi Feng, and Bingyi Kang. Video depth anything: Consistent depth estimation for super-long videos. arXiv preprint arXiv:2501.12375 , 2025. [45] Yuheng Li, Haotian Liu, Qingyang Wu, Fangzhou Mu, Jianwei Yang, Jianfeng Gao, Chunyuan Li, and Yong Jae Lee. Gligen: Open-set grounded text-to-image generation. CVPR , 2023. [46] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoor- thi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM , 65(1):99–106, 2021. [47] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning , pages 8748–8763. PmLR, 2021. [48] Alexander Buslaev, Vladimir I. Iglovikov, Eugene Khvedchenya, Alex Parinov, Mikhail Druzhinin, and Alexandr A. Kalinin. Albumentations: Fast and flexible image augmenta- tions. Information , 11(2), 2020. [49] Ken Perlin. An image synthesizer. ACM Siggraph Computer Graphics , 19(3):287–296, 1985. [50] Gaurav Parmar, Taesung Park, Srinivasa Narasimhan, and Jun-Yan Zhu. One-step image translation with text-to-image models. arXiv preprint arXiv:2403.12036 , 2024. [51] Shanchuan Lin, Andrey Ryabtsev, Soumyadip Sengupta, Brian L Curless, Steven M Seitz, and Ira Kemelmacher-Shlizerman. Real-time high-resolution background matting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 8762–8771, 2021. [52] Jizhizi Li, Jing Zhang, Stephen J Maybank, and Dacheng Tao. Bridging composite and real: towards end-to-end deep image matting. International Journal of Computer Vision , 130(2):246– 266, 2022. [53] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing , 13(4):600– 612, 2004. [54] Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Guilin Liu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. Video-to-video synthesis. 
In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pages 1152-1164, 2018. [55] Joao Carreira and Andrew Zisserman.
Quo vadis, action recognition? A new model and the kinetics dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6299-6308, 2017.
[56] Songwei Ge, Aniruddha Mahapatra, Gaurav Parmar, Jun-Yan Zhu, and Jia-Bin Huang. On the content bias in frechet video distance. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024.
[57] Zachary Teed and Jia Deng. Raft: Recurrent all-pairs field transforms for optical flow. In Computer Vision - ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part II 16, pages 402-419. Springer, 2020.
[58] Federico Perazzi, Jordi Pont-Tuset, Brian McWilliams, Luc Van Gool, Markus Gross, and Alexander Sorkine-Hornung. A benchmark dataset and evaluation methodology for video object segmentation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

[Figure 7: Example of synthetic datasets (All in Focus, Disparity Map, Bokeh Results); we randomly define the focal plane, the position of each foreground, and the blur intensity.]

[Figure 8: The user study interface ("Choose your favorite bokeh effect": Video 1 / Video 2 / Same / Confirm), where participants were asked to select their preferred videos.]

A The Details of Dataset

As mentioned in Sec. 4.1, we use objects from the video matting dataset [51] and image matting dataset [52] and collect 1,300 background images from the intern dataset and background dataset [51]. Following MPIB [13], we use a ray-tracing-based method to generate accurate bokeh effects. Specifically, we assume that the disparities of all images are planar, with their size and position randomly determined. The disparity map $d$ is then set as a plane equation of the pixel coordinates $(x, y)$:
$$d = \frac{1 - ax - by}{c}, \tag{6}$$
where $a$, $b$, and $c$ are parameters that define the spatial depth relationship between pixels. For each pixel, we sample multiple rays passing through the lens, find the intersection of each ray with the scene, and project this intersection onto the sensor plane to obtain the final render results. As shown in Fig. 7, for each video, we randomly select background and foreground clips, as well as the focal plane and aperture. The foreground objects are moved along random 3D trajectories, with movement along six directions: forward and backward, left and right, up and down. Each video contains 25 frames, and all videos in the dataset have a resolution of 1024x576 pixels. To evaluate model performance, we use the same approach to synthesize a test set of 200 videos.
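For illustration, the planar disparity map of Eq. (6) can be generated as below; whether the pixel coordinates are normalized is our assumption, since the paper does not say.

```python
import numpy as np

def planar_disparity(h: int, w: int, a: float, b: float, c: float) -> np.ndarray:
    """Sketch of the planar disparity map d = (1 - a*x - b*y) / c (Eq. 6),
    with pixel coordinates assumed normalized to [0, 1]."""
    y, x = np.meshgrid(np.linspace(0, 1, h), np.linspace(0, 1, w), indexing="ij")
    return (1.0 - a * x - b * y) / c
```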
B User Study Details

Given the subjective nature of perceiving video bokeh rendering results, and the absence of ground truth (GT) data for video bokeh, we conducted a user study on real-world videos to evaluate different methods from a subjective perspective. We randomly selected 20 videos from the DAVIS dataset, each with a resolution of 1024x576, featuring subjects such as people, animals, and various other objects. Since ground truth disparity maps were unavailable, we generated disparity maps using a depth prediction model [44]. Different methods were used to render the videos with the same control parameters. During testing, the videos were presented in random order to avoid bias. As shown in the interface (Fig. 8), participants were asked to select the method that produced the most consistent and aesthetically pleasing bokeh effect.

[Figure 9: Any-to-Bokeh struggles to recover missing structures in the disparity map (Input, Disparity, Results).]

C Limitations and Future Work

Although our method can handle small differences at the disparity boundaries through temporal consistency, it struggles to recover missing structures in the disparity map due to the significant gap between the training data and real-world scenarios. As shown in Fig. 9, while our method is able to recover part of the missing region in the second frame, the model misidentifies it as a defocus area due to the missing structure in the disparity map. Future work will focus on integrating depth recovery techniques and exploring longer time-series memories to minimize such issues and enhance robustness against severe depth errors.

D More Comparison Results

To further compare the performance of our model with baselines, we selected two representative examples from the DAVIS dataset to demonstrate the superiority of our approach. As shown in Fig. 10, Fig. 11 and Fig. 12, our model produces a more natural bokeh transition at the edges. In contrast, DeepLens struggles with focusing accurately on the correct focal plane due to the limitations of its built-in depth model. Additionally, MPIB, Dr.Bokeh, and BokehMe exhibit varying degrees of edge color bleeding and overly sharp transitions, particularly along hair edges in the images, which leads to an unnatural visual effect. These artifacts underscore the advantages of our method in generating more realistic and visually coherent bokeh effects.

[Figure 10: Comparison results with baselines on the real dataset. The area inside the red border is zoomed in to highlight more details. Please zoom in to view them.]

[Figure 11: Comparison results with baselines on the real dataset. The area inside the red border is zoomed in to highlight more details. Please zoom in to view them.]

[Figure 12: Comparison results with baselines on the real dataset. The area inside the red border is zoomed in to highlight more details. Please zoom in to view them.]
arXiv:2505.21594v1 [cs.RO] 27 May 2025

Fast and Cost-effective Speculative Edge-Cloud Decoding with Early Exits

Yeshwanth Venkatesha (yeshwanth.venkatesha@yale.edu), Department of Electrical Engineering, Yale University
Souvik Kundu (souvikk.kundu@intel.com), Intel Labs
Priyadarshini Panda (priya.panda@yale.edu), Department of Electrical Engineering, Yale University

Abstract

Large Language Models (LLMs) enable various applications on edge devices such as smartphones, wearables, and embodied robots. However, their deployment often depends on expensive cloud-based APIs, creating high operational costs, which limit access for smaller organizations and raise sustainability concerns. Certain LLMs can be deployed on-device, offering a cost-effective solution with reduced latency and improved privacy. Yet, limited computing resources constrain the size and accuracy of models that can be deployed, necessitating a collaborative design between edge and cloud. We propose a fast and cost-effective speculative edge-cloud decoding framework with a large target model on the server and a small draft model on the device. By introducing early exits in the target model, tokens are generated mid-verification, allowing the client to preemptively draft subsequent tokens before final verification, thus utilizing idle time and enhancing parallelism between edge and cloud. Using an NVIDIA Jetson Nano (client) and an A100 GPU (server) with Vicuna-68M (draft) and Llama2-7B (target) models, our method achieves up to a 35% reduction in latency compared to cloud-based autoregressive decoding, with an additional 11% improvement from preemptive drafting. To demonstrate real-world applicability, we deploy our method on the Unitree Go2 quadruped robot using Vision-Language Model (VLM) based control, achieving a 21% speedup over traditional cloud-based autoregressive decoding. These results demonstrate the potential of our framework for real-time LLM and VLM applications on resource-constrained edge devices.

1 Introduction

Large Language Models (LLMs) have become pivotal in advancing artificial intelligence, transforming natural language processing (NLP), and enabling a wide range of applications such as chatbots, virtual assistants, robotics, translation, coding, and content generation Zeng et al. (2023); Huang et al. (2024); Sun et al. (2024); Zhang et al. (2023). Their importance lies in their ability to understand and generate human-like text, making interactions between humans and machines seamless and suggesting potential emergent capabilities Wei et al. (2022).

[Figure 1: Illustration of (a) traditional cloud-based autoregressive decoding versus (b) cloud-based speculative decoding, (c) vanilla speculative edge-cloud decoding, and (d) the proposed preemptive drafting mechanism.]

Recent advances include large-scale models like OpenAI's GPT Radford et al. (2019); Brown (2020); Achiam et al. (2023), Meta's LLaMA Touvron et al. (2023); Dubey et al. (2024), and Google's Gemma Team et al.
(2024), driving breakthroughs in applications ranging from personalized assistants to complex problem-solving across various domains. However, running large models is costly, requiring extensive computational resources for training, often spanning thousands of GPU hours, while inference at scale demands specialized hardware to maintain responsiveness Samsi et al. (2023). This creates significant barriers for
smaller organizations and researchers who rely on expensive cloud-based APIs; for example, GPT-4.1 text generation costs $2.00/1M input tokens and $8.00/1M output tokens at the time of writing this paper.[1] A potential solution is deploying LLMs on edge devices, which offers benefits like low latency, faster customization, and enhanced privacy in addition to cost-effectiveness. This is especially critical for real-time robotics applications, where decisions must be made on the fly and server costs can add up. For example, robotic platforms such as the Unitree Go2 quadruped are being equipped with language interfaces for real-world tasks like navigation, object interaction, and instruction following Cheng et al. (2024). However, such robots typically run on compute-constrained devices, making it infeasible to host large LLMs locally. For instance, the Unitree Go2 is powered by a Jetson Orin board with 16GB of unified memory, which is insufficient to run models over 10B parameters that require over 40GB of memory. Efficient decoding strategies like speculative decoding provide cost-effective solutions to bridge this gap.

Speculative decoding Leviathan et al. (2023) uses a smaller model to generate tokens quickly, which are then verified by a larger model in parallel, significantly speeding up LLM inference. Despite its success on standalone machines, the application of speculative decoding on edge devices remains underexplored. In this work, we propose a novel speculative edge-cloud decoding method to enable fast and cost-effective LLM inference at the edge. As shown in Fig. 1(a), traditional cloud-based autoregressive decoding takes a prompt from the client and performs T forward passes on the target model to generate T tokens, incurring an API cost proportional to T. Speculative decoding on the cloud (Fig. 1(b)) reduces target model calls by a factor of tau, the number of tokens generated per draft-verify round. However, it introduces additional draft model calls, which come with a non-negligible cost Yan et al. (2024). Shifting drafting to the edge can eliminate this cost. Table 1 shows potential savings for example model pairs from various API providers.[2] Speculative edge-cloud decoding can reduce costs by up to 52% over cloud autoregressive decoding.

Table 1: Cost comparison between cloud autoregressive (AR) decoding, cloud speculative decoding (SD), and speculative edge-cloud decoding across different API providers on a set of candidate models. The measurement is based on 1 million requests, each consisting of 100 input tokens and 500 output tokens, assuming a draft length of gamma = 4 tokens and an average of tau = 2.5 accepted tokens per draft.

API Provider | Draft/Target models               | Draft cost (In/Out per 1M) | Target cost (In/Out per 1M) | Cloud AR | Cloud SD       | Edge-cloud SD
Together AI  | Qwen1.5-0.5B / Qwen1.5-72B        | $0.1/$0.1                  | $0.9/$0.9                   | $540     | $360 (33% less) | $270 (50% less)
OpenRouter   | llama-3.1-8b / llama-3.1-405b     | $0.02/$0.05                | $0.9/$0.9                   | $540     | $312 (42% less) | $270 (50% less)
Groq         | llama3-8b-8192 / llama3-70b-8192  | $0.05/$0.08                | $0.59/$0.79                 | $454     | $286 (37% less) | $217 (52% less)
OpenRouter   | Qwen-2-VL-7B / Qwen-2-VL-72B      | $0.2/$0.2                  | $0.7/$0.7                   | $420     | $390 (7% less)  | $210 (50% less)

[1] OpenAI's API pricing as of May 2025.
[2] API cost as of May 2025 based on https://www.helicone.ai/llm-cost.
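The numbers in Table 1 can be reproduced with a simple accounting in which the target model bills the prompt plus one output token per verification round, and a cloud-hosted draft model additionally bills the prompt plus gamma drafted tokens per round. The sketch below (function names are ours) matches all four rows; this billing model is our reading, inferred from the table itself.

```python
def api_cost(p_in, p_out, n_in=100, n_out=500, requests=1e6):
    """Dollar cost for `requests` calls; prices are $ per 1M tokens."""
    return (n_in * p_in + n_out * p_out) * requests / 1e6

def decoding_costs(draft_prices, target_prices, gamma=4, tau=2.5, n_in=100, n_out=500):
    rounds = n_out / tau                                   # draft-verify rounds per request
    # cloud AR: every output token is a target-model output token
    ar = api_cost(*target_prices, n_in, n_out)
    # edge-cloud SD: target bills the prompt plus one generated token per round
    edge = api_cost(target_prices[0], target_prices[1], n_in, rounds)
    # cloud SD additionally bills the draft model: prompt + gamma drafted tokens per round
    draft = api_cost(draft_prices[0], draft_prices[1], n_in, gamma * rounds)
    return ar, edge + draft, edge

print(decoding_costs((0.05, 0.08), (0.59, 0.79)))   # Groq row: approx. (454, 286, 217)
```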
A straightforward edge speculation and cloud verification approach (Fig. 1(c)) suffers from inefficiencies: the client remains idle during server verification, and the server is unutilized while the client drafts tokens. To address this, we propose a preemptive drafting mechanism to maximize client-server utilization. As shown in Fig. 1(d), we introduce early exits in the target model to produce verified tokens before full verification. These early tokens enable the client to draft the next set preemptively, a process we call pre-drafting. If the final verification confirms the early tokens, the next set of draft tokens is readily available for verification, minimizing idle time and keeping both client and server continuously active.

Our contributions are summarized as follows:

- We propose a novel framework that splits speculative decoding by hosting the draft model on the edge and the target model on the server, significantly reducing target model API costs.
- We introduce early exits in the target model to generate verified tokens ahead of full verification, enabling the client to preemptively draft the next tokens, minimizing idle time for both client and server.
- We conduct a comprehensive evaluation across 6 generation tasks on 3 sets of models. With Vicuna-68M as the draft model and Llama2-7B as the target model, we show an average 35% latency reduction from autoregressive to vanilla edge-cloud speculative decoding and a further speedup of 11% with our fast decoding method.
- We demonstrate our approach on a real-world robotics platform (Unitree Go2 equipped with an NVIDIA Jetson Orin), highlighting the applicability of our method for enabling edge-cloud collaborative inference in embodied intelligence applications.

2 Background

Speculative Decoding: Speculative decoding follows a Draft-and-Verify approach, where each step starts with generating multiple candidate tokens, which are then verified by the target LLM in parallel, speeding up inference Leviathan et al. (2023). Formally, given an input prefix $x_{0:t}$ and a target model $M_q$, a smaller draft model $M_p$ generates the next $\gamma$ tokens $x_{t:t+\gamma}$ and their corresponding probability distributions $p_{t:t+\gamma}$ autoregressively:
$$x_{t:t+\gamma},\ p_{t:t+\gamma} = \mathrm{DRAFT}(M_p, x_{0:t}) \tag{1}$$
The target model $M_q$ verifies these tokens and decides how many to accept, denoted by $\delta$ ($\delta \le \gamma$), then produces the next token:
$$x_{t:t+\delta+1} = \mathrm{VERIFY}(M_q, x_{t:t+\gamma}, p_{t:t+\gamma}) \tag{2}$$
The process repeats with the input prefix extended to $t+\delta+1$ and passed back to the draft model for the next round.
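For concreteness, one draft-verify round can be sketched as follows. We use greedy acceptance with a batch of one and assume a Hugging Face-style `.logits` interface for brevity; the actual method follows the stochastic acceptance rule of Leviathan et al. (2023).

```python
import torch

@torch.no_grad()
def speculative_round(draft_model, target_model, prefix_ids, gamma=4):
    ids = prefix_ids                                   # (1, T) token ids
    for _ in range(gamma):                             # DRAFT (Eq. 1): gamma autoregressive steps
        logits = draft_model(ids).logits[:, -1]
        ids = torch.cat([ids, logits.argmax(-1, keepdim=True)], dim=-1)
    draft = ids[:, prefix_ids.shape[1]:]               # the gamma proposed tokens
    # VERIFY (Eq. 2): one parallel target pass scores every drafted position + one bonus token
    t_pred = target_model(ids).logits[:, prefix_ids.shape[1] - 1:].argmax(-1)  # (1, gamma + 1)
    accept = (t_pred[:, :gamma] == draft).long().cumprod(-1)   # longest agreeing prefix
    delta = int(accept.sum())                          # number of accepted draft tokens
    next_tok = t_pred[:, delta: delta + 1]             # target's own token after the accepted ones
    return torch.cat([prefix_ids, draft[:, :delta], next_tok], dim=-1)
```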
Early Exit in Large Language Models: Early exit strategies improve the efficiency of LLMs by terminating the generation process early if a sufficiently confident output is identified Panda et al. (2016); Chen et al. (2023). Given an LLM $M$ with $L$ layers and an input sequence $x_{1:t}$, the hidden state at each layer $l$ is computed as
$$h^{(l)} = f^{(l)}(h^{(l-1)}, x_{1:t}), \tag{3}$$
where $h^{(0)}$ is the input embedding. At each layer $l$, the model calculates logits by passing the hidden state through a language model (LM) head, denoted as $z^{(l)} = \mathrm{LMHead}(h^{(l)})$. It also computes a confidence score $S^{(l)}$ based on the softmax probability:
$$S^{(l)} = \max\left(\mathrm{softmax}(z^{(l)})\right). \tag{4}$$
The model exits early at layer $l'$ if the confidence score exceeds a predefined threshold, $S^{(l')} \ge \tau$, and the next token $x_{t+1}$ is sampled from $\mathrm{softmax}(z^{(l')})$. We leverage this mechanism to generate early verified tokens in the target model, which are used to preemptively produce the next set of draft tokens.
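A minimal sketch of this confidence-gated exit follows; the module names and the per-layer adapter interface are illustrative rather than the released code.

```python
import torch

@torch.no_grad()
def early_exit_logits(layers, adapters, lm_head, h0, tau=0.9):
    """Run transformer layers in order and stop once the max softmax
    probability at an exit head exceeds tau (Eq. 4). Batch size of 1 assumed."""
    h = h0
    for l, layer in enumerate(layers):
        h = layer(h)                                   # h^(l) = f^(l)(h^(l-1), x)
        z = lm_head(adapters[l](h[:, -1]))             # exit-l logits at the last position
        conf = torch.softmax(z, dim=-1).max(dim=-1).values
        if conf.item() >= tau:                         # S^(l) >= tau: exit early
            return z, l
    return z, len(layers) - 1                          # otherwise fall through to the final exit
```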
[Figure 2: Illustration of our proposed approach. Given a prefix, the client generates two draft tokens and sends them to the server. The server verifies them using a target model with early exits, returning verified tokens and the next generated token. For each early exit, the client pre-drafts the next tokens and stores them in the pre-draft cache. If the final output matches a cache entry, the draft tokens are sent immediately, reducing latency.]

3 Methodology

In our distributed speculative decoding setup, the client runs a lightweight draft model, denoted as $M_p$, while the server hosts a large target model $M_q$. As shown in Fig. 2, the algorithm takes the following sequence of steps:

Step 1: Given a prefix sequence $x_{0:t} = \{x_0, x_1, \ldots, x_t\}$, the client uses the draft model $M_p$ to predict a sequence of $\gamma$ draft tokens (Eq. 1). These draft tokens $x_{t:t+\gamma}$, along with their probability distributions $p_{t:t+\gamma}$, are transmitted to the server for verification by the target model $M_q$.

Step 2: The target model $M_q$ is designed with multiple early exits, denoted as $M_q^{(1:n)}$. Each early exit $i \in \{1, \ldots, n\}$ performs a verification step on the draft tokens (Eq. 2) and generates the next token. For example, if early exit $i$ accepts $\delta^{(i)}$ tokens and generates the next token, the total generated tokens would be:
$$x^{(i)}_{t:t+\delta^{(i)}+1} = \mathrm{VERIFY}(M^{(i)}_q, x_{t:t+\gamma}, p_{t:t+\gamma}) \tag{5}$$
Here, $\delta^{(i)}$ denotes the number of draft tokens accepted by early exit $i$.

Step 3: Given that the communication channel is typically the bottleneck, early exit outputs are queued in the server's early exit queue as soon as they become available and are transmitted to the client sequentially.

Step 4: The client, in turn, stores the early exit outputs from the server in its own queue and processes each one in a new thread, preemptively generating the subsequent set of draft tokens for each early exit. This process is referred to as pre-drafting. For an early exit $i$, the newly verified/generated tokens from the server $x^{(i)}_{t:t+\delta^{(i)}+1}$ are concatenated with the original prefix, resulting in a new prefix:
$$y^{(i)}_{0:t'} = \mathrm{Concat}(x_{0:t},\ x^{(i)}_{t:t+\delta^{(i)}+1}) \tag{6}$$
The pre-draft tokens, represented as $y^{(i)}_{t':t'+\gamma}$, and their corresponding probabilities $p^{(i)}_{t':t'+\gamma}$ are then computed as:
$$y^{(i)}_{t':t'+\gamma},\ p^{(i)}_{t':t'+\gamma} = \mathrm{PREDRAFT}(M_p, y^{(i)}_{0:t'}) \tag{7}$$
These pre-drafted tokens are subsequently stored in a cache referred to as the pre-draft cache.

[Figure 3: Illustration of training early exit adapters: the backbone layers and LM head are frozen, while the per-layer adapters feeding the LM head are trainable and each exit's loss is backpropagated only to its own adapter.]

Steps 5 & 6: Once the final output $x_{t:t+\delta+1}$ from the target model is received, the client checks whether these tokens were already processed in any of the early exits by looking at the pre-draft cache. If there is a hit, the corresponding pre-draft tokens are retrieved from the pre-draft cache and immediately sent to the server for the next round of verification, avoiding any delay. If it is a miss, a new set of draft tokens is generated following the usual drafting process. The server proceeds with the next round of verification over the new set of draft tokens.
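The client-side bookkeeping in Steps 4-6 can be skeletonized as below. This is our simplification, not the pseudocode of Appendix A; a real deployment would bound the worker pool and evict stale cache entries.

```python
import queue
import threading

pre_draft_cache = {}                    # verified-prefix tuple -> pre-drafted tokens
ee_queue = queue.PriorityQueue()        # early-exit outputs, most confident first

def pre_draft_worker(draft_fn):
    """Step 4: for each early-exit output, speculatively draft and cache the next tokens."""
    while True:
        neg_conf, ee_tokens = ee_queue.get()          # (-confidence, tokens) ordering
        key = tuple(ee_tokens)
        if key not in pre_draft_cache:
            pre_draft_cache[key] = draft_fn(ee_tokens)   # PREDRAFT (Eq. 7)

def on_final_output(final_tokens, draft_fn):
    """Steps 5-6: on the server's final verification, reuse cached drafts on a hit."""
    drafts = pre_draft_cache.pop(tuple(final_tokens), None)
    return drafts if drafts is not None else draft_fn(final_tokens)   # miss: draft now

# threading.Thread(target=pre_draft_worker, args=(my_draft_fn,), daemon=True).start()
```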
This design enhances efficiency by leveraging the client's idle time for pre-drafting and reducing the server's idle time between verification rounds whenever there is a pre-draft cache hit. Importantly, the output is identical to that of standard speculative decoding since all tokens are verified at the final exit of the target model, guaranteeing no loss in accuracy. For detailed system design and pseudocode, please refer to Appendix A.

Early Exit Training: We add adapter layers after each layer of the target model to train the early exits, as shown in Fig. 3. Each adapter connects to the language model (LM) head, and its loss is backpropagated to update only that adapter. This minimizes trainable parameters while preserving the original model. For language generation models, we train early exit adapters on the publicly available ShareGPT conversation dataset (hf:RyokoAI/ShareGPT52K) using a single NVIDIA A100 GPU with 80GB of VRAM. We fine-tune three models (Vicuna-7B, Vicuna-13B, and Llama2-7B) for 10 epochs each, using a batch size of 1 and a learning rate of 1e-4. Additionally, we train early exit adapters for a vision-language model based on Qwen2VL-7B using the Spacellava dataset (hf:remyxai/vqasynth_spacellava), which is generated by an open-source implementation of SpatialVLM Chen et al. (2024). Table 2 summarizes the number of early exits, total training time, and the number of trainable parameters for each model. Note that the context length was reduced during training to ensure compatibility with the memory limitations of a single A100 GPU.

Table 2: Early Exit training details. # Params and % Params denote the total number of trainable adapter parameters and their fraction compared to total model parameters, respectively.

Model                                            | # Exits | # Params | % Params | Context | GPU Hours
lmsys/Vicuna-7B-v1.3 Zheng et al. (2023)         | 31      | 101M     | 1.48     | 1600    | 117
lmsys/Vicuna-13B-v1.3 Zheng et al. (2023)        | 39      | 158M     | 1.20     | 800     | 122
meta-llama/Llama-2-7B-hf Touvron et al. (2023)   | 31      | 101M     | 1.48     | 1600    | 119
Qwen/Qwen2-VL-7B-Instruct Wang et al. (2024)     | 27      | 88M      | 1.02     | 1600    | 136
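A compact sketch of this adapter training loop is shown below, assuming a Hugging Face-style model that returns hidden states; only the adapters carry gradients.

```python
import torch
import torch.nn as nn

def train_exit_adapters(model, lm_head, adapters, batches, lr=1e-4):
    """Sketch of early-exit adapter training (Fig. 3): the base model and LM head
    stay frozen; each per-layer adapter is trained from its layer's hidden state."""
    for p in model.parameters():
        p.requires_grad_(False)                            # freeze the target model (incl. LM head)
    opt = torch.optim.Adam(adapters.parameters(), lr=lr)   # adapters: nn.ModuleList, one per layer
    ce = nn.CrossEntropyLoss()
    for input_ids, labels in batches:
        out = model(input_ids, output_hidden_states=True)
        loss = 0.0
        for l, adapter in enumerate(adapters):
            z = lm_head(adapter(out.hidden_states[l + 1]))  # exit-l logits, (B, T, V)
            loss = loss + ce(z.flatten(0, 1), labels.flatten())
        opt.zero_grad()
        loss.backward()                                    # gradients reach only the adapters
        opt.step()
```

Because the backbone is frozen, each adapter's gradient comes solely from its own exit loss, matching the per-exit backpropagation described above.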
4 Experiments

Models and Benchmarks: Following the standard speculative decoding literature Li et al. (2024), we evaluate our method on three model sets: Vicuna-68M/Vicuna-7B, Vicuna-160M/Vicuna-13B, and Vicuna-68M/Llama2-7B. We show experiments on 6 standard generative task benchmarks spanning conversation Zheng et al. (2023), code generation Chen et al. (2021), mathematical reasoning Cobbe et al. (2021), instruction following Taori et al. (2023), summarization Nallapati et al. (2016), and question-answering tasks Kwiatkowski et al. (2019).

Server Side Hardware: We utilize a high-performance computing cluster node equipped with a single A100 GPU with 80GB VRAM, 16 CPU cores, and 8GB of CPU memory per core as our server.

Client Side Hardware: We demonstrate our system on two types of client devices:
1. NVIDIA Jetson Nano: A compact AI development board tailored for edge computing. It includes a quad-core ARM Cortex-A57 CPU, a 128-core Maxwell GPU, and 4GB of LPDDR4 RAM shared between the CPU and GPU. With a performance of up to 472 GFLOPs, the Jetson Nano is ideal for edge applications.
2. Cluster Node with RTX 2080 Ti: This setup features a single RTX 2080 Ti GPU with 12GB VRAM, an 8-core CPU, and 4GB of RAM per core, providing a more powerful alternative for our experiments.

Communication: Communication between cluster nodes is facilitated by an InfiniBand high-speed interconnect. Ethernet is used for communication between the cluster node and the Jetson Nano.

Table 3: Notations used in our analysis.

T_p : Time for a single forward pass of the draft model M_p
T_q : Time for a forward pass of the target model M_q
T_c : Communication latency between the client and server
c   : Latency ratio between the draft and target models (T_p / T_q)
gamma : Number of draft tokens
tau : Effective number of tokens generated per draft-verify round (# accepted tokens + 1 generated)
n   : Total number of tokens generated
r   : Cache miss rate
T_r : Latency of thread synchronization on cache hit

Latency Calculation: Table 3 summarizes the notations used in our analysis. Key variables include $T_q$, $T_p$, and $T_c$, representing the latencies of the target model, draft model, and communication, respectively. Speculative decoding metrics include $\tau$, the effective tokens per draft-verify round. Hyperparameters are $n$, the total tokens, and $\gamma$, the tokens per draft-verify round. Additional factors for our fast speculative decoding method include the cache miss rate $r$ and the synchronization latency $T_r$ on cache hits. The latency calculations for cloud-based autoregressive (AR) decoding, vanilla speculative edge-cloud decoding (SD), and our fast speculative edge-cloud decoding (FSD) are presented in Table 4. The latency of the FSD method depends on the cache miss rate $r$; in case of a cache hit, threads must synchronize, incurring a latency of $T_r$. Unless otherwise specified, we use $\gamma = 4$ and $n = 200$ in our experiments.

Table 4: Latency calculation. AR denotes cloud-based autoregressive decoding. SD and FSD refer to vanilla and fast speculative edge-cloud decoding, respectively.

Method         | Latency
Cloud AR       | $2T_c + nT_q$
Edge-Cloud SD  | $\frac{n}{\tau}(2T_c + \gamma T_p + T_q)$
Edge-Cloud FSD | $\frac{n}{\tau}(2T_c + r\gamma T_p + (1-r)T_r + T_q)$

4.1 System Metrics

We report the system metrics in Table 5, including drafting latency ($\gamma T_p$) at $\gamma = 4$, verification latency ($T_q$), the latency ratio $c$, and communication latency ($T_c$). On the Jetson Nano, drafting is about three times slower and communication twice as slow as on a cluster node with an RTX GPU. We also report the maximum GPU VRAM and the number of early-exit threads supported by the system. On the RTX-equipped node, the system handles up to 30 threads for the Vicuna-68M model, but GPU VRAM (12GB) limits the Vicuna-160M model to 15 threads before encountering an out-of-memory (OOM) error. On the Jetson Nano, both CPU threading and RAM are bottlenecks. The maximum number of threads is capped at 15 for the Vicuna-68M model, while the 4GB memory limit allows only 7 threads for the Vicuna-160M model.

Table 5: Average system metrics that are dataset agnostic (Jetson / RTX per model pair).

Metric                             | Vicuna-68M/Vicuna-7B | Vicuna-160M/Vicuna-13B | Vicuna-68M/Llama2-7B
Drafting Latency (gamma*T_p, g=4)  | 334ms / 102ms        | 1596ms / 555ms         | 301ms / 99ms
Verification Latency (T_q)         | 497ms / 442ms        | 616ms / 618ms          | 522ms / 467ms
Latency Ratio (c = T_p/T_q)        | 0.17 / 0.06          | 0.65 / 0.22            | 0.14 / 0.05
Communication Latency (T_c)        | 95ms / 42ms          | 91ms / 46ms            | 96ms / 47ms
Max GPU memory                     | 1.7G / 3.2G          | 3.5G / 8.9G            | 1.7G / 3.2G
Num EE Threads                     | 15 / 30              | 7 / 15                 | 15 / 30
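Plugging the measured values of Table 5 into the analytic formulas of Table 4 gives a quick latency estimate; note the analytic ratio is optimistic relative to the measured speedups reported later in Table 6, which include overheads the model omits.

```python
def latency(n, tau, Tc, Tq, Tp=0.0, gamma=4, r=1.0, Tr=0.0, mode="FSD"):
    """Per-request latency (seconds) from the formulas in Table 4."""
    if mode == "AR":
        return 2 * Tc + n * Tq
    rounds = n / tau
    if mode == "SD":
        return rounds * (2 * Tc + gamma * Tp + Tq)
    # FSD: with probability (1 - r) a cache hit replaces drafting with sync latency Tr
    return rounds * (2 * Tc + r * gamma * Tp + (1 - r) * Tr + Tq)

# Example with RTX-client numbers for Vicuna-68M/Llama2-7B from Table 5
# (gamma*Tp = 99 ms, Tq = 467 ms, Tc = 47 ms) and n = 200 tokens:
ar = latency(200, tau=3.54, Tc=0.047, Tq=0.467, mode="AR")
sd = latency(200, tau=3.54, Tc=0.047, Tq=0.467, Tp=0.099 / 4, mode="SD")
print(ar / sd)   # analytic AR->SD ratio; measured averages (Table 6) are lower
```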
4.2 Speedup Results

Evaluation Metrics: Our fast decoding method with early exit is exact, with outputs identical to standard speculative decoding, ensuring no loss in accuracy. We define the following metrics to evaluate our method.

β€’ Speedup ARβ†’SD: Latency savings of vanilla speculative edge-cloud decoding (SD) compared to the cloud-based autoregressive (AR) baseline.
β€’ Speedup SDβ†’FSD: Latency savings of our fast speculative edge-cloud decoding (FSD) compared to vanilla speculative edge-cloud decoding (SD).
β€’ Cache miss rate (lower is better): Frequency of cache misses, indicating how often we fail to find the final output in one of the early exits.
β€’ Average Early Exit (lower is better): The average early exit that produces the same output as the final exit.

Table 6 presents the evaluation metrics on the benchmark datasets. In addition to the aforementioned evaluation metrics, the effective number of generated tokens per verification (Ο„) is also reported. Note that Ο„ remains identical to that of vanilla SD, as our FSD method produces identical outputs; we report it because it highlights the reduction in API call costs.

ARβ†’SD: On average, using RTX, vanilla SD achieves a 1.20x and 1.94x speedup over autoregressive decoding with the Vicuna-68M/Vicuna-7B and Vicuna-68M/Llama2-7B models, respectively. However, it results in a marginal slowdown with Vicuna-160M/Vicuna-13B. The Jetson, being slower at drafting and incurring a higher communication cost, sees a reduced speedup relative to autoregressive decoding: it is slower for the Vicuna-68M/Vicuna-7B and Vicuna-160M/Vicuna-13B models, though it achieves a 1.35x speedup with Vicuna-68M/Llama2-7B. The speedup from autoregressive to SD primarily depends on c, Ο„, and T_c; ideally, c and T_c should be low and Ο„ high. Since the Jetson has a high c and T_c, it underperforms autoregressive decoding on the Vicuna-68M/Vicuna-7B and Vicuna-160M/Vicuna-13B models. In contrast, for Vicuna-68M/Llama2-7B, a lower c combined with a higher Ο„ yields the 1.35x speedup.

SDβ†’FSD: Our FSD provides a consistent speedup over vanilla SD across all datasets on both RTX and Jetson. On the RTX client, it achieves average speedups of 1.06x, 1.10x, and 1.06x for Vicuna-68M/Vicuna-7B, Vicuna-160M/Vicuna-13B, and Vicuna-68M/Llama2-7B, respectively. Similarly, on the Jetson Nano, the speedups are 1.04x, 1.06x, and 1.11x for the same model pairs. The primary benefit of FSD lies in its pre-drafting mechanism, which enables these improvements over vanilla SD. This mechanism's impact is reflected in the cache miss rate, indicating how often the final output is available through early exits, allowing pre-drafting of the next set of tokens. The average early exit metric further highlights how quickly verified tokens are obtained, enabling efficient generation of subsequent draft tokens.

Table 6: Speedup evaluation on standard language benchmark datasets. Columns give Jetson / RTX values for Vicuna-68m/Vicuna-7B, Vicuna-160m/Vicuna-13B, and Vicuna-68m/Llama2-7B, in that order.

Benchmark   Metric                 Jetson    RTX      Jetson    RTX      Jetson    RTX
MT-bench    Speedup ARβ†’SD          0.70x     1.30x    0.42x     0.97x    1.34x     2.01x
            Speedup SDβ†’FSD         1.04x     1.04x    1.05x     1.09x    1.07x     1.02x
            Avg Tokens Ο„           2.30      2.01     2.28      2.98     4.12      3.48
            Cache miss rate        60.92%    20.07%   62.49%    16.40%   36.73%    13.38%
            Avg EE                 8         13       9         15       9         11
HumanEval   Speedup ARβ†’SD          0.79x     1.42x    0.47x     0.85x    1.53x     2.09x
            Speedup SDβ†’FSD         1.04x     1.03x    1.06x     1.15x    1.22x     1.07x
            Avg Tokens Ο„           2.04      2.66     2.14      2.04     4.16      3.83
            Cache miss rate        64.73%    15.79%   57.64%    25.16%   17.60%    1.75%
            Avg EE                 7         16       8         14       4         3
GSM8K       Speedup ARβ†’SD          0.63x     1.11x    0.40x     0.77x    1.15x     1.69x
            Speedup SDβ†’FSD         1.04x     1.08x    1.06x     1.13x    1.07x     1.06x
            Avg Tokens Ο„           2.08      1.96     2.23      2.29     3.48      3.29
            Cache miss rate        61.21%    12.40%   58.04%    18.37%   41.87%    9.55%
            Avg EE                 8         13       9         14       7         10
Alpaca      Speedup ARβ†’SD          0.63x     1.06x    0.42x     0.74x    1.42x     1.99x
            Speedup SDβ†’FSD         1.04x     1.07x    1.05x     1.12x    1.10x     1.18x
            Avg Tokens Ο„           2.09      1.96     2.32      2.36     4.29      3.62
            Cache miss rate        64.60%    19.29%   60.94%    25.45%   36.60%    4.28%
            Avg EE                 8         14       8         14       9         2
CNN/DM      Speedup ARβ†’SD          0.72x     1.20x    0.38x     0.73x    1.41x     1.91x
            Speedup SDβ†’FSD         1.03x     1.07x    1.03x     1.07x    1.07x     1.04x
            Avg Tokens Ο„           2.32      1.95     2.08      2.10     4.32      3.43
            Cache miss rate        70.70%    29.40%   69.92%    38.09%   47.13%    13.97%
            Avg EE                 10        15       8         16       14        13
NQ          Speedup ARβ†’SD          0.65x     1.10x    0.46x     0.82x    1.26x     1.93x
            Speedup SDβ†’FSD         1.02x     1.06x    1.04x     1.12x    1.05x     1.01x
            Avg Tokens Ο„           2.08      2.05     2.44      2.62     3.83      3.62
            Cache miss rate        71.50%    22.59%   63.25%    30.25%   57.11%    14.88%
            Avg EE                 8         14       8         15       13        14
Average     Speedup ARβ†’SD (↑)      0.69x     1.20x    0.42x     0.94x    1.35x     1.94x
            Speedup SDβ†’FSD (↑)     1.04x     1.06x    1.05x     1.10x    1.11x     1.06x
            Avg Tokens Ο„ (↑)       2.15      2.10     2.25      2.40     4.03      3.54
            Cache miss rate (↓)    65.61%    19.59%   62.05%    27.95%   39.94%    9.63%
            Avg EE (↓)             8         14       8         14       9         9

4.3 Ablation Studies

Effect of Number of Threads: In Figure 4a, we show the speedup of our FSD relative to vanilla SD and the cache miss rate as the number of early-exit threads increases up to 30. Using the GSM8K dataset with the Vicuna-68M/Vicuna-7B models on an RTX client, we find that the cache miss rate decreases as more threads are added, improving speedup. However, after around 15 threads, the speedup begins to plateau, and further increases in thread count yield minimal additional speedup. This is because the priority queues process the most promising early exits first, making it more likely to match the final output with the initial threads rather than the later ones.

Figure 4: Ablation studies: (a) effect of varying the number of early exit threads, (b) effect of varying the number of draft tokens (Ξ³).

Effect of Ξ³: The number of draft tokens, Ξ³, significantly influences the efficiency of speculative decoding. In Fig. 4b, we plot the speedup between SD and FSD, and between autoregressive (AR) and SD, as Ξ³ increases up to 10. We use the GSM8K dataset with the Vicuna-68M/Vicuna-7B models on an RTX-based client. Our FSD method shows greater latency improvements compared to vanilla SD as Ξ³ increases, enhancing the benefits of pre-drafting. However, an excessively large Ξ³ can hinder the speculative decoding process, causing the overall speedup to decrease. As shown, the speedup of SD relative to AR falls below 1x when Ξ³ exceeds 5.
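The Ξ³ trade-off can be read directly off the Table 4 expressions. Below is a hedged sketch of the AR-to-SD speedup as Ξ³ grows, holding Ο„ fixed at a measured average; this is a simplification, since in practice Ο„ grows (sublinearly) with Ξ³, which softens the decline shown here.

```python
# Hedged sketch of the gamma trade-off using the Table 4 expressions directly.
# tau is held at a fixed average here (a simplification; tau actually grows
# slowly with gamma). Constants are RTX-style values from Table 5.
n, T_c, T_p, T_q, tau = 200, 0.042, 0.102 / 4, 0.442, 2.0
ar = 2 * T_c + n * T_q                                # cloud AR latency
for gamma in range(1, 11):
    sd = (n / tau) * (2 * T_c + gamma * T_p + T_q)    # vanilla SD latency
    print(f"gamma={gamma:2d}  AR->SD speedup = {ar / sd:.2f}x")
```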
Importance of Priority Queue: Since our system is asynchronous, we need queues for graceful operation. Further, we organize the queues by priority, determined by the confidence score of the generated token (Eq. 4). This prioritization is especially beneficial when the number of threads is limited. Table 7 presents an ablation study comparing different queue configurations. On the server side, we use either a priority queue or a FIFO queue, while on the client side, we also include a random queue as an option. We report the cache miss rate (r) for systems with 3, 5, and 10 threads, denoted as 3T, 5T, and 10T, respectively. Our findings indicate that when the server uses a priority queue, it significantly improves performance for any given queue type on the client, although this benefit decreases with a higher thread count. On the client side, a priority queue consistently outperforms both the random and FIFO queues.

Table 7: Ablation study of client-server queue strategies. r indicates the average response time (lower is better) for different queue lengths.

Client     Server     r(3T)    r(5T)    r(10T)
Priority   Priority   79.85    62.93    27.57
Priority   FIFO       80.40    63.48    27.73
Random     Priority   82.32    63.25    26.92
Random     FIFO       84.56    65.36    27.68
FIFO       Priority   82.61    64.43    27.69
FIFO       FIFO       85.43    65.67    27.84
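As a concrete illustration of such a confidence-ordered queue, the following is a minimal sketch using Python's heapq; it is illustrative only, not the paper's implementation.

```python
import heapq
import itertools

# heapq is a min-heap, so the negated confidence is stored to pop the
# highest-confidence early-exit output first.
class ConfidencePriorityQueue:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker for equal scores

    def push(self, tokens, confidence):
        heapq.heappush(self._heap, (-confidence, next(self._counter), tokens))

    def pop(self):
        neg_conf, _, tokens = heapq.heappop(self._heap)
        return tokens, -neg_conf

    def __bool__(self):
        return bool(self._heap)

q = ConfidencePriorityQueue()
q.push(["draft", "a"], confidence=0.42)
q.push(["draft", "b"], confidence=0.91)
print(q.pop())  # (['draft', 'b'], 0.91): most promising early exit first
```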
4.4 Robotics Case Study: Vision-Language Navigation on Unitree Go2

To evaluate the real-world applicability of our approach, we deploy the edge-cloud speculative decoding system on the Unitree Go2 EDU quadruped robot. This platform features an onboard NVIDIA Jetson Orin board, which includes an 8-core ARM Cortex-A78AE v8.2 64-bit CPU and 16GB of 128-bit LPDDR5 unified memory, offering up to 157 TOPS of compute. Communication between the robot and the server is established over Wi-Fi 6.

The robot receives natural language instructions such as "go to the red chair" or "turn left at the hallway" and uses its front-facing RGB camera to perceive the environment. A vision-language model (VLM) processes the visual observations and language commands to generate mid-level navigation actions (e.g., move forward small/medium/large), following the approach of Cheng et al. (2024). These actions are then executed by the robot's onboard controller. To enhance decision quality and interpretability, we additionally prompt the VLM to provide reasoning alongside its action outputs. We deploy a quantized version of Qwen-2-VL-2B as the on-device draft model and offload token verification to the full-size Qwen-2-VL-7B model hosted on an A100 GPU in the cloud.

Figure 5 illustrates an example scenario in which the robot is instructed to locate a specific object (in this case, a silver bottle). To test the model's reasoning and grounding capabilities, we introduce a distractor object of similar appearance. The robot successfully navigates the environment and identifies the correct object, demonstrating the effectiveness of our method on a vision-language-based control task.

Figure 5: Example run of the Unitree Go2 robot performing an object-finding task using vision-language-based control. The robot receives the instruction "find the silver bottle" and navigates the environment while distinguishing the correct object from a similar distractor.

Table 8(a) reports key system-level metrics from our deployment, including drafting and verification latencies, communication overhead, and peak GPU memory usage. Table 8(b) highlights the performance improvements enabled by our method, showing speedups from standard autoregressive decoding (AR) to edge-cloud speculative decoding (SD) and further to Fast Speculative Decoding (FSD). It also includes average accepted tokens per round, cache miss rate, and average early exits. Overall, our system achieves a 21% speedup over conventional cloud-based autoregressive decoding, validating the practicality of our approach for real-time, language-conditioned robot control on resource-constrained edge platforms.

Table 8: System-level evaluation metrics for our edge-cloud speculative decoding setup on the Unitree Go2 robot (Jetson Orin) and A100 server. (a) reports core system and latency metrics; (b) summarizes the performance gains with our FSD method.

(a) System and latency metrics
Drafting Latency (Ξ³T_p, Ξ³ = 4)   288ms
Verification Latency (T_q)       620ms
Latency Ratio (c = T_p/T_q)      0.11
Communication Latency (T_c)      120ms
Max GPU Memory                   12.4G
Num EE Threads                   6

(b) FSD speedup and performance metrics
Avg Tokens (Ο„) (↑)        2.92
Cache Miss Rate (↓)       55.63%
Avg EE (↓)                8
Speedup ARβ†’SD (↑)         0.90x
Speedup SDβ†’FSD (↑)        1.34x

5 Related Work

There is significant interest in enabling edge devices to run LLMs. The deployment of collaborative AI inference systems across the edge and the cloud introduces unique challenges such as latency constraints, bandwidth limitations, and inconsistent network conditions. One straightforward approach is to design smaller models Lu et al. (2024). While all the popular families of models, such as OPT Zhang et al. (2022), Llama Touvron et al. (2023); Dubey et al. (2024), and Gemma Team et al. (2024), include smaller-scale variants, these are either not small enough to run on an edge device or not accurate enough to reliably deploy in practical applications. Quantization is one of the most heavily studied methods for enabling on-device LLMs Lin et al. (2024). Yu et al. (2024) aim to compress models with layer-wise pruning and quantization to enable edge LLMs. Qu et al. (2024) discuss an approach for enabling LLMs to run on 6G edge devices. On the systems side, Xu et al. (2024a) focus on leveraging the on-device Neural Processing Unit (NPU).

Early exit strategies, which allow intermediate layers of deep networks to make predictions without waiting for the full forward pass, have been extensively explored for resource-constrained devices. Pioneering works such as Conditional Deep Learning Panda et al. (2016) and BranchyNet Teerapittayanon et al. (2016) introduced the idea of adding multiple exit points to deep neural networks to reduce computation time. Recent research has also explored layer skipping in LLMs for enhanced efficiency Fan et al. (2024), with dynamic compute allocation based on tokens Raposo et al. (2024). In terms of multi-device speculative decoding, McDanel (2024) has recently shown that asynchronous speculative decoding over multiple GPUs can be beneficial. However, it uses shared memory to communicate between devices, so it is not directly applicable to edge-cloud scenarios.

Our work lies at the intersection of LLM decoding, early exit mechanisms, and distributed inference optimization, addressing a critical gap by proposing a preemptive speculative decoding framework tailored for edge-cloud environments. To the best of our knowledge, this is the first work to show end-to-end speculative decoding with models split between edge and cloud. Further, we comprehensively analyze and demonstrate the system-level trade-offs during the implementation of collaborative edge-cloud decoding, which no prior work has investigated.
6 Conclusion and Discussion

We introduced a novel speculative edge-cloud decoding framework, offering a cost-effective alternative to traditional cloud-based deployment. By distributing the draft and target models between edge and server environments, our solution significantly reduces high API costs. Early exits and pre-drafting allow us to enhance parallelism by leveraging idle client time and reducing server idle time. Our comprehensive end-to-end evaluation on the NVIDIA Jetson Nano highlights the feasibility of efficient edge-cloud collaborative LLM inference on resource-limited edge devices. On the Jetson Nano, speculative edge-cloud decoding achieves up to a 35% speedup over cloud-based autoregressive decoding, with up to an additional 11% performance gain enabled by pre-drafting and early exits. Further, we validate our approach with the execution of vision-language models on the Unitree Go2 quadruped robot. We achieve an overall 21% speedup over standard cloud-based autoregressive decoding, demonstrating the effectiveness and real-world applicability of our framework for robotics use cases.

Our method operates effectively without making assumptions about communication delays, and we show that it remains practical under real-world conditions. While communication latency can be a limiting factor in extreme cases, our use of priority queues helps optimize bandwidth usage. For scenarios with constrained network conditions, further tuning and adaptive scheduling policies offer promising avenues to enhance performance. Similarly, while pre-drafting leverages idle client time for parallelism, it introduces modest compute overhead on the client. This is manageable for most edge platforms, and the ability to scale the number of threads provides a flexible trade-off between latency and energy efficiency, especially for battery-powered devices.

Our proof-of-concept is implemented in Python, which, while convenient for experimentation, leaves room for further optimization. A low-level C++ implementation with shared memory could substantially improve performance, making the system even more suitable for latency-sensitive applications. Currently, our system supports single-client interaction with the server, but extending it to support multi-client concurrency is a natural next step. We envision future work enabling scalable, concurrent edge-cloud inference with early exits, making our approach even more applicable to real-world deployment scenarios.

References

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
Mohammed Banafaa, Ibraheem Shayea, Jafri Din, Marwan Hadri Azmi, Abdulaziz Alashbi, Yousef Ibrahim Daradkeh, and Abdulraqeb Alhammadi. 6g mobile communication technology: Requirements, targets, applications, challenges, advantages, and opportunities. Alexandria Engineering Journal, 64:245–274, 2023.
Tom B Brown. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.
Boyuan Chen, Zhuo Xu, Sean Kirmani, Brian Ichter, Dorsa Sadigh, Leonidas Guibas, and Fei Xia. Spatialvlm: Endowing vision-language models with spatial reasoning capabilities. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14455–14465, 2024.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
Yanxi Chen, Xuchen Pan, Yaliang Li, Bolin Ding, and Jingren Zhou. Ee-llm: Large-scale training and inference of early-exit large language models with 3d parallelism. arXiv preprint arXiv:2312.04916, 2023.
An-Chieh Cheng, Yandong Ji, Zhaojing Yang,
Zaitian Gongye, Xueyan Zou, Jan Kautz, Erdem BΔ±yΔ±k, Hongxu Yin, Sifei Liu, and Xiaolong Wang. Navila: Legged robot vision-language-action model for navigation. arXiv preprint arXiv:2412.04453, 2024.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.
Siqi Fan, Xin Jiang, Xiang Li, Xuying Meng, Peng Han, Shuo Shang, Aixin Sun, Yequan Wang, and Zhongyuan Wang. Not all layers of llms are necessary during inference. arXiv preprint arXiv:2403.02181, 2024.
Othmane Friha, Mohamed Amine Ferrag, Burak Kantarci, Burak Cakmak, Arda Ozgun, and Nassira Ghoualmi-Zine. Llm-based edge intelligence: A comprehensive survey on architectures, applications, security and trustworthiness. IEEE Open Journal of the Communications Society, 2024.
Kaiyu Huang, Fengran Mo, Hongliang Li, You Li, Yuanchi Zhang, Weijian Yi, Yulong Mao, Jinchen Liu, Yuzhuang Xu, Jinan Xu, et al. A survey on large language models with multilingualism: Recent advances and new frontiers. arXiv preprint arXiv:2405.10936, 2024.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453–466, 2019.
Yaniv Leviathan, Matan Kalman, and Yossi Matias. Fast inference from transformers via speculative decoding. In International Conference on Machine Learning, pp. 19274–19286. PMLR, 2023.
Yuhui Li, Fangyun Wei, Chao Zhang, and Hongyang Zhang. Eagle-2: Faster inference of language models with dynamic draft trees. arXiv preprint arXiv:2406.16858, 2024.
Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Wei-Ming Chen, Wei-Chen Wang, Guangxuan Xiao, Xingyu Dang, Chuang Gan, and Song Han. Awq: Activation-aware weight quantization for on-device llm compression and acceleration. Proceedings of Machine Learning and Systems, 6:87–100, 2024.
Zheng Lin, Guanqiao Qu, Qiyuan Chen, Xianhao Chen, Zhe Chen, and Kaibin Huang. Pushing large language models to the 6g edge: Vision, challenges, and opportunities. arXiv preprint arXiv:2309.16739, 2023.
Zhenyan Lu, Xiang Li, Dongqi Cai, Rongjie Yi, Fangming Liu, Xiwen Zhang, Nicholas D Lane, and Mengwei Xu. Small language models: Survey, measurements, and insights. arXiv preprint arXiv:2409.15790, 2024.
Bradley McDanel. Amusd: Asynchronous multi-device speculative decoding for llm acceleration. arXiv preprint arXiv:2410.17375, 2024.
Ramesh Nallapati, Bowen Zhou, Caglar Gulcehre, Bing Xiang, et al. Abstractive text summarization using sequence-to-sequence rnns and beyond. arXiv preprint arXiv:1602.06023, 2016.
Priyadarshini Panda, Abhronil Sengupta, and Kaushik Roy. Conditional deep learning for energy-efficient and enhanced pattern recognition. In 2016 Design, Automation & Test in Europe Conference & Exhibition (DATE), pp. 475–480. IEEE, 2016.
Guanqiao Qu, Qiyuan Chen, Wei Wei, Zheng Lin, Xianhao Chen, and Kaibin Huang. Mobile edge intelligence for large language models: A contemporary survey. arXiv preprint arXiv:2407.18921, 2024.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
David Raposo, Sam Ritter, Blake Richards, Timothy Lillicrap, Peter Conway Humphreys, and Adam Santoro. Mixture-of-depths: Dynamically allocating compute in transformer-based language models. arXiv preprint arXiv:2404.02258, 2024.
Siddharth Samsi, Dan Zhao, Joseph McDonald, Baolin Li, Adam Michaleas, Michael Jones, William Bergeron, Jeremy Kepner, Devesh Tiwari, and Vijay Gadepally. From words to watts: Benchmarking the energy costs of large language model inference. In 2023 IEEE High Performance Extreme Computing Conference (HPEC), pp. 1–9. IEEE, 2023.
Qiushi Sun, Zhirui Chen, Fangzhi Xu, Kanzhi Cheng, Chang Ma, Zhangyue Yin, Jianing Wang, Chengcheng Han, Renyu Zhu, Shuai Yuan, et al. A survey of neural code intelligence: Paradigms, advances and beyond. arXiv preprint arXiv:2403.14734, 2024.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. Stanford alpaca: An instruction-following llama model, 2023.
Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane RiviΓ¨re, Mihir Sanjay Kale, Juliette Love, et al. Gemma: Open models based on gemini research and technology. arXiv preprint arXiv:2403.08295, 2024.
Surat Teerapittayanon, Bradley McDanel, and Hsiang-Tsung Kung. Branchynet: Fast inference via early exiting from deep neural networks. In 2016 23rd International Conference on Pattern Recognition (ICPR), pp. 2464–2469. IEEE, 2016.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, et al. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 2024.
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. Emergent abilities of large language models. arXiv preprint arXiv:2206.07682, 2022.
Daliang Xu, Hao Zhang, Liming Yang, Ruiqi Liu, Gang Huang, Mengwei Xu, and Xuanzhe Liu. Empowering 1000 tokens/second on-device llm prefilling with mllm-npu. arXiv preprint arXiv:2407.05858, 2024a.
Minrui Xu, Dusit Niyato, Jiawen Kang, Zehui Xiong, Shiwen Mao, Zhu Han, Dong In Kim, and Khaled B Letaief. When large language model agents meet 6g networks: Perception, grounding, and alignment. IEEE Wireless Communications, 2024b.
Minghao Yan, Saurabh Agarwal, and Shivaram Venkataraman. Decoding speculative decoding. arXiv preprint arXiv:2402.01528, 2024.
Zhongzhi Yu, Zheng Wang, Yuhan Li, Haoran You, Ruijie Gao, Xiaoya Zhou, Sreenidhi Reedy Bommu, Yang Katie Zhao, and Yingyan Celine Lin. Edge-llm: Enabling efficient large language model adaptation on edge devices via layerwise unified compression and adaptive layer tuning and voting. arXiv preprint arXiv:2406.15758, 2024.
Fanlong Zeng, Wensheng Gan, Yongheng Wang, Ning Liu, and Philip S Yu. Large language models for robotics: A survey. arXiv preprint arXiv:2311.07226, 2023.
Mingjin Zhang, Jiannong Cao, Xiaoming Shen, and Zeyang Cui. Edgeshard: Efficient llm inference via collaborative edge computing. arXiv preprint arXiv:2405.14371, 2024.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022.
Ziyin Zhang, Chaoyu Chen, Bingchang Liu, Cong Liao, Zi Gong, Hang Yu, Jianguo Li, and Rui Wang. A survey on language models for code. arXiv preprint arXiv:2311.07989, 2023.
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in Neural Information Processing Systems, 36:46595–46623, 2023.

A System Design

Client: On the client side, the primary goal is to maximize idle-time usage and increase cache hit rates. As shown in Algorithm 1, the client maintains a priority queue Q_p, a pre-draft cache C, and a RECEIVER thread. After sending draft tokens to the server for verification, the RECEIVER thread listens for server callbacks, which provide outputs from early exits. Since the client's bottleneck lies in the processing power required for generating pre-draft tokens, it is essential to prioritize the handling of early exit outputs. The priority queue Q_p organizes these outputs according to their confidence levels (Eq. 4), prioritizing the most promising ones for pre-drafting. It is populated asynchronously as early exit outputs are received by the client. If the client's device has multiple available threads, it can process several early exits in parallel to generate more pre-draft tokens. All pre-draft tokens are stored in the pre-draft cache. Once the client receives the final exit output, it checks the pre-draft cache for the corresponding tokens. If there is a cache hit, the pre-drafted tokens are sent to the server immediately for the next verification round.

Algorithm 1 Client-Side Algorithm
1:  Initialize: Draft model M_p, Queue Q_p, Cache C, RECEIVER
2:  Input: Prefix x_{1:t}, # Total Tokens T, # Draft tokens Ξ³
3:  Output: Final tokens x_{t+1:T}
4:  for i = 1 to Ξ³ do
5:    x_{t+i}, p_i ← DRAFT(M_p, x_{1:t+iβˆ’1})
6:  end for
7:  SEND(x_{1:t+Ξ³}, p_{1:Ξ³})
8:  while t < T do
9:    while Q_p not empty do
10:     x'_{t+1:t+Ξ΄'+1}, s' ← Q_p.pop()
11:     if x'_{t+1:t+Ξ΄'+1} not in C then
12:       C[x'_{t+1:t+Ξ΄'+1}] ← PREDRAFT(x_{1:t}, x'_{t+1:t+Ξ΄'+1})
13:     end if
14:   end while
15:   y_{1:t'} ← Concat(x_{1:t}, x_{t+1:t+Ξ΄+1})
16:   if x_{t+1:t+Ξ΄+1} ∈ C then
17:     y_{t':t'+Ξ³}, p_{1:Ξ³} ← C[x_{t+1:t+Ξ΄+1}]
18:   else
19:     for i = 1 to Ξ³ do
20:       y_{t'+i}, p_i ← DRAFT(M_p, y_{1:t'+iβˆ’1})
21:     end for
22:   end if
23:   SEND(y_{t':t'+Ξ³}, p_{1:Ξ³})
24:   t ← t', x ← y, C.reset(), Q_p.reset()
25: end while
26: function PREDRAFT
27:   Input: Prefix x_{1:t}, Tokens x'_{t+1:t+Ξ΄'+1}
28:   y_{1:t'} ← Concat(x_{1:t}, x'_{t+1:t+Ξ΄'+1})
29:   for i = 1 to Ξ³ do
30:     y_{t'+i}, p'_i ← DRAFT(M_p, x'_{1:t'+iβˆ’1})
31:   end for
32:   return y_{t':t'+Ξ³}, p'_{1:Ξ³}
33: end function
34: function RECEIVER
35:   Input: Tokens x'_{t+1:t+Ξ΄'+1}, Priority s', isfinalexit
36:   if isfinalexit then
37:     x_{t+1:t+Ξ΄+1} ← x'_{t+1:t+Ξ΄'+1}
38:   else
39:     Q_p.push(x'_{t+1:t+Ξ΄'+1}, s')
40:   end if
41: end function

Server: As detailed in Algorithm 2, the server consists of two asynchronous threads: LISTENER and SENDER. The LISTENER processes the verification requests from the client. It takes in the prefix x_{1:t}, draft tokens x_{t:t+Ξ³}, and their corresponding probability distribution p_{1:Ξ³}. As shown in Fig. 2, communication typically becomes the bottleneck on the server, as early exit outputs are produced faster than the network can transmit. To handle this, early exit outputs are placed in a queue on the server side and transmitted sequentially. Let Q_e represent the server-side queue storing the early exit outputs:

Q_e = \left\{ x^{(1)}_{t:t+\delta^{(1)}+1}, \ldots, x^{(L)}_{t:t+\delta^{(L)}+1} \right\} \quad (8)

Asynchronously, the SENDER thread sends the early exit outputs from the queue based on priority, determined by the confidence score (Eq. 4).

Algorithm 2 Server-Side Algorithm
1:  Initialize: Target Model M_q, Verification criterion VERIFY, Queue Q_e, LISTENER, SENDER
2:  function LISTENER
3:    Input: Prefix and draft tokens x_{1:t+Ξ³}, Probs. p_{1:Ξ³}
4:    x^{(1:L)}_{t+1:t+Ξ΄+1}, q^{(1:L)}_{1:Ξ΄+1} ← VERIFY(M_q, x_{1:t+Ξ³}, p_{1:Ξ³})
5:    for all i = 1, ..., Lβˆ’1 do
6:      s^{(i)} ← max(q^{(i)}_{1:Ξ΄+1})
7:      Q_e.push(x^{(i)}_{t+1:t+Ξ΄+1}, s^{(i)})
8:    end for
9:    SEND(x^{(L)}_{t+1:t+Ξ΄+1}, isfinalexit = True)
10:   Q_e.reset()
11: end function
12: function SENDER
13:   while Q_e not empty do
14:     (x'_{t+1:t+Ξ΄+1}, s') ← Q_e.pop()
15:     SEND(x'_{t+1:t+Ξ΄+1}, s', isfinalexit = False)
16:   end while
17: end function

B Speedup Projection Analysis

One of the bottlenecks in our system is communication between the devices. In addition to directly adding to the latency, it also bottlenecks the number of early exit verifications communicated back to the client. This increases the cache miss rate, further increasing the latency. While this is a challenge at present due to limited communication network capabilities, several works have shown the vision of having edge LLMs on 6G networks with a projected network speed of up to 10 Tbps Banafaa et al. (2023); Lin et al. (2023); Xu et al. (2024b); Friha et al. (2024); Qu et al. (2024); Zhang et al. (2024). In scenarios where the communication latency T_c is negligible relative to drafting and verification time, and ignoring the thread synchronization latency (T_r) for simplicity, we can approximate the speedup SDβ†’FSD as:

\text{Speedup}(\text{SD} \to \text{FSD}) = \frac{\gamma c + 1}{r(\gamma c) + 1} \quad (9)

This formula reduces the final speedup to two factors: the cache miss rate r and the latency ratio c. The cache miss rate r depends on the redundancy in the target model and how well the early exit adapters are trained. On the other hand, c is highly dependent on the compute capability of the edge device. Since edge devices are often slower, this pushes c higher.

Figure 6: Estimated speedup per round of our FSD relative to the vanilla SD method, assuming no communication latency. We emphasize the contours for speedups of 1.1x, 1.2x, 1.5x, 2x, and 5x, and we indicate the position of our models within this landscape. Additionally, we highlight the operating range for the larger models OPT-66B and Llama-65B.

We visualize Eq. 9 as a heatmap in Fig. 6 for Ξ³ values 4, 7, and 10. For reference, we plot the measured c and r values based on the Jetson implementation of our current set of models within this landscape. Naturally, a lower r will improve speedup, but the usefulness of our FSD method becomes more pronounced at higher c and Ξ³. For model sets with a latency ratio greater than 0.5 and well-trained early exit adapters that achieve a cache miss rate of less than 10%, we can anticipate a speedup of over 5x. Extending our analysis to larger models, specifically OPT-66B and Llama-65B with draft models OPT-125M and NoFT-Wide-796M, we use reported latencies from Yan et al. (2024) (6.6 ms for draft, 67 ms for target) and factor in a 3x slowdown on Jetson, arriving at c β‰ˆ 0.3. This value is illustrated by the yellow line in Fig. 6. For instance, to achieve a 2x speedup with Ξ³ values of 4, 7, and 10, the cache miss rate must remain below 10%, 25%, and 35%, respectively.
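The projection in Eq. 9 is straightforward to evaluate numerically. The following is a small sketch over an illustrative grid of cache miss rates r and latency ratios c (the grid values are chosen for illustration; this is not the paper's plotting code).

```python
# Sketch of the Eq. 9 projection:
# Speedup(SD -> FSD) = (gamma * c + 1) / (r * gamma * c + 1)
def fsd_speedup(gamma, c, r):
    return (gamma * c + 1) / (r * gamma * c + 1)

for gamma in (4, 7, 10):
    print(f"gamma = {gamma}")
    for c in (0.1, 0.3, 0.5):
        row = ", ".join(f"r={r:.1f}: {fsd_speedup(gamma, c, r):.2f}x"
                        for r in (0.1, 0.3, 0.5))
        print(f"  c={c:.1f} -> {row}")
```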
C Batch Processing

Table 9 shows the latency analysis for Vicuna-7B (A100) and Vicuna-68M (Jetson Nano). Batch processing improves throughput but not always latency; e.g., a batch size of 32 increases A100 latency by over 5x. However, API providers often offer discounts for batch processing (e.g., OpenAI provides a 50% discount, OpenAI Pricing), making it a cost-saving approach. On the client, batch processing shows a smaller latency increase: batch sizes of 1 and 8 differ by 15%. A batched pre-drafting approach could reduce latency but requires waiting to accumulate multiple early exits, introducing a trade-off.

Table 9: Server and client latency for different batch sizes.

Batch Size   Server Latency (Vicuna-7B, A100)   Client Latency (Vicuna-68M, Jetson)
1            17.94                              7.76
2            21.75                              8.02
3            24.35                              7.57
4            24.63                              7.87
8            30.73                              8.97
12           44.52                              9.71
16           45.79                              10.63
32           96.94                              14.49
64           258.79                             OOM
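As a quick sanity check on the latency/throughput trade-off in Table 9, the following sketch derives relative throughput from the reported server latencies; only the ratios of the latencies matter, so their time unit is irrelevant here.

```python
# Relative latency and throughput from Table 9's server column (Vicuna-7B, A100).
# Throughput is batch_size / latency, normalized to batch size 1.
server_latency = {1: 17.94, 2: 21.75, 4: 24.63, 8: 30.73,
                  16: 45.79, 32: 96.94, 64: 258.79}

base = server_latency[1]
for bs, lat in server_latency.items():
    throughput_gain = (bs / lat) * base
    print(f"batch={bs:2d}  latency x{lat / base:4.1f}  throughput x{throughput_gain:4.1f}")
```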
arXiv:2505.21595v1 [cs.LG] 27 May 2025

Relevance-driven Input Dropout: an Explanation-guided Regularization Technique

Shreyas Gururaj 1,2, Lars GrΓΌne 2, Wojciech Samek 1,3,4, Sebastian Lapuschkin 1,5,†, Leander Weber 1,†
1 Fraunhofer Heinrich Hertz Institute, Department of Artificial Intelligence, Berlin, Germany
2 Lehrstuhl fΓΌr Angewandte Mathematik, University of Bayreuth, Bayreuth, Germany
3 Technische UniversitΓ€t Berlin, Berlin, Germany
4 BIFOLD – Berlin Institute for the Foundations of Learning and Data, Berlin, Germany
5 Centre of eXplainable Artificial Intelligence, Technological University Dublin, Dublin, Ireland
† Corresponding: {sebastian.lapuschkin,leander.weber}@hhi.fraunhofer.de

Abstract

Overfitting is a well-known issue extending even to state-of-the-art (SOTA) Machine Learning (ML) models, resulting in reduced generalization and a significant train-test performance gap. Mitigation measures include a combination of dropout, data augmentation, weight decay, and other regularization techniques. Among the various data augmentation strategies, occlusion is a prominent technique that typically focuses on randomly masking regions of the input during training. Most of the existing literature emphasizes randomness in selecting and modifying the input features instead of regions that strongly influence model decisions. We propose Relevance-driven Input Dropout (RelDrop), a novel data augmentation method which selectively occludes the most relevant regions of the input, nudging the model to use other important features in the prediction process, thus improving model generalization through informed regularization. We further conduct qualitative and quantitative analyses to study how RelDrop affects model decision-making. Through a series of experiments on benchmark datasets, we demonstrate that our approach improves robustness towards occlusion, results in models utilizing more features within the region of interest, and boosts inference-time generalization performance. Our code is available at https://github.com/Shreyas-Gururaj/LRP_Relevance_Dropout.

1 Introduction

Deep learning models have achieved remarkable success in recent years, demonstrating superior performance in areas such as highly accurate classification [17], writing of complex text [51], or assisting in medical diagnosis [12]. Training such models requires vast amounts of (high-quality) data, which may be unavailable in real-world applications due to high annotation costs. Consequently, overfitting is a common occurrence, where biases or regularities specific to the training set are reflected by the model, resulting in bad generalization to unseen examples. However, as acquiring more training data tends to be difficult and expensive, eliminating such regularities by including additional (counter-)examples for training is generally infeasible.

For this reason, data augmentation techniques are commonly employed to mitigate overfitting. These techniques introduce variations in training data, thereby artificially enlarging the available data and effectively regularizing the model to learn more robust representations. However, traditional data augmentation techniques are either computationally demanding [15], subject to the same regularities as the training data [11], or based on random perturbations [58,34,77,67,74,79], which may not align with the parts of the data that are problematic in terms of overfitting. At worst, models may overfit on the same features as on unaugmented data if these features are perturbed as frequently as non-problematic features.

Preprint. Under review.

Figure 1: Schematic overview of the RelDrop workflow, illustrated for image data. As shown by the relevance heatmaps, Panel (a) depicts an Unregularized Model that, without data augmentation, overfits by heavily depending on a few highly relevant features. To nudge the model into learning from a larger set of informative features, RelDrop repeatedly computes attributions during training, and then augments the data by selectively masking the (currently) most relevant parts of the input (Panel (b)). The impact of training with RelDrop is clear in Panel (c), where heatmaps at the bottom reflect how the Regularized Model utilizes a larger set of features to make predictions.

Here, techniques from the domain of eXplainable artificial intelligence (XAI) offer a solution by providing importance scores that offer insights into which features contribute to model predictions. Previous works have successfully applied these importance scores to improve otherwise random regularization techniques within the model [82,72]. Consequently, leveraging the information provided by attributions in the context of data augmentation by considering the importance of different input regions in a more principled manner seems promising. We therefore introduce Relevance-driven Input Dropout (RelDrop), a novel XAI-guided technique that augments data by masking (currently) relevant input features. Our method can be efficiently applied during training, requiring only one additional backward pass per batch. We demonstrate its effectiveness in the context of image and point cloud classification and validate the increased robustness and generalization ability of the resulting model. In summary, we make the following contributions:

β€’ We introduce RelDrop, a novel data augmentation technique that utilizes information from attributions to mask input features in a principled manner.
β€’ We discuss the advantages, limitations, and caveats of utilizing XAI to guide input perturbations.
β€’ We validate empirically that RelDrop increases the model's ability to generalize to unseen data in both image and point cloud classification domains, compared to the baselines of no input data augmentation or random input data augmentation.
β€’ We investigate the effects of RelDrop on model robustness qualitatively and quantitatively, demonstrating that it leads to the models utilizing more features for inference and growing more resilient against feature removal, effectively counteracting overfitting.

2 Related Work

Regularization in the context of a Deep Neural Network (DNN) refers to the various techniques that aim to improve model generalization ability by reducing overfitting. The most common regularization techniques can be broadly classified into dropout [29,59,42,64,5,76,70], data augmentation [34,79,74,58,77,72], weight decay [40,81,43], and noise injection [13,48,50]. For any mapping function to be considered generalizing, it must not only map the training samples precisely, but also any additional samples from the (source) data distribution [36]. Geometric transformations, such as random flipping, rotation, and cropping, are often applied during training [39,25]. More recent and advanced techniques such as Random Erasing (RE) [79], Mixup [77], CutMix [74], Hide-and-Seek [58], and PatchShuffle [34] are a class of random occlusion-based data augmentations applied to inputs along with their labels. However, models regularized by the above methods may overfit on the same features, if the randomly occluded features are not (or rarely, over the training process) the ones being overfit on.
For this reason, we propose to leverage XAI attributions as a signal to guide data augmentation, as they provide a more informed approach to examine how these augmentations affect model predictions.

Originally, the field of XAI research aims to reveal the prediction mechanisms underlying otherwise black-box models. Post-hoc XAI techniques seek to explain already trained models and can be broadly classified into two types: global explanations, which are overall insights into model behavior (e.g., discovery of encoded concepts or learned representations [8,35,31,30,1,62,32,22,10,37,18]), and local explanations, which provide insights into individual predictions by assigning importance to input features, e.g., via attribution scores. Sampling-based techniques [60,54,80,20,44] are computationally intensive but treat the model as a black box. In contrast, backpropagation-based methods [7,6,45,57,61] require internal access to model parameters but are efficient, needing only a single forward and backward pass. Alongside providing human-understandable input-level explanations, several backpropagation-based XAI methods provide intermediate importance scores for internal activations or weights. Consequently, they have been employed for applications such as pruning, quantization, and freezing of network components [63,55,73,9,24,19], or as an auxiliary training signal for gradient masking [41,49] and credit assignment [66]. Among these, some approaches [82,72] leverage XAI to control information flow in intermediate layers via explanation-based filtering. RelDrop is related to these (due to certain equivalencies between dropout and data augmentation, cf. [78,36,75]), but is applied directly to the input, targeting the most salient regions and selectively masking them. Consequently, any information occluded by RelDrop is completely removed from the forward pass (as opposed to occlusions in feature space, which allow the occluded information to remain available to the network via a different path).

3 RelDrop

This section introduces RelDrop and discusses how it is used for 2D image classification and 3D point cloud classification tasks. Our approach draws inspiration from data augmentation techniques based on random occlusion methods for the 2D image [79,77,74,58,34] and 3D point cloud [52,53,65,23] domains, respectively. These methods generally augment random input regions without taking into account how the augmented features affect model predictions. Therefore, augmenting a particular feature that the model has overfitted on is as likely as augmenting a particular feature that the model is invariant to. In comparison, RelDrop, as shown schematically in Figure 1, masks input regions based on the respective attribution maps and serves as an input regularizer that mitigates overfitting by occluding features that are currently relevant to the model prediction. For this purpose, we utilize an attribution function R that assigns importance scores to all features of the input I. The most important features, as indicated by the attribution map, are consequently masked. We define the RelDrop input augmentation as follows:

I_{\text{RelDrop}} = M_R \odot I + (1 - M_R) \cdot s \quad (1)

Here, βŠ™ denotes element-wise multiplication, I is the original input, and M_R is a binary mask that occludes input regions based on the attributions R, which assign importance scores to input features.
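In code, Eq. 1 amounts to a single masked blend. The following is a minimal PyTorch-style sketch for illustration; it is not the authors' released implementation, and the mask construction is covered in the following sections.

```python
import torch

# Minimal sketch of Eq. 1: keep the input where the binary mask is 1,
# replace it by the value s where the mask is 0.
def reldrop_augment(x: torch.Tensor, mask: torch.Tensor, s: float) -> torch.Tensor:
    """x: input tensor, mask: binary tensor of the same shape (1 = keep),
    s: replacement value (e.g., dataset mean or zero)."""
    return mask * x + (1 - mask) * s

x = torch.randn(3, 224, 224)       # an RGB image
mask = torch.ones_like(x)
mask[:, 100:140, 80:120] = 0       # occlude a block, as in Section 3.1
x_aug = reldrop_augment(x, mask, s=0.0)
```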
In practice, we use Layer-wise Relevance Propagation (LRP) [6] for attribution, due to its pixel- or point-level granularity, computational efficiency, and prior success in several similar applications [73,19,66]. The replacement value s for all the masked features (e.g., dataset mean or zero) can vary with the data modality. Equation 1 is a general formulation of RelDrop that can be adapted to different data modalities, such as 2D images and 3D point clouds.

To balance performance and regularization, controlling how and when input features are masked is important. We hypothesize that persistent occlusion of the features that a model has learned to be most informative (i.e., that have the largest attribution scores) may be detrimental to convergence. This hypothesis aligns with previous work [72], where the authors note that aggressive removal of high-attribution positions negatively impacts convergence. We therefore implement a balanced masking strategy that only drops high-attribution features with a large probability, as this strongly encourages the model to utilize other features while preserving the core information needed for stable optimization. Furthermore, unlike dropout [59] of internal activations, where the (complete) input is still encoded by preceding layers and alternate paths, masking input features removes information entirely, making it unavailable to the model. Excessive masking in input space (e.g., with values common for dropout, such as 50%) may therefore cause the model to train on an insufficient signal, degrading performance.

For the above reasons, we design the binary mask function M_R to retain partial randomness and control the proportion of input feature replacement via tunable hyperparameters, ensuring not all relevant information is masked. For 2D images, we utilize the same hyperparameters as RE [79]: the dropout probability p to control the augmentation frequency, and the occlusion area S_O combined with the aspect ratio r_O to affect the occluded area. For 3D point clouds, we introduce two similar hyperparameters: Ξ± for the proportion of attribution-guided vs. randomly applied augmentation, and Ξ² for the overall fraction of points replaced. The exact use of these hyperparameters is detailed in the following sections.

3.1 2D Image Classification

For 2D image classification, the input is an image I_{2d} ∈ R^{HΓ—WΓ—C}, where H and W are the height and width of the image, respectively, and C the number of channels (e.g., C = 3 for RGB images). In this setting, we extend RE [79] by centering the occlusion region around the most important pixel, as determined by the normalized attribution R^{2d}_{norm} ∈ R^{HΓ—W}, instead of randomly selecting it. Let S = H Γ— W be the area of the image, and (x, y) denote the height and width coordinates of a pixel of I_{2d}. The normalized relevance map R^{2d}_{norm} is obtained by summing channel-wise over the original map R ∈ R^{CΓ—HΓ—W} and then normalizing the result to be interpretable as input dropout probabilities, bounded in [0, 1]:

R^{2d}(H, W) = \sum_{c=1}^{C} R(c, H, W) \quad (2)

R^{2d}_{\text{norm}} = \frac{R^{2d}(H, W) + \max|R^{2d}(H, W)|}{2 \cdot \max(|R^{2d}(H, W)|)} \in [0, 1] \quad (3)

We then select the centroid pixel to be the most relevant pixel:

(x_{\text{cen}}, y_{\text{cen}}) = \arg\max_{(x, y)} R^{2d}_{\text{norm}} \quad (4)

Similar to Zhong et al. [79], we occlude a rectangular block around (x_{cen}, y_{cen}), with dimensions H_O and W_O computed as:

H_O = \sqrt{S_O \cdot r_O}, \quad W_O = \sqrt{S_O / r_O}, \quad \text{where } S_O = U(S_{\text{low}}, S_{\text{high}}) \cdot S, \; r_O = U(r_{\text{low}}, 1/r_{\text{low}}) \quad (5)

Here, U(Β·) represents a random uniform distribution defined by the hyperparameters S_low, S_high, and r_low. S_O denotes the area of the occlusion region (i.e., the number of pixels to be masked), and r_O is the aspect ratio, determining the relative dimensions of the height and width of the rectangular block. The occlusion region O is then determined by width W_O and height H_O:

O = \left\{ (x, y) \,\middle|\, x \in \left[ x_{\text{cen}} - \tfrac{W_O}{2},\, x_{\text{cen}} + \tfrac{W_O}{2} \right],\; y \in \left[ y_{\text{cen}} - \tfrac{H_O}{2},\, y_{\text{cen}} + \tfrac{H_O}{2} \right] \right\} \quad (6)

Since only pixels in O are occluded, the occlusion mask is defined as follows:

M^{2d}_R(x, y) = \begin{cases} 0, & \text{if } (x, y) \in O \\ 1, & \text{otherwise} \end{cases} \quad (7)

Finally, we apply M^{2d}_R to the input image according to Equation (1), choosing the channel-wise dataset mean Β΅ as the replacement value s. A pseudo-algorithm inspired by RE [79], which depicts the application of RelDrop to an input image, is shown in Appendix A.1.
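A compact sketch of the 2D procedure (Eqs. 2-7) follows, written in PyTorch for illustration; the hyperparameter names mirror the text, and the attribution map R is assumed to be precomputed (e.g., with LRP).

```python
import math
import torch

def reldrop_2d(x, R, s, S_low=0.02, S_high=0.4, r_low=0.3):
    """Illustrative sketch of Eqs. 2-7. x: (C, H, W) image, R: (C, H, W)
    attribution map (assumed precomputed), s: replacement value
    (e.g., channel-wise dataset mean)."""
    C, H, W = x.shape
    R2d = R.sum(dim=0)                                          # Eq. 2
    R2d_norm = (R2d + R2d.abs().max()) / (2 * R2d.abs().max())  # Eq. 3
    idx = torch.argmax(R2d_norm)                                # Eq. 4
    y_cen, x_cen = divmod(idx.item(), W)                        # (row, col)

    S_O = float(torch.empty(1).uniform_(S_low, S_high)) * H * W  # Eq. 5
    r_O = float(torch.empty(1).uniform_(r_low, 1 / r_low))
    H_O, W_O = int(math.sqrt(S_O * r_O)), int(math.sqrt(S_O / r_O))

    # Eqs. 6-7: build the binary mask and occlude the block around the centroid.
    mask = torch.ones_like(x)
    y0, y1 = max(0, y_cen - H_O // 2), min(H, y_cen + H_O // 2)
    x0, x1 = max(0, x_cen - W_O // 2), min(W, x_cen + W_O // 2)
    mask[:, y0:y1, x0:x1] = 0
    return mask * x + (1 - mask) * s                            # Eq. 1
```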
3.2 3D Point Cloud Classification

In this setting, an input consists of a point cloud I_{3d} = {(x_i, y_i, z_i) | i ∈ {1, ..., N}}, i.e., a collection of N points (x_i, y_i, z_i) ∈ R^3 encoding spatial coordinates. Similarly, an importance score is attributed per point as a vector R^{3d} ∈ R^N. In contrast to 2D image data, where we adopted block-level operations for input occlusion, we instead replace individual points from I_{3d} with the origin, setting them to (0, 0, 0) (cf. Equation (1): s = (0, 0, 0)) in the 3D setting. This nullifies their influence on the prediction, as the PointNet++ model [53] is designed to operate on a metric loss and to generate a fixed-size embedding for a given input point cloud of fixed size N. Additionally, we explicitly balance random and attribution-based modifications to prevent the complete removal of all the important features necessary for making a prediction. For this purpose, we introduce parameters Ξ±, Ξ² ∈ [0, 1], where

β€’ Ξ± determines the proportion of random to attribution-guided input data augmentation, and
β€’ Ξ² controls the total percentage of occluded points.

To compute the mask M^{3d}_R, we again normalize the intermediate relevance to be interpretable as input dropout probabilities:

R^{3d}_{\text{norm}} = \frac{R^{3d} - \min(R^{3d})}{\max(R^{3d}) - \min(R^{3d})} \in [0, 1] \quad (8)

Using this R^{3d}_{norm}, the mask is then generated as follows:

M^{3d}_R = \begin{cases} 0, & \text{if } \alpha \vec{v} + (1 - \alpha) R^{3d}_{\text{norm}} \geq (1 - \beta) \\ 1, & \text{otherwise} \end{cases} \quad (9)

where v⃗ ∈ [0.0, 1.0]^N is a 1D vector of size N that represents random masking, sampled from a uniform distribution. Finally, we apply the mask to the input point cloud by setting the points corresponding to M^{3d}_R = 0 to the origin.
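The 3D variant (Eqs. 8-9) reduces to a per-point thresholded mixture of a uniform random vector and the normalized relevance. A hedged sketch, again in PyTorch and not the authors' released code:

```python
import torch

def reldrop_3d(points, R3d, alpha=0.5, beta=0.15):
    """Illustrative sketch of Eqs. 8-9. points: (N, 3) cloud, R3d: (N,)
    per-point relevance (assumed precomputed). Points whose blended score
    reaches (1 - beta) are replaced by the origin."""
    R_norm = (R3d - R3d.min()) / (R3d.max() - R3d.min())   # Eq. 8
    v = torch.rand_like(R_norm)                            # random component
    score = alpha * v + (1 - alpha) * R_norm               # Eq. 9 blend
    keep = (score < (1 - beta)).float().unsqueeze(1)       # mask: 1 = keep
    return points * keep                                   # dropped points -> (0, 0, 0)

cloud = torch.randn(1024, 3)
relevance = torch.rand(1024)   # stand-in for an LRP attribution
augmented = reldrop_3d(cloud, relevance, alpha=0.5, beta=0.15)
```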
4 Results

In this section, we study the effects of applying RelDrop to 2D image classification and 3D point cloud classification tasks, respectively. Refer to Appendices A.2, A.3, and A.4 for details on the chosen benchmark datasets, evaluation metrics, experimental setup, and hyperparameters.

4.1 2D Image Classification

We first investigate the effect of RelDrop on model performance in the setting of image classification by comparing the test accuracies of RelDrop-trained models to those of RE-trained models and a baseline without data augmentation. We apply block-level occlusions similar to RE (cf. the RelDrop formulation from Section 3.1), and use the pre-trained weights downloaded from the PyTorch Image Models (timm) library [68] as a starting point, which is further finetuned with the different augmentation strategies. We finetune the models for 100 epochs on CIFAR-10/100 and 50 epochs on ImageNet-1k, respectively. For RelDrop and RE, we set the dropout probability to p = 0.5, as recommended by the authors of RE [79].

As shown in Table 1 (Blue Columns), models trained with RelDrop consistently outperform both RE and the baseline. Our proposed approach further improves the test performance over RE by almost doubling the gain over the baseline in all the considered models and datasets. The average margin of improvement across all the considered models over the baseline is 0.64%, 0.89%, and 0.93% for the CIFAR-10, CIFAR-100, and ImageNet-1k datasets, respectively. This indicates that the regularization effect of RelDrop results in consistent improvement in the model's generalization ability compared to both RE and the baseline.
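For concreteness, the training-time application described above (augmenting each batch with probability p = 0.5 using the model's current attributions) can be sketched as follows. This is a hedged illustration: compute_lrp is a hypothetical stand-in for any LRP implementation, and reldrop_2d refers to the earlier sketch from Section 3.1, not the authors' script.

```python
import random
import torch

def train_epoch(model, loader, optimizer, criterion, p=0.5, s=0.0):
    for images, labels in loader:
        if random.random() < p:
            # One extra backward pass for attributions; compute_lrp is a
            # hypothetical helper standing in for an LRP implementation.
            R = compute_lrp(model, images, labels)
            images = torch.stack([reldrop_2d(x, r, s)
                                  for x, r in zip(images, R)])
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```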
Table 1: Effects of RelDrop on ResNet generalization ability. We evaluate the test accuracies (%, on the test and validation sets, respectively) of models trained on the CIFAR-10/100 and ImageNet-1k datasets (Finetuning) and the zero-shot test accuracy (%) of ImageNet-1k trained models on the ImageNet-R [27], ImageNet-A, and ImageNet-O [28] datasets (Zero-shot). During training, we employ different input augmentation schemes, where RE and RelDrop (ours) represent RE [79] and our method, respectively, and baseline denotes an augmentation-free setting. For CIFAR-10/100, the mean and standard deviation over 3 different random seeds are reported; for ImageNet-1k, results are reported for a single randomly chosen seed. Values highlighted in bold represent the best-performing model for the respective model and dataset. The Mean Relevance Rank Accuracy (RRA) (%) [4], computed over 12,419 test samples from the ImageNet-S validation set along with their respective segmentation maps [21], is shown in the rightmost column (Mean RRA (%)). RelDrop improves upon the estimated model generalization ability in all investigated settings compared to RE and the baseline, and increases Mean RRA except for ResNet-101.

Model / Augmentation        CIFAR-10     CIFAR-100    ImageNet-1k  ImageNet-R  ImageNet-A  ImageNet-O  Mean RRA (ImageNet-S)
ResNet-18  baseline         94.98Β±0.08   76.80Β±0.19   71.39        32.10       1.67        15.66       59.68
ResNet-18  RE [79]          95.27Β±0.07   76.90Β±0.24   71.54        32.34       1.72        15.57       59.72
ResNet-18  RelDrop (Ours)   95.70Β±0.04   77.41Β±0.14   71.91        33.15       1.91        15.67       60.02
ResNet-34  baseline         95.63Β±0.13   78.84Β±0.31   76.84        36.18       3.95        16.94       58.83
ResNet-34  RE [79]          96.10Β±0.07   79.33Β±0.27   77.04        36.07       3.97        16.95       59.19
ResNet-34  RelDrop (Ours)   96.30Β±0.09   79.56Β±0.19   77.64        37.17       4.05        17.01       59.54
ResNet-50  baseline         95.41Β±0.12   79.01Β±0.35   79.19        37.85       8.37        19.32       61.74
ResNet-50  RE [79]          95.63Β±0.14   79.72Β±0.42   79.45        37.88       8.69        19.22       62.23
ResNet-50  RelDrop (Ours)   95.93Β±0.11   80.35Β±0.31   79.94        38.65       9.17        19.37       62.47
ResNet-101 baseline         -            -            80.84        42.31       16.61       22.39       56.89
ResNet-101 RE [79]          -            -            81.18        42.14       17.02       22.45       57.91
ResNet-101 RelDrop (Ours)   -            -            81.74        42.75       18.08       22.66       57.72

Figure 2: Regularization effect of RelDrop on ResNet50 overfitting. The curves show the augmentation-free baseline (baseline), RE (RE), and two variations of RelDrop (RelDrop), with attribution hyperparameters Ξ΅ = 0.8 (performing worst) and Ξ΅ = 0.001 (performing best) on the CIFAR-100 dataset. A moving average (more solid lines) with window size 5 is visualized along with the raw data for both the training (left) and test (right) curves. The RelDrop-trained model generalizes better to the test data compared to both RE and the baseline. Note that the y-axis scales of the training (left) and test (right) figures are adjusted to the respective ranges of training and test accuracies.

Investigating this effect in more detail in Figure 2, we observe that RelDrop-trained models have the smallest difference between training and test accuracies, indicating reduced overfitting, while still converging similarly to RE. While the stabilizer hyperparameter Ξ΅ introduced by LRP affects these results (we investigated Ξ΅ ∈ {0.1, 0.01, 0.001, 0.2, 0.4, 0.8, 1.0, 2.0}; the best and worst are shown in the figure), the worst-case performance remains close to RE.

4.1.1 Generalization in Zero-Shot Applications

While these consistent test performance improvements (cf. Table 1 (Blue Columns)) suggest that RelDrop-trained models generalize better to the test (or validation) data, a better metric to gauge true generalization is to validate whether these improvements transfer to zero-shot evaluations on datasets with different distributions. To this end, the best states of all the models finetuned on ImageNet-1k for 50 epochs while applying both RE and RelDrop are used as a starting point for zero-shot classification of ImageNet-R [27], ImageNet-A, and ImageNet-O [28] test samples, and the respective results are reported in Table 1 (Green Columns).

These variations of the ImageNet-1k dataset are specifically designed to introduce adversarial samples, distribution shifts, harder-to-recognize objects, and scenarios where models rely on spurious correlations. Evaluating zero-shot test accuracy on these datasets provides a comprehensive assessment of the model's robustness, generalization ability, and resistance to distribution shifts. It also signifies the model's ability to capture semantically meaningful features, mitigate reliance on spurious correlations, and ensure well-calibrated confidence scores when encountering out-of-distribution samples. The performance improvement on the ImageNet-1k test data is also observed on the test sets of these ImageNet-1k variants when predicting classes in a zero-shot fashion.

Figure 3: Qualitative effects of RelDrop on model decision-making for the classes "Mailbox", "Mobile home", "Fly", "Ground beetle", and "Dhole". LRP attributions are visualized (deeper red indicates higher importance). Columns show, from left to right, the ImageNet pre-trained ResNet50 model, the baseline finetuned without input augmentation, and the models finetuned with RE and RelDrop, respectively. The class labels for the input images are shown on the left. We observe increased reliance on within-object features for the RelDrop-finetuned model. Although this effect does not hold for all samples equally (last row), we confirm an overall improvement via the quantitative evaluation of Relevance Rank Accuracy (RRA) in Table 1.

As shown in Table 1 (Green Columns) and the qualitative results in Figure 3, RelDrop encourages the model to learn more robust, albeit not fundamentally different, features across the entire training set, leading to more effective knowledge transfer. RelDrop produces an average relative improvement over the baseline, considered across all the models, of 2.88%, 8.83%, and 0.63% for the ImageNet-R, ImageNet-A, and ImageNet-O test datasets, respectively (cf. also Appendix A.5.1, which indicates that RelDrop enables the models to learn semantically richer representations).
Although these improvements may seem small, they are consistent, with RelDrop improving the model's zero-shot capabilities over all the considered datasets, as opposed to RE.

In the previous paragraphs, we investigated the effects of RelDrop on (estimated) model generalization ability. However, RelDrop functions by removing the input features that are (currently) most relevant to a model's decision-making. As such, while we expect model decisions to be based on a larger number of features after applying RelDrop, there is no guarantee that these features are part of the object of interest itself. In the worst case, this could lead to models that rely on spurious correlations or context rather than the object itself.

We therefore assess whether RelDrop-trained models utilize the object of interest for their decision-making by evaluating attribution RRA [4]. This metric measures the percentage of top positive attributions within a given Ground Truth (GT) mask for each input. Using ImageNet-1k finetuned models with different masking strategies, we compute the Mean RRA over 12,419 test samples found in the validation set of ImageNet-S along with their respective segmentation maps [21], which we employ as GT masks. We evaluate the baseline-finetuned models and the best-performing models with RE and our method, respectively. The results are included in Table 1 (Orange Columns) and show that RelDrop preserves a mean RRA similar to the baseline and RE, indicating that no detrimental change in decision-making occurs. Interestingly, we observe that RelDrop even improves RRA across all ResNet models except ResNet-101, implying less reliance on background features. We hypothesize that by occluding the most relevant features, RelDrop mitigates the influence of confounders, especially early on during training, while forcing the model to learn more features of the object of interest over several epochs and multiple augmentations.

Additionally, we evaluate the differences in decision-making qualitatively in Figure 3. Here, we visualize heatmaps comparing the relevance distributions of the original ImageNet-1k pre-trained model, the finetuned baseline (finetuned without data augmentation), and the models finetuned while applying RE and RelDrop. The first four rows show improved feature distribution for the classes "Mailbox", "Mobile home", "Fly", and "Ground beetle", while the fifth row highlights an example of "Dhole" where our method fails to improve the feature distribution. While the improved decision-making does not seem to hold for every example, the mean RRA quantifies this effect on a dataset level, demonstrating the overall effectiveness of our approach.
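RRA as described above (the fraction of the top-k positive attributions that fall inside the GT mask, with k the mask size) can be sketched as follows; this is an illustrative implementation of the metric's definition, not the authors' evaluation script.

```python
import torch

def relevance_rank_accuracy(attribution: torch.Tensor, gt_mask: torch.Tensor) -> float:
    """Illustrative RRA sketch: take the k highest-attributed pixels, where k is
    the number of pixels in the ground-truth mask, and measure the fraction of
    them that lie inside the mask."""
    k = int(gt_mask.sum())
    top_idx = torch.topk(attribution.flatten(), k).indices
    return gt_mask.flatten()[top_idx].float().mean().item()

attr = torch.rand(224, 224)      # stand-in attribution map
mask = torch.zeros(224, 224)
mask[60:160, 60:160] = 1         # stand-in segmentation mask
print(relevance_rank_accuracy(attr, mask))
```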
4.2 3D Point Cloud Classification

After demonstrating the efficacy of RelDrop for improving model generalization ability in 2D image classification, we investigate our method's applicability to 3D point cloud classification in the following. For this purpose, we consider the PointNet++ architecture [53], as well as the ModelNet40 and ShapeNet [69, 14] benchmark datasets. Unlike in 2D image classification, where block-level occlusions are applied, we augment individual points here (cf. the RelDrop formulation from Section 3.2) and train models from scratch for 50 epochs instead of fine-tuning.

The results of this experiment are shown in Table 2. Here, the models trained with an equal combination of random and RelDrop, i.e., α=0.5, consistently outperform both the baseline and the models utilizing fully random input data augmentation (α=1.0). For both the ModelNet40 and the ShapeNet datasets, we observe the parameter configuration α=0.5 and β=0.15 to perform best. Compared to the baseline, we observe average overall improvements of 0.74% and 0.61% in Class accuracy for the ModelNet40 and ShapeNet datasets, respectively. The fully attribution-guided models (α=0.0) are outperformed by the combined models (α=0.5), which aligns with our hypothesis above (cf. Section 3) about the potentially detrimental effects of consistently occluding the most important features. Interestingly, however, the fully attribution-guided models outperform the fully random-guided models (α=1.0) for the best value of β=0.15 by a small margin. Again, these results highlight an increased generalization ability of RelDrop-trained models.

Figure 4: Robustness of RelDrop-trained models to ordered point removal on ModelNet40 [69], plotted as mean prediction (class/macro accuracy) vs. fraction of points flipped. RelDrop-trained models are more robust against point removal, especially when a balance between random and relevance-guided input data augmentation is employed (α=0.5 and β=0.5).

We also observe that, for a fixed value of α, increasing β beyond 0.15 (and thus reducing the threshold RHS = (1 − β)) results in reduced instance accuracies. Since increasing β leads to more points being removed, if β becomes too large, the remaining features may not be sufficiently discriminative to accurately classify an object (this is captured qualitatively in Figure A.1). As discussed in Section 3, a smaller percentage of optimal information removal is to be expected for input occlusion, which removes information entirely from the prediction graph, in contrast to, e.g., dropout, where larger amounts of information removal (≈50%) are feasible. Therefore, selecting an appropriate value for β is essential. During our experiments, we found β=0.15 to perform best.
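Concretely, the point-level masking used in this section (detailed as Algorithm A.2 in the appendix) can be sketched as follows; this is a minimal NumPy rendering under the assumption that normalized per-point relevance scores are already available, not the authors' exact implementation.

```python
import numpy as np

def reldrop_points(points, relevance, alpha=0.5, beta=0.15):
    """Point-level RelDrop sketch: blend a random score with the normalized
    relevance and occlude points whose blended score exceeds 1 - beta."""
    # points: (N, 3) array of (x, y, z); relevance: (N,) values in [0, 1]
    v = np.random.uniform(0.0, 1.0, size=relevance.shape)  # random component
    score = alpha * v + (1.0 - alpha) * relevance          # blended score
    augmented = points.copy()
    augmented[score >= 1.0 - beta] = 0.0                   # move dropped points to the origin
    return augmented
```

With α=1.0 this reduces to fully random point dropping, while α=0.0 always removes the currently most relevant points; α=0.5 gives the balanced strategy that performs best in Table 2.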
4.2.1 Robustness towards Point Removal

By removing or occluding (currently) important features, RelDrop aims to force models to base their inference on a larger number of features and thus increase their robustness and generalization ability. While we demonstrated increased test performance (implying improved generalization ability) when applying RelDrop in the previous section, this does not necessarily imply increased robustness, i.e., that the model utilizes more features for its predictions. Therefore, we perform an ablation study in the following, using the feature perturbation metric [6, 56, 26]. Originally conceptualized for image classification, this metric progressively occludes pixels from the most to the least relevant according to an attribution map, evaluating the model's response. Here, we reformulate this metric as point flipping to evaluate robustness towards occlusion by progressively removing points (setting their (x, y, z) coordinates to (0, 0, 0)) and measuring the prediction accuracy after each step. This approach simulates real-world scenarios where objects might be partially occluded or incompletely scanned. In our experiments with input point clouds of size N = 1024, we remove 32 points per step, requiring 32 steps in total until no points are left. We test various models trained with different RelDrop hyperparameters from Table 2; a sketch of the evaluation loop is given below.
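The following is a minimal sketch of this point-flipping loop for a single sample, assuming a `predict` callable that returns the predicted class and an attribution method that yields per-point relevance scores; the names are illustrative and this is not the Quantus implementation used in the paper.

```python
import numpy as np

def point_flipping_curve(points, relevance, label, predict, step=32):
    """Progressively move the most relevant points to the origin and record
    whether the model still predicts the correct class after each step."""
    order = np.argsort(relevance)[::-1]            # most relevant points first
    flipped = points.copy()                        # (N, 3) point cloud
    curve = []
    for start in range(0, len(order), step):       # 1024 points / 32 = 32 steps
        flipped[order[start:start + step]] = 0.0   # set (x, y, z) to (0, 0, 0)
        curve.append(float(predict(flipped) == label))
    return curve  # averaged over a dataset, this yields the y-axis of Figure 4
```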
The results of this experiment are displayed in Figure 4. Initially, the best-performing model (α=0.5, β=0.15, red) maintains a mean prediction ≈5–6% higher than the baseline (blue) until around 40% of points are removed. Beyond 30–35% removal, the baseline drops steeply, while the others decline more gradually. At 55% removal, the baseline reaches ≈55%, lower than all other models. After 75% removal, the best-performing model falls below (α=1.0, β=0.5) (green). Notably, (α=0.5, β=0.5) (purple) starts lower but performs best after 25% removal, outperforming all models towards the end. A higher retention of performance indicates a higher degree of robustness against point removal, and as such, we infer that input data augmentation results in more robust models, with RelDrop increasing this effect up to a point. The model with (α=0.5, β=0.5), using an equal balance between random and relevance-guided input data augmentation, retains performance the longest, by a large margin.

5 Limitations

In the previous sections, we demonstrated the positive effects of applying RelDrop, such as improved model generalization and robustness towards occlusion. However, RelDrop is also subject to several limitations. For instance, training a model with the same number of parameters using RelDrop increases compute time by 2–2.5× compared to the baseline and RE. This occurs because each data batch requires an additional backward pass to compute attributions. Additionally, RelDrop introduces hyperparameters specific to the attribution method, which demand additional time and computing resources for optimal tuning. To balance random and attribution-based erasing, we rely on further hyperparameters that require tuning, but we nevertheless make recommendations for optimal values in this work.

6 Conclusion and Outlook

We propose RelDrop, an alternative to random input data augmentation, as a general framework in Section 3. It can be extended and applied to different domains and tasks as a strategic regularizer by choosing different XAI methods and masking strategies. We evaluate our method's effectiveness for the 2D image and 3D point cloud classification tasks, choosing LRP as the attribution method, with ResNet and PointNet++ architectures and various benchmark datasets, as covered in Section 4. In comparison to random data augmentation, we observe double the improvement in average test accuracies against the baseline for both the 2D image and 3D point cloud classification tasks, consistent across different models and datasets. We observe a similar trend for zero-shot tests, where our method increases the model's generalization capabilities. RelDrop achieves this by nudging the model to focus on more diverse features in the input for the prediction, rendering it more robust against feature removal. For future research, our approach can be extended beyond the ResNet and PointNet++ architectures to transformers, autoencoders, variational autoencoders, and generative models, provided corresponding attribution methods are available, e.g., extensions to attention modules [2]. Further exploration includes erasing multiple smaller patches, experimenting with different block/patch sizes and shapes, and assessing the impact on different architectures. Building on RE [79], we study the effects of RelDrop through a localised block erasure technique that can serve as a foundation for developing RelDrop-based adaptations of CutMix [74], MixUp [77], and other augmentation techniques. Additionally, future studies can explore its effectiveness across diverse input modalities, different datasets, a broader range of tasks, and different attribution methods. Furthermore, balancing the strategy of random and attribution-guided augmentations beyond hyperparameter selection is subject to future work.

7 Acknowledgements

This work was supported by the European Union's Horizon Europe research and innovation programme (EU Horizon Europe) as grants [ACHILLES (101189689), TEMA (101093003)]. This work was further supported by the Federal Ministry of Education and Research (BMBF) as grant BIFOLD (01IS18025A, 01IS180371I); the German Research Foundation (DFG) as research unit DeSBi [KI-FOR 5363] (459422098); and the Fraunhofer Internal Programs (Fraunhofer) as grant ESPINN (PREPARE 40-08394).

References

[1] Reduan Achtibat, Maximilian Dreyer, Ilona Eisenbraun, Sebastian Bosse, Thomas Wiegand, Wojciech Samek, and Sebastian Lapuschkin. From attribution maps to human-understandable explanations through concept relevance propagation. Nature Machine Intelligence, 5(9):1006–1019, 2023.
[2] Reduan Achtibat, Sayed Mohammad Vakilzadeh Hatefi, Maximilian Dreyer, Aakriti Jain, Thomas Wiegand, Sebastian Lapuschkin, and Wojciech Samek. AttnLRP: Attention-aware layer-wise relevance propagation for transformers. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024.
[3] Christopher J. Anders, David Neumann, Wojciech Samek, Klaus-Robert Müller, and Sebastian Lapuschkin. Software for dataset-wide XAI: From local explanations to global insights with Zennit, CoRelAy, and ViRelAy. CoRR, abs/2106.13200, 2021.
[4] Leila Arras, Ahmed Osman, and Wojciech Samek. CLEVR-XAI: A benchmark dataset for the ground truth evaluation of neural network explanations. Inf. Fusion, 81:14–40, 2022.
[5] Lei Jimmy Ba and Brendan J. Frey. Adaptive dropout for training deep neural networks. In Christopher J. C. Burges, Léon Bottou, Zoubin Ghahramani, and Kilian Q. Weinberger (eds.), Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pp. 3084–3092, 2013.
[6] Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE, 10(7):e0130140, 2015.
[7] David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, and Klaus-Robert Müller. How to explain individual classification decisions. Journal of Machine Learning Research, 11:1803–1831, 2010.
[8] David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, and Antonio Torralba. Network dissection: Quantifying interpretability of deep visual representations. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, pp. 3319–3327. IEEE Computer Society, 2017.
[9] Daniel Becking, Maximilian Dreyer, Wojciech Samek, Karsten Müller, and Sebastian Lapuschkin. ECQx: Explainability-driven quantization for low-bit and sparse DNNs. In Andreas Holzinger, Randy Goebel, Ruth Fong, Taesup Moon, Klaus-Robert Müller, and Wojciech Samek (eds.), xxAI - Beyond Explainable AI - International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers, volume 13200 of Lecture Notes in Computer Science, pp. 271–296. Springer, 2020.
[10] Usha Bhalla, Alex Oesterling, Suraj Srinivas, Flávio P. Calmon, and Himabindu Lakkaraju. Interpreting CLIP with sparse linear concept embeddings (SpLiCE). In Amir Globersons, Lester Mackey, Danielle Belgrave, Angela Fan, Ulrich Paquet, Jakub M. Tomczak, and Cheng Zhang (eds.), Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, NeurIPS 2024, Vancouver, BC, Canada, December 10-15, 2024, 2024.
[11] Christopher Bowles, Liang Chen, Ricardo Guerrero, Paul Bentley, Roger N. Gunn, Alexander Hammers, David Alexander Dickie, Maria del C. Valdés Hernández, Joanna M. Wardlaw, and Daniel Rueckert. GAN augmentation: Augmenting training data using generative adversarial networks. CoRR, abs/1810.10863, 2018.
[12] Giovanni Briganti and Olivier Le Moine. Artificial intelligence in medicine: today and tomorrow. Frontiers in Medicine, 7:509744, 2020.
[13] Alexander Camuto, Matthew Willetts, Umut Simsekli, Stephen J. Roberts, and Chris C. Holmes. Explicit regularisation in gaussian noise injections. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.
[14] Angel X. Chang, Thomas A. Funkhouser, Leonidas J. Guibas, Pat Hanrahan, Qi-Xing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. ShapeNet: An information-rich 3d model repository. CoRR, abs/1512.03012, 2015.
[15] Ekin D. Cubuk, Barret Zoph, Dandelion Mané, Vijay Vasudevan, and Quoc V. Le. AutoAugment: Learning augmentation strategies from data. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pp. 113–123. Computer Vision Foundation / IEEE, 2019.
[16] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2009), 20-25 June 2009, Miami, Florida, USA, pp. 248–255. IEEE Computer Society, 2009.
[17] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In 9th International Conference on Learning Representations, ICLR 2021. OpenReview.net, 2021.
[18] Maximilian Dreyer, Jim Berend, Tobias Labarta, Johanna Vielhaben, Thomas Wiegand, Sebastian Lapuschkin, and Wojciech Samek. Mechanistic understanding and validation of large AI models with SemanticLens. CoRR, abs/2501.05398, 2025.
[19] Sami Ede, Serop Baghdadlian, Leander Weber, An Nguyen, Dario Zanca, Wojciech Samek, and Sebastian Lapuschkin. Explain to not forget: Defending against catastrophic forgetting with XAI. In Andreas Holzinger, Peter Kieseberg, A Min Tjoa, and Edgar R. Weippl (eds.), Machine Learning and Knowledge Extraction - 6th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2022, volume 13480 of Lecture Notes in Computer Science, pp. 1–18. Springer, 2022.
[20] Ruth C. Fong and Andrea Vedaldi. Interpretable explanations of black boxes by meaningful perturbation. In IEEE International Conference on Computer Vision, ICCV 2017, pp. 3449–3457. IEEE Computer Society, 2017.
[21] Shanghua Gao, Zhong-Yu Li, Ming-Hsuan Yang, Ming-Ming Cheng, Junwei Han, and Philip H. S. Torr. Large-scale unsupervised semantic segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 45(6):7457–7476, 2023.
[22] Liv Gorton. The missing curve detectors of InceptionV1: Applying sparse autoencoders to InceptionV1 early vision. CoRR, abs/2406.03662, 2024.
[23] Meng-Hao Guo, Junxiong Cai, Zheng-Ning Liu, Tai-Jiang Mu, Ralph R. Martin, and Shi-Min Hu. PCT: point cloud transformer. Comput. Vis. Media, 7(2):187–199, 2021.
[24] Sayed Mohammad Vakilzadeh Hatefi, Maximilian Dreyer, Reduan Achtibat, Thomas Wiegand, Wojciech Samek, and Sebastian Lapuschkin. Pruning by explaining revisited: Optimizing attribution methods to prune CNNs and transformers. CoRR, abs/2408.12568, 2024.
[25] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, pp. 770–778. IEEE Computer Society, 2016.
[26] Anna Hedström, Leander Weber, Daniel Krakowczyk, Dilyara Bareeva, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, and Marina M.-C. Höhne. Quantus: An explainable AI toolkit for responsible evaluation of neural network explanations and beyond. J. Mach. Learn. Res., 24:34:1–34:11, 2023.
[27] Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, Dawn Song, Jacob Steinhardt, and Justin Gilmer. The many faces of robustness: A critical analysis of out-of-distribution generalization. In 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, Montreal, QC, Canada, October 10-17, 2021, pp. 8320–8329. IEEE, 2021.
[28] Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. Natural adversarial examples. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, pp. 15262–15271. Computer Vision Foundation / IEEE, 2021.
[29] Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580, 2012.
[30] Fred Hohman, Haekyu Park, Caleb Robinson, and Duen Horng (Polo) Chau. Summit: Scaling deep learning interpretability by visualizing activation and attribution summarizations. IEEE Trans. Vis. Comput. Graph., 26(1):1096–1106, 2020.
[31] Jie Hu, Liujuan Cao, Tong Tong, Qixiang Ye, Shengchuan Zhang, Ke Li, Feiyue Huang, Ling Shao, and Rongrong Ji. Architecture disentanglement for deep neural networks. pp. 652–661, 2021.
[32] Robert Huben, Hoagy Cunningham, Logan Riggs, Aidan Ewart, and Lee Sharkey. Sparse autoencoders find highly interpretable features in language models. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024.
[33] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Francis R. Bach and David M. Blei (eds.), Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, volume 37 of JMLR Workshop and Conference Proceedings, pp. 448–456. JMLR.org, 2015.
[34] Guoliang Kang, Xuanyi Dong, Liang Zheng, and Yi Yang. PatchShuffle regularization. CoRR, abs/1707.07103, 2017.
[35] Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, Fernanda B. Viégas, and Rory Sayres. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV). In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, volume 80 of Proceedings of Machine Learning Research, pp. 2673–2682. PMLR, 2018.
[36] Kishore Reddy Konda, Xavier Bouthillier, Roland Memisevic, and Pascal Vincent. Dropout as data augmentation. CoRR, abs/1506.08700, 2015.
[37] Matthew Kowal, Achal Dave, Rares Ambrus, Adrien Gaidon, Konstantinos G. Derpanis, and Pavel Tokmakov. Understanding video transformers via universal concept discovery. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024, Seattle, WA, USA, June 16-22, 2024, pp. 10946–10956. IEEE, 2024.
[38] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Master's thesis, Department of Computer Science, University of Toronto, 2009.
[39] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Peter L. Bartlett, Fernando C. N. Pereira, Christopher J. C. Burges, Léon Bottou, and Kilian Q. Weinberger (eds.), Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems, NIPS 2012, pp. 1106–1114, 2012.
[40] Anders Krogh and John A. Hertz. A simple weight decay can improve generalization. In John E. Moody, Stephen Jose Hanson, and Richard Lippmann (eds.), Advances in Neural Information Processing Systems 4, [NIPS Conference, Denver, Colorado, USA, December 2-5, 1991], pp. 950–957. Morgan Kaufmann, 1991.
[41] Jin Ha Lee, Ik hee Shin, Sang gu Jeong, Seung-Ik Lee, Muhamamad Zaigham Zaheer, and Beom-Su Seo. Improvement in deep networks for optimization using explainable artificial intelligence. In 2019 International Conference on Information and Communication Technology Convergence, ICTC 2019, pp. 525–530. IEEE, 2019.
[42] Xiaobo Liang, Lijun Wu, Juntao Li, Yue Wang, Qi Meng, Tao Qin, Wei Chen, Min Zhang, and Tie-Yan Liu. R-Drop: Regularized dropout for neural networks. In Marc'Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan (eds.), Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pp. 10890–10905, 2021.
[43] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019.
[44] Scott M. Lundberg and Su-In Lee. A unified approach to interpreting model predictions. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems, NIPS 2017, pp. 4765–4774, 2017.
[45] Grégoire Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, and Klaus-Robert Müller. Explaining nonlinear classification decisions with deep taylor decomposition. Pattern Recognition, 65:211–222, 2017.
[46] Grégoire Montavon, Alexander Binder, Sebastian Lapuschkin, Wojciech Samek, and Klaus-Robert Müller. Layer-wise relevance propagation: An overview. In Wojciech Samek, Grégoire Montavon, Andrea Vedaldi, Lars Kai Hansen, and Klaus-Robert Müller (eds.), Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, volume 11700 of Lecture Notes in Computer Science, pp. 193–209. Springer, 2019.
[47] Franz Motzkus, Leander Weber, and Sebastian Lapuschkin. Measurably stronger explanation reliability via model canonization. In 2022 IEEE International Conference on Image Processing, ICIP 2022, Bordeaux, France, 16-19 October 2022, pp. 516–520. IEEE, 2022.
[48] Rafael Müller, Simon Kornblith, and Geoffrey E. Hinton. When does label smoothing help? In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pp. 4696–4705, 2019.
[49] Vineel Nagisetty, Laura Graves, Joseph Scott, and Vijay Ganesh. xAI-GAN: Enhancing Generative Adversarial Networks via Explainable AI Systems. CoRR, abs/2002.10438, 2020.
[50] Hyeonwoo Noh, Tackgeun You, Jonghwan Mun, and Bohyung Han. Regularizing deep neural networks by noise: Its interpretation and optimization. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 5109–5118, 2017.
[51] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback. In Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, and A. Oh (eds.), Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, 2022.
[52] Charles Ruizhongtai Qi, Hao Su, Kaichun Mo, and Leonidas J. Guibas. PointNet: Deep learning on point sets for 3d classification and segmentation. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pp. 77–85. IEEE Computer Society, 2017.
[53] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J. Guibas. PointNet++: Deep hierarchical feature learning on point sets in a metric space. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 5099–5108, 2017.
[54] Marco T. Ribeiro, Sameer Singh, and Carlos Guestrin. "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, SIGKDD 2016, pp. 1135–1144. ACM, 2016.
[55] Muhammad Sabih, Frank Hannig, and Jürgen Teich. Utilizing explainable AI for quantization and pruning of deep neural networks. CoRR, abs/2008.09072, 2020.
[56] Wojciech Samek, Alexander Binder, Grégoire Montavon, Sebastian Lapuschkin, and Klaus-Robert Müller. Evaluating the visualization of what a deep neural network has learned. IEEE Trans. Neural Networks Learn. Syst., 28(11):2660–2673, 2017.
[57] Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning important features through propagating activation differences. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, volume 70 of Proceedings of Machine Learning Research, pp. 3145–3153. PMLR, 2017.
[58] Krishna Kumar Singh and Yong Jae Lee. Hide-and-seek: Forcing a network to be meticulous for weakly-supervised object and action localization. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pp. 3544–3553. IEEE Computer Society, 2017.
[59] Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res., 15(1):1929–1958, 2014.
[60] Erik Štrumbelj and Igor Kononenko. Explaining prediction models and individual predictions with feature contributions. Knowledge and Information Systems, 41(3):647–665, 2014.
[61] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, volume 70 of Proceedings of Machine Learning Research, pp. 3319–3328. PMLR, 2017.
[62] Johanna Vielhaben, Stefan Bluecher, and Nils Strodthoff. Multi-dimensional concept discovery (MCD): A unifying framework with completeness guarantees. Trans. Mach. Learn. Res., 2023, 2023.
[63] Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In Anna Korhonen, David R. Traum, and Lluís Màrquez (eds.), Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, pp. 5797–5808. Association for Computational Linguistics, 2019.
[64] Li Wan, Matthew D. Zeiler, Sixin Zhang, Yann LeCun, and Rob Fergus. Regularization of neural networks using DropConnect. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16-21 June 2013, volume 28 of JMLR Workshop and Conference Proceedings, pp. 1058–1066. JMLR.org, 2013.
[65] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E. Sarma, Michael M. Bronstein, and Justin M. Solomon. Dynamic graph CNN for learning on point clouds. ACM Trans. Graph., 38(5):146:1–146:12, 2019.
[66] Leander Weber, Jim Berend, Moritz Weckbecker, Alexander Binder, Thomas Wiegand, Wojciech Samek, and Sebastian Lapuschkin. Efficient and flexible neural network training through layer-wise feedback propagation. CoRR, abs/2308.12053, 2025.
[67] Jason W. Wei and Kai Zou. EDA: easy data augmentation techniques for boosting performance on text classification tasks. In Kentaro Inui, Jing Jiang, Vincent Ng, and Xiaojun Wan (eds.), Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pp. 6381–6387. Association for Computational Linguistics, 2019.
[68] Ross Wightman. PyTorch image models. https://github.com/rwightman/pytorch-image-models, 2019.
[69] Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3D ShapeNets: A deep representation for volumetric shapes. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pp. 1912–1920. IEEE Computer Society, 2015.
[70] Lingxi Xie, Jingdong Wang, Zhen Wei, Meng Wang, and Qi Tian. DisturbLabel: Regularizing CNN on the loss layer. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pp. 4753–4762. IEEE Computer Society, 2016.
[71] Xu Yan. Pointnet/Pointnet2 PyTorch. https://github.com/yanx27/Pointnet_Pointnet2_pytorch, 2019.
[72] Tao Yang, Jinghao Deng, Xiaojun Quan, Qifan Wang, and Shaoliang Nie. AD-DROP: attribution-driven dropout for robust language model fine-tuning. In Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, and A. Oh (eds.), Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022.
[73] Seul-Ki Yeom, Philipp Seegerer, Sebastian Lapuschkin, Alexander Binder, Simon Wiedemann, Klaus-Robert Müller, and Wojciech Samek. Pruning by explaining: A novel criterion for deep neural network pruning. Pattern Recognition, 115:107899, 2021.
[74] Sangdoo Yun, Dongyoon Han, Sanghyuk Chun, Seong Joon Oh, Youngjoon Yoo, and Junsuk Choe. CutMix: Regularization strategy to train strong classifiers with localizable features. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, pp. 6022–6031. IEEE, 2019.
[75] Xiao Zang, Yi Xie, Siyu Liao, Jie Chen, and Bo Yuan. Noise injection-based regularization for point cloud processing. CoRR, abs/2103.15027, 2021.
[76] Matthew D. Zeiler and Rob Fergus. Stochastic pooling for regularization of deep convolutional neural networks. In Yoshua Bengio and Yann LeCun (eds.), 1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Conference Track Proceedings, 2013.
[77] Hongyi Zhang, Moustapha Cissé, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018.
[78] Dazhi Zhao, Guozhu Yu, Peng Xu, and Maokang Luo. Equivalence between dropout and data augmentation: A mathematical check. Neural Networks, 115:82–89, 2019.
[79] Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, and Yi Yang. Random erasing data augmentation. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pp. 13001–13008. AAAI Press, 2020.
[80] Luisa M. Zintgraf, Taco S. Cohen, Tameem Adel, and Max Welling. Visualizing deep neural network decisions: Prediction difference analysis. In 5th International Conference on Learning Representations, ICLR 2017. OpenReview.net, 2017.
[81] Hui Zou and Trevor Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society Series B: Statistical Methodology, 67(2):301–320, 2005.
[82] Andrea Zunino, Sarah Adel Bargal, Pietro Morerio, Jianming Zhang, Stan Sclaroff, and Vittorio Murino. Excitation dropout: Encouraging plasticity in deep neural networks. Int. J. Comput. Vis., 129(4):1139–1152, 2021.

A Technical Appendix

A.1 Details on Attribution Computation

For LRP, several rules exist that affect the obtained attributions [6, 46]. In our experiments, we follow the recommendations of these previous works. Note that we distinguish between computing attributions used in RelDrop, which should focus on faithfulness to the model, and computing attributions for visualization, which need to be understandable:

• For attribution computation during RelDrop, we apply the Flat-rule to the first layer of each model, and either the LRP-ε-rule (with ε = 1e−6 if not stated otherwise) or the LRP-z+-rule to all of the remaining convolutional and linear layers of the model, as specified in Appendix A.4.3. For RRA and point dropout computation (Table 1, Figure 4, Figure A.1), we use the z+-version of the above rule combinations.

• For visualization of attributions in Figure 3, we use a combination of the Box-rule for the first layer, the γ-rule (with γ = 0.25) for the convolutional layers, and the ε-rule for the fully connected layers. Similarly, we use a combination of the γ-rule (with γ = 0.25) for the convolutional layers and the ε-rule for the fully connected layers for the intermediate attribution computation in Figure A.2.

We further canonize all batch normalization layers [33], merging them into preceding convolutional layers, as suggested by [47]. Since batch normalization consists of two consecutive linear operations, it poses an issue for modified backpropagation attribution methods such as LRP, which are not implementation-invariant. Canonization seeks to alleviate this issue by merging the batchnorm into a single nonlinearly activated layer, thus ensuring a canonical structure of the DNNs. To compute attributions, we use the zennit package [3], https://github.com/chr5tphr/zennit, licensed under LGPL-3. To compute the RRA and point flipping metrics, we rely on the Quantus software package [26], https://github.com/understandable-machine-intelligence-lab/Quantus/tree/main, licensed under LGPL-3.
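As an illustration of this setup, the following is a minimal sketch of computing an LRP attribution with zennit for a torchvision ResNet, combining a BatchNorm-merging canonizer with a composite that applies the Flat rule to the input layer and z+/ε rules elsewhere. The class names follow zennit's documented API, but this is a sketch of the configuration described above, not the authors' exact script.

```python
import torch
from torchvision.models import resnet50
from zennit.attribution import Gradient
from zennit.composites import EpsilonPlusFlat
from zennit.torchvision import ResNetCanonizer

model = resnet50().eval()
# Canonization merges BatchNorm layers into the preceding convolutions.
composite = EpsilonPlusFlat(canonizers=[ResNetCanonizer()])

x = torch.randn(1, 3, 224, 224, requires_grad=True)  # dummy input image
target = torch.eye(1000)[[0]]                        # one-hot output selection

with Gradient(model=model, composite=composite) as attributor:
    output, relevance = attributor(x, target)        # relevance has x's shape
heatmap = relevance.sum(1)                           # aggregate over channels
```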
A.2 Datasets

For the 2D image classification experiments, we compare our method's performance against RE data augmentation [79]. First, we evaluate the methods on the CIFAR-10 and CIFAR-100 datasets [38] and extend the study to a large-scale dataset, in our case, the ImageNet-1k dataset [16] (custom research, non-commercial license). Further, we evaluate the zero-shot performance of the trained models on the ImageNet-R [27], ImageNet-A [28], and ImageNet-O [28] datasets. The CIFAR-10 dataset consists of 60,000 color images of resolution (32×32), distributed into 10 different classes. The CIFAR-100 dataset consists of 600 images of resolution (32×32) per class, with 100 classes instead of 10. Each class contains 100 test images and 500 training images. Both of these datasets can be downloaded from their source, https://www.cs.toronto.edu/~kriz/cifar.html. For all the pre-trained models and our experiments, we use ImageNet-1k (ILSVRC) 2012, the most popular ImageNet subset, which includes 1,281,167 training images covering 1,000 item classes, 100,000 test images, and 50,000 validation images. We resize and crop each of these images to a resolution of (224×224) for training. The ImageNet-1k dataset can be downloaded from its source, https://www.image-net.org/download.php.

ImageNet-S is a Large Scale Unsupervised Semantic Segmentation (LUSS) benchmark dataset that collects and annotates pixel-level labels from the ImageNet-1k dataset. Following the removal of some unsegmentable categories, such as bookshops, there are 919 categories with 1,183,322 training images and 12,419 validation images, each with its segmentation mask. It can be downloaded from https://github.com/LUSSeg/ImageNet-S. ImageNet-R (rendition) includes a test set of 30,000 images of 200 distinct classes of the original ImageNet-1k dataset, including cartoons, deviant art, graffiti, embroidery, graphics, origami, paintings, sculptures, and drawings [27]. The ImageNet-R dataset's real-world distribution shift, which includes variations in image style, blurriness, and other factors, is used to evaluate the model's generalization and out-of-distribution robustness; it may be downloaded from https://people.eecs.berkeley.edu/~hendrycks/imagenet-r.tar. ImageNet-A examples are part of the ImageNet-1k samples; however, they are more difficult and can consistently cause classification errors across various models because of scene complexity and classifier blind spots. ImageNet-O (MIT license) uses out-of-distribution image examples from ImageNet-1k that models repeatedly misrepresent as high-confidence in-distribution instances. In contrast to ImageNet-A, which allows testing image classification performance when the distribution of input data changes, ImageNet-O allows testing out-of-distribution detection performance when the distribution of labels changes [28]. ImageNet-A and ImageNet-O can be downloaded from https://people.eecs.berkeley.edu/~hendrycks/imagenet-a.tar and https://people.eecs.berkeley.edu/~hendrycks/imagenet-o.tar, respectively.

For the 3D point cloud classification experiments, we evaluate the effect of dropping out the most important points against randomly dropping out points on two well-known datasets, ModelNet40 and ShapeNet. The ModelNet40 dataset contains 100 unique objects per category, which are further augmented by rotating each model every 30 degrees along the gravity direction (i.e., 12 poses per model), resulting in a total of 48,000 CAD models (with rotation enlargement), of which 38,400 are used for training and 9,600 for testing [69]. ShapeNetPart (just ShapeNet in further references; custom non-commercial license), a portion of the entire ShapeNetCore dataset, which contains 33,700 distinct, manually aligned and annotated 3D models, is used for our study [14]. ModelNet40 and ShapeNet can be downloaded from their respective sources, https://modelnet.cs.princeton.edu/ and https://www.kaggle.com/datasets/mitkir/shapenet/data.

A.3 Metrics

A.3.1 Accuracy

During our experiments, we compute micro and macro accuracies, where the former averages accuracy over samples and the latter averages over classes. Micro accuracy ("instance accuracy" or simply "accuracy") is measured for both the 2D and 3D settings, while we only compute macro accuracy ("class accuracy") in the 3D setting. Note that we refer to these metrics as "train accuracy" if they were computed on the training data, and "test accuracy" if they were computed on the test or validation data, since only validation data is available for some of the datasets, such as ImageNet-1k.
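To make the distinction between the two averaging schemes concrete, a minimal sketch follows; the helper function is hypothetical and not taken from the paper's evaluation code.

```python
import numpy as np

def micro_macro_accuracy(y_true, y_pred, num_classes):
    """Micro (instance) accuracy averages over samples; macro (class)
    accuracy averages the per-class accuracies instead."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    micro = float((y_true == y_pred).mean())
    per_class = [
        float((y_pred[y_true == c] == c).mean())
        for c in range(num_classes)
        if (y_true == c).any()  # skip classes absent from y_true
    ]
    macro = float(np.mean(per_class))
    return micro, macro
```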
A.3.2 Relevance Rank Accuracy (RRA)

The Relevance Rank Accuracy (RRA) quantifies the proportion of high-intensity attribution scores that fall under the ground truth segmentation mask [4]. It can be computed by selecting the top-K relevant values highlighted by the attribution method, where K represents the size of the Ground Truth (GT) mask:

P_{top-K} = \{ p_1, p_2, \dots, p_K \mid R_{p_1} > R_{p_2} > \dots > R_{p_K} \}   (A.1)

where P_{top-K} is the set of K pixels with the highest relevance scores R_{p_1}, R_{p_2}, \dots, R_{p_K}. Then, we divide the number of these values that fall within the ground truth locations by the ground truth's size, obtaining the RRA of a single sample as follows:

RRA = \frac{|P_{top-K} \cap GT|}{|GT|}   (A.2)

Taking the average RRA over all samples in the dataset yields the Mean RRA we report in our results.
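A minimal sketch of this computation for a single sample, assuming a flattened attribution map and a boolean GT mask; this is a hypothetical helper, not the Quantus implementation used in the paper.

```python
import numpy as np

def rra(attribution, gt_mask):
    """Relevance Rank Accuracy: fraction of the top-K attributed pixels
    (K = size of the ground-truth mask) that fall inside the mask."""
    attribution = attribution.ravel()
    gt_mask = gt_mask.ravel().astype(bool)
    k = int(gt_mask.sum())                       # K = |GT|
    top_k = np.argsort(attribution)[::-1][:k]    # indices of the top-K relevances
    return gt_mask[top_k].sum() / k              # |P_top-K ∩ GT| / |GT|
```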
Algorithm A.1 RelDrop for 2D Images
1: Inputs: Input image I2d; image width W; image height H; image area S; relevance erasing probability p; erasing area ratio range S_low and S_high; aspect ratio range of the erasing block r_low and r_high; centroid_xy = (x_cen, y_cen)
2: Outputs: Erased image I2d*
3: Initialize: p_init ← Rand(0, 1)
4: if p_init ≥ p then
5:   I2d* ← I2d
6:   return I2d*
7: else
8:   while True do
9:     S_O ← Rand(S_low, S_high) * S
10:    r_O ← Rand(r_low, r_high)
11:    H_O ← sqrt(S_O * r_O), W_O ← sqrt(S_O / r_O)
12:    x_cen ← centroid_xy[0], y_cen ← centroid_xy[1]   ▷ centroid_xy is the most relevant pixel
13:    if x_cen + W_O ≤ W and y_cen + H_O ≤ H then
14:      O = {(x, y) | x ∈ [x_cen − W_O/2, x_cen + W_O/2], y ∈ [y_cen − H_O/2, y_cen + H_O/2]}
15:      I2d(O) ← Mean(R, G, B)
16:      I2d* ← I2d
17:      return I2d*
18:    end if
19:  end while
20: end if

Algorithm A.2 RelDrop for 3D Point Clouds
1: Inputs: Input point cloud I3d = {(x_i, y_i, z_i) | i ∈ {1, ..., N}}; point cloud size N; relevance score vector R3d_norm ∈ R^N; parameters α, β ∈ [0, 1]
2: Outputs: Augmented point cloud I3d*
3: v ← random uniform distribution U(0, 1) ∈ [0, 1]^N
4: for each point i ∈ {1, ..., N} do
5:   if α·v_i + (1 − α)·R3d_norm(i) ≥ (1 − β) then
6:     M3d_R(i) ← 0        ▷ mark point for occlusion
7:     I3d*(i) ← (0, 0, 0) ▷ replace with origin
8:   else
9:     M3d_R(i) ← 1        ▷ keep point
10:    I3d*(i) ← I3d(i)    ▷ retain the values
11:  end if
12: end for
13: return I3d*
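For readability, the following is a minimal NumPy translation of Algorithm A.1, assuming the most relevant pixel (the attribution centroid) has already been located; the default ranges mirror Table 3 and RE [79], and the bounded retry loop is a small deviation from the listing's unbounded `while True`. This is a sketch of the procedure above, not the authors' exact implementation.

```python
import numpy as np

def reldrop_2d(img, centroid_xy, p=0.5, s_range=(0.02, 0.4), r_range=(0.3, 3.33)):
    """Erase a randomly sized block centred on the most relevant pixel and
    fill it with the per-channel mean (cf. Algorithm A.1). img is HxWxC."""
    if np.random.rand() >= p:                      # keep the image with prob. 1 - p
        return img
    h, w, _ = img.shape
    x_cen, y_cen = centroid_xy                     # most relevant pixel (attribution argmax)
    for _ in range(100):                           # bounded retry instead of 'while True'
        s_o = np.random.uniform(*s_range) * h * w  # block area
        r_o = np.random.uniform(*r_range)          # block aspect ratio
        h_o, w_o = int(np.sqrt(s_o * r_o)), int(np.sqrt(s_o / r_o))
        if x_cen + w_o <= w and y_cen + h_o <= h:  # block must fit in the image
            out = img.copy()
            out[max(y_cen - h_o // 2, 0): y_cen + h_o // 2,
                max(x_cen - w_o // 2, 0): x_cen + w_o // 2] = img.mean(axis=(0, 1))
            return out
    return img                                     # fall back to the unerased image
```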
A.4 Details on Experiments

A.4.1 2D Image Classification

ResNet architectures [25] of different depths (18, 34, 50) on the CIFAR-10 and CIFAR-100 [38] benchmark datasets, and (18, 34, 50, 101) on the ImageNet-1k dataset [16], are considered to study the effect of our method against the baseline without input augmentation (i.e., augmentation probability p=0) and RE. Other standard regularization techniques (e.g., weight decay, label smoothing, and batch normalization) and simple data augmentation techniques (e.g., random horizontal flipping and cropping) are applied along with our method. For our experiments, all the additional scripts required to apply LRP and place the blocks during batch training are built on top of the official implementation of Random Erasing (RE) from https://github.com/zhunzhong07/Random-Erasing/tree/master. We use CIFAR-10/100 and ImageNet-1k pre-trained models as a starting point; their details can be found at https://huggingface.co/edadaltocg (MIT license) and https://huggingface.co/docs/hub/en/timm (Apache 2.0 license). These models are further finetuned for 100 and 50 epochs, respectively, with the different augmentation strategies. The finetuning starts with a high learning rate, which is gradually reduced using the cosine annealing scheduler.

A.4.2 3D Point Cloud Classification

While the official implementation of the PointNet++ paper [53] uses TensorFlow, we instead utilize the alternate "PyTorch Implementation of PointNet++" [71] for all our experiments due to the ease of setting up and running the experiments. The GitHub repository of the PyTorch implementation also contains a comparison table of the model performance with the official implementation; it is found to be reliable and matches the results of the official implementation. All the experiments reported in Table 2 are conducted by training the PointNet++ model from scratch for 50 epochs with different dropout parameters.

Figure A.1: Attribution maps indicate the dropout strategy for different dropout parameters during training. The points highlighted in red indicate the (x, y, z) coordinates of the points being dropped and replaced by (0, 0, 0), and the ones highlighted in blue are unaltered and retained for the next epoch of training. An "Aeroplane" sample is considered for illustration with a constant value of α=0.5 and varying values of β = (0.15, 0.5, 0.85). Left: dropout with parameters α=0.5, β=0.15; Center: dropout with parameters α=0.5, β=0.5; Right: dropout with parameters α=0.5, β=0.85.

A.4.3 Hyperparameters

Table 3 shows the final hyperparameters for the ResNet and PointNet++ architectures trained on the various benchmark datasets. The ResNet column further contains CIFAR/ImageNet-1k variations; a single value means CIFAR and ImageNet-1k share the parameter.

Table 3: Hyperparameters for ResNet and PointNet++ models.

Hyperparams      ResNet            PointNet++
Optimizer        SGD               AdamW
lr               0.01              0.001
beta(s)          beta = 0.9        betas = (0.9, 0.999)
eps              -                 1e-8
w_decay          1e-4 / 5e-4       1e-4
Scheduler        CosLR             StepLR
lr_min           1e-6              -
T_max            100 / 10          -
step             -                 20
γ                -                 0.7
Batch Size       128               24
# points         -                 1024
Max erase area   s_high = 0.4      -
Aspect erase     r_low = 0.3       -
LRP Composite    ε-rule            z+-rule
ε value          0.001             -

A.4.4 Hardware

Of all the experiments in this work, the zero-shot testing in Section 4.1.1 ran on a local machine, while all other experiments ran on an internal HPC cluster. All deep learning models ran on the respective GPUs. The local machine used Ubuntu 20.04.6 LTS, an NVIDIA TITAN RTX graphics card with 24 GB of memory, an Intel Xeon CPU E5-2687W v4 at 3.00 GHz, and 32 GB of RAM. The HPC cluster used Ubuntu 18.04.6 LTS, an NVIDIA Ampere A100 graphics card with 40 GB of memory, an Intel Xeon Gold 6150 CPU at 2.70 GHz, and 512 GB of RAM. Apptainer was used to containerize experiments on the cluster. The estimated runtime for each experimental run was 2.5 GPU days, with a combined runtime of ≈2500–3000 GPU days. Including preliminary and failed experiments, the total runtime is estimated at 3400 GPU days. Note that 1 GPU day = one NVIDIA Ampere A100 used for 24 hours.

A.5 Additional Results

A.5.1 Distribution of the Relevances Across Channels

After observing the relevances being more distributed over the object of interest in Figure 3, we investigate this effect further, evaluating whether it is observable in intermediate layers as well. For this purpose, we compare relevance distributions at various depths of ResNet50 and ResNet101 for RE and RelDrop in Figure A.2, assuming that neurons correspond to concepts [1]. Here, we compare how the AUC of the relevance distributions differs relative to the baseline. To compute the AUC, channels are first ordered by their proportion of relevance per layer and normalized by the respective first (maximum) value. Due to this sorting, a lower AUC implies that the model's predictions are based on fewer concepts, as a higher proportion of relevance is distributed to fewer neurons.
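As an illustration of this measure, a minimal sketch of the per-layer AUC computation as described follows; the helper is hypothetical and the exact normalization in the paper may differ in detail.

```python
import numpy as np

def relevance_auc(channel_relevance):
    """AUC of the sorted, max-normalized per-channel relevance of one layer.
    A lower AUC means relevance is concentrated on fewer channels (concepts)."""
    r = np.sort(np.abs(channel_relevance))[::-1]   # order channels, largest first
    r = r / r[0]                                   # normalize by the maximum value
    return np.trapz(r, dx=1.0 / (len(r) - 1))      # area under the normalized curve
```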
This reduced feature distribution can harm semantic expressiveness. Especially in later layers, where high-level complex features are formed, a low AUC indicates lower semantic expressiveness and robustness. In the Figure, we observe that for the initial layers of ResNet50 and ResNet101, except for a few outliers, the relative AUC differences compared to the baseline trend towards the negative for both RE and RelDrop, with increasing magnitude until around "Layer2". This implies that in shallow layers, where the network learns extremely low-level features such as edges and textures, both methods lead to a higher concentration of relevance on fewer channels. Throughout the deeper blocks ("Layer3" and "Layer4"), particularly for ResNet50, RE consistently causes large negative AUC changes over the baseline, indicating semantically sparse representations, which may be undesirable for the robustness and adaptability of the model to complex inputs. In comparison, RelDrop tends to cause strong positive increases of the AUC, particularly in the latter halves of the networks. This suggests a less sparse distribution of relevance across channels and reliance on a larger number of concepts for predicting, which is preferable for generalization and robustness. In both ResNet50 and ResNet101, RelDrop obtains significant AUC gains for the deeper layers. This offers an explanation as to the mechanisms causing RelDrop to consistently improve zero-shot inference performances in all the considered adversarial and distribution-shifted settings, as reported in Table 1 (Green Columns).

Figure A.2: The graphs present relative AUC (%) improvements over the baseline for RE [79] and RelDrop. AUC is computed by sorting channel relevance scores in descending order for each layer and normalizing by the first (maximum) value. Top: Relevance distribution across ResNet50 layers, ordered by network depth. Bottom: Relevance distribution across ResNet101 layers, also ordered by depth.
Learning optimal treatment strategies for intraoperative hypotension using deep reinforcement learning

Esra Adiyeke a,b,*, Tianqi Liu a,c,*, Venkata Sai Dheeraj Naganaboina a,d,*, Han Li a, Tyler J. Loftus a,e, Yuanfang Ren a,b, Benjamin Shickel a,b, Matthew M. Ruppert a,b, Karandeep Singh f, Ruogu Fang a,g, Parisa Rashidi a,g, Azra Bihorac a,b,#, Tezcan Ozrazgat-Baslanti a,b,#

* These authors have contributed equally as first authors
# These authors have contributed equally as senior authors

a Intelligent Clinical Care Center (IC3), University of Florida, Gainesville, FL.
b Department of Medicine, Division of Nephrology, Hypertension, and Renal Transplantation, University of Florida, Gainesville, FL.
c Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL.
d Department of Computer Science, University of Florida, Gainesville, FL.
e Department of Surgery, University of Florida, Gainesville, FL.
f Department of Medicine, University of California San Diego, San Diego, CA.
g Department of Biomedical Engineering, University of Florida, Gainesville, FL.

Corresponding author: Azra Bihorac MD MS, Department of Medicine, Intelligent Clinical Care Center (IC3), Division of Nephrology, Hypertension, and Renal Transplantation, PO Box 100224, Gainesville, FL 32610-0224. Telephone: (352) 294-8580; Fax: (352) 392-5465; Email: abihorac@ufl.edu

Author contributions: AB, TOB, PR, BS, TJL, TL, DN, EA, and YR contributed to the study design. EA, TL, DN, HL, TJL, and TOB drafted the manuscript. EA, TL, DN, and MR worked on data processing and analysis. All authors contributed to data interpretation and provided critical revisions.

Words: 2,992
Number of tables: 1
Number of figures: 5
Keywords: Machine learning, artificial intelligence, hypotension, deep reinforcement learning, surgery
Running title: Optimal treatment strategies for intraoperative hypotension

ABSTRACT

Importance: Traditional methods of surgical decision-making rely heavily on human experience and prompt action, which are variable. A data-driven system that generates treatment recommendations based on patient states can be a substantial asset in perioperative decision-making, as in cases of intraoperative hypotension, for which suboptimal management is associated with acute kidney injury (AKI), a common and morbid postoperative complication.

Objective: To develop a Reinforcement Learning (RL) model to recommend the optimum dose of intravenous (IV) fluids and vasopressors during surgery to avoid intraoperative hypotension and postoperative AKI.

Design, setting, participants: We retrospectively analyzed 50,021 surgeries from 42,547 adult patients who underwent major surgery at a quaternary care hospital between June 2014 and September 2020. Of these, 34,186 surgeries were used for model training and internal validation, while 15,835 surgeries were reserved for testing. We developed an RL model based on Deep Q-Networks to provide optimal treatment suggestions.

Exposures: Demographic and baseline clinical characteristics, intraoperative physiologic time series, and the total dose of IV fluids and vasopressors were extracted every 15 minutes during the surgery.

Main outcomes: In the RL model, intraoperative hypotension (MAP < 65 mmHg) and AKI in the first three days following the surgery were considered.
Results: The developed model replicated 69% of the physicians' decisions for the dosage of vasopressors and proposed a higher or lower dosage of vasopressors than received in 10% and 21% of the treatments, respectively. In terms of intravenous fluids, the model's recommendations were within 0.05 ml/kg/15 min of the actual dose in 41% of the cases, with higher or lower doses recommended for 27% and 32% of the treatments, respectively.
The RL policy resulted in a higher estimated policy value compared to the physicians' actual treatments, as well as random policies and zero-drug policies. The prevalence of AKI was lowest in the patients who received medication dosages that aligned with our agent model's decisions.

Conclusions and Relevance: Our findings suggest that implementation of the model's policy has the potential to reduce postoperative AKI and improve other outcomes driven by intraoperative hypotension.

Introduction

Postoperative acute kidney injury (AKI) affects almost 20% of patients undergoing surgery and poses substantial risk for short- and long-term organ dysfunction and mortality.1-7 Management of intravascular volume and vasomotor tone plays a substantial role in determining the risk for postoperative AKI.6,8,9 Accordingly, prior research has shown associations between intraoperative hypotension and AKI.10,11 Despite this importance, there are few consensus guidelines regarding best practices for predicting and treating intraoperative hypotension.12 The Acute Disease Quality Initiative and Perioperative Quality Initiative recommended maintaining mean arterial pressure (MAP) > 65 mmHg based on moderate-quality evidence, and recommended goal-directed blood pressure optimization based on stronger evidence.5,13,14 Two randomized trials have demonstrated that goal-directed therapy reduces the odds of postoperative AKI, and another demonstrated that machine learning-derived predictions of impending hypotension prompt anesthesiologists to act earlier, differently, and more frequently in managing intravascular volume and vasomotor tone, resulting in less time-weighted hypotension.13,15

Reinforcement learning (RL) is an artificial intelligence subfield that identifies the sequence of actions yielding the greatest probability of achieving a goal, such as normal postoperative renal function.16 Previous works have applied RL algorithms to situations requiring real-time decisions accounting for a patient's continuously changing states,17-19 including sepsis,20-23 sedation,24 pain management,25 diabetic glycemic management,26 and delirium.27 Reinforcement learning has been particularly studied in sepsis management, where RL agents have been refined to account for model uncertainty,28 multidimensional drivers of sepsis mortality,29 and physician input to optimize fluid and vasopressor dosages.30 In hypotension management, Futoma et al.31 developed an RL approach to suggest multiple equivalent actions in an ICU setting. Zhang et al.28 further refined RL optimization of ICU hypotension treatment by limiting an agent to only suggest actions at critical decision-making points, rather than at time intervals throughout the length of an ICU stay. However, there are currently no RL agents to minimize hypotension-related complications in the perioperative setting.

Our objective is to develop and validate a deep Q-networks-based RL model that offers optimal dosing strategies for intravenous (IV) fluid and vasopressor administration at 15-minute intervals during major surgery to maintain optimal hemodynamic physiology and avoid postoperative AKI.

Methods

Data Source
the honest broker to assemble a single-center longitudinal perioperative cohort of all patients aged 18 years or older admitted to University of Florida Health (UFH) following any type of major operative procedure between June 1st, 2014 and September 20th, 2020, by integrating electronic health records data from preoperative, intraoperative, and postoperative phases of care with other clinical, administrative, and public databases, as previously described.32 We excluded patients with end-stage kidney disease, patients who underwent cardiac surgery, patients who died within 24 hours of surgery, patients whose hospital stay was shorter than 24 hours, and patients with insufficient data. The final cohort consisted of 50,021 inpatient encounters for major surgery (Supplemental Fig. 1). We chronologically split the cohort into development (admissions between June 1st, 2014 and November 29th, 2018; 70% of the entire cohort) and test (admissions between November 30th, 2018 and September 20th, 2020; 30% of the entire cohort) sets. We trained the model using the development cohort, allocating 30% of it for internal validation and parameter tuning, and reported model performance on the test set. If patients underwent multiple surgeries during an admission, only the first surgery was considered in the analyses. The University of Florida Institutional Review Board and Privacy Office approved this study with a waiver of informed consent (IRB#201600223, IRB#201600262).

Assessment of Kidney Function

In determining AKI and chronic kidney disease (CKD), we used the Kidney Disease: Improving Global Outcomes (KDIGO) serum creatinine criteria.33 We considered preadmission and admission serum creatinine records in determining baseline creatinine. In non-CKD patients for whom no preadmission creatinine was available, we computed baseline creatinine by back-calculation using the 2021 CKD-EPI refit without race equation, assuming a baseline estimated glomerular filtration rate (eGFR) of 75 ml/min per 1.73 m2.34 We refer to the study by Ozrazgat-Baslanti et al. for all necessary details regarding the data elements, assumptions, and algorithms of the computable phenotyping pipeline we developed and used.35

Predictor Features

The model development used 16 features: 4 static baseline features (age, sex, Charlson comorbidity index, body mass index), 9 intraoperative physiologic time series (heart rate, systolic blood pressure, mean arterial pressure, body temperature, respiratory rate, minimum alveolar concentration, SpO2/FiO2 ratio, peak inspiratory pressure, and end-tidal carbon dioxide), 1 preoperative feature (the average mean arterial blood pressure over the preceding 48 hours), and 2 additional features, the cumulative IV fluid and cumulative vasopressor administered during surgery. We derived preoperative comorbidities from International Classification of Diseases (ICD) codes to calculate Charlson comorbidity indices.36 We computed the mean values of vital signs from intraoperative time series data by resampling into 15-minute intervals. A list of all input features and their statistical characteristics is given in Supplemental Tables 1 and 2.
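For illustration, the 15-minute averaging step could look like the following pandas sketch; the column names and raw-data layout here are our assumptions for the example, not the study's actual schema.

```python
# Sketch of averaging irregularly sampled intraoperative vitals into
# 15-minute bins; "map_mmhg" and "heart_rate" are illustrative columns.
import pandas as pd

def resample_vitals(vitals: pd.DataFrame) -> pd.DataFrame:
    """Assumes a DatetimeIndex and one column per signal, with possibly
    many readings per interval; returns 15-minute means."""
    return vitals.resample("15min").mean()

# Example with synthetic data:
idx = pd.date_range("2020-01-01 08:00", periods=90, freq="1min")
raw = pd.DataFrame({"map_mmhg": 80.0, "heart_rate": 75.0}, index=idx)
print(resample_vitals(raw).head())
```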
Reinforcement Learning Model

The proposed RL algorithm is conceptualized as a dynamic model that utilizes preoperative and intraoperative features to recommend actions during surgery. We modeled action prediction to prevent the short-term outcome of intraoperative hypotension (MAP < 65 mmHg) and the long-term outcome of AKI in the first three days following the surgery.10 The resulting actions (agent policy) are assessed against the actions taken by physicians based on their experience (Fig. 1). This workflow simulates the clinical tasks faced by physicians involved in perioperative care, where patients' preoperative information is subsequently enriched by the influx of new data from the operating room. The final output produces the suggested actions in a dynamic manner, updating the suggestions every 15 minutes.

Figure 1. Data flow and design of the reinforcement learning approach using a deep learning framework. Data from our adult surgical cohort included static and time series features. We defined the action space with a total of 25 discrete actions, combining 5 different levels of IV fluid and vasopressor dosages. The model environment consists of EHR information, the reward function, and the associated actions. The environment supplies the observable states to the agent. The agent analyzes the state using a deep neural network trained to produce the best action under the optimal policy. The suggested action is sent to the environment, and the scored reward due to the action is sent back to the agent for training the policy. The model was trained and validated on 70% of the data and tested on the remaining 30%.

The algorithm consists of two main layers, data processing and modeling, each containing a data transformer core and a data analytics core.32 Briefly, the workflow uses a data transformer to integrate data from multiple surgery rooms and prepares the data for analysis through preprocessing and feature transformation (Supplemental Fig. 2). We used Dueling Double Deep Q Networks (D3QN) as the architecture for the RL model to test and analyze the optimal behavior. We discretized the state space by clustering the clinical variables, which were resampled at 15-minute intervals as described in Predictor Features. Using the K-means++ clustering algorithm, we grouped patients to capture high-level similarities and then computed transition probabilities from the transition counts. We determined the number of clusters using silhouette analysis and selected 200 as the optimal value. We developed 50 distinct RL models for 50 different clustering outcomes obtained with different random initializations to account for the variability in policy values. We used SHapley Additive exPlanations (SHAP) to evaluate the features' influence on the RL model,37 approximating the SHAP values with an enhanced version of the Deep Learning Important FeaTures (DeepLIFT) algorithm.38 DeepLIFT decomposes the prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input. We trained the model for 300 epochs with a batch size of 256 experiences and a learning rate of 1e-3.
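A minimal sketch of the state-discretization step described above, using scikit-learn's K-means++ with silhouette analysis to pick the cluster count, is shown below. The candidate grid and synthetic data are illustrative; on the real cohort the authors selected k = 200.

```python
# K-means++ clustering of 15-minute feature vectors, with k chosen by
# silhouette score over a small candidate grid (illustrative only).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 16))  # stand-in for 16 clinical features per interval

best_k, best_score = None, -1.0
for k in (50, 100, 200, 300):
    km = KMeans(n_clusters=k, init="k-means++", n_init=10, random_state=0).fit(X)
    score = silhouette_score(X, km.labels_, sample_size=1000, random_state=0)
    if score > best_score:
        best_k, best_score = k, score

# Discrete state label for every 15-minute interval.
states = KMeans(n_clusters=best_k, init="k-means++", n_init=10,
                random_state=0).fit_predict(X)
```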
Action Space and Reward Function

We converted the vasopressors into their norepinephrine equivalents in mcg/kg/min (Supplemental Table 3).41 We discretized the action space into 5 groups for IV fluids and vasopressors separately, using cutoff values derived from the empirical distribution of historical actions to ensure sufficient representation for each action category, with one group corresponding to a dosage of 0 (Supplemental Tables 4 and 5). We defined the action space with a total of 25 discrete actions, combining 5 different levels of IV fluid and 5 different levels of vasopressor dosages.

We developed a reward function as a combination of long- and short-term rewards with two major parts. In modeling the postoperative outcomes, we assigned the long-term reward (or penalty) as a dedicated reward of +15 if no AKI occurred within 3 days following the surgery, and -15 otherwise. The second part of the reward function adds a penalty of -1.75 if the state is hypotensive, that is, if MAP is less than 65 mmHg or more than 20% below the baseline MAP. We calculated the baseline MAP as the median non-invasive MAP between 60 and 110 mmHg recorded within the 48 hours prior to the first surgery start date-time.
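A sketch of this two-part reward, assuming per-step MAP values and a terminal AKI flag are available, could look as follows; variable names are illustrative.

```python
# Two-part reward: a -1.75 per-step hypotension penalty plus a terminal
# +/-15 component depending on 3-day postoperative AKI.
def step_reward(map_mmhg: float, baseline_map: float,
                terminal: bool = False, aki_within_3_days: bool = False) -> float:
    r = 0.0
    # Short-term part: MAP < 65 mmHg or more than 20% below baseline.
    if map_mmhg < 65.0 or map_mmhg < 0.8 * baseline_map:
        r -= 1.75
    # Long-term part, applied once at the end of the surgery trajectory.
    if terminal:
        r += -15.0 if aki_within_3_days else 15.0
    return r
```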
Performance Evaluation

In RL, model performance is typically evaluated through interaction with a simulated environment using metrics such as cumulative reward. Because no such environment exists in the current scenario, we assessed the model's policy quality through action prediction accuracy and off-policy evaluation techniques, adopting visual analysis and weighted importance sampling (WIS) as quantitative methods.20,39,40 We used off-policy evaluation to quantitatively assess our trained AI agent (Ο€e) against the physicians' policies. WIS re-weights the rewards in the historical data (the physicians' policy Ο€b) by the importance sampling ratio between Ο€e and Ο€b. For visual analysis, the policy scores of the model were compared to the physician policy by relating the scores to outcome prevalence, along with the distribution of actions in the action space. We also evaluated the impact of differences in actions on long-term outcome prevalence. To address the instability of evaluation performance in long-term cases, we propose a modified version of WIS for long-term applications: we used a logarithmic transformation combined with Softmax to constrain the range of the cumulative importance ratio, improving stability in long-term experiments (Supplemental Methods).

Results

Patient Baseline Characteristics and Outcomes

Among the 28,586 patients with 34,186 major surgeries in the development cohort, the mean age was 57 years (standard deviation [SD], 17); 17,031 (50%) were female, 4,776 (14%) were African-American, 1,530 (4%) were Hispanic, and 15,578 (46%) had Medicare insurance (Table 1, Supplemental Table 2). The test cohort comprised 15,835 major surgeries from 13,961 patients. In this cohort, the mean age was 59 years (SD, 17); 7,909 (50%) were female, 2,233 (14%) were African-American, 756 (5%) were Hispanic, and 7,672 (48%) had Medicare insurance. The most common types of surgery, in descending order of frequency, were orthopedic, neurosurgical, and vascular procedures in both cohorts. The prevalence of postoperative complications in the development cohort was 11% for AKI within the first three days following surgery, 2% for 30-day mortality, and 4% for 90-day mortality. In the test cohort, the corresponding figures were 12% for postoperative AKI, 2% for 30-day mortality, and 3% for 90-day mortality.

Table 1. Clinical characteristics and outcomes of the patients.
Features | Development Cohort | Test Cohort
Number of encounters, n | 34,186 | 15,835
Demographic information
  Age, years, mean (SD) | 57 (17) | 59 (17)
  Female, n (%) | 17,155 (50) | 7,926 (50)
  African American, n (%) | 4,776 (14) | 2,233 (14)
  Body Mass Index, median (IQR) | 28 (24, 34) | 28 (24, 34)
  Emergency admission, n (%) | 12,243 (36) | 5,763 (36)
Baseline clinical information
  Three most common types of surgery, n (%)
    Orthopedic surgery | 10,109 (30) | 4,341 (27)
    Neurosurgery | 3,744 (11) | 2,546 (16)
    Vascular surgery | 3,420 (10) | 1,704 (11)
  Charlson comorbidity index, median (IQR) | 4 (2, 6) | 4 (2, 6)
  Reference estimated glomerular filtration rate, median (IQR) | 95.27 (81.86, 109.73) | 93.44 (80.53, 107.71)
  Baseline mean arterial pressure, mmHg, median (IQR)a | 86 (78, 94) | 87 (79, 95)
Intraoperative vitals, median (IQR)
  Systolic blood pressure, mmHg | 114 (102, 130) | 116 (104, 132)
  Mean arterial pressure, mmHg | 79 (70, 90) | 82.0 (72, 93)
  Heart rate, bpm | 75.0 (65.50, 86.50) | 75.0 (66.0, 86.50)
  Oxygen saturation (SpO2), % | 99.10 (97.50, 100.0) | 99.00 (97.20, 100.0)
  Fraction of inspired oxygen (FiO2), % | 40 (40, 40) | 40.0 (40, 40)
  End-tidal carbon dioxide (EtCO2), mmHg | 34 (32, 37) | 35 (33, 38)
  Respiration rate, breaths/minute | 10 (8, 12) | 12 (10, 14)
  Peak inspiratory pressure, mmHg | 18.0 (14, 23) | 18.0 (14, 22)
  Minimum alveolar concentration (MAC) | 0.62 (0.44, 0.81) | 0.56 (0.31, 0.77)
  Core temperature, degrees Celsius | 36.83 (36.28, 37.33) | 36.94 (36.33, 37.44)
Intraoperative medications, median (IQR)b
  Vasopressor total dose per 15 min (mcg/kg) | 0 (0, 0.04) | 0 (0, 0.08)
  Intravenous fluids total dose per 15 min (ml/kg) | 0.12 (0.03, 0.39) | 0.13 (0.03, 0.37)
Intraoperative hypotension, n (%) | 24,144 (70) | 9,677 (61)
Postoperative AKI, n (%)
  AKI within 3 days after surgery | 3,870 (11) | 1,856 (12)
  AKI within 7 days after surgery | 4,637 (14) | 2,242 (14)
  AKI during hospitalization | 5,649 (17) | 2,772 (18)
Mortality, n (%)
  30-day mortality | 749 (2) | 320 (2)
  90-day mortality | 1,350 (4) | 475 (3)
Abbreviations: SD, standard deviation; IQR, interquartile range; AKI, acute kidney injury; ICU, intensive care unit.
a Baseline MAP was calculated as the median non-invasive MAP between 60 and 110 mmHg recorded within the 48 hours prior to the first surgery start date-time.
b Values were calculated for 15-minute resampled series.

Model Evaluation

We trained the D3QN-based RL models on a development cohort of 34,186 surgeries, resampled to 545,965 15-minute epochs. All results are reported on a test cohort of 15,835 surgeries, resampled to 255,748 15-minute epochs. We illustrate different aspects of the agent-suggestion evaluation in Figures 2-5.
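A hedged sketch of how a Fig. 2-style side-by-side view of agent and physician action distributions could be rendered is given below; the inputs are assumed to be integer action bins in {0,...,4}, and the layout is ours, not the paper's.

```python
# 5x5 histograms of (IV-fluid bin, vasopressor bin) actions for the agent
# and the physicians, plotted as heatmaps (illustrative data).
import numpy as np
import matplotlib.pyplot as plt

def action_histogram(iv_bins, vaso_bins):
    h = np.zeros((5, 5))
    for i, v in zip(iv_bins, vaso_bins):
        h[i, v] += 1
    return h / h.sum()

rng = np.random.default_rng(1)
agent = action_histogram(rng.integers(0, 5, 1000), rng.integers(0, 5, 1000))
phys = action_histogram(rng.integers(0, 5, 1000), rng.integers(0, 5, 1000))

fig, axes = plt.subplots(1, 2, figsize=(8, 3.5))
for ax, h, title in [(axes[0], agent, "Agent"), (axes[1], phys, "Physician")]:
    im = ax.imshow(h, cmap="viridis")
    ax.set(title=title, xlabel="Vasopressor bin", ylabel="IV fluid bin")
    fig.colorbar(im, ax=ax)
plt.tight_layout()
```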
The action space distribution in Fig. 2 suggests that the distribution of the actions recommended by the agent and that of the actions taken by the physicians have a high degree of similarity. The average Q-value distributions for surgeries grouped by the presence or absence of postoperative AKI are illustrated in Fig. 3 (A). We observed that surgeries with postoperative AKI tended to have lower return values, while sessions without AKI concluded with higher return values. Fig. 3 (B) presents the relationship between physicians' actions and postoperative AKI within the first 3 days following surgery: physicians' treatments with a higher return value correspond to lower AKI probability, while treatments with a low return led to higher AKI probability.

On average, the RL model recommends less vasopressor and more IV fluid. The model replicated the physicians' vasopressor dosing decisions in 69% of cases, and it recommended a higher dose in 21% and a lower dose in 10% of treatments. For IV fluids, the actual dose was within 0.05 ml/kg per 15 min of the model's suggestion in 41% of cases; in the remainder, the model's recommendation was higher in 32% and lower in 27% of treatments. Fig. 4 shows that administering either treatment at doses higher or lower than those recommended by the AI policy was associated with an increased probability of AKI. We present the distribution of the estimated policy value of the physicians' actual treatments, the RL policy, a random policy, and a zero-drug policy in Fig. 5 (A), where the estimated policy value is the expected cumulative reward each policy would yield over time. In this figure, the RL policy resulted in a higher estimated value than the alternatives. The ranking of the most influential variables in the RL model is given in Fig. 5 (B). We identified the Charlson comorbidity index and age as the top two most important features in the decision-making process.
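A minimal sketch of the feature-attribution step, using SHAP's DeepLIFT-based DeepExplainer, is shown below. The tiny network is a placeholder for the actual D3QN, the shapes are illustrative, and the layout of `shap_values` (here assumed to be one array per action output, as in shap's classic PyTorch interface) can vary by shap version.

```python
# Approximate SHAP values for a Q-network with shap.DeepExplainer.
import numpy as np
import shap
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 25))
background = torch.randn(100, 16)          # reference sample for DeepLIFT baselines
explainer = shap.DeepExplainer(model, background)
shap_values = explainer.shap_values(torch.randn(32, 16))  # per-action attributions

sv = np.stack(shap_values)                 # (n_actions, n_samples, n_features)
mean_abs = np.abs(sv).mean(axis=(0, 1))    # global importance per feature
```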
Example Surgeries

We illustrate four example surgical sessions, including cases with and without postoperative AKI within the first three days after surgery. Notably, two of these example patients did not experience hypotensive episodes, and we observe that the physician tended to administer IV fluids more often, and vasopressors almost every hour, when MAP fluctuated around 65 mmHg, compared with the case where the 15-minute average MAP remained consistently above 90 mmHg (Supplemental Figures 3 and 4). In both cases, the RL model did not recommend vasopressor administration. Similarly, compared with the RL model, the physician administered more IV fluids in the former case, while the RL model recommended more fluids in the latter. In the hypotensive example with postoperative AKI, the RL model recommends higher amounts of IV fluids from the start and maintains this level throughout most of the surgery compared with the physician's administration. Additionally, the RL model recommends the use of vasopressors, whereas the physician chose not to administer them. In the case of a non-hypotensive surgical patient who had postoperative AKI, the RL model's recommendations aligned with the physician's decision not to administer vasopressors, along with similar IV fluid administration on average (Supplemental Figures 5 and 6).

Figure 2. Comparison of actions proposed by the trained agent (A) and taken by the physician agent (B). Each bin represents the tuple of discretized IV fluid and vasopressor actions.

Figure 3. Average return per surgery (A) and the relationship between the return of physicians' treatments and postoperative AKI within the first 3 days following surgery in the test set (B). (A) Distribution of average returns for cases with and without postoperative AKI. (B) Relationship between physicians' return values and AKI prevalence in the test set.

Figure 4. The average dose excess, calculated as the difference between the given and recommended dose over all 15-minute intervals, for IV fluids (A) and for vasopressors (B) per surgery.

Figure 5. Distribution of estimated values for physician, agent, zero, and random policies (A), and feature importance derived for the trained model (B). (A) Comparison of estimated policy value by WIS for the physician policy, agent policy, zero policy, and random policy with 50 different K-means initializations. (B) Feature importance obtained for the RL model.

Discussion

Postoperative AKI is prevalent and occurs in almost one in five patients following a major surgery.7 Intraoperative hypotension affects approximately one in three patients undergoing non-cardiac surgical interventions.42 Prior research has demonstrated strong associations between intraoperative hypotension and postoperative AKI.43 Improving the management of intraoperative hypotension during non-cardiac surgery could save hospitals up to $4.6 million annually, primarily by decreasing the incidence of postoperative AKI.44 In this study, we introduced an RL modeling approach aimed at assisting physicians in preventing intraoperative hypotension and reducing the risk of postoperative AKI by providing optimal IV fluid and vasopressor dosage recommendations.

The model demonstrated promise in recommending actions that are associated with improved outcomes. While the actions recommended by the agent and the actions taken by physicians had a high degree of similarity, administering either IV fluids or vasopressors at higher doses than those recommended by the AI policy was associated with an increased probability of postoperative AKI, and administering vasopressors at lower doses than recommended by the AI policy was also (albeit weakly) associated with an increased probability of postoperative AKI. We showed that the RL policy resulted in a higher estimated value than the alternatives, including the treatments administered by physicians.

As previously noted, there have been studies dedicated to this issue, but their effectiveness relies heavily on the correctness of physician actions, which may replicate non-evidence-based practices in some cases, yielding an unstable policy. One possible solution could involve comparing physician actions to actions suggested by multiple subject matter experts after thorough review of a sample of physician actions in various scenarios, especially scenarios in which patients experienced better or worse than expected outcomes. There exists no optimal, one-size-fits-all approach to clinical decision-making.17,45 Due to the complex, high-stakes, and often uncertain nature of surgical decisions
for patients with varying characteristics, a collaborative approach to shared decision-making involving the patient and all members of a clinical care team can improve patient satisfaction and may reduce the costs associated with unnecessary treatments.

Although our results show promise, there are numerous challenges associated with implementing such a method in healthcare, especially in perioperative settings, unlike applications that can be tested in simulated environments where RL performance can be easily quantified. Unlike synthetic environments, clinical settings do not permit real-time experimentation with unvalidated policies. Thorough model validation is required to ensure patient safety.

The literature describing the application of RL in healthcare focuses on its usage in dynamic environments to optimize treatment regimens. These dynamic environments, such as sepsis, mechanical ventilation, or glycemic decompensation, require that current models adapt rapidly to evolving patient states to avoid decompensation and physiological failure. Thus, one challenge for RL algorithms is the frequency of treatment recommendations. In their models for sepsis, another condition partially defined by hypotension, Komorowski et al.20 and Tang et al.30 aggregated patient data every four hours. The shortest timestep used in the current literature is one hour.21,24,31 Zhang et al.46 utilized a model that specifically identified highly variable decision-making points among physicians to make treatment recommendations. Yu et al.24 used 10-minute intervals to accomplish optimal mechanical ventilation, a treatment that is similar to fluid resuscitation in its timing and necessity. Peine et al.47 used 4-hour time steps to dynamically optimize the mechanical ventilation regime for critically ill patients. Prasad et al.48 also used a 10-minute timestep, but they optimized mechanical ventilation and sedation dosage and weaning from mechanical ventilation. In this work, we chose a time interval of 15 minutes under the assumption that intraoperative hypotension requires rapid correction, especially in a perioperative environment.

Future implementations of reinforcement learning in surgical settings should incorporate dynamic reward functions that accept input from both the patient and all members of a perioperative care team. Such collaborative reward functions can balance the risk aversion of individual patients and physicians with the expected benefits and postoperative care trajectories highlighted by other team members. By giving the patient increased control over their own algorithm-influenced clinical care, collaborative and dynamic reward functions can lead to an increase in overall patient satisfaction.

Conclusion

We present a reinforcement learning modeling approach to recommend optimal intravenous fluid and vasopressor doses to avoid intraoperative hypotension and postoperative acute kidney injury. When clinician actions most closely imitated the model's recommendations, AKI incidence was lowest. These findings require prospective validation and clinical implementation to assess their potential to improve perioperative care and patient outcomes.

Acknowledgements

We acknowledge the University of Florida Integrated Data Repository (IDR) and the University of Florida Health Office of the Chief Data Officer for providing the analytic data set for this project.
Additionally, the research reported in this publication was supported by the National Center for Advancing Translational Sciences of the National Institutes of Health under University of Florida Clinical
and Translational Science Awards UL1 TR000064 and UL1 TR001427.

Funding

T.O.B. was supported by K01 DK120784 from the National Institute of Diabetes and Digestive and Kidney Diseases (NIH/NIDDK). T.O.B. also received grant 97071 from the Clinical and Translational Science Institute, University of Florida, and a Research Opportunity Seed Fund grant (DRPD-ROSF2023) from the University of Florida Research. Additionally, the research reported in this publication was supported by the National Center for Advancing Translational Sciences of the National Institutes of Health under University of Florida Clinical and Translational Science Awards UL1 TR000064 and UL1 TR001427. The Titan X Pascal partially used for this research was donated by the NVIDIA Corporation. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. AB and TOB have full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

Author contributions

AB and TOB have full access to the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. TOB, PR, BS, TJL, TL, DN, EA, and YR contributed to the study design. EA, TL, DN, HL, TJL, and TOB drafted the manuscript. EA, TL, DN, HL, and MR worked on data processing. Analysis was performed by EA, TL, and DN. Funding was obtained by TOB and TJL. Administrative, technical, and material support was provided by AB and PR. All authors contributed to the interpretation of data and to critical revision of the manuscript for important intellectual content.

Additional information

Competing interests

The authors declare no competing interests.

References

1. Bihorac A, Yavas S, Subbiah S, et al. Long-term risk of mortality and acute kidney injury during hospitalization after major surgery. Annals of Surgery. 2009;249(5):851-858.
2. Gameiro J, Fonseca JA, Neves M, Jorge S, Lopes JA. Acute kidney injury in major abdominal surgery: incidence, risk factors, pathogenesis and outcomes. Annals of Intensive Care. 2018;8:1-10.
3. Nadim MK, Forni LG, Bihorac A, et al. Cardiac and vascular surgery-associated acute kidney injury: the 20th international consensus conference of the ADQI (Acute Disease Quality Initiative) group. Journal of the American Heart Association. 2018;7(11):e008834.
4. Park JT. Postoperative acute kidney injury. Korean Journal of Anesthesiology. 2017;70(3):258-266.
5. Prowle JR, Forni LG, Bell M, et al. Postoperative acute kidney injury in adult non-cardiac surgery: joint consensus report of the Acute Disease Quality Initiative and PeriOperative Quality Initiative. Nature Reviews Nephrology. 2021;17(9):605-618.
6. Romagnoli S, Ricci Z, Ronco C. Perioperative acute kidney injury: prevention, early recognition, and supportive measures. Nephron. 2018;140(2):105-110.
7. Zarbock A, Weiss R, Albert F, et al. Epidemiology of surgery associated acute kidney injury (EPIS-AKI): a prospective international observational multi-center clinical study. Intensive Care Medicine. 2023;49(12):1441-1455.
8. GΓΆcze I, Jauch D, GΓΆtz M, et al. Biomarker
-guided intervention to prevent acute kidney injury after major surgery: the prospective randomized BigpAK study. LWW; 2018.
9. Meersch M, Schmidt C, Hoffmeier A, et al. Prevention of cardiac surgery-associated AKI by implementing the KDIGO guidelines in high risk patients identified by biomarkers: the PrevAKI randomized controlled trial. Intensive Care Medicine. 2017;43:1551-1561.
10. Penev Y, Ruppert MM, Bilgili A, et al. Intraoperative hypotension and postoperative acute kidney injury: a systematic review. The American Journal of Surgery. 2024;232:45-53.
11. Saugel B, Sander M, Katzer C, et al. Association of intraoperative hypotension and cumulative norepinephrine dose with postoperative acute kidney injury in patients having noncardiac surgery: a retrospective cohort analysis. British Journal of Anaesthesia. 2025;134(1):54-62.
12. Kouz K, Hoppe P, Briesenick L, Saugel B. Intraoperative hypotension: pathophysiology, clinical relevance, and therapeutic approaches. Indian Journal of Anaesthesia. 2020;64(2):90-96.
13. Calvo-Vecino JM, RipollΓ©s-Melchor J, Mythen M, et al. Effect of goal-directed haemodynamic therapy on postoperative complications in low-moderate risk surgical patients: a multicentre randomised controlled trial (FEDORA trial). British Journal of Anaesthesia. 2018;120(4):734-744.
14. Salmasi V, Maheshwari K, Yang D, et al. Relationship between intraoperative hypotension, defined by either reduction from baseline or absolute thresholds, and acute kidney and myocardial injury after noncardiac surgery: a retrospective cohort analysis. Anesthesiology. 2017;126(1):47-65.
15. Wijnberge M, Geerts BF, Hol L, et al. Effect of a machine learning-derived early warning system for intraoperative hypotension vs standard care on depth and duration of intraoperative hypotension during elective noncardiac surgery: the HYPE randomized clinical trial. JAMA. 2020;323(11):1052-1060.
16. Yu C, Liu J, Nemati S, Yin G. Reinforcement learning in healthcare: a survey. ACM Computing Surveys (CSUR). 2021;55(1):1-36.
17. Datta S, Li Y, Ruppert MM, et al. Reinforcement learning in surgery. Surgery. 2021;170(1):329-332.
18. Khezeli K, Siegel S, Shickel B, Ozrazgat-Baslanti T, Bihorac A, Rashidi P. Reinforcement learning for clinical applications. Clinical Journal of the American Society of Nephrology. 2023;18(4):521-523.
19. Liu S, See KC, Ngiam KY, Celi LA, Sun X, Feng M. Reinforcement learning for clinical decision support in critical care: comprehensive review. Journal of Medical Internet Research. 2020;22(7):e18477.
20. Komorowski M, Celi LA, Badawi O, Gordon AC, Faisal AA. The artificial intelligence clinician learns optimal treatment strategies for sepsis in intensive care. Nature Medicine. 2018;24(11):1716-1720.
21. Nanayakkara T, Clermont G, Langmead CJ, Swigon D. Unifying cardiovascular modelling with deep reinforcement learning for uncertainty aware control of sepsis treatment. PLOS Digital Health. 2022;1(2):e0000012.
22. Peng X, Ding Y, Wihl D, et al. Improving sepsis treatment strategies by combining deep and kernel-based reinforcement learning. AMIA Annual Symposium Proceedings; 2018.
23. Kalimouttou A, Kennedy JN, Feng J, et al. Optimal vasopressin initiation in septic shock: the OVISS reinforcement learning study. JAMA. 2025.
24. Yu C, Ren G, Dong Y. Supervised-actor-critic reinforcement learning for intelligent mechanical ventilation and sedative dosing in intensive care units. BMC Medical Informatics and Decision Making. 2020;20:1-8.
25.
Lopez-Martinez D, Eschenfeldt P, Ostvar S, Ingram M, Hur C, Picard R. Deep reinforcement learning for
optimal critical care pain management with morphine using dueling double-deep Q networks. 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC); 2019.
26. Sun Q, Jankovic MV, Budzinski J, et al. A dual mode adaptive basal-bolus advisor based on reinforcement learning. IEEE Journal of Biomedical and Health Informatics. 2018;23(6):2633-2641.
27. Lee HY, Chung S, Hyeon D, et al. Reinforcement learning model for optimizing dexmedetomidine dosing to prevent delirium in critically ill patients. npj Digital Medicine. 2024;7(1):325.
28. Zhang K, Wang H, Du J, et al. An interpretable RL framework for pre-deployment modeling in ICU hypotension management. npj Digital Medicine. 2022;5(1):173.
29. Jeter R, Lehman L-W, Josef C, Shashikumar S, Nemati S. Learning to treat hypotensive episodes in sepsis patients using a counterfactual reasoning framework. medRxiv. 2021:2021.03.03.21252863.
30. Tang S, Modi A, Sjoding M, Wiens J. Clinician-in-the-loop decision making: reinforcement learning with near-optimal set-valued policies. International Conference on Machine Learning; 2020.
31. Futoma J, Masood MA, Doshi-Velez F. Identifying distinct, effective treatments for acute hypotension with SODA-RL: safely optimized diverse accurate reinforcement learning. AMIA Summits on Translational Science Proceedings. 2020;2020:181.
32. Bihorac A, Ozrazgat-Baslanti T, Ebadi A, et al. MySurgeryRisk: development and validation of a machine-learning risk algorithm for major complications and death after surgery. Annals of Surgery. 2019;269(4):652-662.
33. Kellum JA, Lameire N, Aspelin P, et al. Kidney Disease: Improving Global Outcomes (KDIGO) acute kidney injury work group. KDIGO clinical practice guideline for acute kidney injury. Kidney International Supplements. 2012;2(1):1-138.
34. Inker LA, Eneanya ND, Coresh J, et al. New creatinine- and cystatin C-based equations to estimate GFR without race. New England Journal of Medicine. 2021;385(19):1737-1749.
35. Ozrazgat-Baslanti T, Ren Y, Adiyeke E, et al. Development and validation of a race-agnostic computable phenotype for kidney health in adult hospitalized patients. PLOS ONE. 2024;19(4):e0299332.
36. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. Journal of Chronic Diseases. 1987;40(5):373-383.
37. Lundberg SM, Lee S-I. A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems. 2017;30.
38. Shrikumar A, Greenside P, Kundaje A. Learning important features through propagating activation differences. International Conference on Machine Learning; 2017.
39. Hanna J, Stone P, Niekum S. Bootstrapping with models: confidence intervals for off-policy evaluation. Proceedings of the AAAI Conference on Artificial Intelligence; 2017.
40. Thomas P, Brunskill E. Data-efficient off-policy policy evaluation for reinforcement learning. International Conference on Machine Learning; 2016.
41. Goradia S, Sardaneh AA, Narayan SW, Penm J, Patanwala AE. Vasopressor dose equivalence: a scoping review and suggested formula. Journal of Critical Care. 2021;61:233-240.
42. Saasouh W, Christensen AL, Xing F, et al. Incidence of intraoperative hypotension during non-cardiac surgery in community anesthesia practice: a retrospective observational analysis. Perioperative Medicine.
2023;12(1):29.
43. Sun LY, Wijeysundera DN, Tait GA, Beattie WS. Association of intraoperative
hypotension with acute kidney injury after elective noncardiac surgery. Anesthesiology. 2015;123(3):515-523.
44. Keuffel EL, Rizzo J, Stevens M, Gunnarsson C, Maheshwari K. Hospital costs associated with intraoperative hypotension among non-cardiac surgical patients in the US: a simulation model. Journal of Medical Economics. 2019;22(7):645-651.
45. Loftus TJ, Tighe PJ, Filiberto AC, et al. Artificial intelligence and surgical decision-making. JAMA Surgery. 2020;155(2):148-158.
46. Zhang K, Wang Y, Du J, et al. Identifying decision points for safe and interpretable reinforcement learning in hypotension treatment. arXiv preprint arXiv:2101.03309. 2021.
47. Peine A, Hallawa A, Bickenbach J, et al. Development and validation of a reinforcement learning algorithm to dynamically optimize mechanical ventilation in critical care. npj Digital Medicine. 2021;4(1):32.
48. Prasad N, Cheng L-F, Chivers C, Draugelis M, Engelhardt BE. A reinforcement learning approach to weaning of mechanical ventilation in intensive care units. arXiv preprint arXiv:1704.06300. 2017.

Supplemental Materials

Learning optimal treatment strategies for intraoperative hypotension using deep reinforcement learning

Esra Adiyeke*, Tianqi Liu*, Venkata Sai Dheeraj Naganaboina*, Han Li, Tyler J. Loftus, Yuanfang Ren, Benjamin Shickel, Matthew M. Ruppert, Karandeep Singh, Ruogu Fang, Parisa Rashidi, Azra Bihorac#, Tezcan Ozrazgat-Baslanti#
* These authors have contributed equally as first authors
# These authors have contributed equally as senior authors

This supplemental material has been provided by the authors to give readers additional information about their work.

Supplemental Material Table of Contents

Supplemental Methods
Supplemental Figure 1. Figure illustrating derivation of the study population.
Supplemental Figure 2. Workflow pipeline.
Supplemental Figure 3. Comparison between doses suggested by the RL model and the physician's administration for a surgical session without postoperative AKI within the first 3 days after surgery. (Example surgery 1)
Supplemental Figure 4. Comparison between doses suggested by the RL model and the physician's administration for a surgical session without postoperative AKI within the first 3 days after surgery. (Example surgery 2)
Supplemental Figure 5. Comparison between doses suggested by the RL model and the physician's administration for a surgical session with postoperative AKI within the first 3 days after surgery. (Example surgery 3)
Supplemental Figure 6. Comparison between doses suggested by the RL model and the physician's administration for a surgical session with postoperative AKI within the first 3 days after surgery. (Example surgery 4)
Supplemental Table 1. Table listing characteristics of input variables.
Supplemental Table 2. Detailed cohort characteristics.
Supplemental Table 3. Norepinephrine conversion factors.
Supplemental Table 4. IV fluid discretization.
Supplemental Table 5. Vasopressor discretization.

Supplementary Methods

A. Cohort

Data from the full patient cohort were divided into a development cohort, comprising patients admitted between June 1, 2014 and November 29, 2018, and a validation cohort, from November 30, 2018 to September 20, 2020, in a 70% to 30% ratio.
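For illustration, such a chronological 70/30 split could be implemented as in the sketch below; the column name is an assumption about the cohort table, not the actual schema.

```python
# Chronological split: earliest 70% of encounters for development,
# most recent 30% for validation/testing.
import pandas as pd

def chronological_split(cohort: pd.DataFrame, frac: float = 0.7):
    """Split encounters by admission date; returns (development, test)."""
    ordered = cohort.sort_values("admission_date")
    cut = int(len(ordered) * frac)
    return ordered.iloc[:cut], ordered.iloc[cut:]
```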
Patients were excluded if any of the following applied: 1) end-stage kidney disease present; 2) age < 18 years on admission; 3) not an inpatient encounter; 4) missing admission or discharge date, or missing surgery stop time; 5) surgery performed for organ donation, a minor gastrointestinal procedure, or a minor pain management procedure, or anesthesia administered outside of the operating room; 6) the surgery was < 60 minutes; 7) 3-day or 7-day acute kidney injury (AKI) status was missing due to insufficient serum creatinine data; or 8) cardiac surgery (Supplemental Figure 1).
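A hedged sketch of applying such criteria with boolean masks is shown below; the flag columns (e.g., "eskd", "cardiac_surgery") are hypothetical names for precomputed indicators, not fields from the actual repository.

```python
# Apply exclusion criteria as a combined boolean mask over the encounter table.
import pandas as pd

def apply_exclusions(encounters: pd.DataFrame) -> pd.DataFrame:
    keep = (
        ~encounters["eskd"]                       # no end-stage kidney disease
        & (encounters["age"] >= 18)               # adults only
        & encounters["inpatient"]                 # inpatient encounters
        & ~encounters["cardiac_surgery"]          # exclude cardiac surgery
        & (encounters["surgery_minutes"] >= 60)   # surgeries of at least 60 min
        & encounters["aki_status_available"]      # sufficient creatinine data
    )
    return encounters[keep]
```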
B. Dosage pre-processing and action space

Intravenous (IV) fluids included boluses and continuous infusions of crystalloid and colloid solutions. Vasopressors included dopamine, epinephrine, norepinephrine, phenylephrine, and vasopressin. Outliers were capped to expert-defined, clinically plausible values identified using the maximum doses used in refractory shock (Supplemental Table 3). Vasopressor dosages were converted into norepinephrine equivalents using the dosage conversions listed in Supplemental Table 3. Using norepinephrine-equivalent dosages normalized to rate and weight, we developed a discrete 5 x 5 action space of vasopressor and IV fluid dosages given in 15-minute intervals. Thresholds for action bins were chosen by selecting percentile thresholds at points of high rates of dosage increase and were validated through expert review. We converted each drug dosage at every timestep into an integer representing its bin: no drug dosage was encoded as bin 1, the lowest dosage category as bin 2, and the highest dosage category as bin 5. Thus, interventions were represented as tuples of total IV fluid and vasopressor dosages per 15-minute interval.
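A minimal sketch of this 5 x 5 action encoding is given below; the cutoff values are placeholders, not the paper's percentile thresholds from Supplemental Tables 4 and 5.

```python
# Digitize per-interval doses into 5 bins (bin 1 = no drug) and combine
# the (IV bin, vasopressor bin) tuple into one of 25 discrete actions.
import numpy as np

IV_CUTS = [0.05, 0.15, 0.40]     # ml/kg per 15 min (illustrative)
VASO_CUTS = [0.02, 0.06, 0.15]   # mcg/kg/min norepinephrine-equivalent (illustrative)

def dose_to_bin(dose: float, cuts) -> int:
    if dose <= 0:
        return 1                               # bin 1: no drug
    return 2 + int(np.digitize(dose, cuts))    # bins 2..5 by increasing dose

def action_index(iv_dose: float, vaso_dose: float) -> int:
    """Flatten the (IV bin, vasopressor bin) tuple into 0..24."""
    return (dose_to_bin(iv_dose, IV_CUTS) - 1) * 5 + (dose_to_bin(vaso_dose, VASO_CUTS) - 1)
```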
C. Model Details

Surgery can be conceptualized as a sequential decision-making process. To train an agent, we used a reinforcement learning algorithm with 16 clinical variables (age, sex, Charlson comorbidity index, body mass index, heart rate, systolic blood pressure, mean arterial pressure, body temperature, respiratory rate, minimum alveolar concentration, SpO2/FiO2 ratio, peak inspiratory pressure, end-tidal carbon dioxide, average mean arterial blood pressure of the past 48 hours, and cumulative IV fluid and cumulative vasopressor administered during surgery). The physiological state of a patient was represented by these 16 variables, and thus the surgical decision-making process could be formulated as a partially observable Markov Decision Process (MDP). The model-free MDP was defined by a tuple $\{S, A, R, \gamma\}$ with:

1. $S$ is the state of a patient (in our model, it contains the 16 clinical variables).
2. $A$ is the finite set of actions for state $S$ (in our model, the doses of intravenous fluids and vasopressors are discretized into 25 actions).
3. $R(s')$ is the immediate reward received for transitioning to the next state $s'$.
4. $\gamma$ is the discounting factor, which indicates how much less future rewards are weighted than an immediate reward.

Our model operated on 15-minute time intervals, within which multiple measurements were recorded. We resampled by averaging all measurements within each interval as a preprocessing step before inputting the data into the model. We set $\gamma$ to 0.99 in the model.

We utilized the K-means++ clustering algorithm to categorize the resampled data points into distinct clusters, each representing a unique patient state. Specifically, we used a total of 16 features derived from the patient data for clustering, to provide a comprehensive representation of the patient's condition. The K-means++ algorithm was chosen for its efficiency in selecting initial cluster centroids, minimizing the chances of poor clustering due to suboptimal centroid initialization. Cluster membership was assigned based on the closest centroid, ensuring that each data point was grouped with the most similar patient states. We determined an optimal number of 200 clusters using the elbow method applied to silhouette analysis, which effectively discretized our state space into a manageable set of 200 potential states. We developed 50 distinct RL models for 50 different clustering outcomes obtained with different random initializations to account for the variability in policy values. This discretization allows us to group patients with similar medical conditions and tailor specific treatment strategies for each group. Additionally, the cluster number was included as a feature in the overall feature set before training, providing our model with an additional layer of insight into patient state transitions. We constructed the transition matrix $T(s', s, a)$ by counting the observed transitions in the training dataset and then normalizing these frequencies to obtain proper transition probabilities.

Reward Function

We developed a reward function as a combination of long- and short-term rewards with two major parts. The reward is the sum of the two parts:

$r = r_{aki} + r_{hypo}$ (Eq. 1)

Part 1: Postoperative AKI outcome. This acts as the long-term reward/penalty, with a dedicated reward of +15 if no AKI occurred within 3 days following surgery and -15 if AKI occurred within 3 days following surgery:

$r_{aki} = \begin{cases} +15, & \text{no AKI within 3 days of surgery} \\ -15, & \text{AKI within 3 days of surgery} \end{cases}$ (Eq. 2)

Part 2: Hypotension. The function adds a penalty of 1.75 ($c_1$) if the state is hypotensive, that is, if the mean arterial pressure (MAP) is less than 65 mmHg or more than 20% below the baseline MAP. The baseline MAP was calculated as the median non-invasive MAP between 60 and 110 mmHg recorded within the 48 hours prior to the first surgery start date-time:

$r_{hypo} = \begin{cases} -1.75, & MAP_t < 65 \text{ mmHg or } MAP_t < 0.8 \times MAP_{baseline} \\ 0, & \text{otherwise} \end{cases}$ (Eq. 3)

Evaluation of physicians' actions

We used Dueling Double Deep Q Networks (D3QN) as the architecture for the RL model to learn an optimal policy (agent policy) and test it. We evaluated the real actions (the policy) of physicians using temporal difference (TD) learning of the Q function on the observed state, action, and reward tuples in surgeries. The Q function is computed iteratively by the following update:

$Q^{\pi}(s, a) \leftarrow Q^{\pi}(s, a) + \alpha \left( r + \gamma Q^{\pi}(s', a') - Q^{\pi}(s, a) \right)$ (Eq. 4)

where $\alpha$ is the learning rate and $r$ is the immediate reward. The loss function of the D3QN is:

$L_{DQN} = \left( r + \gamma \max_{a_{t+1}} Q(s_{t+1}, a_{t+1}; \theta_{target}) - Q(s, a; \theta_{pred}) \right)^2$ (Eq. 5)

The training experiences are recorded from physicians; therefore, the actions recommended by the AI agent should have a distribution similar to the physicians' actions to ensure the safety of the agent policy. We added a KL divergence penalty to the loss function to constrain the agent's action distribution from diverging from the physicians' actions:

$L = L_{DQN} + \alpha L_{KL}$ (Eq. 6)

$L_{KL} = D_{KL}(D_{Physician} \| D_{Agent}) = \sum_{i=1}^{N} \left[ p_{phy}(x_i) \log p_{phy}(x_i) - p_{phy}(x_i) \log q_{agent}(x_i) \right]$ (Eq. 7)
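A minimal PyTorch sketch of a dueling Q-network and the KL-regularized objective in Eqs. 5-7 is given below. Shapes, hyperparameters, and the choice to form the agent's action distribution from a softmax over Q-values are our assumptions; this is not the authors' released implementation. (Note the paper overloads $\alpha$ as both the TD learning rate and the KL weight; here `alpha` is the KL weight.)

```python
# Dueling Q-network with a double-DQN TD target and a KL penalty toward
# the physicians' empirical action distribution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DuelingQNet(nn.Module):
    def __init__(self, n_features=16, n_actions=25, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)        # state value V(s)
        self.adv = nn.Linear(hidden, n_actions)  # advantages A(s, a)

    def forward(self, s):
        h = self.trunk(s)
        a = self.adv(h)
        # Dueling aggregation: Q = V + (A - mean A).
        return self.value(h) + a - a.mean(dim=1, keepdim=True)

def d3qn_loss(pred_net, target_net, batch, gamma=0.99, alpha=0.1):
    s, a, r, s_next, done = batch  # a: LongTensor of action indices in 0..24
    q = pred_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Double DQN: the online net picks the action, the target net scores it.
        a_star = pred_net(s_next).argmax(dim=1, keepdim=True)
        q_next = target_net(s_next).gather(1, a_star).squeeze(1)
        target = r + gamma * (1 - done) * q_next
    td_loss = F.mse_loss(q, target)                      # Eq. 5
    # KL(physician || agent) over batch-level action distributions (Eq. 7);
    # the agent distribution here is a softmax over Q-values (an assumption).
    phys = torch.bincount(a, minlength=25).float() + 1e-6
    phys = phys / phys.sum()
    agent = F.softmax(pred_net(s), dim=1).mean(dim=0) + 1e-6
    kl = torch.sum(phys * (phys.log() - agent.log()))
    return td_loss + alpha * kl                          # Eq. 6
```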
Estimation of the AI policy

Our model learned an optimal policy, which we refer to as the agent policy, with the goal of maximizing the sum of rewards to avoid long-term AKI and short-term hypotension. The agent policy starts as a random policy that is iteratively evaluated and then improved until converging to an optimal solution. After convergence, the agent policy $\pi^*$ corresponds to the actions with the highest state-action value:

$\pi^*(s) \leftarrow \arg\max_a Q^{\pi^*}(s, a), \quad \forall s$ (Eq. 8)
D. Model Evaluation

Weighted Importance Sampling (WIS)

Traditional reinforcement learning applications have typically been tested in simulated environments, such as video games, where the performance of reinforcement learning algorithms can be easily quantified. However, unlike synthetic environments, clinical settings do not permit real-time experimentation with unvalidated policies. Off-policy evaluation allows for retrospective assessment of a target policy (e.g., the model-suggested treatment strategy) under a different behavior policy (e.g., physician decision making). Consequently, off-policy evaluation is adopted for our task to evaluate our trained AI agent ($\pi_e$) based on the physicians' policies. Weighted Importance Sampling (WIS) re-weights the rewards in the historical data (the physicians' policy $\pi_b$) by the importance sampling ratio between $\pi_e$ and $\pi_b$. The per-step importance ratio at step $t$ is defined as:

$\rho_t = \frac{\pi_e(a_t \mid s_t)}{\pi_b(a_t \mid s_t)}$ (Eq. 9)

The cumulative importance ratio up to step $t$ is:

$\rho_{1:t} = \prod_{t'=1}^{t} \rho_{t'}$ (Eq. 10)

where $t$ is the end of a surgery. However, the cumulative importance ratio is not stable for long surgeries due to the large number of steps (15-minute intervals): when most $\rho_t$ are larger than 1, the product quickly explodes to a very large number, which makes the evaluation unstable. To solve this problem, we proposed a new cumulative ratio:

$\rho'_{1:t} = \ln \rho_{1:t} = \ln \prod_{t'=1}^{t} \rho_{t'}$ (Eq. 11)

The $\ln$ compresses the range of the input value, but if $\rho_{1:t}$ is smaller than 1, the output of the $\ln$ function is smaller than 0. Since a negative cumulative ratio is not meaningful in this context, we instead used the Softmax (logistic) function, which maps values from the range $(-\infty, \infty)$ to $[0, 1]$:

$\rho''_{1:t} = \frac{1}{1 + e^{-\rho'_{1:t}}} = \frac{1}{1 + e^{-\ln \rho_{1:t}}} = \frac{1}{1 + \frac{1}{\rho_{1:t}}}$ (Eq. 12)

We then calculated the average cumulative importance ratio for a dataset $D$ with $|D|$ trajectories (the number of surgeries in this case):

$w_D = \frac{\sum_{i=1}^{|D|} \rho''_{1:t_i}(i)}{|D|}$ (Eq. 13)

The trajectory-wise WIS estimator is given by:

$V_{WIS}(i) = \frac{\rho''_{1:t_i}(i)}{w_D} \left( \sum_{t=1}^{t_i} \gamma^{t-1} r_t \right)$ (Eq. 14)

The WIS estimator is then the average over all trajectories:

$WIS = \frac{1}{|D|} \sum_{i=1}^{|D|} V_{WIS}(i)$ (Eq. 15)

where $V_{WIS}(i)$ is WIS applied to the i-th trajectory.
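A compact sketch of this modified estimator, assuming each trajectory is available as per-step tuples of evaluation-policy probability, behavior-policy probability, and reward, could look as follows.

```python
# Modified WIS (Eqs. 9-15): importance ratios are accumulated in log space
# and squashed with the logistic map before the weighted average.
import numpy as np

def wis_estimate(trajectories, gamma=0.99):
    """trajectories: list of lists of (pi_e, pi_b, reward) per 15-min step,
    with pi_e and pi_b strictly positive probabilities."""
    rho_sq, returns = [], []
    for traj in trajectories:
        log_rho = sum(np.log(pe / pb) for pe, pb, _ in traj)   # Eq. 11
        rho_sq.append(1.0 / (1.0 + np.exp(-log_rho)))          # Eq. 12
        returns.append(sum(gamma**t * r for t, (_, _, r) in enumerate(traj)))
    w = np.mean(rho_sq)                                        # Eq. 13
    v = [rho * g / w for rho, g in zip(rho_sq, returns)]       # Eq. 14
    return float(np.mean(v))                                   # Eq. 15
```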
Supplemental Figure 1. Figure illustrating derivation of the study population.

Supplemental Figure 2. Workflow pipeline (Abbreviations: AKI, acute kidney injury).

Supplemental Figure 3. Comparison between doses suggested by the RL model and the physician's administration for a surgical session without postoperative AKI within the first 3 days after surgery. (Example surgery 1)

Supplemental Figure 4. Comparison between doses suggested by the RL model and the physician's administration for a surgical session without postoperative AKI within the first 3 days after surgery. (Example surgery 2)

Supplemental Figure 5. Comparison between doses suggested by the RL model and the physician's administration for a surgical session with postoperative AKI within the first 3 days after surgery. (Example surgery 3)

Supplemental Figure 6. Comparison between doses suggested by the RL model and the physician's administration for a surgical session with postoperative AKI within the first 3 days after surgery. (Example surgery 4)

Supplemental Table 1. Table listing characteristics of input variables.
Variable | Type of Variable | Data Source | Number of Categories | Type of Preprocessing
Demographic Variables
  Age (years) | Continuous | Derived | | Imputation of outliersa
  Sex | Binary | Raw | 2 |
  Body Mass Index | Continuous | Raw | | Imputation of outliersa
Comorbidities
  Charlson's Comorbidity Index | Nominal | Derived | 18 | Optimization of categorical featuresb
Intraoperative Medicationsc
  Vasopressors | Continuous | Raw | | Data cleaningd
  Intravenous (IV) fluid | Continuous | Raw | | Data cleaningd
Physiological Intraoperative Time Series
  Systolic blood pressure, mmHg | Continuous | Raw | | Data cleaningd
  Mean arterial pressure, mmHg | Continuous | Raw | | Data cleaningd
  Minimum alveolar concentration | Continuous | Raw | | Data cleaningd
  Heart rate, bpm | Continuous | Raw | | Data cleaningd
  Temperature (oC) | Continuous | Raw | | Data cleaningd
  End-tidal CO2 (EtCO2) | Continuous | Raw | | Data cleaningd
  Peak inspiratory pressure (PIP) | Continuous | Raw | | Data cleaningd
  Respiratory rate | Continuous | Raw | | Data cleaningd
  SpO2/FiO2 ratio | Continuous | Derived | | Data cleaningd
A different set of variables was kept in the final models (preoperative or intraoperative) from the input set provided in the table.
a For continuous variables, observations in the top and bottom 1% of the distribution were considered outliers and imputed from neighborhood values: values above the 99th percentile were imputed randomly from a uniform distribution over the [95th, 99.5th] percentiles, and values below the 1st percentile were imputed randomly from a uniform distribution over the [0.5th, 5th] percentiles.
c Medications given during surgery were mapped with RxNorm data and grouped into drug classes according to the US Department of Veterans Affairs National Drug File-Reference Terminology.
d We used observations from the first surgery when multiple surgeries existed, and averaged values when multiple observations existed at a time point.

Supplemental Table 2. Detailed cohort characteristics.
Features | Development Cohort | Test Cohort
Number of encounters, n | 34,186 | 15,835
Demographic information
  Age, years, mean (SD) | 57 (17) | 59 (17)
  Sex, n (%)
    Male | 17,031 (50) | 7,909 (50)
    Female | 17,155 (50) | 7,926 (50)
  Race, n (%)
    White | 26,743 (78) | 12,360 (78)
    African American | 4,776 (14) | 2,233 (14)
    Other | 2,164 (6) | 972 (6)
    Missing | 503 (1) | 270 (2)
  Ethnicity, n (%)
    Non-Hispanic | 32,063 (94) | 14,711 (93)
    Hispanic | 1,530 (4) | 756 (5)
  Body Mass Index, median (IQR) | 28 (24, 34) | 28 (24, 34)
Comorbidities, n (%)
  Charlson comorbidity index, median (IQR) | 4 (2, 6) | 4 (2, 6)
  Alcohol or drug abuse | 5,353 (16) | 2,587 (16)
  Myocardial Infarction | 2,384 (7) | 1,156 (7)
  Congestive Heart Failure | 4,653 (14) | 2,488 (16)
  Peripheral Vascular Disease | 6,967 (20) | 3,644 (23)
  Cerebrovascular Disease | 5,642 (17) | 2,854 (18)
  Chronic Pulmonary Disease | 10,516 (31) | 5,333 (34)
  Cancer | 10,186 (30) | 4,465 (28)
  Metastatic Carcinoma | 3,451 (10) | 1,678 (11)
  Liver Disease | 5,232 (15) | 2,549 (16)
  Diabetes | 8,252 (24) | 3,916 (25)
  Hypertension | 21,843 (64) | 10,642 (67)
  Obesity | 11,393 (33) | 7,070 (45)
  Fluid and electrolyte disorders | 10,301 (30) | 6,120 (39)
  Valvular Disease | 3,561 (10) | 2,048 (13)
  Coagulopathy | 4,674 (14) | 2,173 (14)
  Weight Loss | 5,212 (15) | 2,589 (16)
  Depression | 9,544 (28) | 4,805 (30)
  Chronic Anemia | 6,792 (20) | 4,183 (26)
  Chronic Kidney Disease | 5,718 (17) | 3,019 (19)
Reference estimated glomerular filtration rate, median (IQR) | 95.27 (81.86, 109.73) | 93.44 (80.53, 107.71)
Baseline mean arterial pressure, mmHg, median (IQR)a | 86 (78, 94) | 87 (79, 95)
Surgical information, n (%)
  Time from Admission to Surgery, days, median (IQR) | 3 (2, 22) | 3 (2, 23)
  Emergency admission | 12,243 (36) | 5,763 (36)
  Admission type
    Surgery | 13,080 (38) | 6,521 (41)
    Medicine | 16,796 (49) | 6,920 (44)
    Transferred from another hospital | 5,666 (17) | 2,667 (17)
  Anesthesia Type
    General | 31,281 (92) | 14,941 (94)
    Local/regional | 2,905 (8) | 894 (6)
  Surgery Type
    Orthopedic Surgery | 10,109 (30) | 4,341 (27)
    Neurosurgery | 3,744 (11) | 2,546 (16)
    Vascular Surgery | 3,420 (10) | 1,704 (11)
    Other | 3,234 (9) | 2,029 (13)
    Urology | 3,768 (11) | 1,231 (8)
    Gastrointestinal Surgery | 2,798 (8) | 1,165 (7)
    Ear Nose Throat | 1,873 (5) | 855 (5)
    OB Gynecology | 1,291 (4) | 541 (3)
    Surgical Oncology | 1,495 (4) | 449 (3)
    Plastic Surgery | 619 (2) | 277 (2)
    Transplantation | 575 (2) | 170 (1)
    Burn Surgery | 883 (3) | 321 (2)
    Pediatric Surgery | 270 (1) | 122 (1)
    Ophthalmology | 95 (0) | 79 (0)
    Medicine Gastroenterology | 12 (0) | 5 (0)
Intraoperative vitals
  Systolic blood pressure
    Measured value, mmHg, median (IQR) | 114.0 (102.0, 130.0) | 116.0 (104.0, 132.0)
    Total measurements, n, median (IQR) | 73.0 (47.0, 149.0) | 81.0 (50.0, 179.0)
    Encounters missing, n (%) | 21 (0.06) | 11 (0.07)
  Mean arterial pressure
    Measured value, mmHg, median (IQR) | 79.0 (70.0, 90.0) | 82.0 (72.0, 93.0)
    Total measurements, n, median (IQR) | 73.0 (47.0, 150.0) | 81.0 (50.0, 179.0)
    Encounters missing, n (%) | 21 (0.06) | 11 (0.07)
  Heart rate
    Measured value, bpm, median (IQR) | 75.0 (65.50, 86.50) | 75.0 (66.0, 86.50)
    Total measurements, n, median (IQR) | 175.0 (121.75, 257.0) | 179.0 (124.0, 259.0)
    Encounters missing, n (%) | 22 (0.06) | 9 (0.06)
  Oxygen saturation (SpO2)
    Measured value, %, median (IQR) | 99.10 (97.50, 100.0) | 99.0 (97.20, 100.0)
    Total measurements, n, median (IQR) | 177.0 (120.0, 268.0) | 184.0 (123.0, 277.0)
    Encounters missing, n (%) | 43 (0.13) | 18 (0.11)
  Fraction of inspired oxygen (FiO2)
    Measured value, %, median (IQR) | 40.0 (40.0, 40.0) | 40.0 (40.0, 40.0)
    Total measurements, n, median (IQR) | 187.0 (130.0, 280.0) | 195.0 (135.0, 291.0)
    Encounters missing, n (%) | 19 (0.06) | 8 (0.05)
  End-tidal carbon dioxide (EtCO2)
    Measured value, mmHg, median (IQR) | 34.0 (32.0, 37.0) | 35.0 (33.0, 38.0)
    Total measurements, n, median (IQR) | 149.0 (82.0, 238.0) | 2.0 (0.0, 163.0)
    Encounters missing, n (%) | 3,548 (10.38) | 7,254 (45.81)
  Respiration rate
    Measured value, breaths/minute, median (IQR) | 10.0 (8.0, 12.0) | 12.0 (10.0, 14.0)
    Total measurements, n, median (IQR) | 183.0 (125.0, 275.0) | 124.0 (3.0, 214.0)
    Encounters missing, n (%) | 75 (0.22) | 782 (4.94)
  Peak inspiratory pressure
    Measured value, mmHg, median (IQR) | 18.0 (14.0, 23.0) | 18.0 (14.0, 22.0)
    Total measurements, n, median (IQR) | 183.0 (126.0, 276.0) | 176.0 (109.0, 271.0)
    Encounters missing, n (%) | 396 (1.16) | 1,567 (9.90)
  Minimum alveolar concentration
    Measured value, median (IQR) | 0.62 (0.44, 0.81) | 0.56 (0.31, 0.77)
    Total measurements, n, median (IQR) | 160.0 (102.0, 242.0)