up to 1,000 training epochs took several hours. Marking the eye outperformed all other tests; however, it should be noted that while the eye is a meaningful concept, this dataset did not necessarily ensure uniqueness of looking direction, but always a unique combination of eye coordinate and gaze vector. The interesting...
https://arxiv.org/abs/2505.21589v1
exception of the direction vector is able to include fewer features shared by badger and pigeon, such as more area around the pigeon's eyes and where the fur of pigeon and badger overlap. The model also directly accounts for the eye gaze, which shows the usefulness of the feature in the learning process. We als...
and poses, for example moving, sitting, flying, and eating. Another principle we applied for prompting is borrowed from the psychological domain. The design rules for optical illusions go back to fundamental problems of psychology, such as Gestalt theory, introduced as early as 1935 (Koffka [2013]). These laws...
was a distinguishing feature, but there were still two animals visible; again, we include an example in Appendix A. Ultimately, this work invites a broader perspective: What if pixel-wise saliency is not the most effective approach to explainability in the image domain? What kinds of concepts should models truly be ...
areas than the edges, which makes sense because the body of an animal is always around the eye. Figure 10: This scatter plot shows the spread of all eye coordinates from all classes. It is quite diverse, considering the eyes are placed within the body of the animal and can therefore not occur at the immediate sides. ...
annotation strategies.

Figure: Test accuracy (%) over 1,000 training epochs for ResNet18, ResNet34, ResNet50, VGG13, and VGG16, comparing models trained with and without the direction feature ("Direction" vs. "No Directi...").
seen with a higher likelihood, and what is it dependent on? The outer one? Do we prefer the color black? Is it the animal whose head "looks complete"? Does this change when we turn the angle of the picture, creating another kind of "gaze direction"?

References

Javier Antorán, Umang Bhatt, Tameem Adel, Adrian Weller, and José...
Ba. Adam: A method for stochastic optimization, 2017. URL https://arxiv.org/abs/1412.6980. Kurt Koffka. Principles of Gestalt Psychology. Routledge, 2013. Gang Liu, Yu Yu, Kenneth A. Funes Mora, and Jean-Marc Odobez. A differential approach for gaze estimation. IEEE Transactions on Pattern Analysis and Machine Intell...
Recognition, 72:59–71, 2017. Xinming Wang, Jianhua Zhang, Hanlin Zhang, Shuwen Zhao, and Honghai Liu. Vision-based gaze estimation: A review. IEEE Transactions on Cognitive and Developmental Systems, 14(2):316–332, 2021. Feiyu Xu, Hans Uszkoreit, Yangzhou Du, Wei Fan, Dongyan Zhao, and Jun Zhu. Explainable AI: A brie...
Pioneering 4-Bit FP Quantization for Diffusion Models: Mixup-Sign Quantization and Timestep-Aware Fine-Tuning
Maosen Zhao1∗, Pengtao Chen1∗, Chong Yu2, Yan Wen1, Xudong Tan1, Tao Chen1†
1School of Information Science and Technology, Fudan University
2Academy for Engineering and Technology, Fudan University
20307130202@...
https://arxiv.org/abs/2505.21591v1
a universally effective and scalable fine-tuning method for 4-bit quantization in diffusion models remains an open challenge. Notably, existing quantization methods for diffusion models primarily rely on integer (INT) quantization, which has long been the dominant approach. However, recent developments have d...
zation struggles with asymmetric activations, which arise from the nonlinear behavior of activation functions. To address this, we introduce the MSFP framework, which is also the first effective application of unsigned FP quantization, offering a novel approach for achieving low-bit quantization. (ii...
fine-tuning, focusing on adjusting only a subset of parameters while keeping the majority frozen, thereby reducing storage overhead. Low-rank adapters (LoRA) [12], originally developed for large language models, have become one of the most widely used PEFT methods. Leveraging LoRA's strong transferability, QLoRA [5...
Figure 3. The two losses (original MSE loss and aligned MSE loss) and the performance degradation between the quantized and full-precision models across steps. Compared with the metric, the original loss shows an inverse trend, while the aligned loss remains cons...
SiLU, defined as SiLU(x) = x / (1 + e^{−x}), is commonly situated between layers. SiLU causes the abnormal activations for the subsequent layer. As depicted in Panel (b) of Figure 1, all values below 0 are compressed into the range of [−0.278, 0). In this paper, we refer to layers with such asymmetric activations as Anomalou...
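The [−0.278, 0) band can be verified numerically: SiLU(x) = x · sigmoid(x) is bounded below and tends to 0 as x → −∞, so every negative input lands in a narrow negative interval. A minimal sketch (not from the paper):

```python
import math

def silu(x: float) -> float:
    # SiLU(x) = x * sigmoid(x) = x / (1 + e^(-x))
    return x / (1.0 + math.exp(-x))

# Scan the negative axis: SiLU approaches 0 as x -> -inf and is bounded
# below, so all outputs for x < 0 fall into a narrow negative band.
xs = [-20 + i * 0.001 for i in range(20000)]  # x in [-20, 0)
minimum = min(silu(x) for x in xs)
print(f"min SiLU on x<0 ≈ {minimum:.3f}")  # ≈ -0.278
```

The scan recovers the same lower bound of roughly −0.278 that the paper reports for the compressed negative range.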
that quantization error is most significant at this stage, contradicting the expectation that its influence should diminish over time. To highlight the discrepancy between the loss and actual quantization errors, we define the performance gap at each step as the difference in denoising quality between the quantized...
Figure 5. The pipeline of our proposed method. UNets are applied to the Mixup-Sign Floating-Point Quantization (MSF...
noise:

L_t = γ_t · L_t^{ε_θ}.   (9)

By introducing γ_t, which accurately reflects the utilization of the predicted noise at each time step, we achieve a preliminary alignment between the loss and the actual quantization error, as shown in Figure 3. This facilitates more accurate fine-tuning, leading to better performance reco...
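Mechanically, Eq. (9) is just a per-timestep reweighting of the raw noise-prediction MSE. The sketch below illustrates the shape of that computation; the decreasing `gamma` schedule here is a placeholder (the paper derives γ_t from the noise schedule), and all names are illustrative.

```python
import random

# Hypothetical per-step weight gamma_t: a placeholder decreasing schedule,
# standing in for the schedule-derived weights of Eq. (9).
T = 10
gamma = [1.0 / (t + 1) for t in range(T)]

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def aligned_loss(t, pred_noise, true_noise):
    # L_t = gamma_t * L_t^{eps_theta}: re-weight the raw noise-prediction MSE
    # so the training loss tracks the actual per-step quantization error.
    return gamma[t] * mse(pred_noise, true_noise)

random.seed(0)
pred = [random.gauss(0, 1) for _ in range(8)]
true = [random.gauss(0, 1) for _ in range(8)]
print(aligned_loss(0, pred, true) > aligned_loss(9, pred, true))  # True
```

With this placeholder schedule, early steps contribute more to the fine-tuning objective than late ones for the same raw error.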
N/A
EDA-DM       4/4   N/A    N/A
QuEST        4/4   N/A    N/A
EfficientDM  4/4   36.36  2.69
Ours (h=2)   4/4   12.21  2.47
Ours (h=4)   4/4   12.34  2.48

LSUN (Church) 256x256, LDM-8, steps = 100, eta = 0.0
FP           32/32  4.06  2.70
Q-Diffusion  6/6   10.90  2.47
EDA-DM       6/6   10.76  2.43
QuEST        6/6    6.83  2.65
EfficientDM  6/6    7.45  2.80
Ours (h=2)   6/6    6.24  2.73
Ours (h=...
FID score is 9.53 higher than that of the full-precision model (6.49). By applying our technique, we reduce the FID by 8.18 compared to the baseline. More interestingly, we visualize the LoRA allocation distribution learned by the router, as shown in Figure 7. We find that the distribution of the router-learned ...
4 [9] Yefei He, Luping Liu, Jing Liu, Weijia Wu, Hong Zhou, and Bohan Zhuang. PTQD: Accurate post-training quantization for diffusion models. Advances in Neural Information Processing Systems, 36, 2024. 2 [10] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by ...
2021. 4 [24] Luping Liu, Yi Ren, Zhijie Lin, and Zhou Zhao. Pseudo numerical methods for diffusion models on manifolds. arXiv preprint arXiv:2202.09778, 2022. 4 [25] Shih-yang Liu, Zechun Liu, Xijie Huang, Pingcheng Dong, and Kwang-Ting Cheng. LLM-FP4: 4-bit floating-point quantized transformers. arXiv preprint ar...
later. arXiv preprint arXiv:2303.02490, 2023. 2, 8, 4 [41] Changyuan Wang, Ziwei Wang, Xiuwei Xu, Yansong Tang, Jie Zhou, and Jiwen Lu. Towards accurate data-free quantization for diffusion models. arXiv preprint arXiv:2305.18723, 2(5), 2023. 7, 2 [42] Haoxuan Wang, Yuzhang Shang, Zhihang Yuan, Junyi Wu, Junchi Yan,...
the second stage, the search for unsigned FP quantization parameters is specifically applied to the activation initialization.

Algorithm 1: Initialization of Quantization Parameters
1: Input: format options, maxval options, (zp options), (unsigned format options)
2: Output: format, maxval, (zp)
3:
4: # 10000 is huge enou...
the main text, we employ signed FP quantization for NALs with distribution approximately following a normal distribution, and adopt a mixup-sign FP quantization strategy for AALs with asymmetric distributions. Unlike weight initialization, where weights remain static, activation initialization needs to account for po...
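The signed-vs-unsigned choice can be illustrated with a toy experiment. The sketch below (all bit allocations, grids, and the sample data are hypothetical, and this is not the paper's parameter search) builds simplified signed and unsigned low-bit FP value grids and picks whichever gives lower rounding error on an asymmetric, mostly non-negative activation sample:

```python
def fp_grid(exp_bits, man_bits, signed):
    # Enumerate magnitudes (1 + f/2^m) * 2^e representable with the given
    # bit budget (a simplified FP grid without subnormals or NaNs).
    mags = sorted({(1 + f / 2**man_bits) * 2.0**e
                   for e in range(-(2**exp_bits) // 2, (2**exp_bits) // 2)
                   for f in range(2**man_bits)})
    grid = [0.0] + mags + ([-m for m in mags] if signed else [])
    return sorted(grid)

def quantize(xs, grid):
    # Round-to-nearest onto the representable grid.
    return [min(grid, key=lambda g: abs(g - x)) for x in xs]

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# Asymmetric (AAL-like) activations: mostly positive, small negative tail.
acts = [0.9, 2.3, 0.4, 1.7, -0.2, 0.05, 3.1, -0.1]
signed_g = fp_grid(exp_bits=2, man_bits=1, signed=True)
unsigned_g = fp_grid(exp_bits=2, man_bits=2, signed=False)  # sign bit reused
err_s = mse(acts, quantize(acts, signed_g))
err_u = mse(acts, quantize(acts, unsigned_g))
best = "unsigned" if err_u < err_s else "signed"
print(best)
```

Reallocating the sign bit to the mantissa buys a finer positive grid, which outweighs the small error from clamping the shallow negative tail to zero; for a roughly symmetric (NAL-like) sample, the signed grid would win instead.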
6-bit quantization for diffusion models. This highlights that FP quantization is a more effective choice for handling low-bit activation quantization in diffusion models, a task that is both

Figure: histograms of weight values against log(Frequency); panel (a): down.2.attn.1.q.weight; ...
effectively, while the introduction of additional TALoRAs could reduce the training opportunities for the most impactful LoRAs, ultimately compromising fine-tuning performance.

F. Supplementary Performance Evaluation

To further validate the effectiveness of our approach, we conduct supplementary experiments. For th...
4/4  11.76
Ours (h=2)  4/4  -
Layer-wise for Activation:
QuEST       4/4  13.03
Ours (h=2)  4/4  8.81

Table 11. Comparison with EfficientDM and QuEST under specific settings. 'Prec. (W/A)' denotes the quantization bit-width. h denotes the size of the LoRA Hub.

tute a significant portion of the model, and their quantization significant...
arXiv:2505.21593v1 [cs.CV] 27 May 2025
Any-to-Bokeh: One-Step Video Bokeh via Multi-Plane Image Guided Diffusion
Yang Yang1,2∗, Siming Zheng2∗, Jinwei Chen2, Boxi Wu1†, Xiaofei He1, Deng Cai1, Bo Li2, Peng-Tao Jiang2†
1Zhejiang University  2vivo Mobile Communication Co., Ltd
Project Page: https://vivocameraresearch.github.io/any2bokeh...
https://arxiv.org/abs/2505.21593v1
shallow depth-of-field effects from a single image [8, 9, 10, 11]. In contrast, video bokeh remains in its early stages. Naively extending image-based methods [12, 13, 14, 15] to video often leads to undesirable artifacts, such as temporal flickering and inconsistent blur, due to the lack of temporal modeling and robust ...
realistic bokeh outputs and achieving state-of-the-art performance across multiple evaluation benchmarks.

2 Related work

2.1 Camera Simulation Diffusion Models

Reference Guidance Models. A line of work [20, 21] encodes motion cues from reference videos via LoRA [22], enabling the diffusion model to replicate specific...
Fig. 2 (b), which effectively separates depth-aware regions and facilitates an improved bokeh effect. Additionally, we introduce a progressive training strategy in Fig. 3 designed to enhance temporal consistency.

Figure: (a) model architecture, with VAE encoder/decoder, MPI spatial block, temporal block, and concatenation of the MPI-guided features. Th...
inject MPI-derived attention masks into the diffusion process. The base model follows a U-Net architecture initialized from SVD and is conditioned on three explicit control signals: (1) a normalized disparity difference between the video disparity map and the focal plane disparity VD, (2) a scalar blur strength paramet...
referred to as MPI Attention, and injecting it into the spatial attention blocks of the U-Net. Formally:

Q̂ = Q + tanh(γ) · TS(Attn([Q + Φ_M(E(K)), Φ_A(V_A)], M̄)),   (3)

where Q = {Q_1, ···, Q_i} denotes the feature tokens from the current U-Net block, V_A represents visual tokens from the input video, and γ is a learnable gating paramete...
noise, the model learns to be less dependent on precise depth values and becomes more resilient to real-world variations in depth estimation. Additionally, training with longer temporal sequences allows the model to leverage longer temporal memory, reducing bokeh flickering caused by depth noise. Stage 3: VAE Decoder ...
containing 25 frames. All videos

Method          FD↓    RM↓    VFID-I↓  FVD↓     SSIM↑  PSNR↑   Time↓
DeepLens [15]   1.162  0.030  16.042   125.338  0.819  24.574  0.226
BokehMe [14]    0.536  0.013   8.633    39.102  0.936  27.992  0.103
Dr.Bokeh [12]   0.522  0.011   6.097    32.710  0.950  31.273  2.729
MPIB [13]       0.481  0.011   5.444    35.766  0.950  31.390  0.521
Any-to-B...
results on the real video. The focal plane is located on the football. For each method, we only present the middle frame. Please zoom in to view the image details. the effectiveness of the SVD pre-trained prior in reducing flickering and inconsistent blur, resulting in a temporally consistent bokeh effect. Additionall...
our method was preferred over the others, demonstrating a higher human preference for the bokeh effects generated by our approach.

4.3 Ablation Studies

We evaluate the effectiveness of each component in the Any-to-Bokeh framework in Tab. 3 and assess the VAE's contribution to video detail quality in Tab. 4. MPI Block ...
using elastic transform [ 48], Gaussian blur, and morphological transformations. As shown in the last two rows in Tab. 3, TR leads to improvements across all metrics, with particularly noticeable gains in temporal consistency (FD and RM) and video quality (VFID-I and FVD). These results demonstrate that training the te...
an expert transformer. arXiv preprint arXiv:2408.06072 , 2024. [8]Pratul P Srinivasan, Rahul Garg, Neal Wadhwa, Ren Ng, and Jonathan T Barron. Aperture supervision for monocular depth estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition , pages 6393–6401, 2018. [9]Lei Xiao, Anton...
adaptation of large language models. In International Conference on Learning Representations, 2022. [23] Pengyang Ling, Jiazi Bu, Pan Zhang, Xiaoyi Dong, Yuhang Zang, Tong Wu, Huaian Chen, Jiaqi Wang, and Yi Jin. Motionclone: Training-free motion cloning for controllable video generation. In The Thirteenth Internationa...
, pages 245–261. Springer, 2020. [39] Guangkai Xu, Yongtao Ge, Mingyu Liu, Chengxiang Fan, Kangyang Xie, Zhiyue Zhao, Hao Chen, and Chunhua Shen. What matters when repurposing diffusion models for general dense perception tasks? arXiv preprint arXiv:2403.06090 , 2024. [40] Rongyuan Wu, Lingchen Sun, Zhiyuan Ma, and Lei...
Quo vadis, action recognition? a new model and the kinetics dataset. In proceedings of the IEEE Conference on Computer Vision and Pattern Recognition , pages 6299–6308, 2017. [56] Songwei Ge, Aniruddha Mahapatra, Gaurav Parmar, Jun-Yan Zhu, and Jia-Bin Huang. On the content bias in fréchet video distance. In Proceeding...
presented in random order to avoid bias. As shown in the interface (Fig. 8), participants were asked to select the method that produced the most consistent and aesthetically pleasing bokeh effect.

Figure 9: Any-to-Bokeh struggles to recover missing structures in the disparity map. (Panels: Input, Disparity, Results.)

C Limitati...
arXiv:2505.21594v1 [cs.RO] 27 May 2025
Fast and Cost-effective Speculative Edge-Cloud Decoding with Early Exits
Yeshwanth Venkatesha, yeshwanth.venkatesha@yale.edu, Department of Electrical Engineering, Yale University
Souvik Kundu, souvikk.kundu@intel.com, Intel Labs
Priyadarshini Panda, priya.panda@yale.edu, Department of ...
https://arxiv.org/abs/2505.21594v1
smaller organizations and researchers who rely on expensive cloud-based APIs; for example, GPT-4.1 text generation costs $2.00/1M input tokens and $8.00/1M output tokens at the time of writing this paper.1 A potential solution is deploying LLMs on edge devices, which offers benefits like low latency, faster customizati...
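At the quoted rates, per-request cost is a straight linear function of the token counts; a minimal sketch (the example workload sizes are illustrative):

```python
# Cost of a cloud API call at the quoted GPT-4.1 rates:
# $2.00 per 1M input tokens, $8.00 per 1M output tokens.
IN_RATE = 2.00 / 1_000_000
OUT_RATE = 8.00 / 1_000_000

def api_cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens * IN_RATE + output_tokens * OUT_RATE

# e.g. a workload of 10M input and 2M output tokens:
print(f"${api_cost(10_000_000, 2_000_000):.2f}")  # $36.00
```

At any sustained volume these costs accumulate linearly, which is the economic motivation for shifting draft-token generation onto the edge device.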
a preemptive drafting mechanism to maximize client-server utilization. As shown in Fig. 1(d), we introduce early exits in the target model to produce verified tokens before full verification. These early tokens enable the client to draft the next set preemptively, a process we call pre-drafting. If the final verifica...
the input embedding. At each layer l, the model calculates logits by passing the hidden state through a language model (LM) head, denoted as z^(l) = LMHead(h^(l)). It also computes a confidence score S^(l) based on the softmax probability:

S^(l) = max(softmax(z^(l))).   (4)

The model exits early at lay...
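Equation (4) amounts to taking the peak probability of the layer-l next-token distribution. A minimal sketch (the threshold value and example logits are hypothetical):

```python
import math

def lm_head_confidence(logits):
    # S^(l) = max(softmax(z^(l))): the peak probability of the layer-l
    # next-token distribution, used as the exit confidence score.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]  # subtract max for stability
    return max(exps) / sum(exps)

def exits_early(logits, threshold=0.9):
    return lm_head_confidence(logits) >= threshold

peaked = [8.0, 0.1, -1.0, 0.3]   # confident head -> exit
flat = [0.5, 0.4, 0.6, 0.55]     # uncertain head -> keep going
print(exits_early(peaked), exits_early(flat))  # True False
```

A sharply peaked distribution yields a score near 1 and triggers an early exit; a near-uniform one yields a score near 1/|V| and forces the model to continue through later layers.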
pre-drafting and reducing the server's idle time between verification rounds whenever there is a pre-draft cache hit. Importantly, the output is identical to that of standard

Table 2: Early Exit training details. # Params and % Params denote the total number of trainable adapter parameters and their fraction compar...
demonstrate our system on two types of client devices: 1.NVIDIA Jetson Nano: A compact AI development board tailored for edge computing. It includes a quad-core ARM Cortex-A57 CPU, a 128-core Maxwell GPU, and 4GB of LPDDR4 RAM shared between the CPU and GPU. With a performance of up to 472 GFLOPs, the Jetson Nano is id...
fast decoding method with early exit is exact, with outputs identical to standard speculative decoding, ensuring no loss in accuracy. We define the following metrics to evaluate our method. • Speedup AR→SD: Latency savings of vanilla speculative edge-cloud decoding (SD) compared to the cloud-based autoregressive (AR) basel...
41.87% 9.55%
Avg EE: 8, 13, 9, 14, 7, 10

Alpaca:
Speedup AR→SD: 0.63x, 1.06x, 0.42x, 0.74x, 1.42x, 1.99x
Speedup SD→FSD: 1.04x, 1.07x, 1.05x, 1.12x, 1.10x, 1.18x
Avg Tokens τ: 2.09, 1.96, 2.32, 2.36, 4.29, 3.62
Cache miss rate: 64.60%, 19.29%, 60.94%, 25.45%, 36.60%, 4.28%
Avg EE: 8, 14, 8, 14, 9, 2

CNN/DM:
Speedup AR→SD: 0.72x, 1.20x, 0.38x, 0.73x, 1.41x, 1.91x
S...
Priority Queue: Since our system is asynchronous, we need queues for graceful operation. Further, we organize the queues in priority, determined by the confidence score of the generated token (Eq. 4). This prioritization is especially beneficial when the number of threads is limited. Table 7 presents an ablation study ...
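The confidence-ordered queue can be sketched with a standard binary heap; the `DraftQueue` class and its field names below are illustrative, not the paper's implementation:

```python
import heapq

# Min-heap keyed on negative confidence, so the highest-confidence drafts
# are verified first when worker threads are scarce.
class DraftQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker keeps insertion order among equal scores

    def put(self, confidence, draft):
        heapq.heappush(self._heap, (-confidence, self._seq, draft))
        self._seq += 1

    def get(self):
        return heapq.heappop(self._heap)[2]

q = DraftQueue()
q.put(0.42, "draft-a")
q.put(0.91, "draft-b")
q.put(0.67, "draft-c")
print(q.get(), q.get(), q.get())  # draft-b draft-c draft-a
```

Using the confidence score of Eq. (4) as the key means that, under thread contention, verification effort goes to the pre-drafts most likely to survive verification.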
introduce a distractor object of similar appearance. The robot successfully navigates the environment and identifies the correct object, demonstrating the effectiveness of our method on a vision-language-based control task. Table 8(a) reports key system-level metrics from our deployment, including drafting and verifica...
cost-effective alternative to traditional cloud-based deployment. By distributing the draft and target models between edge and server environments, our solution significantly reduces high API costs. Early exits and pre-drafting allow us to enhance parallelism by leveraging idle client time and reducing server idle time...
Zaitian Gongye, Xueyan Zou, Jan Kautz, Erdem Bıyık, Hongxu Yin, Sifei Liu, and Xiaolong Wang. Navila: Legged robot vision-language-action model for navigation. arXiv preprint arXiv:2412.04453 , 2024. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, J...
, 1(8):9, 2019. David Raposo, Sam Ritter, Blake Richards, Timothy Lillicrap, Peter Conway Humphreys, and Adam Santoro. Mixture-of-depths: Dynamically allocating compute in transformer-based language models. arXiv preprint arXiv:2404.02258, 2024. Siddharth Samsi, Dan Zhao, Joseph McDonald, Baolin Li, Adam Michaleas,...
Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068 , 2022. Ziyin Zhang, Chaoyu Chen, Bingchang Liu, Cong Liao, Zi Gong, Hang Yu, Jianguo Li, and Rui Wang. A survey on language models for code. arXiv preprint arXiv:2311.07989 , 2023. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zh...
for
22:   end if
23:   SEND(y_{t':t'+γ}, p_{1:γ})
24:   t ← t', x ← y, C.reset(), Q_p.reset()
25: end while
26: function PREDRAFT
27:   Input: Prefix x_{1:t}, Tokens x'_{t+1:t+δ'+1}
28:   y_{1:t'} ← Concat(x_{1:t}, x'_{t+1:t+δ'+1})
29:   for i = 1 to γ do
30:     y_{t'+i}, p'_i ← DRAFT(M_p, x'_{1:t'+i−1})
31:   end for
32:   return y_{t':t'+γ}, p'_{1:γ}
33: end function
34: function RECEIVER
35:   I...
for Vicuna-7B (A100) and Vicuna-68M (Jetson Nano). Batch processing improves throughput but not always latency; e.g., batch size 32 increases A100 latency by over 5x. However, API providers often offer discounts for batch processing (e.g., OpenAI provides a 50% discount, OpenAI Pricing), making it a cost-saving approach...
arXiv:2505.21595v1 [cs.LG] 27 May 2025Relevance-driven Input Dropout: an Explanation-guided Regularization Technique Shreyas Gururaj1,2Lars Grüne2Wojciech Samek1,3,4 Sebastian Lapuschkin1,5,†Leander Weber1,† 1Fraunhofer Heinrich Hertz Institute, Department of Artificial Intelligence, Berlin, Germany 2Lehrstuhl für Ange...
https://arxiv.org/abs/2505.21595v1
parts of the input (Panel (b)). The impact of training with RelDrop is clear in Panel (c), where heatmaps at the bottom reflect how the Regularized Model utilizes a larger set of features to make predictions. as the training data [ 11], or based on random perturbations [ 58,34,77,67,74,79], which may not align with the...
on. For this reason, we propose to leverage XAI attributions as a signal to guide data augmentation, as they provide a more informed approach to examine how these augmentations affect model predictions. Originally, the field of XAI research aims to reveal the prediction mechanisms underlying otherwise black-box models....
[73, 19, 66]. The replacement value s for all the masked features (e.g., dataset mean or zero) can vary with the data modality. Equation 1 is a general formulation of RelDrop that can be adapted to different data modalities, such as 2D images and 3D point clouds. To balance performance and regularization, controlling how ...
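The core mechanic, masking the currently most relevant inputs with a replacement value s, can be sketched as follows. This is an illustration, not the paper's Equation 1: the knobs `k` (features masked), `s` (replacement value), and `p` (application probability) are hypothetical stand-ins for its parameters.

```python
import random

def reldrop(x, relevance, k, s=0.0, p=0.5):
    # Relevance-driven input dropout (sketch): with probability p, replace
    # the k most relevant input features with the replacement value s
    # (e.g. zero or the dataset mean). k, s, p are hypothetical knobs.
    if random.random() >= p:
        return list(x)
    top = set(sorted(range(len(x)), key=lambda i: relevance[i],
                     reverse=True)[:k])
    return [s if i in top else v for i, v in enumerate(x)]

random.seed(0)
x = [0.2, 0.9, 0.1, 0.7]
rel = [0.05, 0.80, 0.01, 0.40]   # e.g. attribution scores for x
print(reldrop(x, rel, k=2, s=0.0, p=1.0))  # [0.2, 0.0, 0.1, 0.0]
```

Because the mask follows the model's own attributions rather than random perturbation, training is pushed away from over-reliance on the same few features at every step.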
region (i.e., the number of pixels to be masked), and r_O is the aspect ratio, determining the relative dimensions of the height and width of the rectangular block. The occlusion region O is then determined by width W_O and height H_O:

O = {(x, y) | x ∈ [x_cen − W_O/2, x_cen + W_O/2], y ∈ [y_cen − H_O/2, y_cen + H_O/2]}   (6)

Since only pixels in...
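Computing the region of Eq. (6) from the area and aspect-ratio parameters can be sketched as below. The excerpt does not state how W_O and H_O are derived from them, so the sketch assumes the common Random-Erasing convention H_O = sqrt(area · ratio), W_O = sqrt(area / ratio), which satisfies W_O · H_O = area and H_O / W_O = ratio.

```python
import math

def occlusion_region(x_cen, y_cen, area, ratio):
    # Assumed convention (Random-Erasing style): the rectangle has the
    # requested area and height/width ratio.
    h = math.sqrt(area * ratio)
    w = math.sqrt(area / ratio)
    # O = {(x, y) | x in [x_cen - W_O/2, x_cen + W_O/2],
    #               y in [y_cen - H_O/2, y_cen + H_O/2]}   (Eq. 6)
    return (x_cen - w / 2, x_cen + w / 2, y_cen - h / 2, y_cen + h / 2)

x0, x1, y0, y1 = occlusion_region(x_cen=16, y_cen=16, area=64, ratio=1.0)
print((x1 - x0) * (y1 - y0))  # 64.0: the block covers exactly `area` pixels
```

The pixels inside this rectangle are then the ones replaced by the value s during training.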
RE [79]. As shown in Table 1 (Blue Columns) , models trained with RelDrop consistently outperform both the RE and the baseline. Our proposed approach further improves the test performance over the RE by almost doubling the gain over the baseline in all the considered models and datasets. The average margin of improveme...
ranges of training and test accuracies. The regularization effect of RelDrop results in consistent improvement in the model's generalization ability compared to both RE and the baseline. Investigating this effect in more detail in Figure 2, we observe that RelDrop-trained models have the smallest difference between tra...
opposed to RE. In the previous paragraphs, we investigated the effects of RelDrop on (estimated) model generalization ability. However, RelDrop functions by removing the input features that are (currently) most relevant to a model’s decision-making. As such, while we expect model decisions to be based on a larger 7 Tab...
example of "Dhole" where our method fails to improve the feature distribution. While the improved decision-making does not seem to hold for every example, the mean RRA quantifies this effect on a dataset level, demonstrating the overall effectiveness of our approach. 4.2 3D Point Cloud Classification After demonstratin...
generalization ability) when applying RelDrop in the previous section, this does not necessarily imply increased robustness, i.e. that the model utilizes more features for predicting. Therefore, we perform an ablation study in the following, using the feature perturbation [ 6,56,26] metric. Originally conceptualized fo...
accuracies against the baseline for both the 2D image and 3D point cloud classification tasks, consistent across different models and datasets. We observe a similar trend for zero-shot tests, where our method increases the model’s generalization capabilities. RelDrop achieves this by nudging the model to focus on more ...
2010. [8] David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, and Antonio Torralba. Network dissection: Quantifying interpretability of deep visual representations. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, pp. 3319–3327. IEEE Computer Society, 2017. [9] Daniel Becking, Maximilian Dr...
[19] Sami Ede, Serop Baghdadlian, Leander Weber, An Nguyen, Dario Zanca, Wojciech Samek, and Sebastian Lapuschkin. Explain to not forget: Defending against catastrophic forgetting with XAI. In Andreas Holzinger, Peter Kieseberg, A Min Tjoa, and Edgar R. Weippl (eds.), Machine Learning and Knowledge Extraction - 6th IFI...
2024. [33] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Francis R. Bach and David M. Blei (eds.), Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015 , volume 37 of JMLR Wor...
pp. 4765–4774, 2017. [45] Grégoire Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, and Klaus-Robert Müller. Explaining nonlinear classification decisions with deep taylor decomposition. Pattern Recognition , 65: 211–222, 2017. [46] Grégoire Montavon, Alexander Binder, Sebastian Lapuschkin, Wojciech Sa...
pp. 1135–1144. ACM, 2016. [55] Muhammad Sabih, Frank Hannig, and Jürgen Teich. Utilizing explainable AI for quantization and pruning of deep neural networks. CoRR , abs/2008.09072, 2020. [56] Wojciech Samek, Alexander Binder, Grégoire Montavon, Sebastian Lapuschkin, and Klaus-Robert Müller. Evaluating the visualization...
Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3d shapenets: A deep representation for volumetric shapes. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015 , pp. 1912–1920. IEEE Computer Society, 2015. [70] Lingxi Xie, Jin...
Jianming Zhang, Stan Sclaroff, and Vittorio Murino. Excitation dropout: Encouraging plasticity in deep neural networks. Int. J. Comput. Vis., 129(4):1139–1152, 2021.

A Technical Appendix

A.1 Details on Attribution Computation

For LRP, several rules exist that affect the obtained attributions [6, 46]. In our experiment...
Unsupervised Semantic Segmentation (LUSS) benchmark dataset that collects and annotates pixel-level labels from the ImageNet-1k dataset. Following the removal of some unsegmentable categories, such as bookshops, there are 919 categories with 1,183,322 training images and 12,419 validation images, each with its segme...
of K pixels with top-K relevance scores R_{p1}, R_{p2}, . . . , R_{pK}. Then, we divide the number of these values that fall within the ground-truth locations by the ground truth's size, obtaining the RRA of a single sample as follows:

Relevance Rank Accuracy (RRA) = |P_top-K ∩ GT| / |GT|   (A.2)

Taking the average RRA over all samples in...
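Equation (A.2) can be computed directly from a flattened attribution map and a ground-truth mask. The sketch below assumes K = |GT| (the usual choice for RRA; the excerpt does not state K explicitly), and the example values are illustrative:

```python
def relevance_rank_accuracy(relevance, gt_mask):
    # RRA = |P_top-K ∩ GT| / |GT|: take the K highest-relevance pixels
    # (here K = |GT|, assumed) and count how many land inside the
    # ground-truth segmentation.
    k = sum(gt_mask)
    top_k = set(sorted(range(len(relevance)),
                       key=lambda i: relevance[i], reverse=True)[:k])
    gt = {i for i, m in enumerate(gt_mask) if m}
    return len(top_k & gt) / len(gt)

rel = [0.9, 0.1, 0.8, 0.05, 0.7, 0.2]
gt = [1, 0, 1, 0, 0, 1]  # ground truth covers pixels 0, 2, 5
print(relevance_rank_accuracy(rel, gt))  # ≈ 0.667 (2 of 3 GT pixels hit)
```

Averaging this per-sample score over a dataset gives the mean RRA reported in the experiments.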
to be reliable and matches the results of the official implementation. All the experiments reported in Table 2 are conducted by training the PointNet++ model from scratch for 50 epochs with different dropout parameters. Figure A.1: Attribution maps indicate the dropout strategy for different dropout parameters during t...
a higher proportion of relevance is distributed to fewer neurons. This reduced feature distribution can harm semantic expressiveness. Especially in later layers, where high-level complex features are formed, a low AUC indicates lower semantic expressiveness and robustness. In the figure, we observe that for the initial ...
Learning optimal treatment strategies for intraoperative hypotension using deep reinforcement learning
Esra Adiyeke a,b,*, Tianqi Liu a,c,*, Venkata Sai Dheeraj Naganaboina a,d,*, Han Li a, Tyler J. Loftus a,e, Yuanfang Ren a,b, Benjamin Shickel a,b, Matthew M. Ruppert a,b, Karandeep Singh f, Ruogu Fang a,g, Parisa Rashidi a,g,...
https://arxiv.org/abs/2505.21596v1
intravenous fluids, the model's recommendations were within 0.05 ml/kg/15 min of the actual dose in 41% of the cases, with higher or lower doses recommended for 27% and 32% of the treatments, respectively. The RL policy resulted in a higher estimated policy value compared to the physicians' actual treatments, as ...
the honest broker to assemble a single-center longitudinal perioperative cohort of all patients aged 18 years or older admitted to University of Florida Health (UFH) following any type of major operative procedure between June 1, 2014 and September 20, 2020, by integrating e...
prevent the short-term outcome of intraoperative hypotension (MAP < 65 mmHg) and the long-term outcome of AKI in the first three days following the surgery.10 The resulting actions (agent policy) are assessed against the actions taken by the physicians based on their experience (Fig. 1). This workflow simulates clin...
empirical distribution of historical actions to ensure sufficient representation for each action category for IV fluids and vasopressors separately, one group corresponding to dosage of 0 (Supplemental Tables 4 and 5). We defined the action space with a total of 25 discrete actions, combining 5 different levels of IV f...
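The 25-action space is the Cartesian product of the two discretized drug dosages. A minimal sketch (the bin edges themselves are placeholders; the paper derives them from the empirical dose distributions, with level 0 corresponding to a dose of 0):

```python
from itertools import product

# 5 discretized IV-fluid levels x 5 vasopressor levels = 25 joint actions.
IV_LEVELS = range(5)
VASO_LEVELS = range(5)

ACTIONS = list(product(IV_LEVELS, VASO_LEVELS))
action_index = {a: i for i, a in enumerate(ACTIONS)}

print(len(ACTIONS))          # 25
print(action_index[(0, 0)])  # 0: no IV fluids, no vasopressor
print(action_index[(4, 4)])  # 24: highest dose bin for both drugs
```

Each RL action is then a single index into this joint table rather than two separate decisions, letting the agent learn interactions between fluid and vasopressor dosing.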
postoperative complications in the development cohort was 11% for AKI within the first three days following surgery, 2% for 30-day mortality, and 4% for 90-day mortality. In the test cohort, the percentages were 12% for postoperative AKI, 2% for 30-day mortality, and 3% for 90-day mortality. Table 1. Clinical characteristics...
the agent and actions taken by the physicians have a high degree of similarity. The average Q-value distributions for surgeries grouped by the presence or absence of postoperative AKI are illustrated in Fig. 3(A). We observed that surgeries with postoperative AKI tended to have lower return values, a...
fluid administration on average (Supplemental Figures 5 and 6). Figure 2. Comparison of actions proposed by the trained agent (A) and taken by the physician (B). Each bin represents the tuple of discretized IV fluid and vasopressor actions. Figure 3. Average return per surgery (A) and the relationship b...
for patients with varying characteristics, a collaborative approach to shared decision-making involving the patient and all members of the clinical care team can improve patient satisfaction and may reduce costs associated with unnecessary treatments. Although our results show promise, there are numerous challenges a...
and Translational Science Awards UL1 TR000064 and UL1TR001427. Funding T.O.B. was supported by K01 DK120784 from the National Institute of Diabetes and Digestive and Kidney Diseases (NIH/NIDDK). TOB received grant (97071) from Clinical and Translational Science Institute, University of Florida and Research Opportunity ...
-guided intervention to prevent acute kidney injury after major surgery: the prospective randomized BigpAK study. In: LWW; 2018. 9. Meersch M, Schmidt C, Hoffmeier A, et al. Prevention of cardiac surgery -associated AKI by implementing the KDIGO guidelines in high risk patients identified by biomarkers: the PrevAKI ran...
optimal critical care pain management with morphine using dueling double-deep Q networks. Paper presented at: 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2019. 26. Sun Q, Jankovic MV, Budzinski J, et al. A dual mode adaptive basal-bolus advisor based on rei...
hypotension with acute kidney injury after elective noncardiac surgery. Anesthesiology. 2015;123(3):515-523. 44. Keuffel EL, Rizzo J, Stevens M, Gunnarsson C, Maheshwari K. Hospital costs associated with intraoperative hypotension among non-cardiac surgical patients in the US: a simulation model. Journal of Medical E...
the operating room, 5) the surgery was < 60 minutes, 6) 3-day or 7-day acute kidney injury (AKI) status was missing due to insufficient serum creatinine data available, 7) cardiac surgeries (Supplemental Figure 1). B. Dosage pre-processing and action space. Intravenous fluids (IV) included boluses and continuous in...
chances of poor clustering due to suboptimal centroid initialization. Cluster membership was assigned based on the closest centroids, ensuring that each data point was grouped with the most similar patient states. We determined an optimal number of 200 clusters using the elbow method combined with silhouette analysis, which effe...
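The clustering step, assigning each patient-state vector to its nearest centroid, can be sketched with a minimal k-means. This is an illustration only: the synthetic two-group data and the greedy farthest-point initialization are stand-ins (the paper instead repeats random initializations to avoid poor centroids, and selects k = 200 via the elbow method with silhouette analysis).

```python
def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def farthest_first(points, k):
    # Greedy farthest-point initialization: a cheap guard against the poor
    # random-centroid starts the paper mitigates by repeated initialization.
    cents = [points[0]]
    while len(cents) < k:
        cents.append(max(points,
                         key=lambda p: min(dist2(p, c) for c in cents)))
    return cents

def kmeans(points, k, iters=20):
    centroids = farthest_first(points, k)
    for _ in range(iters):
        # assign each state to its closest centroid ...
        labels = [min(range(k), key=lambda c: dist2(p, centroids[c]))
                  for p in points]
        # ... then move each centroid to the mean of its members
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = tuple(sum(d) / len(members)
                                     for d in zip(*members))
    return labels

# Two well-separated synthetic "patient state" groups:
states = [(0.1, 0.2), (0.0, 0.1), (5.0, 5.1), (5.2, 4.9)]
labels = kmeans(states, k=2)
print(labels)  # [0, 0, 1, 1]
```

In the paper's setting, the resulting cluster index plays the role of the discrete state fed to the RL agent.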
in theory with the goal of maximizing the sum of rewards to avoid long-term AKI and short-term hypotension. The agent policy started as a random policy that was iteratively evaluated and then improved until converging to an optimal solution. After convergence, the agent policy π∗ corresponded to actions with th...
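The evaluate-then-improve loop can be illustrated with tabular value iteration on a toy MDP. Everything below is a hypothetical stand-in for the paper's setup (its state space is the 200 clusters and its action space the 25 dose combinations); the two states, two actions, transition probabilities, and rewards are invented purely to show the mechanics.

```python
# Toy 2-state, 2-action MDP: rewards penalize the "hypotensive" state,
# so the converged policy learns to treat when hypotensive.
STATES = ["stable", "hypotensive"]
ACTIONS = ["no_dose", "treat"]
GAMMA = 0.9

# P[s][a] = list of (next_state, probability); R[s][a] = immediate reward.
P = {
    "stable":      {"no_dose": [("stable", 0.8), ("hypotensive", 0.2)],
                    "treat":   [("stable", 0.9), ("hypotensive", 0.1)]},
    "hypotensive": {"no_dose": [("stable", 0.1), ("hypotensive", 0.9)],
                    "treat":   [("stable", 0.7), ("hypotensive", 0.3)]},
}
R = {"stable": {"no_dose": 1.0, "treat": 0.8},
     "hypotensive": {"no_dose": -1.0, "treat": -0.5}}

V = {s: 0.0 for s in STATES}
for _ in range(200):  # Bellman backups until (numerically) converged
    V = {s: max(R[s][a] + GAMMA * sum(p * V[s2] for s2, p in P[s][a])
                for a in ACTIONS)
         for s in STATES}

# Greedy policy with respect to the converged values:
policy = {s: max(ACTIONS,
                 key=lambda a: R[s][a] + GAMMA *
                 sum(p * V[s2] for s2, p in P[s][a]))
          for s in STATES}
print(policy["hypotensive"])  # treat
```

In this toy setup the agent tolerates the small immediate reward penalty of treating because doing so shifts probability mass toward the higher-value stable state, the same trade-off the paper's reward design encodes for hypotension and AKI.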
surgical session without postoperative AKI within the first 3 days after surgery (Example surgery 2). Supplemental Figure 5. Comparison between doses suggested by the RL model and the physician's administration for a surgical session with postoperative AKI within the first 3 days after surgery (Example surgery 3). Supple...
                                  Development cohort, n (%)   Test cohort, n (%)
...                                                            10,642 (67)
Obesity                           11,393 (33)                  7,070 (45)
Fluid and electrolyte disorders   10,301 (30)                  6,120 (39)
Valvular Disease                   3,561 (10)                  2,048 (13)
Coagulopathy                       4,674 (14)                  2,173 (14)
Weight Loss                        5,212 (15)                  2,589 (16)
Depression                         9,544 (28)                  4,805 (30)
Chronic Anemia                     6,792 (20)                  4,183 (26)
Chronic Kidney Disease             5,718 (17)                  3,019 (19)