heatmaps capture the suspicious lesions (despite false positives) later diagnosed as cancer, as annotated by radiologists; notably, in the right panel, Subject 3's highlighted imaging features over longitudinal exams demonstrate that our model is able to attend to subtle tissue asymmetrical progression consistently focused...
https://arxiv.org/abs/2505.21699v1
Kovacs, K., Chew, E.Y., Lu, Z., et al.: Harnessing the power of longitudinal medical imaging for eye disease prognosis using transformer-based sequence modeling. NPJ Digital Medicine 7(1), 216 (2024) 6. Karaman, B.K., Dodelzon, K., Akar, G.B., Sabuncu, M.R.: Longitudinal mammogram risk prediction. In: International Conference on ...
https://arxiv.org/abs/2505.21699v1
arXiv:2505.21703v1 [cs.CR] 27 May 2025. IEEE INTERNET OF THINGS JOURNAL, VOL. X, NO. X, XXXX XXXX. A Joint Reconstruction-Triplet Loss Autoencoder Approach Towards Unseen Attack Detection in IoV Networks. Julia Boone, Graduate Student Member, IEEE, Tolunay Seyfi, Fatemeh Afghah, Senior Member, IEEE. Abstract—Internet of...
https://arxiv.org/abs/2505.21703v1
by which we can create intelligent transportation systems (ITS) to provide this persistent and data-rich interconnectivity between vehicles. Despite the advantages of such systems, security for the IoV is an open challenge [3]. Given the physical safety and sensitive data risks that can be caused by attacks on interc...
https://arxiv.org/abs/2505.21703v1
time, creating a complicated and unpredictable attack landscape. In this scenario, it is clear that being able to leverage known benign data and/or the performance of a model from another domain in a new domain is critical in ensuring the safety of newly deployed systems with no pre-existing attack knowledge. To this e...
https://arxiv.org/abs/2505.21703v1
a high degree of accuracy without overfitting to the benign set via the addition of the triplet loss.
• We present a novel method specifically for the task of unseen attack detection in IoV networks. By training entirely on the benign set of traffic data, our method is entirely unsupervised and performs detection indepe...
https://arxiv.org/abs/2505.21703v1
the loss distribution, as demonstrated by [18]. B. Contrastive Learning for AI-Based AD and Its Relevance to IoV Networks. Building on these techniques, contrastive learning has emerged as a self-supervised approach that aims to extract meaningful representations from unlabeled data using proxy tasks. This method has ...
https://arxiv.org/abs/2505.21703v1
global insights by quantifying the contribution of each feature to the overall model predictions, while LIME offers local interpretations by explaining individual model decisions on a per-sample basis. The framework is evaluated on two real-world autonomous driving datasets: the VeReMi dataset and a custom sensor datas...
https://arxiv.org/abs/2505.21703v1
this framework employs the same dataset but focuses on unsupervised anomaly detection, leveraging the entire feature set to optimize detection accuracy across multiple objectives. III. IoV NETWORK TRAFFIC DATASETS. Analyzing the landscape of intrusion detection in the Internet of Vehicles (IoV), we observe a signi...
https://arxiv.org/abs/2505.21703v1
limitation of existing vehicular anomaly detection datasets is that they often assume an attacker operates externally, either by injecting fake GPS signals or spoofing nearby vehicles. However, a sophisticated attacker could compromise the IoV network itself, blending into the system while executing malicious actio...
https://arxiv.org/abs/2505.21703v1
these varying IoV environments. Fig. 1. t-SNE visualization of the ACI-IoT-2023 dataset. Fig. 2. t-SNE visualization of the WUSTL-2021 dataset. IV. METHODOLOGY. A. Problem Definition. We consider the development of a network attack detection system for an IoV system. In this system, we collect network data in the form of ...
https://arxiv.org/abs/2505.21703v1
of robust defense strategies. A brute force attack systematically attempts all possible combinations to guess passwords or cryptographic keys. Let:
• N be the total number of possible combinations: N = A^k, where A is the alphabet size and k is the password length.
• T be the time to attempt one guess.
• p be the number of par...
https://arxiv.org/abs/2505.21703v1
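The brute-force cost model above can be sketched numerically; the alphabet size, password length, per-guess time `T`, and parallelism `p` below are illustrative assumptions, not values from the paper:

```python
# Worst-case brute-force cost: N = A**k total combinations,
# searched by p parallel processes at T seconds per guess.
def brute_force_worst_case_seconds(A: int, k: int, T: float, p: int) -> float:
    N = A ** k          # total number of candidate passwords
    return N * T / p    # worst case: every combination must be tried

# Example (illustrative): lowercase alphabet (A=26), length-4 password.
N = 26 ** 4
print(N)  # 456976 candidates
print(brute_force_worst_case_seconds(26, 4, T=1e-3, p=8))
```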
fine-grain discrimination to these subtle changes. To achieve this, we utilize a contrastive loss (here, triplet margin loss, denoted as L_TML). This loss explicitly encourages the model to learn more discriminative latent space representations by causing separation between benign representations that were collected at...
https://arxiv.org/abs/2505.21703v1
taking into account the necessity for clear boundaries between benign and anomalous behavior, we include a triplet margin loss factor, L_TML, to help guide the training process. This loss has been used in the vision domain for tasks such as face recognition [28]–[30], but we adopt it for the tim...
https://arxiv.org/abs/2505.21703v1
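The triplet margin loss described above can be sketched in NumPy (a minimal illustration of L_TML; the anchor/positive/negative vectors are made up, and the paper applies the loss to learned latent representations rather than raw inputs):

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    """L_TML = max(0, d(a, p) - d(a, n) + margin) with Euclidean distance.

    Pulls the anchor toward the positive and pushes it away from the
    negative by at least `margin` in the latent space.
    """
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # representation close to the anchor
n = np.array([3.0, 0.0])   # representation that should be pushed away
print(triplet_margin_loss(a, p, n))  # 0.0: already separated by more than the margin
```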
sample is considered anomalous. Here, we utilize the L2 norm/mean squared error (MSE) as our reconstruction error metric. V. EVALUATION. 1) Pre-processing: We utilize Min-Max normalization on both datasets. Given the data imbalance favoring the percentage of the ACI attack data, which is an inverse of the traditional...
https://arxiv.org/abs/2505.21703v1
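The detection pipeline above (Min-Max normalization, MSE reconstruction error, percentile threshold fit on benign data only) can be sketched as follows; the noisy-identity "autoencoder" and the 99th-percentile choice are stand-in assumptions for illustration:

```python
import numpy as np

def min_max_normalize(X, eps=1e-12):
    # Scale each feature into [0, 1] (the pre-processing step).
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / (hi - lo + eps)

def reconstruction_errors(X, reconstruct):
    # Per-sample L2/MSE reconstruction error.
    return np.mean((X - reconstruct(X)) ** 2, axis=1)

rng = np.random.default_rng(0)
benign = min_max_normalize(rng.normal(size=(1000, 8)))

def stand_in_autoencoder(X):
    # Hypothetical stand-in for a trained autoencoder: a noisy identity map.
    return X + rng.normal(scale=0.01, size=X.shape)

# Threshold = 99th percentile of reconstruction errors on benign data only.
threshold = np.percentile(reconstruction_errors(benign, stand_in_autoencoder), 99)

def is_anomalous(errors):
    return errors > threshold

print(bool(is_anomalous(np.array([threshold * 10]))[0]))  # True
```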
One-Class Classifier with AutoEncoder (DeepSVDD) [33] model and a Gaussian Mixture Model (GMM) [34] in Table IV. These models are trained in the same manner as our approach, where we use only benign samples for training and both benign and attack samples for testing. We also provide results on an autoencoder utilizing ...
https://arxiv.org/abs/2505.21703v1
etc), oversampling available data or generating synthetic samples can help diversify the benign training set. Here, we evaluate the use of Synthetic Minority Oversampling Technique (SMOTE) [31] to develop a more robust data distribution for the benign training data with our method. Table VI shows the overall accuracy m...
https://arxiv.org/abs/2505.21703v1
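The paper applies SMOTE [31]; the core interpolation idea can be sketched in NumPy (a simplified illustration, not the reference imbalanced-learn implementation):

```python
import numpy as np

def smote_like_oversample(X, n_new, k=5, seed=0):
    """Generate synthetic samples by interpolating between a chosen sample
    and one of its k nearest neighbors (the SMOTE idea)."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        d = np.linalg.norm(X - X[i], axis=1)
        neighbors = np.argsort(d)[1:k + 1]   # skip the point itself
        j = rng.choice(neighbors)
        lam = rng.random()                   # interpolation factor in [0, 1)
        out.append(X[i] + lam * (X[j] - X[i]))
    return np.array(out)

benign = np.random.default_rng(1).normal(size=(50, 4))
synthetic = smote_like_oversample(benign, n_new=10)
print(synthetic.shape)  # (10, 4)
```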
Attack       SMOTE?   Anom. Acc.   Precision   Recall
Brute Force  N        100.0000     1.000       0.9961
Brute Force  Y        98.3408      0.9834      0.9915
DoS          N        91.7481      0.9175      0.9998
DoS          Y        98.3408      0.9834      0.9915
Recon        N        99.0946     0.9909      1.0000
Recon        Y        98.3408      0.9834      0.9915
TABLE VI: ACI (SMOTE) METRICS FOR JOINT AUTOENCODER. λ_REC = 0.8, λ_TML = 0.9, THRESHOLD = 99TH PER...
https://arxiv.org/abs/2505.21703v1
while we see a decrease in the anomaly accuracy and the precision value. This indicates the transfer learning was valuable for decreasing the number of false negatives at the trade-off of increased false positives. Freezing only the encoder yields slightly higher results between the two pre-trained cases, indicating...
https://arxiv.org/abs/2505.21703v1
0.9992 0.9899
95%  86.79  97.79  0.9779  0.9994  0.9886
99%  99.06  97.29  0.9729  1.000   0.9862
Fig. 6. WUSTL precision–recall curve across percentile values for joint AE and joint VAE. Fig. 7. Benign representations with and without contrastive loss. 4) Impact of Contrastive Loss on Benign Representations: Given our utilizatio...
https://arxiv.org/abs/2505.21703v1
6–12. [5] G. Kambourakis, C. Kolias, and A. Stavrou, “The mirai botnet and the iot zombie armies,” in MILCOM 2017 - 2017 IEEE Military Communications Conference (MILCOM), 2017, pp. 267–272. [6] C. Osborne, “Mirai ddos attack against krebsonsecurity cost device owners $300,000,” May 2018. [Online]. Available: https:/...
https://arxiv.org/abs/2505.21703v1
2024. [Online]. Available: https://www.sciencedirect.com/science/ article/abs/pii/S0957417423035121 [20] H. Zhou, K. Yu, X. Zhang, G. Wu, and A. Yazidi, “Contrastive autoencoder for anomaly detection in multivariate time series,” Information Sciences , vol. 610, pp. 266–280, 2022. [Online]. Available: https://www.scien...
https://arxiv.org/abs/2505.21703v1
arXiv:2505.21715v1 [eess.IV] 27 May 2025. Privacy-Preserving Chest X-ray Report Generation via Multimodal Federated Learning with ViT and GPT-2. Md. Zahid Hossain1*†, Mustofa Ahmed1†, Most. Sharmin Sultana Samu2†, Md. Rakibul Islam1†. 1Department of Computer Science and Engineering, Ahsanullah University of Science and Tec...
https://arxiv.org/abs/2505.21715v1
[7]. While FL has been extensively applied to disease classification from chest X-rays [8] [9] [10] [11] [12], its application to radiology report generation remains largely unexplored. This is due to several inherent challenges, including handling non-independent and identically distributed (non-IID) data, communica...
https://arxiv.org/abs/2505.21715v1
biases. [11] focuses on FL for chest X-ray analysis using the RSNA 2018 dataset. It employs UNet++ with EfficientNet-B4 for segmentation, and ResNet50 and DenseNet121 for classification. The study finds that FL improves generalizability; ResNet50 achieved 0.757 accuracy, but the study highlights challenges in optimizing client sele...
https://arxiv.org/abs/2505.21715v1
and Inception. It shows that FL can achieve comparable or better performance while preserving privacy. [27] presents a scalable FL framework that incorporates data augmentation techniques to address imbalanced datasets, achieving 98.14% accuracy but facing challenges with client participation variability. [28] propos...
https://arxiv.org/abs/2505.21715v1
dataset diversity, enhancing model robustness and testing the frameworks across varied real-world healthcare scenarios are commonly recommended [36, 39, 41, 42]. Notably, the integration of emerging technologies such as 5G [37], GANs [40] and zero-shot learning [42] further distinguishes these studies with future rese...
https://arxiv.org/abs/2505.21715v1
0.01 Each client is allocated a subset of the training and validation data, along with pretrained model parameters for the Vision Transformer (ViT) [48] and GPT-2 [49]. These pretrained models are retrieved from the Hugging Face library. Clients are simulated independently using Google Colab notebooks. Upon receiving t...
https://arxiv.org/abs/2505.21715v1
dataset they have. The clients with the largest data size during training contribute the most to the global model. All the clients send their model parameters to the server; the global model aggregates the model parameters by averaging all the client parameters while assigning more weight to the clie...
https://arxiv.org/abs/2505.21715v1
a result, clients that are better aligned with the global objective have a greater influence on the global model update, improving convergence and overall performance.
Table 3: FL Aggregation Hyperparameters
Approach   Parameter Name   Value   Purpose
L-FedAvg   alpha            0.5     Controls weightage of validation loss and training data ...
https://arxiv.org/abs/2505.21715v1
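The size-weighted averaging described above is the FedAvg rule; a minimal NumPy sketch (the client parameters and dataset sizes are made up, and the validation-loss weighting controlled by alpha in L-FedAvg is omitted):

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Aggregate client parameters weighted by local dataset size:
    theta = sum_i (n_i / n) * theta_i (the FedAvg rule)."""
    sizes = np.asarray(client_sizes, dtype=float)
    weights = sizes / sizes.sum()
    return sum(w * p for w, p in zip(weights, client_params))

# Two clients; the larger client (400 samples) dominates the average.
params = [np.array([1.0, 1.0]), np.array([3.0, 3.0])]
agg = fedavg(params, client_sizes=[100, 400])
print(agg)  # [2.6 2.6]
```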
for training, testing and validation. The training, test and validation sets contain 4138, 1180 and 592 images and their corresponding reports, respectively. Fig. 2: Sample X-ray images and corresponding findings in the form of a report from the IU-Xray dataset. This report is treated as the ground truth. Table 4:...
https://arxiv.org/abs/2505.21715v1
truth. In Table 6, Krum Aggregation achieved the highest BERTScore F1 of 0.8731. Fig. 5: Training Loss for Clients 1–4 in L-FedAvg. Fig. 6: Validation Loss for Clients 1–4 in L-FedAvg. It also demonstrated bala...
https://arxiv.org/abs/2505.21715v1
training loss for Client 1, Figure 7b for Client 2, Figure 7c for Client 3 and Figure 7d for Client 4. Again, we can see some spikes after each round in the training loss plot of Client 1. The same pattern can be seen in Client 2's training. The training loss for Client 3 gradually decreased. However, since Client 4 had ...
https://arxiv.org/abs/2505.21715v1
we have evaluated different federated aggregation techniques for generating reports from chest X-ray images. Our experiments find the best performance from the Krum Aggregation approach in the task of accurate and coherent report generation from input X-ray images. Due to the limited amount of data, we had to perform sim...
https://arxiv.org/abs/2505.21715v1
R.R., et al.: Federated learning in medicine: facilitating multi-institutional collaborations without sharing patient data. Scientific Reports 10(1), 12598 (2020) [11] Ślazyk, F., Jabłecki, P., Lisowska, A., Malawski, M., Płotka, S.: Cxr-fl: deep learning-based chest x-ray image analysis using federated learning. In...
https://arxiv.org/abs/2505.21715v1
different lung diseases. In: Proceedings of the 2023 9th International Conference on Computer Technology Applications, pp. 60–66 (2023) [27] Ullah, F., Srivastava, G., Xiao, H., Ullah, S., Lin, J.C.-W., Zhao, Y.: A scalable federated learning approach for collaborative smart healthcare systems with intermittent clien...
https://arxiv.org/abs/2505.21715v1
learning for COVID-19 detection with chest x-ray images: Implementations and analysis. IEEE Transactions on Emerging Topics in Computational Intelligence (2024) [44] Muthalakshmi, M., Jeyapal, K., Vinoth, M., Dinesh, P., Murugan, N.S., Sheela, K.S.: Federated learning for secure and privacy-preserving medical image an...
https://arxiv.org/abs/2505.21715v1
arXiv:2505.21717v1 [cs.LG] 27 May 2025. Scaling Up Liquid-Resistance Liquid-Capacitance Networks for Efficient Sequence Modeling. Mónika Farsang1, Ramin Hasani2,3, Radu Grosu1. Abstract: We present LrcSSM, a nonlinear recurrent model that processes long sequences as fast as today’s linear state-space layers. By forcing the stat...
https://arxiv.org/abs/2505.21717v1
length T and input dimension p is first passed through an input encoder, followed by a normalization layer. The core component is a non-linear, state-and-input dependent LRC with hidden dimension D and sequence length T. This NSSM is computed by a parallelizable iterative linearization method. The final state values a...
https://arxiv.org/abs/2505.21717v1
Equation (2), $k^{\max}_{ji} = g^{\max}_{ji} e^{\mathrm{rev}}_{ji} / e^{\mathrm{leak}}_i$, where $e^{\mathrm{rev}}_{ji}$ is the synaptic reversal potential (equilibrium membrane potential) and $e^{\mathrm{leak}}_i$ is the leaking potential. Since $g^{\max}_{ji} \geq 0$, the sign of $k^{\max}_{ji}$ depends on $e^{\mathrm{rev}}_{ji}/e^{\mathrm{leak}}_i$. LTC-Equation (4) states that the rate of change of $x_i$ of neuron $i$ is the sum of...
https://arxiv.org/abs/2505.21717v1
parallel Kalman smoother, with a running time that is logarithmic in the length of the sequence. Algorithm 1 below presents this method [7].
Algorithm 1 ELK
1: procedure ELK(f, s0, init_guess, tol, method, quasi)
2:   diff ← ∞
3:   states ← init_guess
4:   while diff > tol do
5:     shifted_states ← [s0, states[:−1]]
6:     fs ← f(shifted_state...
https://arxiv.org/abs/2505.21717v1
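The loop in Algorithm 1 can be sketched with plain fixed-point (Picard) iteration on a toy recurrence; the real ELK additionally linearizes each sweep with Newton/quasi-Newton steps and solves it with a parallel Kalman smoother:

```python
import numpy as np

def elk_like_fixed_point(f, s0, init_guess, tol=1e-10, max_iter=1000):
    """Iterate states <- f([s0, states[:-1]]) until the update is small.
    Each sweep updates all time steps at once, so it parallelizes over T."""
    states = init_guess.copy()
    for _ in range(max_iter):
        shifted = np.concatenate(([s0], states[:-1]))
        new_states = f(shifted)
        diff = np.max(np.abs(new_states - states))
        states = new_states
        if diff < tol:
            break
    return states

# Toy recurrence x_t = 0.5 * x_{t-1} + 1 with x_0 = 0 (illustrative choice).
f = lambda prev: 0.5 * prev + 1.0
T = 16
states = elk_like_fixed_point(f, s0=0.0, init_guess=np.zeros(T))

# Sequential reference solution for comparison.
ref = np.zeros(T)
x = 0.0
for t in range(T):
    x = 0.5 * x + 1.0
    ref[t] = x
print(np.allclose(states, ref))  # True
```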
…i (8)
$$z^*_i(x_i,u) = \underbrace{k^{\max,x}_i\,\sigma(a^x_i x_i + b^x_i)}_{x_i\ \text{state-dependent}} + \underbrace{k^{\max,u}_i\,\sigma\!\Big(\sum_{j=1}^{n} a^u_{ji} u_j + b^u_i\Big)}_{u\ \text{input-dependent}} + g^{\mathrm{leak}}_i \quad (9)$$
$$\epsilon^*_i(x_i,u) = \underbrace{w^x_i x_i + v^x_i}_{x_i\ \text{state-dependent}} + \underbrace{\sum_{j=1}^{n} w^u_{ji} u_j + v^u_i}_{u\ \text{input-dependent}} \quad (10)$$
$$\text{LrcSSM:}\quad \dot{x}_i = -\sigma(f^*_i(x_i,u))\,\sigma(\epsilon^*_i(x_i,u))\,x_i + \tau(z^*_i(x_i,u))\,\sigma(\epsilon^*_i(x_i,u))\,e^{\mathrm{leak}}_i \quad (11)$$
For the final fo...
https://arxiv.org/abs/2505.21717v1
at every time step, the full sequence can be solved in parallel with a single prefix-scan, giving O(T D) time and memory and only O(log T) sequential depth, where T is the input-sequence length, and D is the state dimension. Secondly, LrcSSMs offer a formal gradient-stability guarantee that other input-varying systems su...
https://arxiv.org/abs/2505.21717v1
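The prefix-scan claim can be illustrated on a diagonal linear step x_t = a_t x_{t−1} + b_t, whose step composition is associative and hence scannable; the sketch below composes steps sequentially, while a parallel scan (e.g., `jax.lax.associative_scan`) would apply the same `combine` in O(log T) depth:

```python
import numpy as np

def combine(e1, e2):
    # Composing x -> a1*x + b1 then x -> a2*x + b2 gives
    # x -> (a2*a1)*x + (a2*b1 + b2): an associative operation.
    a1, b1 = e1
    a2, b2 = e2
    return a2 * a1, a2 * b1 + b2

def scan_linear_recurrence(a, b, x0):
    """Solve x_t = a_t * x_{t-1} + b_t for all t via pairwise composition."""
    elems = list(zip(a, b))
    acc = elems[0]
    out = [acc]
    for e in elems[1:]:
        acc = combine(acc, e)  # sequential here; a parallel scan does this in O(log T) depth
        out.append(acc)
    return np.array([A * x0 + B for A, B in out])

rng = np.random.default_rng(0)
T = 64
a, b = rng.uniform(0.5, 0.9, T), rng.normal(size=T)
xs = scan_linear_recurrence(a, b, x0=1.0)

# Sequential reference.
x, ref = 1.0, []
for t in range(T):
    x = a[t] * x + b[t]
    ref.append(x)
print(np.allclose(xs, ref))  # True
```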
recent work on parallel state-free inference [29] can be also combined with LrcSSMs to further enhance their efficiency. 5 Experiments We compare LrcSSMs against nine models representing the state of the art for a range of long- sequence tasks. These include the Neural Controlled Differential Equations (NCDE) [ 20], Ne...
https://arxiv.org/abs/2505.21717v1
the MotorImagery and EigenWorms datasets. We believe that EthanolConcentration contains interesting input correlations, which LrcSSMs can capture through their state dependence. Average Performance Across Datasets. In Table 3, we report the average accuracy across all six datasets considered from the UEA-MTSCA archive. L...
https://arxiv.org/abs/2505.21717v1
RNNs compared to sequential computation costs. Linear SSMs also have the same costs. However, we also have to take into account that LRCs solved by ELK need more Newton steps to converge at each iteration, which linear SSMs do not require. The number of iterations depends on the convergence of the state updates, which ...
https://arxiv.org/abs/2505.21717v1
Warrington, Jimmy Smith, and Scott Linderman. Towards scalable and stable parallelization of nonlinear rnns. Advances in Neural Information Processing Systems , 37:5817–5849, 2024. [8] Riccardo Grazzi, Julien Siems, Arber Zela, Jörg K. H. Franke, Frank Hutter, and Massimiliano Pontil. Unlocking state-tracking in linear...
https://arxiv.org/abs/2505.21717v1
Learning Representations , 2024. [25] Eric Martin and Chris Cundy. Parallelizing linear recurrent neural nets over sequence length. arXiv preprint arXiv:1709.04057 , 2017. [26] James Morrill, Cristopher Salvi, Patrick Kidger, and James Foster. Neural rough differential equations for long time series. In International C...
https://arxiv.org/abs/2505.21717v1
Theorem 1 (Gradient stability). Let a loss $L$ depend only on the final state $x_T$. Then for any $0 \leq \tau < T$,
$$\|\nabla_{x_\tau} L\|_2 \leq \rho^{T-\tau}\,\|\nabla_{x_T} L\|_2,$$
hence the Jacobian product norm is $\leq 1$ and cannot explode. Proof. The Jacobian of one step is $J_t = \lambda_t$, so $\|J_t\|_2 \leq \rho$. Back-propagation multiplies $T-\tau$ such Jacobians: $\nabla_{x_\tau} L = J_\tau^\top \cdots J_{T-1}^\top \nabla_{x_T} L$. Sub-multiplica...
https://arxiv.org/abs/2505.21717v1
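A quick numerical check of the Theorem 1 bound, under the assumption (consistent with the proof sketch) that each step Jacobian is diagonal, J_t = diag(λ_t), with spectral norm at most ρ:

```python
import numpy as np

rng = np.random.default_rng(0)
rho, T, D = 0.9, 50, 8

# Diagonal step Jacobians with spectral norm at most rho.
lambdas = rng.uniform(-rho, rho, size=(T, D))

# Backprop through steps tau..T-1 multiplies the step Jacobians.
for tau in range(0, T, 10):
    J_prod = np.prod(lambdas[tau:], axis=0)   # diagonal of the Jacobian product
    norm = np.max(np.abs(J_prod))             # spectral norm of a diagonal matrix
    assert norm <= rho ** (T - tau) + 1e-12   # the Theorem 1 bound
print("bound holds")
```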
asymptotic or empirical relation of the form
$$\mathrm{Loss}(C) = A\,C^{-\beta} + E, \qquad C = \text{compute (FLOPs)}, \qquad \beta > 0, \quad (19)$$
or a closed-form complexity identity such as FLOPs ∝ T D. Recent large-scale studies like [31] show that β depends on the operator’s per-token cost: Dense attention: β ≈ 0.48–0.50 [18]. Linear-time RNN/SSM (Mamba, Hyena): β ≈ 0.42–0...
https://arxiv.org/abs/2505.21717v1
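Relation (19) can be recovered from measurements by a log–log linear fit once the irreducible term E is subtracted; all constants below are synthetic, chosen only to mimic the dense-attention exponent β ≈ 0.48:

```python
import numpy as np

# Synthetic compute/loss points following Loss(C) = A * C**(-beta) + E.
A_true, beta_true, E_true = 50.0, 0.48, 2.0
C = np.logspace(18, 24, 20)                 # compute in FLOPs
loss = A_true * C ** (-beta_true) + E_true

# With E known (or estimated), log(Loss - E) = log(A) - beta * log(C),
# so a straight-line fit in log-log space recovers beta and A.
slope, intercept = np.polyfit(np.log(C), np.log(loss - E_true), 1)
beta_hat, A_hat = -slope, np.exp(intercept)
print(round(beta_hat, 3))  # 0.48
```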
accuracy across five data splits. The splits were generated using the same random seeds as in [32] to ensure full comparability. The final hyperparameters used to report the test accuracies are listed in Table 6.
Table 6: Hyperparameters used for LrcSSM per dataset.
Dataset     lr   hidden dim.   state-space dim.   #blocks
Heartbeat   10...
https://arxiv.org/abs/2505.21717v1
85.6 ±5.4  85.0 ±5.5  86.7 ±5.4 | Average  65.4 ±17.9  65.0 ±18.0  63.8 ±18.7. Please note that the results reported here for LrcSSM do not match the results of the previous tables because we used a fixed setup without hyperparameter tuning, to focus only on the importance of state-dependency, and changed the underlying matrix A and b of...
https://arxiv.org/abs/2505.21717v1
Responsible Data Stewardship: Generative AI and the Digital Waste Problem. Vanessa Utz, Simon Fraser University, vutz@sfu.ca. Abstract: As generative AI systems become widely adopted, they enable unprecedented levels of creation of synthetic data across text, images, audio, and video modalities. While research has addressed ...
https://arxiv.org/abs/2505.21720v1
digital waste management approaches from other fields, we introduce specific recommendations on how to tackle digital waste and aim to catalyze new thinking on how the AI community addresses these challenges. This work contributes to the growing field of sustainable AI research by broadening the scope beyond com...
https://arxiv.org/abs/2505.21720v1
can persist in ecosystems for decades. The production of toxic waste associated with chip manufacturing has quadrupled over the last decade, reaching 874 kilotons in 2021 (Ruberti 2024). The energy demands of hardware manufacturing are similarly substantial. A life-cycle analysis of the six leading global chip ...
https://arxiv.org/abs/2505.21720v1
systems rely on complex neural network architectures that perform numerous computationally expensive operations to transform input prompts into coherent outputs. Recent research by Luccioni et al. (2024) has provided insights into the energy requirements of different generative and classification tasks. Generatio...
https://arxiv.org/abs/2505.21720v1
of this potential burden becomes particularly concerning when we consider the current growth trajectory of generative AI adoption. Even if individual users only generate modest amounts of content that persists in storage, the cumulative environmental burden still grows significantly (Utz & DiPaola 2023). Unlike phys...
https://arxiv.org/abs/2505.21720v1
ss digital waste, the main challenge that has been identified is the problem of attempting to transfer waste reduction principles used for physical systems (i.e. the manufacturing of a physical product) to digital environments (Yarbrough, Harris & Purdy 2022). One strategy, however, that appears to translate from t...
https://arxiv.org/abs/2505.21720v1
Manufacturing in transforming resource management across diverse industries suggests that its core principles, properly adapted, could contribute significantly to addressing digital waste in AI contexts. Future Directions for Sustainable Data Practices. Having identified translatable principles from ILM and DLM, we ...
https://arxiv.org/abs/2505.21720v1
responsibilities do we have to future generations regarding the digital infrastructure we create? What constitutes fair distribution of benefits and burdens across generations? How should we balance present convenience against future environmental costs? 3. The psychological and social dimensions of generative AI usa...
https://arxiv.org/abs/2505.21720v1
management decisions among users. End-Users and Organizations. Sustainable management of generative AI content ultimately requires cultural and operational shifts among end-users and organizations. These shifts represent a move towards responsible digital stewardship, taking accountability for the environment...
https://arxiv.org/abs/2505.21720v1
particular attention to intergenerational justice. By framing digital waste as an ethical issue with multigenerational implications, we establish that responsible AI development must account for the complete environmental footprint across extended timeframes. The paper bridges disciplines by introducing concepts from...
https://arxiv.org/abs/2505.21720v1
; Gao, P.; and Vivas-Valencia, C. 2025. A social-environmental impact perspective of generative artificial intelligence. Environmental Science and Ecotechnology, 15(23): 100520. doi.org/10.1016/j.ese.2024.100520 Hsu, S.; Hsieh, H.; Chen, C.; Tseng, C.; Huang, S.; Huang, C.; Huang, Y.; Radashevsky, V. ...
https://arxiv.org/abs/2505.21720v1
M. 2023. The chip manufacturing industry: Environmental impacts and eco-efficiency analysis. Science of The Total Environment, 858:159873. doi.org/10.1016/j.scitotenv.2022.159873 Ruberti, M. 2024. Environmental performance and trends of the world’s semiconductor foundry industry. Journal of Industrial Ecology, ...
https://arxiv.org/abs/2505.21720v1
Saddle-To-Saddle Dynamics in Deep ReLU Networks: Low-Rank Bias in the First Saddle Escape Ioannis Bantzis∗ EPFL Lausanne, Switzerland ioannis.bantzis@epfl.chJames B. Simon UC Berkeley and Imbue Berkeley and San Francisco, USA james.simon@berkeley.edu Arthur Jacot Courant Institute, NYU New York, USA arthur.jacot@nyu.ed...
https://arxiv.org/abs/2505.21722v1
to the deep case [4, 11]. A limitation of these approaches is that the limiting dynamics remain complex, especially in the deep case, where they are described by algorithms that are not only very costly in the worst case [11, 50], but also difficult to interpret and reason about. This high complexity could be explained ...
https://arxiv.org/abs/2505.21722v1
is arguably secondary. To our knowledge, the only prior theoretical analysis of saddle-to-saddle dynamics in deep nonlinear networks is for nonlinearities that are differentiable at zero (such as the arctan), in which case the dynamics around the saddle at the origin can be approximated by those of a linear network (be...
https://arxiv.org/abs/2505.21722v1
matrix $W_\ell$ and activations $Z^\sigma_\ell$ over the training set for $\ell = 1, \dots, L$ are $\ell^{-\frac{1}{4}}$-approximately rank 1, in the sense that their second singular value is $O(\ell^{-\frac{1}{4}})$ times smaller than the first. Furthermore, deeper layers are also more linear, i.e. the effect of the ReLU becomes weaker. Finally, we provide an example of a simp...
https://arxiv.org/abs/2505.21722v1
the normalized parameters are doing projected GF over the unit sphere on the $\mathcal{L}_0$ loss (up to a prefactor of $\|\theta\|^{L-2}$, which can be interpreted as a speed-up of the dynamics for larger norms). Therefore, we may reparametrize time $t(s)$, such that $s(t) = \int_0^t \|\theta(s_1)\|^{L-2}\,ds_1$, which corresponds to switching to a time-dependent learnin...
https://arxiv.org/abs/2505.21722v1
Therefore the time $s_1$ of convergence to an escape direction is independent of $\alpha$, and at the time $s_1$, the parameter norm will depend linearly on $\alpha$: $\|\theta(s_1)\| = C\alpha$ for some $C > 0$. We can therefore always choose a small enough $\alpha$ so that the Taylor approximation (Equation 1) remains valid up to the time of convergence $s_1$. 3 Low R...
https://arxiv.org/abs/2505.21722v1
implies that all layers after $\ell_0$ must be approximately rank 1, including the $\ell$-th layer. The two propositions are also of independent interest. Proposition 3.3 gives an example of inputs where all layers are low-rank, not just the deeper layers. Proposition 3.4 applies to any parameter with fast enough escape speed, not...
https://arxiv.org/abs/2505.21722v1
better rank-two escape direction with speed $s_2 = \frac{1}{2}$. Proof. Our network has weight matrices $W_1, W_2, W_3$ which parameterize the network function as $f_\theta(X) = W_3\,\sigma \circ W_2\,\sigma \circ W_1 X$. As discussed in Subsection 2.2, we wish to minimize the escape speed $s = -\mathrm{Tr}[G^\top f_\theta(X)]$ such that $\sum_\ell \|W_\ell\|_F^2 = 3$. We know from homogeneity that the minimizer will ...
https://arxiv.org/abs/2505.21722v1
neighborhood of multiple saddles, as is the case for linear networks [ 27,32] or shallow ReLU networks [ 2,1]. We now state a few conjectures/hypotheses, which should be viewed as possible next steps towards the goal of describing the complete Saddle-to-Saddle dynamics: (1) Large width GD finds the optimal escape direc...
https://arxiv.org/abs/2505.21722v1
The final goal is to prove that these Saddle-to-Saddle dynamics allow ReLU networks to implement a form of greedy low BN-rank search where a minimal BN-rank interpolator is greedily searched by first searching among BN-rank 1 functions then gradually among higher rank functions, stopping at the smallest BN-rank suffici...
https://arxiv.org/abs/2505.21722v1
Brennan, Guy Bresler, and Dheeraj Mysore Nagaraj. The staircase property: How hierarchical structure can guide deep learning. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, 2021. [3] Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergen...
https://arxiv.org/abs/2505.21722v1
Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, and Andrea Montanari. When do neural networks outperform kernel methods? Advances in Neural Information Processing Systems , 33:14820–14830, 2020. [21] Gauthier Gidel, Francis Bach, and Simon Lacoste-Julien. Implicit regularization of discrete gradient dynamics in linea...
https://arxiv.org/abs/2505.21722v1
nets: The multivariate case. In International Conference on Learning Representations , 2020. [37] Scott Pesme and Nicolas Flammarion. Saddle-to-saddle dynamics in diagonal linear networks. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems ...
https://arxiv.org/abs/2505.21722v1
Homogeneous Losses. Proposition A.1. On a homogeneous loss, the GF dynamics decompose into dynamics of the norm $\|\theta\|$ and of the normalized parameters $\bar\theta = \theta/\|\theta\|$:
$$\partial_t \|\theta(t)\| = -\bar\theta(t)^\top \nabla \mathcal{L}_0(\theta(t)) = -L\,\|\theta(t)\|^{L-1}\,\mathcal{L}_0(\bar\theta(t))$$
$$\partial_t \bar\theta(t) = -\|\theta(t)\|^{L-2}\,\big(I - \bar\theta(t)\bar\theta(t)^\top\big)\,\nabla \mathcal{L}_0(\bar\theta(t)).$$
Proof. Since $\mathcal{L}_0$ satisfies gradient flow with respect to $\theta$, we have $d\theta$...
https://arxiv.org/abs/2505.21722v1
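The first identity in Proposition A.1 follows from the chain rule together with Euler's identity for $L$-homogeneous functions; a one-line derivation sketch:

```latex
% Chain rule for the norm, then Euler's identity
% \theta^\top \nabla \mathcal{L}_0(\theta) = L\,\mathcal{L}_0(\theta)
% and homogeneity \mathcal{L}_0(\theta) = \|\theta\|^L \mathcal{L}_0(\bar\theta):
\partial_t \|\theta\|
  = \frac{\theta^\top \dot\theta}{\|\theta\|}
  = -\bar\theta^\top \nabla \mathcal{L}_0(\theta)
  = -\frac{L\,\mathcal{L}_0(\theta)}{\|\theta\|}
  = -L\,\|\theta\|^{L-1}\,\mathcal{L}_0(\bar\theta).
```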
$G^\top \bar W_{L,\cdot i}\,\sigma(\bar W_{L-1,i\cdot} Z_{L-2})$ where $\bar x$ denotes the normalized vector $\bar x = x/\|x\|_2$. We define a new network of depth $L+k$ using the following matrices $\tilde W_\ell$:
$$\tilde W_{L-1} = \sqrt{\textstyle\sum_i \|W_{L,\cdot i}\|\,\|W_{L-1,i\cdot}\|}\;\begin{pmatrix} \bar W_{L-1,i^*\cdot} \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \qquad \tilde W_{L+k} = \sqrt{\textstyle\sum_i \|W_{L,\cdot i}\|\,\|W_{L-1,i\cdot}\|}\;\begin{pmatrix} \bar W_{L,\cdot i^*} & 0 & 0 & \cdots & 0 \end{pmatrix},$$
$$\tilde W_\ell = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & & & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix}, \qquad \ell = L, \dots
https://arxiv.org/abs/2505.21722v1
$$\leq \mathrm{Tr}\big[G^\top Y_{\hat\theta}(uv^\top + X)\big] \leq -\|Gv\|\,\|u\| + \|G\|_F\,\epsilon$$
where $\hat\theta = \arg\min \mathrm{Tr}\big[G^\top Y_\theta(uv^\top)\big]$. In the other direction we get
$$\mathrm{Tr}\big[G^\top Y_{\theta^\star}(uv^\top + X)\big] \geq -\mathrm{Tr}\big[G^\top Y_{\theta^\star}(uv^\top)\big] - \|G\|_F\,\epsilon \geq -\|Gv\|\,\|Y_{\theta^\star}(u)\| - \|G\|_F\,\epsilon \quad (4)$$
where we used the Cauchy–Schwarz inequality in the last line. Combining the two, we get
$$\frac{\|Y_{\theta^\star}(u)\|}{\|u\|} \geq 1 - \frac{2\|G\|_F}{\|Gv\|\,\|u\|}\,\epsilon$$
and since $\|\theta\|^2 \leq L$ it is...
https://arxiv.org/abs/2505.21722v1
of those, $\ell_0$. Because the argument is valid for at least $(1-p)L$ of the $L$ total layers, the earliest layer $\ell_0$ must occur on or before the $pL$-th layer. It is true that $Z^\sigma_{\ell_0}$ is non-negative entry-wise, and so we can apply Lemma A.6 to find non-negative singular vectors $u_1, v_1$ that additionally satisfy $\|Z^\sigma_{\ell_0} - s_1(Z^\sigma_{\ell_0})\,u_1 v_1^\top\|_2$ ...
https://arxiv.org/abs/2505.21722v1
have nonzero function value at a given time, one finds an escape speed equal to
$$s = \cos\!\big(\xi + \tfrac{\pi}{4}\big) - \cos(\xi) + \cos\!\big(\xi - \tfrac{\pi}{4}\big) - \cos\!\big(\xi - \tfrac{\pi}{2}\big), \quad (6)$$
where $\xi = \phi \bmod \tfrac{\pi}{4}$. See Figure 5 for a depiction of this periodic function. Its maximal value of $s = \sqrt{2} - 1$ falls at multiples of $\tfrac{\pi}{4}$. [Figure: singular values over training steps 0–500000.]
https://arxiv.org/abs/2505.21722v1
if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated. •The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly...
https://arxiv.org/abs/2505.21722v1
is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instru...
https://arxiv.org/abs/2505.21722v1
(appended to the paper) is recommended, but including URLs to data and code is permitted. 6.Experimental setting/details Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results? Answer: [Ye...
https://arxiv.org/abs/2505.21722v1
research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn’t make it into the paper). 9.Code of ethics Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsG...
https://arxiv.org/abs/2505.21722v1
to usage guidelines or restrictions to access the model or implementing safety filters. •Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images. •We recognize that providing effective safeguards is challenging, and many papers do n...
https://arxiv.org/abs/2505.21722v1
the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained? Answer: [NA] Justification: No crowdsour...
https://arxiv.org/abs/2505.21722v1
arXiv:2505.21724v1 [cs.CV] 27 May 2025. OmniResponse: Online Multimodal Conversational Response Generation in Dyadic Interactions. Cheng Luo1, Jianghui Wang1, Bing Li1∗, Siyang Song2, Bernard Ghanem1. 1King Abdullah University of Science and Technology, 2University of Exeter. Project Page: https://omniresponse.github.io/ Abs...
https://arxiv.org/abs/2505.21724v1
interactions. Recent studies [ 37,43,54] propose to generate facial reactions for a listener; however, these methods overlook verbal responses, which are essential to engage in dialogue fully. The OMCRG task is complex and poses major challenges in three aspects. First, it is non-trivial to directly achieve synchroniza...
https://arxiv.org/abs/2505.21724v1
The early approaches to FRG [21, 22] relied on Generative Adversarial Networks (GANs) [39, 16], typically conditioning the generation process on the speaker's visual-speech behaviors. Since FRG is a non-deterministic process (i.e., different facial reactions can be triggered by the same speaker behavior [56]), recent adv...
https://arxiv.org/abs/2505.21724v1
successful paradigm of language modeling by employing decoder-only transformers to predict sequences of image tokens. Subsequent research [11] has focused on enhancing both the efficiency of tokenization processes [38, 31] and sampling procedures [68], while simultaneously scaling up model architectures to handle in...
https://arxiv.org/abs/2505.21724v1
prior to time $\tau$ ($\tau < t$), denoted $W_{\mathrm{history},<\tau}$; and (2) Temporal inputs: the previously generated facial features of the listener $\hat F^l_{\tau:t-1}$, the facial features of the speaker $F^s_{\tau:t-1}$, and the accumulated text sequences from both participants ($W^s_{\tau:t-1}, \hat W^l_{\tau:t-1}$) over the interval $[\tau, t-1]$. Using these inputs, O...
https://arxiv.org/abs/2505.21724v1
every dynamic update remains guided by the overarching instructions. Figure 3: Architecture of TempoVoice (audio de-tokenizer, positional encoding, transformer decoder layer, linear projection, voiceprint, query/key & value, zero-initialized placeholders). TempoVoice tr...
https://arxiv.org/abs/2505.21724v1
resolution [74, 9, 54], or exhibit inconsistent spoken languages [54]. To fill the dataset gap, we introduce ResponseNet, which comprises 696 temporally synchronized dyadic video pairs, totaling over 14 hours of natural conversational exchanges. Each pair provides high-resolution (1024×1024) frontal-face streams for bo...
https://arxiv.org/abs/2505.21724v1
and broader real-world topics (e.g., "world," "market," "history," "school") are prominent. 5 Experiments Implementation Details. Our framework was implemented using PyTorch [ 48] and trained on four NVIDIA Tesla A100 GPUs. The model optimization was performed using the AdamW optimizer [ 26] with a learning rate of 2×1...
https://arxiv.org/abs/2505.21724v1
quality (UTMOSv2), audio–visual synchronization (LSE-D), as well as temporal consistency and visual quality (FVD). Although the LSTM baseline achieves a lower FD owing to its tendency to produce repetitive static visual output, it fails to generate rich, synchronized multimodal responses. Audio-Visual LLM achieves mu...
https://arxiv.org/abs/2505.21724v1
[Figure: qualitative example with the transcript “Oh yeah, I was just going to say that you know we need them in the world they do a lot of good stuff and then there’s like this other thing.”, word-level timestamps (0.00–132.05), and generated <PAUSE> token sequences.] 120.13...
https://arxiv.org/abs/2505.21724v1
award number 5940. The computational resources are provided by IBEX, which is managed by the KAUST Supercomputing Core Laboratory. References [1]Marah Abdin, Jyoti Aneja, Hany Awadalla, Ahmed Awadallah, Ammar Ahmad Awan, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Jianmin Bao, Harkirat Behl, Alon Benhaim, Misha Bilenko,...
https://arxiv.org/abs/2505.21724v1
Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette N Chang, Sungbok Lee, and Shrikanth S Narayanan. Iemocap: Interactive emotional dyadic motion capture database. Language resources and evaluation , 42:335–359, 2008. [10] Angelo Cafaro, Johannes Wagner, Tobias Baur, Soumia Dermouche, Mercedes Torres Torre...
https://arxiv.org/abs/2505.21724v1