heatmaps capture the suspicious lesions (despite false positives) later diagnosed as cancer, as annotated by radiologists; notably, in the right panel, Subject 3's highlighted imaging features over longitudinal exams demonstrate that our model is able to attend to subtle tissue asymmetry progression, consistently focusing on the temporally evolving regions of potential cancer development.

4 Conclusion

In this work, we presented a novel model structure for asymmetry-aware breast cancer risk prediction using longitudinal screening mammograms. By combining temporal encoding to account for sequential data with a learnable side encoder to retain nuanced details between the left and right breasts, our attention-based model circumvents some of the limitations of previous models. In addition, the asymmetric loss explicitly regularizes bilateral differences to adaptively learn risk-predictive imaging dissimilarities. Experimental results showed superior risk prediction performance that outperformed multiple SOTA models. Future work includes further evaluation of our model on larger and multi-center datasets. Notably, our model can account for a varying number of sequential exams when predicting risk over horizons from 1 to 5 years, which provides strong capacity for real-world use, as many patients have multiple screening exams that are often not acquired at fixed time intervals. Our model can inform risk-stratified breast cancer screening to guide risk-reduction interventions and personalized screening strategies, towards detecting breast cancer early and saving lives.
5 Acknowledgments

This work was supported in part by an NIH Other Transaction research contract #1OT2OD037972-01, the grant #1R01EB032896 (and a Supplement grant #3R01EB032896-03S1) as part of the NSF/NIH Smart Health and Biomedical Research in the Era of Artificial Intelligence and Advanced Data Science Program, an NSF grant (CICI: SIVD: #2115082), an Amazon Machine Learning Research Award, and the University of Pittsburgh Momentum Funds (a scaling grant) for the Pittsburgh Center for AI Innovation in Medical Imaging. This work used Bridges-2 at Pittsburgh Supercomputing Center through allocation [MED200006] from the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which is supported by NSF grants #2138259, #2138286, #2138307, #2137603, and #2138296. This research was also supported in computing resources by the University of Pittsburgh Center for Research Computing and Data, RRID:SCR_022735, through the resources provided by the H2P cluster, which is supported by NSF award number OAC-2117681. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing official policies, either expressed or implied, of the NIH or NSF.

https://arxiv.org/abs/2505.21699v1
arXiv:2505.21703v1 [cs.CR] 27 May 2025

IEEE INTERNET OF THINGS JOURNAL, VOL. X, NO. X, XXXX XXXX

A Joint Reconstruction-Triplet Loss Autoencoder Approach Towards Unseen Attack Detection in IoV Networks

Julia Boone, Graduate Student Member, IEEE, Tolunay Seyfi, Fatemeh Afghah, Senior Member, IEEE

Abstract—Internet of Vehicles (IoV) systems, while offering significant advancements in transportation efficiency and safety, introduce substantial security vulnerabilities due to their highly interconnected nature. These dynamic systems produce massive amounts of data between vehicles, infrastructure, and cloud services and present a highly distributed framework with a wide attack surface. In considering network-centered attacks on IoV systems, attacks such as Denial-of-Service (DoS) can prohibit the communication of essential physical traffic safety information between system elements, illustrating that the security concerns for these systems go beyond the traditional confidentiality, integrity, and availability concerns of enterprise systems. Given the complexity and volume of data generated by IoV systems, traditional security mechanisms are often inadequate for accurately detecting sophisticated and evolving cyberattacks. Here, we present an unsupervised autoencoder method trained entirely on benign network data for the purpose of unseen attack detection in IoV networks. We leverage a weighted combination of reconstruction and triplet margin loss to guide the autoencoder training and develop a diverse representation of the benign training set. We conduct extensive experiments on recent network intrusion datasets from two different application domains, industrial IoT and home IoT, that represent the modern IoV task. We show that our method performs robustly for all unseen attack types, with roughly 99% accuracy on benign data and between 97% and 100% performance on anomaly data.
We extend these results to show that our model is adaptable through the use of transfer learning, achieving similarly high results while leveraging domain features from one domain to another.

Index Terms—Internet of Vehicles, Internet of Things, anomaly detection, machine learning, traffic security.

This material is based upon work supported by the National Science Foundation under Grant Numbers CNS-2318726 and CNS-2232048, and Clemson University R-initiative funding. This work was supported by Clemson University's Virtual Prototyping of Autonomy Enabled Ground Systems (VIPR-GS), under Cooperative Agreement W56HZV-21-2-0001 with the US Army DEVCOM Ground Vehicle Systems Center (GVSC). DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. OPSEC#9299. J. Boone, T. Seyfi, and F. Afghah are with the Holcombe Department of Electrical and Computer Engineering at Clemson University. Emails: {jcboone, tseyfi, fafghah}@clemson.edu. Manuscript received September 15, 2024; revised May 5, 2025; accepted May 21, 2025.

I. INTRODUCTION

VEHICULAR networks help realize real-time communications between vehicles, infrastructure, and other essential components of a transportation system. The onset of Internet of Things (IoT) systems marks the implementation of data-rich, persistent interconnections between various sensing devices designed for the monitoring and automation of various tasks. From industrial monitoring mechanisms designed to detect production deficiencies [1] to battlefield warnings generated via collected data [2], IoT systems provide key ways in which safety and efficiency can be increased within a variety of systems. The Internet of Vehicles (IoV) has emerged from the IoT as a way
by which we can create intelligent transportation systems (ITS) to provide this persistent and data-rich interconnectivity between vehicles. Despite the advantages of such systems, security for the IoV is an open challenge [3]. Given the physical safety and sensitive data risks that can be caused by attacks on interconnected vehicles, it is paramount that these systems are secured using methods capable of robustly detecting attacks. Attacks on distributed and decentralized systems like interconnected vehicles can have widescale impacts. In late 2016, a botnet called Mirai designed for Distributed Denial of Service (DDoS) attacks was utilized to take down a variety of websites and online services, such as those of the security journalist Brian Krebs and the dynamic DNS provider Dyn [4], [5]. Mirai was able to achieve this by targeting IoT devices with ARC processors running the Linux operating system and attempting to log into the devices with default credentials. An estimated half a million IoT devices were utilized to carry out Mirai's attacks and, in the case of the attack on Krebs, an estimated $323,973 in energy and bandwidth costs was inflicted on device owners [6]. Generalizability is a particular concern in attack detection systems. While some methods are well suited for particular attacks or environments, a robust anomaly detection method should be capable of detecting various attacks in different environments with high degrees of accuracy. One key issue underneath this umbrella of generalizability in practical applications of anomaly detection methods is unseen attack detection to mitigate zero-day, or unknown, attacks. Ideally, an anomaly detection method should be able to detect new attack types as deviations from the norm when they occur. The Mirai attack is a key example of this need, as it was an unknown attack capable of operating undetected on a massive number of decentralized devices.
The ability of a model not only to perform well on an individual dataset but also to perform well across datasets and to generalize through domain adaptation methods, such as transfer learning, is critical in the application of attack detection models across different environments. While machine learning (ML) methods perform well on labeled sets of network data, they typically fail to detect data unseen during their training process and struggle to specifically capture the fine-grained spatial and temporal features of the input data [7], [8]. From this, various artificial intelligence (AI)-based methods have been developed for network attack detection. Some pre-existing intelligent approaches to the anomaly detection task are entirely supervised, where all of the incoming data stream is labeled as anomalous or benign. While supervised methods generally achieve high performance, they are unrealistic for the network security task, as raw traffic flows are not inherently labeled as benign or malicious and new attacks may not have pre-existing labeled data. It can be time-consuming, expensive, and potentially infeasible to collect real attack data in new domains, such as an IoV scenario where vehicles operate in a highly dynamic environment that cannot be fully modeled ahead of
time, creating a complicated and unpredictable attack landscape. In this scenario, it is clear that being able to leverage known benign data and/or the performance of a model from another domain in a new domain is critical in ensuring the safety of newly deployed systems with no pre-existing attack knowledge. To this end, this work focuses on the development of unsupervised anomaly detection towards unseen attack detection. We develop an unsupervised autoencoder-based approach, where the model is trained on the aggregated sets of benign network flows only and reconstruction error is leveraged as the anomaly metric. In using reconstruction error alone, however, the latent space representations of the benign class may be too intertwined with the anomalies, leading to the reconstruction of anomalous samples as benign. Additionally, reconstruction error alone may struggle to capture discriminative features necessary for the detection task, as it is intended to capture general patterns. Inspired by existing contrastive learning approaches to the anomaly detection task, we modify the traditional reconstruction-based autoencoder training to address this by including a triplet margin loss that strengthens the latent space representation of the benign set from the encoder. This loss addresses the issues above by introducing additional samples that represent similarity and dissimilarity into the model loss, which both captures more relevant features of the benign class and strengthens the latent space representation boundaries of the benign class. We can also extend this diversification of the latent space through this loss as it relates to domain transference, in which clearly defined boundaries between the anomaly and benign classes learned in a particular source domain can then be leveraged in a target domain.
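As a concrete illustration, the weighted reconstruction-plus-triplet objective described above can be sketched as follows. This is a minimal NumPy sketch: the weighting `alpha`, the margin value, and all function names are illustrative assumptions, not the paper's exact formulation or hyperparameters.

```python
import numpy as np

def reconstruction_loss(x, x_hat):
    """Mean squared error between an input flow vector and its reconstruction."""
    return float(np.mean((np.asarray(x) - np.asarray(x_hat)) ** 2))

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss on latent vectors: pull the positive
    toward the anchor and push the negative at least `margin` farther away."""
    d_pos = np.linalg.norm(np.asarray(anchor) - np.asarray(positive))
    d_neg = np.linalg.norm(np.asarray(anchor) - np.asarray(negative))
    return float(max(d_pos - d_neg + margin, 0.0))

def joint_loss(x, x_hat, anchor_z, pos_z, neg_z, alpha=0.5, margin=1.0):
    """Weighted combination guiding autoencoder training; alpha is an
    illustrative weighting, not a value reported by the paper."""
    return alpha * reconstruction_loss(x, x_hat) + \
        (1.0 - alpha) * triplet_margin_loss(anchor_z, pos_z, neg_z, margin)
```

In training, `x_hat` would come from the decoder while the anchor, positive, and negative latent vectors come from the encoder applied to benign samples and dissimilar samples, so that gradients from both terms shape the same latent space.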
This strengthened latent space allows us to easily adapt the model to a new domain without needing to conduct the full training process, as we show with the use of transfer learning in our evaluations. To the best of our knowledge, this work is the first in the IoV domain to perform entirely unsupervised anomaly detection by training only on benign samples while utilizing two datasets that are distinct in application from one another and well representative of modern IoV traffic patterns. This is especially critical in the context of zero-day attack detection, as every attack is effectively treated as a zero-day attack by our method, given that no pre-existing attack data is used for training. We show that our model is high performing despite the application differences between these datasets and is not specialized to detecting any one attack type, instead being capable of detecting a wide breadth of potential attacks. Considering that the transference of attack knowledge from domain to domain is also highly important for zero-day attack detection in new environments, we also present results showing that our model is adaptable across application domains through the use of transfer learning. Our contributions can be summarized as follows:

• We develop a joint reconstruction error-triplet loss based approach for an autoencoder for attack detection in IoT networks. This method trains the autoencoder to reconstruct the feature sequences of network flow data with
a high degree of accuracy without overfitting to the benign set, via the addition of the triplet loss.

• We present a novel method specifically for the task of unseen attack detection in IoV networks. By training entirely on the benign set of traffic data, our method is entirely unsupervised and performs detection independent of the specific attack occurring, instead focusing on capturing the behaviors of the benign system traffic and detecting deviations. As such, our method is highly suitable for the unseen attack detection task in IoV networks.

• We evaluate our method on two recent and distinct datasets that well capture the traffic patterns of IoT networks. We argue that these datasets are better representatives of modern IoV problems compared to commonly used intrusion datasets across the current literature and have been underutilized in the development of robust network intrusion detection mechanisms in IoV works.

• We also illustrate the capabilities of our model as a generalizable method towards unseen attack detection in new IoV environments via transfer learning. This work is one of the first to explore this capability of an anomaly detection specific model while using data that is appropriate for the modern IoV task.

II. RELATED WORKS

A. ML versus AI for AD

In the context of IoT systems, we specifically consider the task of time-series anomaly detection, which is critical for maintaining the security and functionality of these systems. Machine learning (ML) methods are commonly used for this task [9]–[11]. While ML performs well for some datasets, as data grows larger and more complex, ML may fail to catch the unique patterns within the data. Because deep learning (DL) can often capture such patterns and outperform ML approaches [12], DL approaches have become popular for the development of new anomaly detection methods. Unsupervised DL methods are broadly classified into two categories: prediction-based and reconstruction-based mechanisms.
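The two categories can be contrasted in a minimal sketch. This is illustrative only: a naive moving-average forecaster stands in for a trained regression model, and the `encode`/`decode` callables stand in for a trained autoencoder.

```python
import numpy as np

def prediction_score(history, observed):
    """Prediction-based scoring: forecast the next value (here a naive
    moving average standing in for an LSTM/GRU forecaster) and score the
    deviation of the observation from the forecast."""
    forecast = np.mean(history)
    return float(abs(observed - forecast))

def reconstruction_score(x, encode, decode):
    """Reconstruction-based scoring: pass the sample through a trained
    encoder/decoder pair and score the reconstruction error."""
    x_hat = decode(encode(x))
    return float(np.linalg.norm(np.asarray(x) - np.asarray(x_hat)))
```

In both cases, a larger score means a larger deviation from learned normal behavior, and a sample is flagged anomalous when its score exceeds some threshold.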
In prediction-based methods, regression models are trained on historical data to forecast future values of the system [13]. If the observed values deviate significantly from the predictions, they are considered anomalous. [14] proposes a joint LSTM-Gaussian Naive Bayes model for industrial IoT (IIoT) anomaly detection, leveraging the LSTM's forecasting capabilities to generate future time predictions and the Gaussian Naive Bayes model to perform outlier detection on the prediction error. Similarly, [15] uses a GRU-based RNN for online anomaly detection, accounting for natural shifts in the data distribution. In contrast, reconstruction-based methods involve training generative models, such as autoencoders or GANs, on benign data to learn the normal data distribution. These models, once trained, use the learned distribution to reconstruct new data samples. Any significant deviation in the reconstruction error indicates an anomaly. [16] discusses the use of LSTM-based autoencoders for this purpose, while [17] explores the application of GANs. Anomaly thresholds can be set either as fixed numerical values or based on dynamic statistical measures of
the loss distribution, as demonstrated by [18].

B. Contrastive Learning for AI-Based AD and Its Relevance to IoV Networks

Building on these techniques, contrastive learning has emerged as a self-supervised approach that aims to extract meaningful representations from unlabeled data using proxy tasks. This method has gained attention for its ability to learn transformation-invariant representations, making it highly effective for unsupervised representation learning. By contrasting different views of the same sample (positive pairs) against views from different samples (negative pairs), contrastive learning enhances the model's capacity to distinguish and understand data patterns. [19] proposes an Adversarial Contrastive Autoencoder to improve multivariate time series anomaly detection by learning transformation-invariant representations through adversarial training. Positive and negative sample pairs are generated using multi-scale timestamp masks and random sampling. A 1D-CNN-based encoder extracts latent features from the samples, and composite features are created from positive and negative sample pairs while a discriminator decomposes these features. [20] focuses on multi-grained contrasting and data augmentation by integrating contrastive learning into an autoencoder framework for anomaly detection, leveraging both window-level and pixel-level contrastive tasks to learn normal patterns. An LSTM decoder is utilized for data reconstruction and calculation of anomaly scores based on reconstruction errors. Contextual and instance contrasting are combined with attention mechanisms to learn temporal features and invariant features from augmented views. In IoV, however, reconstruction-based anomaly detection has been widely applied, with autoencoders leveraging reconstruction loss to identify sensor anomalies [21], detect anomalous driving behaviors [22], and pinpoint location inconsistencies in CAVs [23].
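Reconstruction-based detectors of this kind still need a decision rule. A common dynamic-threshold recipe, mentioned earlier for [18], derives the threshold from the benign error distribution; the sketch below uses an illustrative mean-plus-k-standard-deviations rule, where `k = 3` is a conventional choice and not a value taken from the cited works.

```python
import numpy as np

def fit_threshold(benign_errors, k=3.0):
    """Dynamic threshold: mean plus k standard deviations of the
    reconstruction errors observed on held-out benign data."""
    errs = np.asarray(benign_errors, dtype=float)
    return float(errs.mean() + k * errs.std())

def is_anomaly(error, threshold):
    """Flag a flow whose reconstruction error exceeds the threshold."""
    return error > threshold
```

The threshold can be refit periodically as benign traffic drifts, which is one way such statistical rules adapt better than a fixed numerical cutoff.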
These reconstruction-based IoV methods rely solely on reconstruction quality without enforcing clear separation between normal and anomalous samples. Contrastive loss is well-suited for this dynamic environment as it learns representations based on similarity relationships rather than fixed decision boundaries. By continuously adapting to shifting network conditions and mobility patterns, it ensures that normal behaviors remain well-clustered while anomalies, even subtle or context-dependent ones, are effectively separated. This flexibility makes contrastive learning particularly robust against the inherent variability of IoV networks.

C. Adaptive Anomaly Detection in Vehicle Systems

Vehicular networks, part of the emerging IoV, face numerous challenges related to security and anomaly detection. Due to the real-time data exchange between vehicles and infrastructure, anomaly detection mechanisms must account for domain shifts across different environments, vehicle types, and driving conditions. This is particularly important in identifying malicious activities or system failures that may affect vehicle performance or safety. Several significant works have addressed these challenges (Table I), and we will examine them in relation to our contribution. By putting these existing solutions into context with our own, we aim to show how our approach advances the field by enhancing domain adaptation and security in IoV systems. One approach, as discussed in [24], focuses on detecting anomalies in Vehicular Ad-hoc Networks (VANETs) using supervised AI models while addressing the challenge of their "black-box" nature. To enhance the transparency of these models, the framework integrates two key explainability techniques: Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME). SHAP provides
global insights by quantifying the contribution of each feature to the overall model predictions, while LIME offers local interpretations by explaining individual model decisions on a per-sample basis. The framework is evaluated on two real-world autonomous driving datasets: the VeReMi dataset and a custom sensor dataset. The VeReMi dataset, specifically designed for misbehavior detection in VANETs, simulates various cyber-attacks, including DoS, Sybil attacks, and message falsification, covering a total of 225 scenarios. Complementing VeReMi, the sensor dataset captures vehicular behaviors using data from ten distinct sensors, recording parameters such as location, speed, lane alignment, and headway time. These diverse features enable the model to detect abnormal vehicle behaviors that may indicate cyber-attacks or system malfunctions. In the context of anomaly detection, the framework is tested against five different types of attacks, encompassing both traditional attack types (like DoS) and more specific vehicular misbehavior attacks.

In addition to this framework, [25] proposes an anomaly detection model for VANETs using a GRU-based deep learning architecture. The model introduces a semi-supervised technique called SEMI-GRU, which integrates GRU neural networks with the Synthetic Minority Oversampling Technique (SMOTE) to improve anomaly detection accuracy and reduce false positives. The GRU architecture is particularly advantageous for capturing long-term dependencies in sequential data while using fewer parameters than traditional Long Short-Term Memory (LSTM) networks, resulting in faster training.

TABLE I: COMPARISON OF RELEVANT WORKS IN ANOMALY DETECTION FOR VANETS AND AUTONOMOUS DRIVING SYSTEMS

| References | Learning Method | Dataset | Supports Domain Shift? | Attack Types |
|---|---|---|---|---|
| Nazat et al. [24] | Supervised | VeReMi, Sensor Dataset | No | DoS, Sybil Attacks, Message Falsification |
| ALMahadin et al. [25] | Semi-supervised | NSL-KDD | Yes | DDoS, Phishing Attacks, Password Attacks, R2L, U2R |
| Nissar et al. [26] | Unsupervised | NSL-KDD | Yes | DDoS, Phishing Attacks, Password Attacks, R2L, U2R |
| This paper | Unsupervised | ACI-2023, WUSTL-2021 | Yes | DoS, SQL Injection, Reconnaissance, Backdoor, Dictionary Brute Force, ARP Spoofing |

Furthermore, combining GRU with feed-forward neural networks (FNN) enhances feature extraction, leading to more refined anomaly detection. The SEMI-GRU method addresses key challenges in anomaly detection, such as handling imbalanced datasets and detecting unknown cyber-attacks in VANET traffic. To combat class imbalance, the model employs SMOTE, which generates synthetic samples for underrepresented attack types in the dataset. The model is evaluated using the NSL-KDD dataset [24], which contains 42 features and several types of network attacks, including Denial of Service (DoS), Probe, Remote-to-Local (R2L), and User-to-Root (U2R). Furthermore, [26] presents an unsupervised approach for anomaly detection in VANETs using Variational Autoencoders (VAEs) optimized with multi-objective evolutionary algorithms, such as AGE-MOEA and R-NSGA-III. This framework focuses on detecting zero-day attacks and handling high-dimensional vehicular network traffic, making it particularly suitable for dynamic and evolving VANET environments. By learning latent data representations, the VAE model is capable of identifying novel intrusions without needing labeled data, which addresses one of the key limitations of supervised models. Unlike [25], which used the NSL-KDD dataset primarily to handle class imbalances in a semi-supervised setup,
this framework employs the same dataset but focuses on unsupervised anomaly detection, leveraging the entire feature set to optimize detection accuracy across multiple objectives.

III. IOV NETWORK TRAFFIC DATASETS

Analyzing the landscape of intrusion detection in the Internet of Vehicles (IoV), we observe a significant shortage of publicly available datasets explicitly designed for vehicular network security. Many works resort to using general IoT datasets as proxies for IoV traffic, which can be problematic given the fundamental differences between traditional IoT and vehicular environments. IoV systems introduce unique temporal patterns, mobility constraints, and attack surfaces, making it crucial to carefully assess dataset applicability. Unlike traditional network intrusion datasets, such as NSL-KDD and its predecessor KDD Cup'99, IoV datasets must capture both network-based and mobility-induced anomalies. As depicted in Table II, NSL-KDD refines the KDD Cup'99 dataset by reducing redundancy and mitigating class imbalance issues. It has removed redundant and duplicate records to avoid biased learning and overfitting during model training, leading to faster and computationally more feasible model training and evaluation while still retaining the complexity needed for network intrusion detection tasks. Unlike the KDD Cup'99 dataset, in which certain attack types were overrepresented and hindered the ability of models to generalize well, particularly for underrepresented attack types, NSL-KDD does not suffer from severe class imbalance. Despite these improvements, NSL-KDD and KDD Cup'99 do not entirely capture more modern attack types or the dynamic nature of traffic found in current IoV systems. Additionally, while NSL-KDD addressed class imbalance to some extent, challenges remain with underrepresented attack categories such as Remote-to-Local (R2L) and User-to-Root (U2R) attacks.
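The class-imbalance problem noted above is often mitigated by synthetic oversampling such as the SMOTE technique used by [25]. The core interpolation idea can be sketched as follows; this is a simplified illustration that interpolates between random pairs of minority samples, whereas real SMOTE interpolates toward k-nearest neighbors of each sample.

```python
import numpy as np

def smote_like(minority, n_new, rng=None):
    """Generate synthetic minority-class samples by linear interpolation
    between randomly chosen pairs of real minority samples (a simplified
    sketch of the SMOTE idea)."""
    rng = np.random.default_rng(rng)
    minority = np.asarray(minority, dtype=float)
    out = []
    for _ in range(n_new):
        # pick two distinct real samples and a random interpolation factor
        i, j = rng.choice(len(minority), size=2, replace=False)
        lam = rng.random()
        out.append(minority[i] + lam * (minority[j] - minority[i]))
    return np.array(out)
```

Because each synthetic point lies on a segment between two real points, the new samples stay inside the region occupied by the minority class rather than being arbitrary noise.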
To address these limitations, datasets such as VeReMi have been developed specifically for vehicular anomaly detection. Unlike NSL-KDD, VeReMi integrates spatiotemporal features, such as GPS data, vehicle speed, and direction, making it highly relevant for detecting falsified positioning data and misbehavior attacks in vehicular networks. The VeReMi dataset, used in more recent vehicular network research (e.g., [24]), focuses on threats unique to IoV, including Denial-of-Service (DoS) attacks on vehicular messaging, Sybil attacks where multiple fake vehicles compromise decision-making, and message falsification attacks that could manipulate vehicle responses. The dataset encompasses 225 distinct attack scenarios, providing a richer representation of adversarial behaviors in vehicular networks. However, VeReMi primarily focuses on vehicle-to-vehicle (V2V) communication anomalies and does not adequately capture network-side attacks, where adversaries exploit vulnerabilities within V2X infrastructure (e.g., roadside units (RSUs), edge servers, or core network elements) to launch large-scale disruptions. For addressing network-side attacks, we find that the ACI-IoT-2023 and WUSTL-IIoT-2021 datasets provide broader coverage of IoV-relevant cyber threats compared to VeReMi and NSL-KDD. ACI-IoT-2023 includes brute-force and Address Resolution Protocol (ARP) spoofing attacks, which target authentication and network integrity, key components of V2X security. ARP spoofing, for instance, can redirect vehicle communication within IoV networks, potentially leading to man-in-the-middle attacks where adversaries can modify or intercept safety-critical messages. Similarly, WUSTL-IIoT-2021 introduces command injection and reconnaissance attacks, both of which are highly relevant to securing real-time vehicle control systems against unauthorized commands and stealthy data gathering. A key
limitation of existing vehicular anomaly detection datasets is that they often assume an attacker operates externally, either by injecting fake GPS signals or spoofing nearby vehicles. However, a sophisticated attacker could compromise the IoV network itself, blending into the system while executing malicious actions at strategically critical moments. For example, an adversary could send falsified GPS data to the V2X infrastructure, misleading the network into believing a vehicle is in a different location. At a later time, they could remotely manipulate vehicle commands, such as triggering unintended acceleration or disabling braking systems, leading to catastrophic safety failures.

IEEE INTERNET OF THINGS JOURNAL, VOL. X, NO. X, XXXX XXXX

To fully model network-based IoV attacks, datasets must encompass not only message anomalies but also network-layer attacks where adversaries leverage V2X communication channels to systematically manipulate vehicular behavior. While ACI-IoT-2023 and WUSTL-IIoT-2021 capture network-oriented attack types, further work is needed to bridge the gap between mobility-driven threats and network-layer intrusions. The next generation of IoV anomaly detection datasets should integrate real-time network telemetry, vehicular control data, and multimodal sensor fusion to improve the detection of stealthy, coordinated attacks within V2X ecosystems. By structuring the discussion around these key differences, we highlight why IoV anomaly detection presents challenges distinct from traditional time-series datasets. The combination of dynamic mobility, real-time constraints, cross-layer attacks, and adversarial deception techniques makes IoV security an evolving research challenge that demands novel detection mechanisms beyond conventional network intrusion models.
Figure 1 presents the t-SNE visualization of the ACI-IoT-2023 dataset's network flows, revealing a highly clustered and heterogeneous structure, with distinct regions of data points interspersed with well-separated groups. This suggests a dataset containing diverse traffic patterns, likely representing a wide range of attack behaviors and benign communications. The clear separation of clusters indicates that network activities exhibit distinct behavioral patterns, which aligns with ACI-IoT-2023's inclusion of authentication-based threats like brute force attacks and ARP spoofing. These types of attacks can create sharp deviations in feature space, reinforcing the importance of anomaly detection models that can effectively distinguish between normal and compromised network states. Conversely, Figure 2 depicts the t-SNE visualization of the WUSTL-IIoT-2021 dataset's network flows, which displays a more continuous and densely packed distribution of data points. Unlike ACI-IoT-2023, this dataset appears to have less distinct clustering, suggesting that the data contains gradual variations between normal and anomalous traffic patterns, rather than sharply delineated attack signatures. This structure aligns with command injection and reconnaissance attacks, which can be more subtle and progressively influence network behavior over time. The overlapping nature of the data also suggests that anomalies in this dataset may be more challenging to detect, requiring methods capable of learning fine-grained distinctions within benign and malicious activity. The differences between ACI-IoT-2023 and WUSTL-IIoT-2021 emphasize the need for robust anomaly detection frameworks that can handle both highly clustered, distinct attack patterns (ACI-IoT-2023) and continuous, stealthy attack behaviors (WUSTL-IIoT-2021). By leveraging advanced contrastive learning and domain adaptation techniques, we can develop intrusion detection methods that generalize effectively across
these varying IoV environments.

Fig. 1. t-SNE visualization of the ACI-IoT-2023 dataset
Fig. 2. t-SNE visualization of the WUSTL-2021 dataset

IV. METHODOLOGY

A. Problem Definition

We consider the development of a network attack detection system for an IoV system. In this system, we collect network data in the form of network flows, which are records of network key performance indicators (KPIs) aggregated over a period of time. In this scenario, we do not have prior knowledge or data collected of potential attacks, but we do have network flows of benign traffic for a particular system. This presents an unsupervised and unseen attack detection problem. Given this challenge, we focus on the development of an attack detection system that is capable of learning benign behavior with a high degree of accuracy and detecting deviations from that behavior.

A multivariate time series is a sequence of vectors observed at successive time points. Consider X = (x_1, x_2, x_3, ..., x_m) ∈ R^{m×n}, where X is a collection of univariate time series, m is the length of the multivariate time series, and n is the number of variables. X thus contains a sequence of m feature vectors, x_t ∈ R^n. In an IoV network, we consider the collected network KPI flows to be such sequential multivariate time series; the flows may vary in duration, leading to unevenly spaced data. While the flows themselves may be unevenly spaced in this way, we can take equal-sized sequential collections of the flows to perform benign behavior baselining and anomaly detection. We divide the data this way based on several considerations. Firstly, IoV networks may be limited in computational resources and unable to perform continuous real-time inference on every network flow collected. Secondly, given the typically sustained duration of attacks upon a system, it is unlikely a single flow will be representative of an entire attack. If the majority of flows within a sequence can be considered anomalous, the sequence should be flagged. Similarly, we can better learn overall network behavioral patterns when looking at flows collected over periods of time. This is particularly important when attempting to establish short-term and long-term benign traffic behaviors.

TABLE II
IOT INTRUSION DATASETS

Dataset      | Observation # | Attacks                             | Duration                       | Device # | Year
KDD Cup 99   | 4,418,358     | DoS, Probe, R2L, U2R                | 9 weeks                        | unknown  | 1999
NSL-KDD      | 160,367       | DoS, Probe, R2L, U2R                | 9 weeks (subset of KDD Cup 99) | unknown  | 2009
WUSTL-2021   | 1,194,464     | SQL Injection, DoS, Recon, Backdoor | 53 hours                       | 10       | 2021
ACI-IoT-2023 | 1,231,411     | DoS, Recon, Brute Force             | 5 days                         | 49       | 2023

B. Analytical Framework for Cybersecurity Threat Models

To effectively design and evaluate anomaly detection mechanisms, it is crucial to formally define the attack models the system aims to defend against. In this context, we provide mathematical formulations for the primary cyber threats considered in this study: Brute Force Attacks, Denial of Service (DoS) Attacks, and Reconnaissance Attacks. These formal definitions establish a rigorous foundation for analyzing the model's detection capabilities and guiding the development
of robust defense strategies.

A brute force attack systematically attempts all possible combinations to guess passwords or cryptographic keys. Let:
• N be the total number of possible combinations: N = A^k, where A is the alphabet size and k is the password length.
• T be the time to attempt one guess.
• p be the number of parallel processors used.
The expected time to success is:

    T_avg = (N / (2p)) × T    (1)

The probability of success after time t is:

    P_success(t) = (r × t) / N    (2)

where r = 1/T is the guessing rate.

A DoS attack overwhelms system resources, making services unavailable. Let:
• C be the system's capacity.
• R_legit be the legitimate request rate.
• R_attack be the attack request rate.
The system overloads when:

    R_legit + R_attack > C    (3)

The probability of overload in a queuing model is:

    P_overload = (λ_legit + λ_attack) / µ    (4)

where µ is the service rate.

Reconnaissance attacks gather system information for future exploitation. Let:
• N be the number of IPs, P the number of ports, and S the number of services.
• r_scan be the scanning rate.
• d be the detection threshold.
The total search space is:

    Ω = N × P × S    (5)

The probability of detection over time is:

    P_detect(T) = 1 − e^(−β · r_scan · T)    (6)

The probability of successfully finding a vulnerability is:

    P_success = 1 − ((V − v) / V)^(r_scan · T)    (7)

C. Rationale for Joint Autoencoder Based on Analytical Framework

The analytical framework presented in Section IV-B categorizes cyber threats into dimensions with distinct characteristics, such as high-intensity DoS attacks and stealthier reconnaissance activities. Given the significant impacts that these diverse attack types can cause, particularly when trying to protect new and emerging systems that do not possess significant labeled attack data, a technically motivated approach that can adequately address the attacks detailed in the analytical framework is needed.
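As a concrete illustration, the closed-form quantities above can be evaluated directly. The parameter values below are hypothetical choices for demonstration, not values drawn from the paper:

```python
import math

# Brute force (Eqs. 1-2): N = A^k combinations, p parallel processors.
A, k = 26, 6                      # hypothetical alphabet size and password length
N = A ** k                        # total number of combinations
T, p = 1e-3, 8                    # seconds per guess, parallel processors
T_avg = N / (2 * p) * T           # expected time to success, Eq. (1)

# DoS (Eq. 3): overload when the combined request rate exceeds capacity C.
C, R_legit, R_attack = 1000.0, 400.0, 800.0
overloaded = (R_legit + R_attack) > C

# Reconnaissance (Eqs. 5-6): total search space and detection probability.
N_ip, P_ports, S_services = 256, 1024, 8
omega = N_ip * P_ports * S_services               # Eq. (5)
beta, r_scan, T_scan = 0.01, 50.0, 10.0
p_detect = 1 - math.exp(-beta * r_scan * T_scan)  # Eq. (6)

print(f"T_avg={T_avg:.1f}s overloaded={overloaded} "
      f"omega={omega} p_detect={p_detect:.4f}")
```

Note that the expected brute-force time falls linearly in p, while the exponential form of Eq. (6) means sustained scanning is detected with probability approaching 1.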
Our methodology is based on two primary assumptions from our framework: 1) Different attacks will present as statistically distinguishable deviations from learned benign network patterns captured in sequential network flows. 2) The robust detection of zero-day attacks, or of attacks in new environments with no preexisting attack data, calls for unsupervised learning based on benign data. This means that any method used must be trained on benign data, focusing on deviations from benign behavior as opposed to specific attack signatures.

Based on these assumptions, we propose a joint reconstruction error-contrastive loss approach designed to detect the threats outlined in our analytical framework. A typical reconstruction loss (L_REC) is used to train the autoencoder on benign data samples only, so that it models and accurately reconstructs normal network patterns. Sequences with significant deviations from the benign network traffic patterns, such as the abrupt change in traffic originating from DoS attacks, will result in a high reconstruction error, because the model is only trained to reconstruct samples from the benign data distribution.

Fig. 3. Proposed joint triplet-reconstruction loss autoencoder architecture utilizing LSTM layers.

However, being able to adapt to shifts in benign data distributions over time, so that benign data points are not misclassified as anomalies, calls for the addition of elements that can provide
fine-grained discrimination of these subtle changes. To achieve this, we utilize a contrastive loss (here, triplet margin loss, denoted as L_TML). This loss explicitly encourages the model to learn more discriminative latent space representations by causing separation between benign representations collected at different points in time during a system's operation, while clustering similar variations (versions of benign samples augmented with subtle noise) closer together. This helps diversify the latent space of the benign representations in a way that reconstruction loss alone might overlook, allowing the highest reconstruction errors to be isolated on true attack samples. By combining these two losses, our model is designed to effectively monitor for a wide range of anomalous behaviors that address the different dimensions of our threat framework, promoting robustness for unseen attack detection. The specific technical details for implementing this model are given in Section IV-D.

D. Autoencoder Architecture

We opt for an autoencoder-based approach to the anomaly detection task. Specifically, we develop an autoencoder with a traditional encoder-decoder architecture designed to handle multivariate time-series data from collected IoT network flows. Our architecture focuses on the loss values and data structures necessary for the robust detection of network attacks. The architecture is depicted in Figure 3.

1) Encoder-decoder architecture: Here, we utilize LSTM layers for both the encoder, g_ϕ, and decoder, f_θ. We add an additional linear layer at the output of the decoder. [27] provides an introduction to the larger formulations behind autoencoders. We summarize the general representations of the encoder-decoder structure here. We can generically represent an encoder by the formulation:

    h_i = g_ϕ(x_i)    (8)

where h_i is the latent feature representation of sample x_i generated by the encoder g_ϕ.
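A minimal PyTorch sketch of such an LSTM encoder-decoder follows; the layer sizes, single-layer depth, and feature count are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    """LSTM encoder g_phi and decoder f_theta, with an additional
    linear layer at the decoder output (sketch of Section IV-D)."""
    def __init__(self, n_features: int, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, latent_dim, batch_first=True)
        self.decoder = nn.LSTM(latent_dim, latent_dim, batch_first=True)
        self.out = nn.Linear(latent_dim, n_features)

    def forward(self, x):
        h, _ = self.encoder(x)       # h_i = g_phi(x_i), per-step latent codes
        z, _ = self.decoder(h)
        x_hat = self.out(z)          # x_hat_i = f_theta(g_phi(x_i))
        return x_hat, h

model = LSTMAutoencoder(n_features=10)
x = torch.randn(4, 25, 10)           # 4 sequences of 25 flows with 10 KPI features
x_hat, latent = model(x)
rec_error = ((x_hat - x) ** 2).mean(dim=(1, 2))   # per-sequence MSE score
```

The per-sequence MSE computed at the end is the reconstruction error later compared against the percentile threshold.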
From this, we can generally represent a decoder by:

    x̂_i = f_θ(h_i) = f_θ(g_ϕ(x_i))    (9)

where x̂_i is the reconstructed input generated by the decoder, f_θ.

2) Losses: Our interest strongly lies in the loss functions utilized for the autoencoder-based approach to anomaly detection. Given that our application domain is entirely unsupervised, we are unable to directly use some supervised training methodologies or loss functions. This assumption of a limited scope of data guides our training process, as we aim to design a method that can accurately learn the benign representations without over-fitting such that the model is too sensitive to small benign changes or too agnostic to changes indicating anomalous behavior. Traditionally, autoencoders have trained on a reconstruction loss, L_REC, such that the model is tuned to the reconstruction task. This loss can be defined generally as:

    L_REC = (1/N) Σ_{i=1}^{N} L(x_i, x̂_i)    (10)

where x_i is an input sample, x̂_i is the reconstructed sample, L is the chosen loss function for reconstruction, and N is the number of samples. The chosen loss function can vary based upon the task, but is often the mean squared error (MSE) loss or mean absolute error (MAE) loss. While we still utilize the reconstruction task and loss for anomaly detection, we consider that the benign data may have temporal fluctuations that lead to false-positive results for benign network behavior sequences. In an attempt to diversify the latent space while
taking into account the necessity for clear boundaries between benign and anomalous behavior, we include a triplet margin loss factor, L_TML, in our training process. This loss has been used in the vision domain for tasks such as face recognition [28]–[30], but we adopt it for the time-series classification task in this work. It is defined as:

    L_TML = max{d(a_i, p_i) − d(a_i, n_i) + m, 0}    (11)

where m is the margin value, a_i is an anchor sequence, p_i is a positive example for the anchor sequence, and n_i is a negative example for the anchor sequence. d(x_i, y_i) is defined as:

    d(x_i, y_i) = ∥x_i − y_i∥₂    (12)

Here, we specifically feed the latent representations of the anchor, positive, and negative sequences produced by the encoder to our triplet margin loss in order to isolate the purpose of latent space diversification. In this unsupervised setting, we do not want to utilize samples from the anomaly class as the negative samples in the triplet margin loss. Because we are utilizing the triplet margin loss to help the model capture intra-class variability, we opt for the positive samples to be versions of the anchor sample augmented with random noise, ϵ. This helps to learn the diversity of samples within the class while ensuring that the resulting positive sequence remains similar to the anchor. For our negative samples, we select a different sequence within the benign set. The value of this selection is that while the negative is in the same class, it is a distinct sequence and forces the model to learn the more fine-grained distinctions within the benign set. Through our sequencing methodology, we also ensure that there is a temporal difference between the anchor and negative sequence by picking another sequence in the set.
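Eqs. (11)-(12) and this triplet construction can be sketched in NumPy as follows; the margin value and vector dimension are illustrative assumptions, while the 0.01 noise scale mirrors the value fixed for positive-sequence generation in Section V:

```python
import numpy as np

rng = np.random.default_rng(0)

def triplet_margin_loss(a, p, n, m=1.0):
    """L_TML = max{d(a, p) - d(a, n) + m, 0}, with d the L2 distance (Eqs. 11-12)."""
    d_ap = np.linalg.norm(a - p)
    d_an = np.linalg.norm(a - n)
    return max(d_ap - d_an + m, 0.0)

# Latent representations of benign sequences (stand-ins for encoder outputs).
anchor = rng.normal(size=64)
positive = anchor + rng.normal(scale=0.01, size=64)  # anchor augmented with noise
negative = rng.normal(size=64)                        # a different benign sequence

loss = triplet_margin_loss(anchor, positive, negative)
```

Driving this loss toward zero pushes d(a, p) below d(a, n) by at least the margin, i.e., noise-augmented copies cluster together while temporally distinct benign sequences separate.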
We can leverage the similarities of the generated positive samples while attempting to find the fine-grained distinctions between those similarities with the negative samples. We also experiment with weighting the loss terms using weights denoted as λ_REC and λ_TML. Our overall loss term is then defined as:

    L = λ_TML · L_TML + λ_REC · L_REC    (13)

E. Anomaly Detection Thresholding

In line with previous autoencoder-based anomaly detection methods, we train the autoencoder on benign data samples. After training, we first set how an anomaly is detected. Based on the training set, a reconstruction error threshold is set. We note here that we only utilize the reconstruction error in the threshold selection and we do not scale it with the λ_REC value. As opposed to setting the threshold to a static number, we utilize percentile values of the reconstruction errors generated from the benign set to allow for flexibility in the threshold value based on training performance. We provide analysis results for percentile values between 90% and 100% to illustrate how percentile values may impact performance, and set the percentile value to 99% in all other trials. If the reconstruction error of a sample is below this threshold, the sample is benign. Otherwise, the
sample is considered anomalous. Here, we utilize the L2 norm/mean squared error (MSE) as our reconstruction error metric.

V. EVALUATION

1) Pre-processing: We utilize Min-Max normalization on both datasets. Given the data imbalance favoring the percentage of the ACI attack data, which is an inverse of the traditional imbalance problem within anomaly detection tasks, we investigate the inclusion and exclusion of the SMOTE [31] oversampling technique. When SMOTE is utilized, we specifically oversample the original network flows prior to sequence building. Outside of the sequence length variation trials, we fix the sequence length to 25 for all tables. For triplet generation, we fix the noise augmentation for positive sequence generation to 0.01. The triplet building process is visualized in Figure 4.

Fig. 4. Sequencing for IoT network flows

2) Implementation Details: For both datasets, we utilize 80% of the benign set for training purposes and 20% of the benign set for testing benign data. We sweep across the ranges λ_TML = [0, 1] and λ_REC = [0, 1] in increments of 0.1 to identify the ideal combinations of these weights and present the results for these ideal values unless otherwise noted.

A. Accuracy Metrics

To provide a comprehensive understanding of the model's performance, we formally define the core evaluation metrics used in this study. These metrics are essential to accurately assess the ability of the model to distinguish between benign and anomalous behaviors. Below, TP, FP, TN, and FN denote true positives, false positives, true negatives, and false negatives, respectively.

Benign Accuracy (BA): Measures the model's ability to correctly classify benign (normal) traffic.

    BA = TN / (TN + FP)    (14)

Anomaly Accuracy (AA): Measures the model's ability to correctly detect anomalous (attack) traffic.

    AA = TP / (TP + FN)    (15)

Precision (P): Represents the proportion of correctly identified anomalies among all detected anomalies.
    Precision = TP / (TP + FP)    (16)

Recall (R): Represents the proportion of detected anomalies among all actual anomalies.

    Recall = TP / (TP + FN)    (17)

F1 Score (F1): Provides a balance between Precision and Recall by computing their harmonic mean.

    F1 = 2 × (Precision × Recall) / (Precision + Recall)    (18)

We provide brief benchmarking results on an unsupervised ML method to illustrate why classical ML is insufficient for our application scenario of having no attack data in the training set. We leverage the isolation forest algorithm, which is designed to isolate anomalies via the decision trees it generates for each dataset. Table III gives the results for this scenario. We observe, in consideration of the benign-anomaly imbalances, near-random accuracy performance and imbalanced precision-recall values for both datasets. This performance motivates the need to develop more sophisticated models, as proposed in this paper.

TABLE III
ISOLATION FOREST RESULTS, TRAINED ONLY ON BENIGN SAMPLES

Dataset    | Accuracy | Precision | Recall
ACI-2023   | 13.3338  | 0.8984    | 0.0952
WUSTL-2021 | 91.6900  | 0.4662    | 1.0000

Additionally, we utilize PyOD [32], a Python toolbox for detecting anomalies in multivariate data, for the intrusion detection task on our utilized datasets. PyOD provides a variety of state-of-the-art outlier detection models for benchmarking outlier detection performance. We provide results for a Deep
One-Class Classifier with AutoEncoder (DeepSVDD) [33] model and a Gaussian Mixture Model (GMM) [34] in Table IV. These models are trained in the same manner as our approach, where we use only benign samples for training and both benign and attack samples for testing. We also provide results for an autoencoder utilizing only reconstruction error during the training process to highlight why our modifications to the traditional autoencoder are necessary. Table IV presents the results for both the WUSTL and ACI datasets. While we found that a traditional approach could perform well on the WUSTL dataset, the model was unable to correctly capture the ACI dataset's benign behavior, leading to poor anomaly detection accuracy. This highlighted that the reconstruction error approach alone was not adaptable to other IoV-representative datasets, which was a key consideration for our work. Additionally, we found that our proposed method could help boost the WUSTL dataset's accuracy values, as illustrated in the following sections.

1) ACI: Table IV shows the benign test set accuracy, anomaly set accuracy, overall precision, and overall recall values for the ACI dataset given the identified ideal λ_TML, λ_REC pair. We find that our joint AE approach outperforms all baselines in every metric except benign accuracy, where we still see 99% accuracy. Table V breaks down the classification accuracy for ACI, without and with SMOTE, across attack categories. That is, given the three overall attack days/categories of brute force, DoS, and reconnaissance, we evaluate the individual accuracy for each category. We see the highest performance for the brute force attacks at 100% detection accuracy and the worst performance for the DoS attacks at 91% detection accuracy. We note that we have the same benign accuracy values as in Table IV.
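For reference, the per-category accuracies reported here are instances of the evaluation metrics of Eqs. (14)-(18), which reduce to simple confusion-matrix ratios. A self-contained sketch with hypothetical counts (not values from our tables):

```python
def detection_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Benign accuracy, anomaly accuracy, precision, recall, and F1 (Eqs. 14-18)."""
    ba = tn / (tn + fp)            # Eq. (14)
    precision = tp / (tp + fp)     # Eq. (16)
    recall = tp / (tp + fn)        # Eqs. (15) and (17): AA is identical to recall
    f1 = 2 * precision * recall / (precision + recall)  # Eq. (18)
    return {"BA": ba, "AA": recall, "P": precision, "R": recall, "F1": f1}

# Hypothetical confusion-matrix counts for illustration.
m = detection_metrics(tp=95, fp=5, tn=990, fn=10)
```

Note that anomaly accuracy and recall are the same quantity by definition; reporting both simply separates the benign-side and anomaly-side views of performance.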
Given the lower performance for DoS-specific detection, we also explore whether utilizing SMOTE in our training process increases the multi-category detection performance of our model. Table V shows the metrics for the multi-category detection task utilizing SMOTE. We note that while there are some minimal decreases in the metrics for the brute force and reconnaissance attacks, there is a 4% increase in the overall accuracy for DoS attack detection and a subsequent increase in the precision value for the DoS attacks.

2) WUSTL: Table IV shows the same metrics for the WUSTL dataset. Notably, the anomaly accuracy here is 100%, but we reiterate that the number of anomaly samples is small within the WUSTL dataset compared to the benign set, which can lead to such high performance due to the imbalance. Nonetheless, we see high precision and accuracy values as well for traffic within the WUSTL dataset. Here, we find that the AE-based models outperform the traditional multivariate detection schemes utilized from PyOD.

3) Impact of oversampling: The imbalance of the ACI dataset, an inverse imbalance compared to traditional anomaly detection datasets, allows us to explore the role of oversampling in IoT network anomaly detection. Given an environment where benign samples may be limited for various reasons (adverse conditions, limited computation resources,
etc.), oversampling available data or generating synthetic samples can help diversify the benign training set. Here, we evaluate the use of the Synthetic Minority Oversampling Technique (SMOTE) [31] to develop a more robust data distribution for the benign training data with our method. Table VI shows the overall accuracy metrics on ACI with and without utilizing SMOTE. We observe that resolving the benign imbalance issue with SMOTE helps boost the benign test, anomaly test, and overall precision values at the cost of a small decrease in overall recall. We also observe that utilizing oversampling helps overcome sensitivity to the λ_REC and λ_TML values. In Table VII, we see that the average values of all accuracy metrics except recall are higher when utilizing SMOTE. We find that, overall, SMOTE is useful in developing a more robust benign training set when an IoV system's pre-collected benign data may be limited. In this specific application, given the variety of environments and system configurations that may be present in IoV, we emphasize techniques such as SMOTE to help ensure the best system performance. This also requires a minimal amount of fine-tuning, as reflected in the reduced sensitivity to the loss weights.

4) Sequence Length Variations: Given that our approach relies on the sequencing of data, it is relevant to discuss how variations in sequencing can impact the overall results and what the sequence length value represents in a practical setting. Table VIII shows the results for sequence lengths of 10, 25, 50, and 100 on the ACI dataset. For fairness, we average across all loss weight pairs to present these results. We find that, generally, the model performs better as we increase the sequence length, although there is some benign accuracy degradation between the 50 and 100 sequence lengths.
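The sequencing underlying these trials, turning a stream of per-flow KPI vectors into equal-sized windows, can be sketched as follows; the feature count is an illustrative assumption, while length 25 matches the default fixed earlier:

```python
import numpy as np

def build_sequences(flows: np.ndarray, seq_len: int = 25) -> np.ndarray:
    """Split a (num_flows, num_features) KPI matrix into non-overlapping
    (num_sequences, seq_len, num_features) windows, dropping any remainder."""
    n_seq = len(flows) // seq_len
    return flows[: n_seq * seq_len].reshape(n_seq, seq_len, flows.shape[1])

flows = np.random.rand(1030, 12)            # 1030 flows, 12 KPI features (illustrative)
seqs = build_sequences(flows, seq_len=25)   # -> shape (41, 25, 12)
```

Shorter windows yield more (and more frequent) detection decisions per unit of traffic; longer windows give each decision more context, matching the trade-off discussed above.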
However, a shorter sequence length translates to a higher periodicity in the detection system, which could have broader implications for overall system security. For more safety- and time-critical systems, such as an IoV system, the highest periodicity that the compute capabilities of the system can support is preferred. We also note that while we use averaging for fairness, we still see high-performing individual trials with small sequence sizes if the proper loss weights are utilized. For a sequence length of 10, the trial with λ_REC = 0.6 and λ_TML = 0.7 achieved 96% benign accuracy and 98% anomaly accuracy on the ACI dataset. Thus, while higher sequence lengths are less sensitive to the loss weights, lower sequence lengths can be balanced to fit an environment's specific needs.

TABLE IV
ACCURACY METRICS FOR ACI-2023 AND WUSTL-2021 DATASETS ACROSS DIFFERENT MODELS

             |                    ACI-2023                        |                   WUSTL-2021
             | AE     | Joint VAE | Joint AE | DeepSVDD [33] | GMM [34] | AE     | Joint VAE | Joint AE | DeepSVDD [33] | GMM [34]
Benign Acc.  | 99.94  | 96.22     | 99.06    | 88.31         | 88.31    | 94.97  | 99.13     | 99.09    | 99.96         | 100.00
Anomaly Acc. | 66.64  | 96.91     | 97.28    | 77.80         | 76.82    | 99.99  | 100.00    | 100.00   | 90.07         | 90.10
Precision    | 0.6664 | 0.9691    | 0.9729   | 0.9994        | 0.9994   | 0.9999 | 1.0000    | 0.9775   | 0.7982        | 0.7988
Recall       | 0.9999 | 0.9998    | 1.0000   | 0.7780        | 0.7682   | 0.9975 | 0.9784    | 0.9886   | 0.9960        | 1.0000
F1           | 0.7998 | 0.9843    | 0.9862   | 0.8749        | 0.8687   | 0.9987 | 0.9891    | 0.9830   | 0.8876        | 0.8881

TABLE V
ACI METRICS FOR MULTI-CLASS CLASSIFICATION, WITH AND WITHOUT SMOTE

Category    | SMOTE? | Anom. Acc. | Precision | Recall
Brute Force | N      | 100.0000   | 1.0000    | 0.9961
Brute Force | Y      | 98.3408    | 0.9834    | 0.9915
DoS         | N      | 91.7481    | 0.9175    | 0.9998
DoS         | Y      | 98.3408    | 0.9834    | 0.9915
Recon       | N      | 99.0946    | 0.9909    | 1.0000
Recon       | Y      | 98.3408    | 0.9834    | 0.9915

TABLE VI
ACI (SMOTE) METRICS FOR JOINT AUTOENCODER. λ_REC = 0.8, λ_TML = 0.9, THRESHOLD = 99TH PERCENTILE

Benign Acc. | Anom. Acc. | Precision | Recall
99.9110     | 98.3408    | 0.9834    | 0.9915

TABLE VII
ACI METRICS AVERAGED ACROSS ALL λ_REC, λ_TML PAIRS (NO SMOTE VERSUS SMOTE), THRESHOLD = 99TH PERCENTILE

SMOTE? | Benign Acc. | Anom. Acc. | Precision | Recall
Y      | 99.7066     | 98.2166    | 0.9822    | 0.9994
N      | 94.1081     | 95.7319    | 0.9573    | 0.9998

TABLE VIII
ACI METRICS AVERAGED ACROSS ALL λ_REC, λ_TML PAIRS (VARYING SEQUENCE LENGTH), THRESHOLD = 99TH PERCENTILE

Length | Benign Acc. | Anom. Acc. | Precision | Recall
10     | 96.4495     | 85.5202    | 0.8552    | 0.9998
25     | 94.1081     | 95.7319    | 0.9573    | 0.9998
50     | 99.3654     | 97.2389    | 0.9724    | 0.9999
100    | 96.2238     | 98.1101    | 0.9811    | 0.9999

B. Ablation Studies

1) Transfer Learning: Given the importance of generalizability as it relates to attack detection across diverse environments, we also highlight our model's performance on the transfer learning task. In the IoV security setting, we consider that we may need zero-day detection systems for a given environment that has no pre-collected attack data. As such, the performance of our model trained on data from one environment and applied to a new environment's data is a useful measure of the adaptability of our model. Given the different application domains of the WUSTL and ACI datasets, we explore the viability of training on the WUSTL dataset, freezing model weights, and fine-tuning on the ACI dataset. For our pre-trained cases, we set the triplet margin loss weight to 0 and the reconstruction loss weight to 1.
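The freezing step of this transfer-learning setup can be expressed in PyTorch by toggling `requires_grad`; the sketch below assumes the autoencoder exposes `encoder`, `decoder`, and `out` submodules, which is an assumption about the module layout rather than our exact implementation:

```python
import torch.nn as nn

def freeze_encoder_only(model: nn.Module) -> None:
    """Freeze pretrained encoder weights so only the decoder and output
    layer fine-tune on the target environment's reconstruction task."""
    for param in model.encoder.parameters():
        param.requires_grad = False

# Illustrative stand-in model with the assumed submodule names.
class TinyAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.LSTM(10, 32, batch_first=True)
        self.decoder = nn.LSTM(32, 32, batch_first=True)
        self.out = nn.Linear(32, 10)

model = TinyAE()
freeze_encoder_only(model)   # encoder frozen; decoder and out remain trainable
```

Freezing all but the input and output layers (the other regime evaluated) follows the same pattern, with the loop instead covering every parameter outside those two layers.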
Through this, we evaluate whether the latent space learning process introduced through the triplet margin loss on the WUSTL dataset positively impacts the reconstruction error for the anomaly detection task. Table IX shows the experimental results for transfer learning when freezing all but the input and output layers of the WUSTL-trained model, with the ideal loss weights from the WUSTL pre-training (λ_REC = 0.6, λ_TML = 1.0). Table X shows the experimental results for transfer learning when freezing only the encoder weights.

TABLE IX
ACI (NO SMOTE) METRICS FOR JOINT AUTOENCODER, PRETRAINED ON WUSTL AND FREEZING ALL BUT INPUT AND OUTPUT LAYERS. λ_REC = 0.6, λ_TML = 1.0, THRESHOLD = 99TH PERCENTILE

PT? | Benign Acc. | Anom. Acc. | Precision | Recall
Y   | 99.0566     | 96.1830    | 0.9618    | 1.0000
N   | 90.5661     | 97.4403    | 0.9755    | 0.9996

TABLE X
ACI (NO SMOTE) METRICS FOR JOINT AUTOENCODER, PRETRAINED ON WUSTL AND FREEZING THE ENCODER ONLY. λ_REC = 0.6, λ_TML = 1.0, THRESHOLD = 99TH PERCENTILE

PT? | Benign Acc. | Anom. Acc. | Precision | Recall
Y   | 99.0566     | 96.3717    | 0.9637    | 1.0000
N   | 90.5661     | 97.4403    | 0.9755    | 0.9996

In both, we see an increase in the overall detection accuracy for the benign sets as well as an increase in the recall values
while we see a decrease in the anomaly accuracy and the precision value. This indicates the transfer learning was valuable for decreasing the number of false negatives at the trade-off of increased false positives. Freezing only the encoder yields slightly better results between the two pre-trained cases, indicating performance benefits from letting the decoder train fully on a specific environment's reconstruction task. These results establish our model's generalizability and show that no attack data is needed in the source or target environment to detect any attack type with our method.

2) Variational autoencoder (VAE): Variational Autoencoders (VAEs) are generative models that learn a structured latent space by imposing a probabilistic distribution (typically Gaussian) on the latent variables. VAEs introduce stochasticity through reparameterization. This makes VAEs particularly useful for tasks such as anomaly detection. However, VAEs can struggle with producing sharp reconstructions compared to AEs because of the trade-off between reconstruction accuracy and latent space regularization. Additionally, if the prior distribution is poorly chosen or the KL divergence term is too dominant, VAEs may produce overly blurred outputs or fail to capture fine details, whereas AEs, being purely deterministic, often achieve better reconstruction fidelity. The chosen distribution is particularly important in our context, as our attack datasets are not guaranteed to be inherently Gaussian, and assuming a fixed distribution could skew detection results. Nonetheless, due to their potential benefits, we perform an ablation study modifying our model architecture into a VAE, utilizing contrastive, reconstruction, and regularization losses. We present the results in Table IV.

Fig. 5. ACI precision-recall curve across percentile values for joint AE and joint VAE
We find that the joint VAE performs well but does not outperform the joint AE on the ACI-2023 dataset. Conversely, it performs similarly and, on some metrics, better than the joint AE on the WUSTL-2021 dataset.
3) Robustness Analysis Across Percentiles: To further evaluate model robustness, we analyze performance across different percentile thresholds of anomaly scores. By varying the decision threshold, we assess how sensitive the model is to detecting anomalies at different operating points. This analysis provides insight into the model's stability and consistency, revealing how changes in the percentile threshold impact detection performance. We present these results numerically in Table XI for ACI and show the corresponding precision-recall curves in Figure 5 for ACI-2023 and Figure 6 for WUSTL-2021. We find that our overall benign accuracy increases dramatically as we increase the percentile, while our other metrics show only minor increases or decreases. Because the thresholding is intended to find the outliers of the reconstruction error distribution, as the percentile grows, only the extreme outliers are flagged as anomalies. This leads to fewer benign samples being misclassified as attacks (i.e., false positives).

TABLE XI
ACI (NO SMOTE) METRICS FOR JOINT AUTOENCODER ACROSS DIFFERING PERCENTILE VALUES

Percentile  Benign Acc.  Anom. Acc.  Precision  Recall  F1
90%         78.30        98.08       0.9808     0.9992  0.9899
95%         86.79        97.79       0.9779     0.9994  0.9886
99%         99.06        97.29       0.9729     1.000   0.9862

Fig. 6. WUSTL precision-recall curve across percentile values for joint AE and joint VAE.
Fig. 7. Benign representations with and without contrastive loss.

4) Impact of Contrastive Loss on Benign Representations: Given our use of contrastive loss with the intent of creating better cohesion among benign representation samples, we explicitly verify that contrastive loss does boost benign representation cohesion. To evaluate this, we train our joint autoencoder method with and without contrastive loss. For fairness, we train both models with the same LREC value (0.9). We then plot the benign representations derived from these two models in the latent space to observe how tightly the individual clusters of points are grouped. This is plotted in Figure 7. Here, we qualitatively see that the latent-space representations for benign data produced with contrastive loss have better cohesion around the center of the cluster. Quantitatively, we use the average length along the axes to capture how widely the plotted points spread in all directions. These results are provided in Table XII. We find that the overall average length is smaller for the benign representations created using contrastive loss, indicating better cohesion.

TABLE XII
AVERAGE LENGTH ALONG AXES FOR BENIGN REPRESENTATIONS

With contrastive loss     0.2000
Without contrastive loss  0.7244

VI. CONCLUSION
Here, we presented a unique autoencoder method for network attack detection in IoV environments. We conducted extensive evaluation on two state-of-the-art datasets that are well representative of modern networking patterns in distributed networked systems. We show that this method can achieve high benign and anomaly test accuracy while having no attack data within the training set.
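The percentile-based thresholding described above can be sketched as follows (a minimal illustration; the function and variable names are ours, not from the paper):

```python
import numpy as np

def flag_anomalies(errors, percentile=95):
    # Threshold at the chosen percentile of the reconstruction-error
    # distribution; only samples whose error exceeds it are flagged anomalous.
    threshold = np.percentile(errors, percentile)
    return errors > threshold, threshold
```

Raising the percentile flags only the most extreme reconstruction errors, which is exactly why benign accuracy climbs (fewer false positives) as the threshold moves from the 90th to the 99th percentile.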
These results demonstrate our model's capabilities for unseen attack detection in IoV environments. Additionally, we show that our model works as an adaptable attack detection mechanism across different environments, as demonstrated by our transfer learning study. Our work has key implications for improving the security of these emerging and safety-critical automotive systems by presenting a highly robust method for unseen attack detection.
arXiv:2505.21715v1 [eess.IV] 27 May 2025

Privacy-Preserving Chest X-ray Report Generation via Multimodal Federated Learning with ViT and GPT-2

Md. Zahid Hossain1*†, Mustofa Ahmed1†, Most. Sharmin Sultana Samu2†, Md. Rakibul Islam1†
1Department of Computer Science and Engineering, Ahsanullah University of Science and Technology, Dhaka, 1208, Bangladesh.
2Department of Computer Science and Engineering, BRAC University, Dhaka, 1212, Bangladesh.
*Corresponding author(s). E-mail(s): zahid.cse@aust.edu
Contributing authors: mustofahmed24@gmail.com; sharminsamu130@gmail.com; rakib.aust41@gmail.com
†These authors contributed equally to this work.

Abstract
The automated generation of radiology reports from chest X-ray images holds significant promise for enhancing diagnostic workflows while preserving patient privacy. Traditional centralized approaches often require sensitive data transfer, posing privacy concerns. To address this, the study proposes a Multimodal Federated Learning framework for chest X-ray report generation using the IU-Xray dataset. The system utilizes a Vision Transformer (ViT) as the encoder and GPT-2 as the report generator, enabling decentralized training without sharing raw data. Three Federated Learning (FL) aggregation strategies, FedAvg, Krum Aggregation and a novel Loss-aware Federated Averaging (L-FedAvg), were evaluated. Among these, Krum Aggregation demonstrated superior performance across lexical and semantic evaluation metrics such as ROUGE, BLEU, BERTScore and RaTEScore. The results show that FL can match or surpass centralized models in generating clinically relevant and semantically rich radiology reports. This lightweight and privacy-preserving framework paves the way for collaborative medical AI development without compromising data confidentiality.
Keywords: Multimodal Federated Learning, Privacy-Preserving, Medical Report Generation, Vision-Language Models

1 Introduction
Radiology reports are essential diagnostic tools in modern medicine. They play a critical role in patient care and treatment planning. Among various radiological techniques, X-rays are the most commonly used imaging modality in clinical practice. Chest X-rays (CXRs) are particularly popular due to their speed, cost-effectiveness and diagnostic value. CXRs help identify abnormalities in the lungs, heart, ribs and surrounding structures. Common findings include pneumonia, pneumothorax, cardiomegaly, broken ribs and consolidation.
Traditionally, the interpretation of X-ray images requires the expertise of trained radiologists. However, with the advent of deep learning (DL), computer-aided diagnosis has emerged as a powerful tool to support and enhance clinical decision-making. Deep learning models have demonstrated remarkable success in analyzing medical images and detecting diseases with accuracy comparable to that of human experts [1]. In parallel, advances in generative artificial intelligence (AI) have enabled the development of models capable of automatically generating detailed radiology reports from X-ray images [2] [3] [4].
Despite significant progress in report generation, most existing approaches rely on centralized model training. This setup requires transferring sensitive patient data from hospital systems to external research facilities, which raises ethical and legal concerns regarding patient privacy and data security. As a result, many healthcare institutions are reluctant to share data, limiting the development and deployment of robust AI systems [5]. Federated Learning (FL) [6] offers a promising solution to these challenges. It is a distributed machine learning paradigm in which multiple clients (e.g., hospitals) collaboratively train a global model without exchanging raw data.
Instead, only model updates are shared, preserving data locality and aligning with data protection regulations such as HIPAA and GDPR
[7]. While FL has been extensively applied to disease classification from chest X-rays [8] [9] [10] [11] [12], its application to radiology report generation remains largely unexplored. This is due to several inherent challenges, including handling non-independent and identically distributed (non-IID) data, communication overhead, model heterogeneity and the need for clinically accurate and interpretable reports.
Motivated by these privacy concerns and the growing capabilities of generative models, we explore the application of federated learning to radiology report generation from chest X-ray images. In this work, we propose a novel federated learning framework for radiology report generation using chest X-ray data. Unlike conventional approaches, our method does not require high-end distributed systems. Instead, it leverages simple, accessible tools like Google Drive and Firebase for parameter exchange and data handling. To the best of our knowledge, no prior work has demonstrated such a lightweight and replicable implementation of FL for this task. Our key contributions are as follows:
•We introduce a novel, practical implementation of federated learning for radiology report generation using chest X-ray images.
•We evaluate the quality of generated reports using standard natural language generation metrics, including BLEU [13], ROUGE [14], BERTScore [15] and RaTEScore [16].
•We show that our federated approach can outperform baseline centrally trained models, demonstrating the feasibility of secure, decentralized training in sensitive medical applications.
•Our method aims to reduce the workload of clinicians by automating the report-writing process, while also enabling healthcare institutions to collaboratively train high-quality models without compromising patient data privacy.
This article is organized into several sections. Section 2 provides a summary of related studies. The research approach is explained in Section 3.
Section 4 describes the experimental setup including dataset details. Section 5 presents the research outcomes and compares the performance of various federated learning aggregation techniques. Finally, Section 6 discusses the study's limitations and suggests directions for future research.

2 Related Work
Federated Learning has emerged as a promising approach for medical imaging applications, particularly in chest radiograph classification, by enabling privacy-preserving model training across multiple institutions. [8] employs a CNN-GNN framework to address data heterogeneity and co-morbidity dependencies using CheXpert data partitioned across five sites. The model modifies Federated Averaging by training site-specific GNNs, leading to a 1.74% performance improvement and achieving an average AUC of 0.79. However, model generalizability remains a challenge due to site-specific data distributions, and future work should integrate clinical priors. [9] focuses on pneumonia classification using FL with transfer learning on ResNet18, ResNet50, DenseNet121 and MobileNetV2. The models achieve 98.3% accuracy for pneumonia detection and 87.3% for bacterial-viral differentiation, with Momentum SGD outperforming adaptive optimizers. Future improvements include extending classification to more diseases and optimizing hybrid FL frameworks. [10] compares FL with institutional incremental learning (IIL) and cyclic IIL (CIIL) using radiographic imaging from ten institutions. It demonstrates FL's ability to achieve 99% of centralized model performance. The study highlights FL's effectiveness but notes biases from institutional data variations and insufficient synchronization in some data types. Future directions include hyper-parameter tuning and addressing institutional
biases. [11] focuses on FL for chest X-ray analysis using the RSNA 2018 dataset. It employs UNet++ with EfficientNet-B4 for segmentation, and ResNet50 and DenseNet121 for classification. The study finds that FL improves generalizability; ResNet50 achieved 0.757 accuracy, but the study highlights challenges in optimizing client selection and training epochs. [12] integrates differential privacy into FL to counter reconstruction attacks in chest X-ray classification using the Mendeley and CheXpert datasets. While differential privacy reduces data leakage, privacy risks remain. The study suggests refining privacy budget optimization. [17] extends FL to COVID-19 detection by combining chest X-ray images with symptom data. It implements a CNN with spatial pyramid pooling and Differential Privacy Stochastic Gradient Descent (DP-SGD) but observes accuracy degradation on non-IID datasets. It emphasizes the need for robustness improvements. [18] applies FL to cardiovascular disease diagnostics, utilizing a 3D-CNN model pretrained on action recognition and the FL-EV voting approach across four medical centers. The study finds that FL-EV enhances model performance, particularly in larger centers, but is constrained by the small dataset size. [19] examines FL for histopathology image analysis using the TCGA dataset, integrating FedAvg with differential privacy techniques like the Rényi Differential Privacy Accountant. The study confirms that FL achieves comparable performance to centralized training while addressing privacy concerns, especially in non-IID scenarios. A common limitation across these studies is performance degradation in non-IID settings and challenges in privacy-preserving mechanisms. Future research should refine FL parameters, improve data distribution strategies, optimize privacy budgets and explore additional security techniques to enhance model robustness and generalizability in real-world medical applications.
[20] evaluates CNNs and transformer-based architectures using a large dataset of 610,000 chest radiographs from five institutions. It highlights FL's role in improving off-domain performance, emphasizing the impact of data diversity. In contrast, [21] employs the DenseNet-121 architecture on publicly available datasets (NIH, VinBigData and CheXpert). It demonstrates improved model generalizability with a novel aggregation scheme. SecureFed, introduced in [22], enhances lung abnormality analysis through secure aggregation. It outperforms existing frameworks like FedAvg, FedMGDA+ and FedRAD in robustness and fairness, with evaluation on a COVID-19 dataset. Meanwhile, [23] presents a systematic literature review of FL applications in medical imaging, focusing on privacy preservation and performance evaluation. It underscores FL's effectiveness in securing medical data. [24] integrates FL with CNNs, specifically VGG-16, to diagnose lung diseases, employing focal loss to address data imbalance. It achieved high accuracy (88.43%-96.69%) across different clients. A common limitation across these studies is the challenge of handling data heterogeneity. Some works [20] [22] [24] emphasize the need for validation on larger and more diverse datasets. Future research directions include integrating imaging with non-imaging features [20], refining aggregation techniques [21], exploring scalability [22], conducting experimental validations [23] and extending FL applications to other medical domains [24].
[25] introduces FedARC, a personalized FL method that integrates adaptive regularization and model-contrastive learning to improve tuberculosis (TB) diagnosis accuracy. It demonstrates superiority over FedAvg. [26] compares FL with centralized learning for lung disease detection. It utilizes an ensemble of deep learning models such as VGG19, DenseNet
and Inception. It shows that FL can achieve comparable or better performance while preserving privacy. [27] presents a scalable FL framework that incorporates data augmentation techniques to address imbalanced datasets, achieving 98.14% accuracy but facing challenges with client participation variability. [28] proposes DMFL Net, which employs DenseNet-169 for feature extraction. It outperformed VGG models with an accuracy of 98.45%, yet struggled with model bias due to non-IID data. [29] provides a survey of FL in medical image analysis, emphasizing the integration of deep neural networks (DNNs) and discussing the potential of data augmentation and GANs to enhance model generalization, while identifying security concerns and dataset biases as critical challenges. Across these studies, FL consistently demonstrates its potential to enhance disease diagnosis while maintaining data privacy. However, challenges such as data heterogeneity, model aggregation inefficiencies and dataset biases persist. Future research directions include refining aggregation strategies, optimizing communication efficiency, integrating multimodal data and improving model robustness across diverse medical settings.
[30] introduces a multimodal FL framework for medical report generation, integrating deep learning and the federated averaging algorithm, which enhances report accuracy while preserving patient privacy. Its limitations include scalability and dataset validation. [31] applies FL with ResNet-50 for disease prediction from chest X-rays. It improves detection accuracy by 2% but is limited by dataset diversity. [32] combines FL with active and transfer learning (FAL-TL) for lung cancer diagnosis. It achieves remarkable accuracies of 99.20% and 98.70% on different datasets, yet struggles with scalability and communication overhead.
[33] provides a comprehensive survey of FL in medical imaging, categorizing methods and highlighting computational and communication challenges while advocating for broader FL applications. Recent works have applied FL for privacy-preserving health monitoring on smartphones. [34] used smartphone sensor data for depression detection but lacked multi-modal inputs. [35] employed FL on Reddit text data, but reliance on social media limits its clinical applicability. While both focus on sensor or text data, they highlight the growing interest in decentralized health AI. Building on this, we explore vision-language models for X-ray report generation using a ViT-GPT2 architecture within an FL framework.
In terms of models, the studies predominantly leverage Convolutional Neural Networks (CNNs) for tumor detection [36, 37], emotion prediction [38] and brain tumor diagnosis [39, 40]. Additionally, Siamese CNNs (SiCNN) [39, 41] and Generative Adversarial Networks (GANs) [40] are used for improving privacy and data augmentation. Federated learning plays a crucial role in enabling collaborative model training without sharing raw data, which is central to preserving privacy across decentralized healthcare environments [36–41]. The datasets employed vary from pancreatic tumor datasets [36] and MRI scans [39–41] to emotion prediction datasets like RAVDESS [38] and healthcare datasets from Saudi Arabia [42], highlighting the varied applications of FL across different domains. Performance results show high accuracy rates, such as 99.82% for brain tumor detection [40] and 95.72% for emotion prediction [38], though the studies acknowledge limitations such as small, imbalanced datasets and issues with scalability and generalizability [36, 38, 40, 42]. In terms of future work, expanding
dataset diversity, enhancing model robustness and testing the frameworks across varied real-world healthcare scenarios are commonly recommended [36, 39, 41, 42]. Notably, the integration of emerging technologies such as 5G [37], GANs [40] and zero-shot learning [42] further distinguishes these studies, with future research aiming to refine these models for broader applicability in real-world healthcare settings.
[43] compares traditional deep learning (DL) models with FL for COVID-19 detection using chest X-ray images from the Radiography CXR dataset. While models like ResNet50 achieved an accuracy of 98%, FL demonstrated slightly lower accuracy (a 3.56% reduction) due to its handling of non-IID datasets, but showed faster convergence and better performance with more clients. Limitations include the absence of a detailed discussion on communication costs and the need for model tuning. [44] proposes a federated learning framework that incorporates secure techniques such as differential privacy and homomorphic encryption. It uses real-world medical imaging datasets and achieves an accuracy of 98.6% with a focus on secure data sharing. Challenges such as communication overhead and model convergence are noted. Future research aims to enhance FL efficiency. [45] focuses on FL for COVID-19 diagnosis using the COVIDGR and CheXpert datasets, employing DenseNet-121 and Grad-CAM for interpretability. While achieving good results, the study identifies issues with non-IID data distributions and calls for improvements in algorithmic robustness, such as FedProx or Scaffold, to enhance generalization. [46] investigates the use of federated learning with the NIH Chest X-ray dataset using a ResNet-34 model with secure aggregation and homomorphic encryption. The study reports a clinical-grade accuracy of 83% but acknowledges the need for more clients and scalability in real-world settings.
All of the above studies emphasize the potential of FL in maintaining patient privacy while achieving high diagnostic performance, yet all identify similar challenges regarding data distribution, communication costs and model generalization. Future work across these papers points to improvements in scalability, efficiency and robust handling of non-IID data. The following research gaps are identified through our extensive literature search:
•Federated learning for chest X-ray report generation is underexplored, with no clear implementation guides available.
•FL models lack generalizability due to site-specific data and limited validation on diverse, large datasets.
•Addressing data heterogeneity, non-IID distributions and institutional biases remains a challenge in FL.
•Scalability, communication efficiency and convergence issues hinder FL implementations.
•FL research lacks focus on hybrid frameworks, robust model tuning and combining imaging with non-imaging data.

3 Methodology
Our federated learning framework adheres to the workflow depicted in Figure 1. The process begins with image acquisition and preprocessing. Chest X-ray images are sourced from the publicly available IU-Xray dataset [47], ensuring both diversity and task relevance. The dataset is partitioned among four clients, as summarized in Table 1.

Table 1: Client Data Distribution
Client    Training Data Size    Validation Data Size
Client 1  1655                  237
Client 2  1241                  178
Client 3  828                   117
Client 4  414                   60

Fig. 1: Working Approach of our Federated Learning.

Table 2: Client's Model Training Configuration
Model         Epochs per Round  Training Batch Size  Optimizer  Learning Rate  Weight Decay
ViT B16+GPT2  3                 8                    AdamW      5e-5           0.01

Each client is allocated a subset of the training and validation data, along with pretrained model parameters for the Vision Transformer (ViT) [48] and GPT-2 [49]. These pretrained models are retrieved from the Hugging Face library. Clients are simulated independently using Google Colab notebooks. Upon receiving the global pretrained parameters and corresponding data, each client initiates local training. The hyperparameters employed by each client are given in Table 2.
The ViT model is utilized for visual feature extraction. It segments each image into fixed-size patches, which are embedded into feature vectors and subsequently processed through self-attention layers to capture global contextual information. These visual features serve as input to the language generation module. GPT-2 is employed for text generation, functioning as a decoder-only architecture. It generates descriptive text in an autoregressive manner, conditioned on the extracted visual features. A cross-attention mechanism is incorporated to effectively integrate visual representations into the language model, thereby ensuring semantic alignment between the input image and the generated text. The models are trained using paired image-report data to learn the mapping between visual content and textual descriptions.
As the system operates synchronously, all clients must complete their local training before progressing to the next federated round. Upon completion, each client uploads its locally updated model parameters to a shared Google Drive folder. Although Firebase Storage was initially considered, it was found to be less efficient for uploading large files, so Google Drive was used instead. Given that the models are initialized from large pretrained checkpoints, each parameter file ranges between 700 and 800 MB. In addition to uploading model parameters, each client updates its training status in the Firebase Realtime Database.
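As an illustration of the patching step described above, the sketch below splits an image into fixed-size flattened patches the way a ViT front-end does (a NumPy stand-in with our own names; a real ViT would follow this with a learned linear projection and positional embeddings):

```python
import numpy as np

def extract_patches(image, patch=16):
    # Split an HxW grayscale image into non-overlapping patch x patch tiles,
    # each flattened into a vector (one "token" per tile).
    h, w = image.shape
    tiles = [image[i:i + patch, j:j + patch].reshape(-1)
             for i in range(0, h - h % patch, patch)
             for j in range(0, w - w % patch, patch)]
    return np.stack(tiles)
```

For example, a 224x224 input with 16-pixel patches yields a sequence of 196 tokens, each a 256-dimensional vector, which the self-attention layers then process.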
The central server, also implemented in a Colab notebook, continuously monitors the database to determine when all clients have completed their training. Once confirmation is received, the server performs model aggregation using one of three federated strategies: Krum Aggregation [50], Federated Averaging (FedAvg) [6], or Loss-aware Federated Weighted Averaging, our novel approach. Following aggregation, the updated global model parameters are uploaded to the shared Google Drive folder. The server also posts a status update in the Firebase Realtime Database, prompting clients to begin the next training round. Clients monitor the database for this update, download the aggregated global model and resume training. This process is repeated iteratively across multiple rounds.

The quality of the generated radiology reports is evaluated using a combination of lexical and semantic similarity metrics. ROUGE is employed to assess n-gram overlap and precision, while BERTScore is used to measure semantic similarity between generated and reference texts. Additionally, RaTEScore is used to evaluate the factual consistency and clinical relevance of the generated reports by comparing them to ground-truth findings.

3.1 Federated Learning Algorithms

3.1.1 Federated Weighted Averaging

This is the vanilla federated learning algorithm, first proposed by researchers at Google [6]. Here, each client contributes based on the proportion of the size of the
dataset it holds: the clients with the largest training datasets contribute the most to the global model. All clients send their model parameters to the server, which aggregates them by averaging, assigning larger weights to the clients with larger datasets.

Algorithm 1 Federated Averaging (FedAvg)
Require: number of communication rounds T, number of clients K, client fraction C, local epochs E, learning rate η
1:  Initialize global model parameters w_0
2:  for each round t = 1, ..., T do
3:    Randomly select a subset of clients S_t ⊆ {1, ..., K} with |S_t| = max(C·K, 1)
4:    for each client k ∈ S_t in parallel do
5:      w_t^k ← w_t                         ▷ send global model to client k
6:      w_t^k ← ClientUpdate(k, w_t^k)
7:    end for
8:    w_{t+1} ← (1/|S_t|) Σ_{k∈S_t} w_t^k   ▷ aggregate updates
9:  end for
10: return final global model parameters w_T
11: function ClientUpdate(k, w)
12:   B ← split local data D_k into batches
13:   for each local epoch e = 1, ..., E do
14:     for each batch b ∈ B do
15:       w ← w − η∇ℓ(w; b)                 ▷ gradient descent
16:     end for
17:   end for
18:   return w
19: end function

3.1.2 Krum Aggregation

The Krum aggregation algorithm is a robust federated learning method that can defend against adversarial attacks or malicious clients [50]. It is an extension of the vanilla federated learning algorithm. It works by selecting the single client update (gradient or parameter vector) that is least affected by malicious updates: for each client, Krum computes the distance to every other client's update and sums the smallest of these distances; it then selects for aggregation the update with the least sum, i.e., the one closest to its peers. This approach improves security by excluding malicious clients, but it is computationally expensive because the distance between every pair of clients must be computed. Krum is therefore a good choice when malicious clients are a concern, but the robustness comes at a cost.
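The three aggregation rules discussed in this section can be sketched in a few lines of NumPy, operating on flat parameter vectors for simplicity (the function names and this reduced setting are our own, not the exact implementation used in the experiments):

```python
import numpy as np

def fedavg(client_params, data_sizes):
    """FedAvg: data-size-weighted average; larger clients contribute more."""
    w = np.asarray(data_sizes, dtype=float)
    w /= w.sum()
    return sum(wi * p for wi, p in zip(w, client_params))

def krum(client_params, f):
    """Krum: pick the single update closest to its m - f - 2 nearest peers
    (requires m - f - 2 >= 1)."""
    m = len(client_params)
    scores = []
    for k in range(m):
        dists = sorted(
            np.sum((client_params[k] - client_params[j]) ** 2)
            for j in range(m) if j != k
        )
        scores.append(sum(dists[: m - f - 2]))
    return client_params[int(np.argmin(scores))]

def l_fedavg(client_params, data_sizes, val_losses, alpha=0.5):
    """L-FedAvg: blend data-size share with inverse validation loss."""
    d = np.asarray(data_sizes, dtype=float)
    l = np.asarray(val_losses, dtype=float)
    w = alpha * (d / d.sum()) + (1 - alpha) * (1.0 / l)
    w /= w.sum()  # normalize the combined weights
    return sum(wi * p for wi, p in zip(w, client_params))
```

With four clients where three updates are near-identical and one is an outlier, `krum(..., f=1)` returns one of the inliers, whereas plain averaging is pulled toward the outlier.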
Algorithm 2 Krum Aggregation
Require: global model parameters w_t, updates from m clients {w_t^1, ..., w_t^m}, number of clients m, number of Byzantine clients f
Ensure: aggregated model parameters w_{t+1}
1:  Initialize an empty list S
2:  for each client k ∈ {1, ..., m} do
3:    Compute distances d_{k,j} = ||w_t^k − w_t^j||^2 for all j ≠ k
4:    Sort the distances d_{k,j} in ascending order
5:    Compute the sum of the m − f − 2 smallest distances: D_k = Σ_{j=1}^{m−f−2} d_{k,j}
6:    Append (k, D_k) to S
7:  end for
8:  Select the client k* with the smallest D_k from S
9:  w_{t+1} ← w_t^{k*}
10: return w_{t+1}

3.1.3 Loss-aware Federated Weighted Averaging

We propose Loss-aware Federated Weighted Averaging (L-FedAvg), an enhanced variant of the standard federated averaging algorithm. This robust federated learning approach prioritizes clients whose learning objectives are more closely aligned with the global model. Before aggregation, the server evaluates each client's validation loss in conjunction with its total number of training samples. Clients with lower validation loss are given higher priority by assigning them greater weights, while those with higher loss receive reduced weights. As
a result, clients that are better aligned with the global objective have a greater influence on the global model update, improving convergence and overall performance.

Table 3: FL Aggregation Hyperparameters

Approach    Parameter Name     Value    Purpose
L-FedAvg    alpha              0.5      Controls the weighting of validation loss versus training data size
Krum        Fault Tolerance    1        Tolerates up to 1 faulty client during aggregation

The hyperparameters used in the different approaches are shown in Table 3.

Algorithm 3 Loss-aware Federated Weighted Averaging
Require: list of client model weights {w_k}, k = 1, ..., K; JSON file with client data; weighting factor α ∈ [0, 1]
Ensure: aggregated global model weights w_avg
1:  Extract data lengths {d_k} and validation losses {l_k} from the JSON file
2:  Compute total data points D = Σ_{k=1}^K d_k
3:  for each client k = 1, ..., K do
4:    Compute data weight: w_d^k = d_k / D
5:    Compute loss weight: w_l^k = 1 / l_k              ▷ if l_k > 0
6:    Compute combined weight: w_k = α·w_d^k + (1 − α)·w_l^k
7:  end for
8:  Normalize the weights: w_k ← w_k / Σ_{j=1}^K w_j for all k
9:  Initialize w_avg ← 0
10: for each parameter θ in the model do
11:   w_avg[θ] ← Σ_{k=1}^K w_k · w_k[θ]
12: end for
13: return w_avg

3.2 Pretrained Models

3.2.1 Vision Transformer (ViT)

The Vision Transformer (ViT) applies the transformer architecture, originally designed for natural language processing tasks, to computer vision. Instead of processing an entire image as a grid of pixels, ViT divides it into smaller patches, treats the patches as a sequence and processes them using transformer layers [48]. This approach enables ViT to model relationships between patches, capturing global context efficiently. ViT has shown competitive performance compared to traditional convolutional neural networks (CNNs), especially when trained on large datasets.

3.2.2 GPT-2 (Generative Pre-trained Transformer 2)

GPT-2 (Generative Pre-trained Transformer 2) is a language model designed for generating human-like text [49].
It is based on the transformer architecture and trained on a large corpus of text from the internet. GPT-2 uses unsupervised learning to predict the next word in a sentence, allowing it to understand context and generate coherent and contextually relevant text. The model can perform a wide range of natural language processing tasks, such as text completion, summarization and translation, without task-specific training.

4 Experimental Setup

The experiments were conducted using NVIDIA T4 GPUs provided by Google Colab. Standard deep learning libraries such as PyTorch and Hugging Face Transformers were used. All the clients and the server had the same setup. Model parameters were stored securely on Google Drive. The Firebase Realtime Database was used for seamless communication, handling status updates between the server and clients.

4.1 Dataset

We used the IU-Xray dataset [47], a publicly available collection of radiographic images paired with their corresponding radiology reports that is widely used for research in medical imaging. The dataset comprises a total of 5,910 chest X-ray images along with their associated findings in the form of radiology reports. Each image in the dataset is accompanied by a detailed textual description that provides diagnostic insights. Figure 2 presents two sample cases from the dataset: one depicting a normal chest X-ray and the other showing an abnormal case, along with their corresponding reports. The dataset is organized into predefined splits
for training, testing and validation. The training, test and validation sets contain 4138, 1180 and 592 images and their corresponding reports, respectively.

Fig. 2: Sample X-ray images and corresponding findings in the form of reports from the IU-Xray dataset. The report is treated as the ground truth.

Table 4: Statistics of Report Length by Split

Split         Count    Mean      STD       Min    Max      25%     50%
Train         4138     31.765    14.206    7.0    149.0    22.0    29.0
Test          1180     28.219    13.181    8.0    93.0     19.0    25.0
Validation    592      31.128    13.812    8.0    83.0     21.0    30.0

Before conducting our experiments, we performed an initial exploration of the dataset to better understand the characteristics of the radiology reports. Table 4 summarizes key statistics of report lengths across the training, validation and test splits, including mean, standard deviation and percentiles. Figure 3 shows the overall distribution of report lengths in terms of word count and their corresponding frequencies. Additionally, a boxplot is presented in Figure 4 to illustrate the variation in report length across the train, validation and test sets.

Fig. 3: Distribution of report length in number of words

Fig. 4: Report length distribution in train, test and validation splits

5 Result Analysis

Table 5 shows the performance of the various approaches, evaluated using multiple n-gram-based metrics including ROUGE and BLEU. Among all the approaches, Krum Aggregation achieved the highest scores on most of the metrics, obtaining the best ROUGE-1 F1 score of 0.3060, ROUGE-2 F1 score of 0.1411 and ROUGE-3 F1 score of 0.0726, along with a BLEU score of 0.0395 and a ROUGE-L score of 0.2066.
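For illustration, the clipped-overlap computation behind a unigram ROUGE F1 of the kind reported in Table 5 can be sketched as follows (a simplified sketch only; established library implementations should be used for actual evaluation):

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Simplified unigram ROUGE: clipped overlap -> precision/recall -> F1."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the lungs are clear", "lungs are clear bilaterally"))  # 0.75
```

Here 3 of the 4 candidate tokens match 3 of the 4 reference tokens, so precision = recall = F1 = 0.75; BERTScore and RaTEScore replace this surface overlap with embedding-based and clinical-entity-based comparisons, respectively.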
Table 5: Evaluation with N-gram Based Metrics

Approach                         ROUGE-1 F1    ROUGE-2 F1    ROUGE-3 F1    ROUGE-4 F1    ROUGE-L F1    BLEU
FedAvg                           0.2928        0.1289        0.0637        0.0329        0.2068        0.0371
L-FedAvg                         0.2870        0.1257        0.0636        0.0367        0.1979        0.0387
Krum Aggregation                 0.3060        0.1411        0.0726        0.0377        0.2066        0.0395
Centralized ViT B16+GPT2 [51]    0.2877        0.1273        0.0689        0.0435        0.2031        0.0403

The Federated Weighted Averaging (FedAvg) and L-FedAvg methods showed comparable performance, with FedAvg slightly ahead on most metrics. FedAvg achieved a ROUGE-1 F1 score of 0.2928 and a BLEU score of 0.0371, whereas L-FedAvg scored 0.2870 and 0.0387 on the same metrics, respectively.

The Centralized ViT B16+GPT2 [51] approach demonstrated moderate performance, with a ROUGE-4 F1 score of 0.0435 and a BLEU score of 0.0403. Although it did not surpass Krum Aggregation on most metrics, it achieved the highest ROUGE-4 and BLEU scores among all approaches.

The results suggest that Krum Aggregation is the most robust method in terms of n-gram-based Natural Language Generation (NLG) metrics. The Centralized ViT B16+GPT2 approach provides competitive results, particularly for higher-order ROUGE metrics. FedAvg and L-FedAvg perform consistently but fall behind in overall scores.

Table 6: Evaluation with BERTScore and RaTEScore

Approach                         BERTScore Precision    BERTScore Recall    BERTScore F1    RaTEScore
FedAvg                           0.8509                 0.8945              0.8720          60.93
L-FedAvg                         0.8437                 0.8989              0.8703          61.99
Krum Aggregation                 0.8477                 0.9003              0.8731          62.24
Centralized ViT B16+GPT2 [51]    0.8392                 0.9015              0.8691          53.47

We also evaluated the approaches using BERTScore Precision, Recall and F1 to assess the semantic similarity between the generated reports and the ground
truth. In Table 6, Krum Aggregation achieved the highest BERTScore F1 of 0.8731. It also demonstrated balanced performance with a Precision of 0.8477 and a Recall of 0.9003, suggesting that this approach is robust in generating reports with both accuracy and coverage.

Fig. 5: Training Loss for Clients 1–4 in L-FedAvg

Fig. 6: Validation Loss for Clients 1–4 in L-FedAvg

The FedAvg and L-FedAvg methods showed comparable results. FedAvg achieved a slightly higher BERTScore F1 of 0.8720 compared to 0.8703 for L-FedAvg. L-FedAvg obtained a better Recall of 0.8989, while FedAvg had slightly better Precision at 0.8509.

The Centralized ViT B16+GPT2 [51] approach achieved a BERTScore F1 of 0.8691. It also demonstrated the highest Recall among all methods at 0.9015; however, its Precision of 0.8392 was the lowest among the approaches.

Again, Krum Aggregation emerged as the most effective method based on the BERTScore F1 metric, demonstrating that this approach can generate semantically accurate reports. The Centralized ViT B16+GPT2 approach demonstrated strong recall capabilities. FedAvg and L-FedAvg performed consistently well, with small trade-offs between precision and recall.

In terms of RaTEScore, which considers both semantic relevance and textual quality, Krum Aggregation again achieved the highest score (62.24), followed closely by L-FedAvg with 61.99. This indicates that L-FedAvg, while slightly trailing in BERTScore F1, excels in generating coherent and fluent reports and outperforms FedAvg and the centralized approach in terms of RaTEScore. This suggests that the loss-aware weighting strategy of L-FedAvg contributes to better overall text quality and relevance.

The training losses of the different clients at various steps using the L-FedAvg approach are illustrated in Figure 5.
Specifically, Figure 5a shows the training loss for Client 1, Figure 5b for Client 2, Figure 5c for Client 3 and Figure 5d for Client 4. Client 1 had the most data, and its training loss decreased consistently; however, some spikes can be observed after each federated round due to the global model update. For Clients 2 and 3, the training loss decreased gradually with each step without any significant rise. The training loss for Client 4 dropped sharply after the first federated round.

The validation losses of the different clients across epochs using the L-FedAvg approach are illustrated in Figure 6. Specifically, Figure 6a shows the validation loss for Client 1, Figure 6b for Client 2, Figure 6c for Client 3 and Figure 6d for Client 4. A similar pattern can be observed for the validation loss of Client 1: after each federated round and global model update, the validation loss increased. For Clients 2 and 3, the validation loss consistently decreased. For Client 4, the validation loss was heavily reduced after the first round, which is consistent with its training loss.

The training losses at different steps for all clients using the Krum Aggregation approach are illustrated in Figure 7. Specifically, Figure 7a represents the
training loss for Client 1, Figure 7b for Client 2, Figure 7c for Client 3 and Figure 7d for Client 4. Again, we can see some spikes after each round in the training loss plot of Client 1, and the same pattern appears in Client 2's training. The training loss for Client 3 gradually decreased. However, since Client 4 had the least training data, considerable oscillation can be observed in its training loss plot.

Fig. 7: Training Loss for Clients 1–4 in Krum Aggregation

Fig. 8: Validation Loss for Clients 1–4 in Krum Aggregation

The validation losses of the different clients across epochs using the Krum Aggregation approach are illustrated in Figure 8. Specifically, Figure 8a shows the validation loss for Client 1, Figure 8b for Client 2, Figure 8c for Client 3 and Figure 8d for Client 4. We can notice the large spike in the validation loss of Client 1 after each federated round, similar to its training loss. For Client 2, we can see some smaller spikes in the validation loss after each round. For Client 3, the validation loss consistently decreased. For Client 4, however, an interesting pattern can be observed: the validation loss decreased massively in the first epoch of each federated round compared to the other epochs of that round, so the aggregation technique significantly improved this local client's performance.

Table 7: Comparison with the existing literature

Method                           BLEU      ROUGE-L F1    BERTScore F1    RaTEScore
MAIRA-2 [52]                     0.117     0.2740        0.5576          –
CXRMate [53]                     0.046     0.2820        0.3230          –
EAST [54]                        0.120     0.2651        0.5464          –
XRaySwinGen [55]                 0.124     0.3000        –               –
PromptMRG [56]                   0.098     0.1600        –               –
NLGR-CCR [57]                    0.102     0.2530        –               –
Centralized ViT B16+GPT2 [51]    0.0403    0.2031        0.8691          53.47
FedViT-GPT2 (Krum, Ours)         0.0426    0.2066        0.8731          62.24

Fig.
9: Generated report sample from our implementation with the attached ground truth

Table 7 presents a comparative analysis of our proposed federated model (FedViT-GPT2 with Krum aggregation) against existing methods for chest X-ray report generation. While prior models such as XRaySwinGen [55] and EAST [54] have demonstrated competitive performance in terms of BLEU and ROUGE-L F1, they lack evaluation on deeper semantic metrics like BERTScore and RaTEScore. Our approach, although slightly behind XRaySwinGen in BLEU and ROUGE-L, significantly outperforms all baselines in semantic fidelity, achieving the highest BERTScore F1 (0.8731) and RaTEScore (62.24). These results highlight the strength of our model in generating semantically accurate and clinically coherent reports. Furthermore, our decentralized setup achieves comparable or better performance than centralized and fully supervised approaches such as Centralized ViT B16+GPT2 [51], demonstrating the effectiveness of federated learning with robust aggregation in medical report generation. Figure 9 shows an example report generated by our implementation; from the attached ground truth, it is clearly visible that our model is capable of generating accurate and coherent reports from a given X-ray image.

6 Conclusion and Future Work

In this paper
we have evaluated different federated aggregation techniques for generating reports from chest X-ray images. Our experiments find the best performance from the Krum Aggregation approach in the task of accurate and coherent report generation from input X-ray images. Due to the limited amount of data, we had to perform the simulation with only four clients; our approach can be easily extended to larger datasets and more clients. The issue of limited data should also be addressed in the future to ensure reliable report generation.

Conflict of interest The authors have no conflict of interest to declare relevant to this article's content. Additionally, the authors have no relevant financial or non-financial interests to disclose.

Data availability Not applicable.

References

[1] Litjens, G., Kooi, T., Bejnordi, B.E., Setio, A.A.A., Ciompi, F., Ghafoorian, M., van der Laak, J.A.W.M., van Ginneken, B., Sánchez, C.I.: A survey on deep learning in medical image analysis. Medical Image Analysis 42, 60–88 (2017) https://doi.org/10.1016/j.media.2017.07.005
[2] Chen, Z., Shen, Y., Song, Y., Wan, X.: Cross-modal memory networks for radiology report generation. arXiv preprint arXiv:2204.13258 (2022)
[3] Jing, B., Xie, P., Xing, E.: On the automatic generation of medical imaging reports. arXiv preprint arXiv:1711.08195 (2017)
[4] Li, C.Y., Liang, X., Hu, Z., Xing, E.P.: Knowledge-driven encode, retrieve, paraphrase for medical image report generation. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 6666–6673 (2019)
[5] Murdoch, B.: Privacy and artificial intelligence: challenges for protecting health information in a new era. BMC Medical Ethics 22(1), 122 (2021) https://doi.org/10.1186/s12910-021-00687-3
[6] McMahan, B., Moore, E., Ramage, D., Hampson, S., Arcas, B.A.: Communication-efficient learning of deep networks from decentralized data. In: Artificial Intelligence and Statistics, pp. 1273–1282 (2017).
PMLR
[7] Kairouz, P., McMahan, H.B., Avent, B., Bellet, A., Bennis, M., Bhagoji, A.N., Bonawitz, K., Charles, Z., Cormode, G., Cummings, R., D'Oliveira, R.G.L., Eichner, H., Rouayheb, S.E., Evans, D., Gardner, J., Garrett, Z., Gascón, A., Ghazi, B., Gibbons, P.B., Gruteser, M., Harchaoui, Z., He, C., He, L., Huo, Z., Hutchinson, B., Hsu, J., Jaggi, M., Javidi, T., Joshi, G., Khodak, M., Konečný, J., Korolova, A., Koushanfar, F., Koyejo, S., Lepoint, T., Liu, Y., Mittal, P., Mohri, M., Nock, R., Özgür, A., Pagh, R., Raykova, M., Qi, H., Ramage, D., Raskar, R., Song, D., Song, W., Stich, S.U., Sun, Z., Suresh, A.T., Tramèr, F., Vepakomma, P., Wang, J., Xiong, L., Xu, Z., Yang, Q., Yu, F.X., Yu, H., Zhao, S.: Advances and Open Problems in Federated Learning (2021). https://arxiv.org/abs/1912.04977
[8] Chakravarty, A., Kar, A., Sethuraman, R., Sheet, D.: Federated learning for site aware chest radiograph screening. In: 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), pp. 1077–1081 (2021). IEEE
[9] Banerjee, S., Misra, R., Prasad, M., Elmroth, E., Bhuyan, M.H.: Multi-diseases classification from chest-x-ray: A federated deep learning approach. In: AI 2020: Advances in Artificial Intelligence: 33rd Australasian Joint Conference, AI 2020, Canberra, ACT, Australia, November 29–30, 2020, Proceedings 33, pp. 3–15 (2020). Springer
[10] Sheller, M.J., Edwards, B., Reina, G.A., Martin, J., Pati, S., Kotrotsou, A., Milchenko, M., Xu, W., Marcus, D., Colen,
R.R., et al.: Federated learning in medicine: facilitating multi-institutional collaborations without sharing patient data. Scientific Reports 10(1), 12598 (2020)
[11] Ślazyk, F., Jabłecki, P., Lisowska, A., Malawski, M., Płotka, S.: Cxr-fl: deep learning-based chest x-ray image analysis using federated learning. In: International Conference on Computational Science, pp. 433–440 (2022). Springer
[12] Ziegler, J., Pfitzner, B., Schulz, H., Saalbach, A., Arnrich, B.: Defending against reconstruction attacks through differentially private federated learning for classification of heterogeneous chest x-ray data. Sensors 22(14), 5195 (2022)
[13] Papineni, K., Roukos, S., Ward, T., Zhu, W.-J.: Bleu: a method for automatic evaluation of machine translation. In: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pp. 311–318 (2002)
[14] Lin, C.-Y.: Rouge: A package for automatic evaluation of summaries. In: Text Summarization Branches Out, pp. 74–81 (2004)
[15] Zhang, T., Kishore, V., Wu, F., Weinberger, K.Q., Artzi, Y.: Bertscore: Evaluating text generation with bert. arXiv preprint arXiv:1904.09675 (2019)
[16] Zhao, W., Wu, C., Zhang, X., Zhang, Y., Wang, Y., Xie, W.: RaTEScore: A metric for radiology report generation. In: Al-Onaizan, Y., Bansal, M., Chen, Y.-N. (eds.) Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pp. 15004–15019. Association for Computational Linguistics, Miami, Florida, USA (2024). https://doi.org/10.18653/v1/2024.emnlp-main.836. https://aclanthology.org/2024.emnlp-main.836/
[17] Ho, T.-T., Tran, K.-D., Huang, Y.: Fedsgdcovid: Federated sgd covid-19 detection under local differential privacy using chest x-ray images and symptom information. Sensors 22(10), 3728 (2022)
[18] Linardos, A., Kushibar, K., Walsh, S., Gkontra, P., Lekadir, K.: Federated learning for multi-center imaging diagnostics: a simulation study in cardiovascular disease.
Scientific Reports 12(1), 3551 (2022)
[19] Adnan, M., Kalra, S., Cresswell, J.C., Taylor, G.W., Tizhoosh, H.R.: Federated learning and differential privacy for medical image analysis. Scientific Reports 12(1), 1953 (2022)
[20] Tayebi Arasteh, S., Kuhl, C., Saehn, M.-J., Isfort, P., Truhn, D., Nebelung, S.: Enhancing domain generalization in the ai-based analysis of chest radiographs with federated learning. Scientific Reports 13(1), 22576 (2023)
[21] Chowdari, D.K., Radhasyam, N., Pal, A., Paul, A.: Federated learning using multi-institutional data for generalizable chest x-ray diagnosis. In: Medical Imaging 2023: Computer-Aided Diagnosis, vol. 12465, pp. 81–87 (2023). SPIE
[22] Makkar, A., Santosh, K.: Securefed: federated learning empowered medical imaging technique to analyze lung abnormalities in chest x-rays. International Journal of Machine Learning and Cybernetics 14(8), 2659–2670 (2023)
[23] Sohan, M.F., Basalamah, A.: A systematic review on federated learning in medical image analysis. IEEE Access 11, 28628–28644 (2023)
[24] Jindal, V., Kukreja, V., Singh, D.P., Vats, S., Mehta, S.: Pushing diagnostic frontiers: Federated learning cnn for diverse lung disease. In: 2023 4th IEEE Global Conference for Advancement in Technology (GCAT), pp. 1–6 (2023). IEEE
[25] Liu, C., Luo, Y., Xu, Y., Du, B.: Fedarc: Federated learning for multi-center tuberculosis chest x-ray diagnosis with adaptive regularizing contrastive representation. In: 2023 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pp. 2125–2128 (2023). IEEE
[26] Zubair Nafis, K.F., Maisha Tarannum, S., Haque Charu, K., Kabir Mehedi, M.H., Alim Rasel, A.: Comparative analysis of federated learning and centralized approach for detecting
different lung diseases. In: Proceedings of the 2023 9th International Conference on Computer Technology Applications, pp. 60–66 (2023)
[27] Ullah, F., Srivastava, G., Xiao, H., Ullah, S., Lin, J.C.-W., Zhao, Y.: A scalable federated learning approach for collaborative smart healthcare systems with intermittent clients using medical imaging. IEEE Journal of Biomedical and Health Informatics (2023)
[28] Malik, H., Naeem, A., Naqvi, R.A., Loh, W.-K.: Dmfl net: A federated learning-based framework for the classification of covid-19 from multiple chest diseases using x-rays. Sensors 23(2), 743 (2023)
[29] Nazir, S., Kaleem, M.: Federated learning for medical image analysis with deep neural networks. Diagnostics 13(9), 1532 (2023)
[30] Chen, J., Pan, R.: Medical report generation based on multimodal federated learning. Computerized Medical Imaging and Graphics 113, 102342 (2024)
[31] Khan, S., Palanisamy, L.S., Raghuraman, M.: Federated learning-a novel approach for predicting diseases in unprecented areas. In: 2024 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), pp. 058–063 (2024). IEEE
[32] Babar, F.F., Jamil, F., Alsboui, T., Babar, F.F., Ahmad, S., Alkanhel, R.I.: Federated active learning with transfer learning: Empowering edge intelligence for enhanced lung cancer diagnosis. In: 2024 International Wireless Communications and Mobile Computing (IWCMC), pp. 1333–1338 (2024). IEEE
[33] Guan, H., Yap, P.-T., Bozoki, A., Liu, M.: Federated learning for medical image analysis: A survey. Pattern Recognition, 110424 (2024)
[34] Tabassum, N., Ahmed, M., Shorna, N.J., Sowad, U.R., Mejbah, M., Haque, H.: Depression detection through smartphone sensing: A federated learning approach. International Journal of Interactive Mobile Technologies 17(1) (2023)
[35] Ahmed, M., Muntakim, A., Tabassum, N., Rahim, M.A., Shah, F.M.: On-device Federated Learning in Smartphones for Detecting Depression from Reddit Posts (2025).
https://arxiv.org/abs/2410.13709
[36] Alruwais, N., Elhessewi, G.M.S., Saeed, M.K., Alshammeri, M., Alrusaini, O., Alkharashi, A., Al Zanin, S., Said, Y.: Federated learning and gwo-enabled consumer-centric healthcare internet of things for pancreatic tumour. Alexandria Engineering Journal 122, 344–354 (2025)
[37] Kumari, A., Patadia, D., Tanwar, S., Pau, G., Alqahtani, F., Tolba, A.: Cnn-based cancer prediction scheme using 5g-assisted federated learning for healthcare industry 5.0. Alexandria Engineering Journal 126, 131–142 (2025)
[38] Raju, V.N., Saravanakumar, R., Yusuf, N., Pradhan, R., Hamdi, H., Saravanan, K.A., Rao, V.S., Askar, M.A.: Enhancing emotion prediction using deep learning and distributed federated systems with smote oversampling technique. Alexandria Engineering Journal 108, 498–508 (2024)
[39] Onaizah, A.N., Xia, Y., Hussain, K.: Fl-sicnn: An improved brain tumor diagnosis using siamese convolutional neural network in a peer-to-peer federated learning approach. Alexandria Engineering Journal 114, 1–11 (2025)
[40] Alalwan, N., Alwadain, A., Alzahrani, A.I., Al-Bayatti, A.H., Abozeid, A., Abd El-Aziz, R.M.: Advancements in brain tumor identification: Integrating synthetic gans with federated-cnns in medical imaging analysis. Alexandria Engineering Journal 105, 105–119 (2024)
[41] Alanazi, S., Alanazi, R.: Enhancing diabetic retinopathy detection through federated convolutional neural networks: Exploring different stages of progression. Alexandria Engineering Journal 120, 215–228 (2025)
[42] Rathee, G., Garg, S., Kaddoum, G., Alzanin, S.M., Hassan, M.M.: An improved and decentralized/distributed healthcare framework for disabled people through ai models. Alexandria Engineering Journal 125, 441–448 (2025)
[43] Naz, S., Phan, K., Chen, Y.-P.P.: Centralized and federated
learning for covid-19 detection with chest x-ray images: Implementations and analysis. IEEE Transactions on Emerging Topics in Computational Intelligence (2024)
[44] Muthalakshmi, M., Jeyapal, K., Vinoth, M., Dinesh, P., Murugan, N.S., Sheela, K.S.: Federated learning for secure and privacy-preserving medical image analysis in decentralized healthcare systems. In: 2024 5th International Conference on Electronics and Sustainable Communication Systems (ICESC), pp. 1442–1447 (2024). IEEE
[45] Adhikari, R., Settles, C.: Secure federated learning approaches to diagnosing covid-19. arXiv preprint arXiv:2401.12438 (2024)
[46] Ram, S., Kiran, Y.N., Bhute, A., Khare, T.: Federated learning for accurate labeling of chest x-ray scans. In: 2024 36th Conference of Open Innovations Association (FRUCT), pp. 649–654 (2024). IEEE
[47] Chen, Z., Song, Y., Chang, T.-H., Wan, X.: Generating radiology reports via memory-driven transformer. arXiv preprint arXiv:2010.16056 (2020)
[48] Dosovitskiy, A.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020)
[49] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI Blog 1(8), 9 (2019)
[50] Blanchard, P., El Mhamdi, E.M., Guerraoui, R., Stainer, J.: Machine learning with adversaries: Byzantine tolerant gradient descent. Advances in Neural Information Processing Systems 30 (2017)
[51] Islam, M.R., Hossain, M.Z., Ahmed, M., Samu, M.S.S.: Vision-Language Models for Automated Chest X-ray Interpretation: Leveraging ViT and GPT-2 (2025). https://arxiv.org/abs/2501.12356
[52] Bannur, S., Bouzid, K., Castro, D.C., Schwaighofer, A., Thieme, A., Bond-Taylor, S., Ilse, M., Pérez-García, F., Salvatelli, V., Sharma, H., et al.: Maira-2: Grounded radiology report generation.
arXiv preprint arXiv:2406.04449 (2024)
[53] Nicolson, A., Dowling, J., Anderson, D., Koopman, B.: Longitudinal data and a semantic similarity reward for chest x-ray report generation. Informatics in Medicine Unlocked 50, 101585 (2024)
[54] Nicolson, A., Liu, J., Dowling, J., Nguyen, A., Koopman, B.: e-health csiro at rrg24: entropy-augmented self-critical sequence training for radiology report generation. arXiv preprint arXiv:2408.03500 (2024)
[55] Magalhães, G.V., Santos, R.L.d.S., Vogado, L.H., Paiva, A.C., Santos Neto, P.d.A.: Xrayswingen: Automatic medical reporting for x-ray exams with multimodal model. Heliyon 10(7) (2024)
[56] Jin, H., Che, H., Lin, Y., Chen, H.: Promptmrg: Diagnosis-driven prompts for medical report generation. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, pp. 2607–2615 (2024)
[57] Liu, G., Hsu, T.-M.H., McDermott, M., Boag, W., Weng, W.-H., Szolovits, P., Ghassemi, M.: Clinically accurate chest x-ray report generation. In: Machine Learning for Healthcare Conference, pp. 249–269 (2019). PMLR
|
https://arxiv.org/abs/2505.21715v1
|
arXiv:2505.21717v1 [cs.LG] 27 May 2025

Scaling Up Liquid-Resistance Liquid-Capacitance Networks for Efficient Sequence Modeling

Mónika Farsang¹, Ramin Hasani²³, Radu Grosu¹

Abstract

We present LrcSSM, a nonlinear recurrent model that processes long sequences as fast as today's linear state-space layers. By forcing the state-transition matrix to be diagonal and learned at every step, the full sequence can be solved in parallel with a single prefix-scan, giving O(TD) time and memory and only O(log T) sequential depth, for input-sequence length T and state dimension D. Moreover, LrcSSM offers a formal gradient-stability guarantee that other input-varying systems such as Liquid-S4 and Mamba do not provide. Lastly, for network depth L, as the forward and backward passes cost Θ(TDL) FLOPs, with its low sequential depth and parameter count Θ(DL), the model follows the compute-optimal scaling-law regime (β ≈ 0.42) recently observed for Mamba, outperforming quadratic-attention Transformers at equal compute while avoiding the memory overhead of FFT-based long convolutions. We show that on a series of long-range forecasting tasks, LrcSSM outperforms LRU, S5, and Mamba.

1 Introduction

With the advent of linear state-space models (LSSMs), more and more architectures have emerged, with increasingly better accuracy and efficiency. While LSSMs can be efficiently parallelized, for example with the aid of the parallel-scan operator, this is considerably more difficult for traditional, nonlinear state-space models (NSSMs). This led to a decreasing interest in NSSMs, although these should arguably capture input correlations in a more refined way through their state.

Fortunately, recent work has shown how to apply the parallel-scan operator to NSSMs, by linearizing them in every time step, an idea implemented in the DEER framework [24].
Unfortunately, the state-transition matrix (the Jacobian of the NSSM) was not diagonal, which precluded scaling it up to very long sequences. Subsequent work, however, succeeded in scaling up NSSMs by simply taking the diagonal of the Jacobian matrix and stabilizing the DEER updates with trust regions. The authors called this method ELK (evaluating Levenberg-Marquardt via Kalman) [7].

In this paper, we propose an alternative approach to scaling up NSSMs. Instead of disregarding the non-diagonal elements of the NSSM Jacobian, which might contain important information about the multistep interactions among neurons along feedback loops, we learn an NSSM whose state-transition matrix is constrained to be diagonal and whose entries depend on both the current state and the current input. As for LSSMs, our main intuition is that the intricate loops of the NSSM state-transition (neural-connectivity) matrix can be well summarized by its complex eigenvalues. After all, the synaptic parameters define constant matrices that can themselves be diagonalized.

To test our idea, we modified and scaled up LRCs (liquid-resistance, liquid-capacitance neural networks), a bio-inspired NSSM [4] that considerably increased the accuracy of LTCs (liquid time-constant networks) [13] while also decreasing their convergence time, by capturing saturation effects in biological neurons and accounting for the state-and-input-dependent nature of their capacitance.

Preprint. ¹Technische Universität Wien (TU Wien), ²MIT CSAIL, ³Liquid AI. Corresponding author: monika.farsang@tuwien.ac.at

Figure 1: Liquid-Resistance Liquid-Capacitance NSSM (LrcSSM) architecture. The input sequence of length T and input dimension p is first passed through an input encoder, followed by a normalization layer. The core component is a non-linear, state-and-input-dependent LRC with hidden dimension D and sequence length T. This NSSM is computed by a parallelizable iterative linearization method. The final state values are then processed by an MLP, with a skip connection added to preserve information flow. The LrcSSM block can be stacked and repeated an arbitrary number of times (we use 2, 4, and 6 layers in our experiments). A post-normalization layer is applied before the output is passed to the decoder, which produces the final output.

Most importantly, we introduce an inherent diagonalization to LRCs, forcing the system's Jacobian to be diagonal. This modification enables an exact ELK update rather than an approximation.

Our experimental results on the Heartbeat, SelfRegulationSCP1, SelfRegulationSCP2, EthanolConcentration, MotorImagery, and EigenWorms long-sequence benchmarks show that LrcSSMs are either on par with or outperform state-of-the-art LSSMs such as NRDE, NCDE, Log-NCDE, LRU, S5, S6, Mamba, LinOSS-IMEX, and LinOSS-IM. The EthanolConcentration benchmark in particular seems to have the most intricate input correlations, a fact that warrants future examination.

In summary, our main contributions in this paper are the following:

• To the best of our knowledge, we are the first to show how to scale up a bio-inspired RNN to a competitive NSSM on long sequences and compare it to state-of-the-art LSSMs.

• We discuss in detail how to scale LRCs to LrcSSMs, which have a diagonal nonlinear state-and-input-dependent state-transition matrix and, inherently, a diagonal Jacobian matrix.
• We demonstrate that LrcSSMs can capture long-horizon tasks in a very competitive fashion on a set of standard benchmarks used to assess the accuracy and efficiency of LSSMs.

• We show that LrcSSMs consistently outperform many of the state-of-the-art LSSMs, including LRU, S5, S6, and Mamba, especially on the EthanolConcentration benchmark.

2 Background

Here we introduce the necessary background for understanding LrcSSMs: first, the bio-inspired nonlinear liquid networks LTCs, STCs, and LRCs, known for their dynamic expressivity; second, the parallelization techniques enabling efficient training of traditionally sequential NSSMs.

2.1 Bio-inspired Liquid Neural Networks

Electrical Equivalent Circuits (EECs) are simplified models defining the dynamic behavior of the membrane potential (MP) of a postsynaptic neuron as a function of the MPs of its presynaptic neurons and of external input signals [17, 36]. In ML, EECs with chemical synapses are termed liquid time-constant networks (LTCs) [23, 13]. For a neuron i with m presynaptic neurons of MPs x and n inputs of value u, the forget conductance f_i(x,u) and update conductance z_i(x,u) are defined as:

f_i(x,u) = Σ_{j=1}^{m+n} g^max_ij σ(a_ij y_j + b_ij) + g^leak_i,    (1)

z_i(x,u) = Σ_{j=1}^{m+n} k^max_ij σ(a_ij y_j + b_ij) + g^leak_i,    (2)

where y = [x, u] concatenates the MPs (states) of all neurons and the inputs. In Equation (1), g^max_ij represents the maximum synaptic channel conductance, a_ij and b_ij parameterize the sigmoidal activation governing channel openness, and g^leak_i is the leaking conductance. In Equation (2), k^max_ij = g^max_ij e^rev_ij / e^leak_i, where e^rev_ij is the synaptic reversal potential (equilibrium membrane potential) and e^leak_i is the leaking potential. Since g^max_ij ≥ 0, the sign of k^max_ij depends on e^rev_ij / e^leak_i.

LTC-Equation (4) states that the rate of change of x_i of neuron i is the sum of its forget current −f_i x_i and its update current z_i e^leak_i. LTCs ignore saturation aspects, which were introduced in saturated LTCs (STCs) in [3]. As f_i(x,u) is positive, it is saturated with a sigmoid, and as z_i(x,u) is either positive or negative, it is saturated with a tanh. Saturation is captured in STC-Equation (5). Finally, both LTCs and STCs assume that the membrane capacitance is constant and, for simplicity, equal to 1 in Equations (4)-(5), as the capacitance can be learned jointly with the other parameters.

However, this assumption does not hold in biological neurons. In reality, the capacitance has a nonlinear dependence on the MPs of the presynaptic neurons and on the external input, as both may cause the neuron to deform [16, 33, 21]. This behavior can be modeled by the following elastance:

σ(ϵ_i(x,u)) = σ(Σ_{j=1}^{m+n} w_ij y_j + v_j),    (3)

where y = [x, u], as before. LRCs incorporate this biological behavior of neurons by introducing the elastance (which is the reciprocal of the capacitance) as a multiplicative term in LRC-Equation (6).

LTC:  ẋ_i = −f_i(x,u) x_i + z_i(x,u) e^leak_i    (4)

STC:  ẋ_i = −σ(f_i(x,u)) x_i + τ(z_i(x,u)) e^leak_i    (5)

LRC:  ẋ_i = (−σ(f_i(x,u)) x_i + τ(z_i(x,u)) e^leak_i) σ(ϵ_i(x,u))    (6)

The time constant of LRCs is 1/(RC) = σ(f_i(x,u)) σ(ϵ_i(x,u)), which factors into a liquid resistance R = 1/σ(f_i(x,u)) and a liquid capacitance C = 1/σ(ϵ_i(x,u)). While the resistive liquidity is the core of both LTCs and STCs, the capacitive liquidity acts as an additional control of the time constant.
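To make the dynamics concrete, here is a minimal NumPy sketch of an LRC cell (our own illustration with hypothetical parameter names, not the authors' implementation). It evaluates the conductances of Equations (1)-(3) and the LRC right-hand side of Equation (6), advanced with a simple explicit Euler step:

```python
import numpy as np

def sig(v):
    # logistic sigmoid
    return 1.0 / (1.0 + np.exp(-v))

def lrc_rhs(x, u, p):
    """Right-hand side of LRC-Equation (6) for m neurons and n inputs.
    p is a dict of synaptic parameters; all names here are illustrative."""
    y = np.concatenate([x, u])                            # y = [x, u], length m+n
    pre = p["a"] * y + p["b"]                             # per-synapse pre-activations, shape (m, m+n)
    f = (p["g_max"] * sig(pre)).sum(axis=1) + p["g_leak"] # forget conductance, Eq. (1)
    z = (p["k_max"] * sig(pre)).sum(axis=1) + p["g_leak"] # update conductance, Eq. (2)
    eps = p["w"] @ y + p["v"]                             # elastance pre-activation, Eq. (3)
    # Eq. (6): saturated forget/update currents, modulated by the elastance
    return (-sig(f) * x + np.tanh(z) * p["e_leak"]) * sig(eps)

def euler_step(x, u, p, dt=0.1):
    # explicit Euler integration of the LRC ODE
    return x + dt * lrc_rhs(x, u, p)

# toy instantiation: m = 3 neurons, n = 2 inputs, random parameters
rng = np.random.default_rng(0)
m, n = 3, 2
p = {"a": rng.normal(size=(m, m + n)), "b": rng.normal(size=(m, m + n)),
     "g_max": rng.uniform(0, 1, size=(m, m + n)), "k_max": rng.normal(size=(m, m + n)),
     "g_leak": np.full(m, 0.1), "w": rng.normal(size=(m, m + n)),
     "v": rng.normal(size=m), "e_leak": np.full(m, 1.0)}
x = np.zeros(m)
for u in rng.normal(size=(20, n)):  # roll out 20 steps sequentially
    x = euler_step(x, u, p)
```

Because σ and tanh saturate, each derivative component is bounded by |x_i| + 1, which keeps the explicit Euler rollout well behaved for moderate step sizes.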
The states x of LRCs at time t can be computed using the explicit Euler integration scheme as:

LRC:  x_t = x_{t−1} + Δt ẋ_{t−1}    (7)

2.2 Parallelization Techniques

The DEER method [24] formulates next-state computation in NSSMs as a fixed-point problem and solves it using a parallel version of Newton's method. At each iteration step, DEER linearizes the NSSM. This approximation is widely effective across many domains and often yields accurate estimates and fast convergence. The main limitation of DEER is its use of a square Jacobian, which does not scale up to long sequences when included in the parallel scan. The second limitation is its numerical instability, which arises from the nature of Newton's method. In particular, the undamped version lacks global convergence guarantees and often diverges in practice [37, 7].

As an improvement, [7] introduces Quasi-DEER, which scales DEER by using only the diagonal of the Jacobian. This is shown to achieve convergence comparable to Newton's method while using less memory and running faster. Nevertheless, Quasi-DEER still suffers from limited stability. To stabilize its convergence, the same work leverages a connection between the Levenberg-Marquardt algorithm and Kalman smoothing in its ELK (Evaluating Levenberg-Marquardt with Kalman) algorithm [7]. This stabilization of the Newton iteration, which constrains the step size within a trust region, prevents large and numerically unstable updates. As a result, updates are computed using a parallel Kalman smoother, with a running time that is logarithmic in the length of the sequence. Algorithm 1 below presents this method [7].

Algorithm 1 ELK
1:  procedure ELK(f, s0, init_guess, tol, method, quasi)
2:    diff ← ∞
3:    states ← init_guess
4:    while diff > tol do
5:      shifted_states ← [s0, states[:−1]]
6:      fs ← f(shifted_states)
7:      Js ← GetJacobians(f, shifted_states)
8:      Js ← Diag(Js)
9:      bs ← fs − Js · shifted_states
10:     new_states ← ParallelKalmanFilter(Js, bs, states, s0)
11:     diff ← ∥states − new_states∥∞
12:     states ← new_states
13:   end while
14:   return states
15: end procedure

3 Scaling Up Non-linear LRCs

A scalable DEER or ELK approximation first computes the dense Jacobian of the NSSM, as shown in Line 7 of Algorithm 1, and then extracts its diagonal, as shown in Line 8. This results in a quasi-approximation of the original DEER technique, called Quasi-DEER and Quasi-ELK [7].

Our Parallelization. Instead of following this approach, we directly modify the underlying nonlinear LRC of Equation (6) such that its Jacobian is diagonal by the formulation itself. The main idea of this modification is that the state-connectivity submatrices a^x, w^x, g^{max,x}, and k^{max,x} of the state-and-input-connectivity matrices a, w, g^max, and k^max are constant parameter matrices that are themselves diagonalizable. Consequently, all cross terms are zeroed out in the LRC through diagonalization. Accordingly, we learn the complex diagonal matrices directly instead. As a result, our algorithm is no longer a quasi-approximation, as we do not explicitly remove non-diagonal entries. Instead, we learn their contribution to the dynamics within the complex eigenvalues of the diagonal. Consequently, Line 8, Js ← Diag(Js), of Algorithm 1 is no longer needed, and the update computations become more efficient. In this way, we retain the best of both approaches: a more precise, more stable, and more scalable parallelization technique.
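To illustrate the mechanics of iterating a linearized recurrence with a diagonal Jacobian, here is a toy NumPy sketch under our own assumptions (not the paper's implementation): a Hillis-Steele-style prefix scan stands in for the parallel Kalman smoother, no trust-region damping is used, and the NSSM is a hypothetical elementwise tanh map whose Jacobian is diagonal by construction:

```python
import numpy as np

def step(s_prev, u_t):
    # Toy elementwise NSSM: the map acts elementwise on s_prev,
    # so its Jacobian with respect to s_prev is diagonal by construction.
    return np.tanh(0.9 * s_prev + u_t)

def jac_diag(s_prev, u_t):
    # d/ds tanh(0.9 s + u) = 0.9 (1 - tanh^2(0.9 s + u))
    return 0.9 * (1.0 - np.tanh(0.9 * s_prev + u_t) ** 2)

def scan_affine(A, B):
    """Inclusive prefix scan composing the affine maps s -> A[t]*s + B[t],
    Hillis-Steele style: O(T*D) work per level, O(log T) levels."""
    T, D = A.shape
    shift = 1
    while shift < T:
        A_prev = np.vstack([np.ones((shift, D)), A[:-shift]])   # identity map padding
        B_prev = np.vstack([np.zeros((shift, D)), B[:-shift]])
        A, B = A * A_prev, A * B_prev + B   # the old A is used on both sides
        shift *= 2
    return A, B

def parallel_solve(us, s0, iters=100, tol=1e-10):
    """Diagonal Newton (DEER-style) iteration, cf. Algorithm 1 without Line 8."""
    T, D = us.shape
    states = np.zeros((T, D))                 # initial guess
    for _ in range(iters):
        shifted = np.vstack([s0[None, :], states[:-1]])
        fs = step(shifted, us)                # evaluate f at every step at once
        Js = jac_diag(shifted, us)            # diagonal Jacobians, no Diag() needed
        bs = fs - Js * shifted
        A, B = scan_affine(Js, bs)            # solve s_t = Js[t]*s_{t-1} + bs[t]
        new_states = A * s0 + B
        if np.max(np.abs(new_states - states)) < tol:
            return new_states
        states = new_states
    return states

# agreement with the plain sequential rollout
rng = np.random.default_rng(1)
us, s0 = rng.normal(size=(64, 4)), np.zeros(4)
par = parallel_solve(us, s0)
seq, s = [], s0
for u in us:
    s = step(s, u)
    seq.append(s)
```

A useful property of this iteration: after k sweeps, the first k states are already exact (the linearization point for step t equals the true state once the prefix has converged), so convergence is guaranteed within T iterations for any such diagonal system.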
3.1 Proposed Model

In order to achieve a diagonal Jacobian for the LRCs by default, we first modify Equations (1)-(3) by splitting their summation terms into a state-dependent and an input-dependent group, respectively. For the former, we keep only the self-loop synaptic parameters and zero out all cross-state synaptic parameters in the associated matrices. For the latter, we keep the influence of all external inputs u through their cross-input synaptic parameters, as this part is zeroed out in the Jacobian anyway. To highlight the separation of the terms, we add an extra superscript x to the learnable parameters in the state-dependent part and a superscript u to the parameters in the input-dependent part. This separation results in Equations (8)-(10).

As a consequence, instead of keeping cross-synaptic activations, where each individual synapse between neurons j and i has its own g^max_ij, b_ij, and k^max_ij as in Equations (1) and (2), we now keep only the self-loop neural activations, where the synaptic parameters from the same neuron are equal. Note that instead of the ij indices, we have only the j index in Equations (8) and (9). We denote the modified equations of the LRCs with an asterisk. This gives us the following equations for the f*_i(x_i,u), z*_i(x_i,u), and ϵ*_i(x_i,u) terms:

f*_i(x_i,u) = g^{max,x}_i σ(a^x_i x_i + b^x_i) [x_i state-dependent] + g^{max,u}_i σ(Σ_{j=1}^{n} a^u_ij u_j + b^u_j) [u input-dependent] + g^leak_i    (8)

z*_i(x_i,u) = k^{max,x}_i σ(a^x_i x_i + b^x_i) [x_i state-dependent] + k^{max,u}_i σ(Σ_{j=1}^{n} a^u_ij u_j + b^u_j) [u input-dependent] + g^leak_i    (9)

ϵ*_i(x_i,u) = w^x_i x_i + v^x_i [x_i state-dependent] + Σ_{j=1}^{n} w^u_ij u_j + v^u_j [u input-dependent]    (10)

LrcSSM:  ẋ_i = −σ(f*_i(x_i,u)) σ(ϵ*_i(x_i,u)) x_i + τ(z*_i(x_i,u)) σ(ϵ*_i(x_i,u)) e^leak_i    (11)

For the final form of our proposed LRC model, Equation (11) can be written in the form of SSMs by taking the vectorial form of the states x of size m and the input vector u of size n:

LrcSSM:  ẋ = A(x,u) x + b(x,u),    (12)

where

A(x,u) = diag( −σ(f*_1(x_1,u)) σ(ϵ*_1(x_1,u)), …, −σ(f*_i(x_i,u)) σ(ϵ*_i(x_i,u)), …, −σ(f*_m(x_m,u)) σ(ϵ*_m(x_m,u)) ),    (13)

and

b(x,u) = ( τ(z*_1(x_1,u)) σ(ϵ*_1(x_1,u)) e^leak_1, …, τ(z*_i(x_i,u)) σ(ϵ*_i(x_i,u)) e^leak_i, …, τ(z*_m(x_m,u)) σ(ϵ*_m(x_m,u)) e^leak_m ).    (14)

The diagonal A(x,u) form of Equation (13) and the reduced version of b(x,u) in Equation (14) result in a diagonal Jacobian matrix, which makes the parallelizable iterative state updates exact and efficient; that is, they are no longer a quasi-approximation of the Jacobian.

3.2 Comparison to Linear State Space Models

State-of-the-art time-invariant LSSMs typically take the following general form:

ẋ = A x + B u    (15)
y = C x + D u    (16)

The main differences between LrcSSM and time-invariant LSSMs are the following:

• There is no non-linearity in the recurrent and input updates (A and B, respectively) of time-invariant LSSMs, which allows them to be parallelized over the time dimension. Here, we investigate a non-linear recurrent update and a non-linear input update too.

• There are two key aspects of the matrices that state-of-the-art LSSMs usually follow: (1) First, matrix A is generally time-invariant (constant), although recent work has introduced an input-dependent variant A(u) [14, 9]. In our model, however, this matrix is both state- and input-dependent, A(x,u).
Second, instead of using a traditional B matrix that is simply multiplied by the input u, we adopt the form b(x,u), allowing the input to have a more embedded influence on the state update. (2) Modern LSSMs typically require a special initialization, such as a diagonal-plus-low-rank parameterization of the transition matrix via the higher-order polynomial projection (HiPPO) matrix [10], or diagonal-only transition matrices with a specific parameterization [11, 28]. In our case, we calculate the entries of A and b from the biology-grounded Equations (13) and (14).

3.3 Comparison to Liquid-Resistance Liquid-Capacitance Networks (LRCs)

In summary, our LrcSSM approach differs from LRCs [4] in the following ways:

• Learning in LRCs, as in traditional NSSMs (or nonlinear RNN models), is inherently sequential. In contrast, we aim for an efficient, parallelizable version in LrcSSMs.

• We modified the entries of A and b to depend only on the self states, rather than on all other states, while still allowing them to depend on the full input. This change yields diagonal Jacobians, exact solutions, and improved efficiency in the update computations.

• While the original LRCs use a single computation layer (a single computation block), we have restructured the LRC architecture into a block-wise design in LrcSSMs, similar to LSSM-style models such as LRU and S5. This design is illustrated in Figure 1.

3.4 Theoretical Insights

The LrcSSM architecture enjoys three important theoretical properties. First, by forcing the state-transition matrix of LrcSSMs to be diagonal and learned at every time step, the full sequence can be solved in parallel with a single prefix-scan, giving O(TD) time and memory and only O(log T) sequential depth, where T is the input-sequence length and D is the state dimension. Second, LrcSSMs offer a formal gradient-stability guarantee that other input-varying systems such as Liquid-S4 and Mamba do not provide. Lastly, because the LrcSSM forward and backward passes cost Θ(TDL) FLOPs, where L is the network depth of the LrcSSM architecture, with its low sequential depth and parameter count Θ(DL), the model follows the compute-optimal scaling-law regime (β ≈ 0.42) recently observed for Mamba, outperforming quadratic-attention Transformers at equal compute while avoiding the memory overhead of FFT-based long convolutions. The full proofs of these properties are given in Appendix A due to space limitations. In particular, we provide all details about LrcSSM stability in A.1 and scalability in A.2.

4 Related Work

Linear Structural State-Space Models (LSSMs). Since the introduction of S4 [12], LSSMs have rapidly evolved in sequence modeling. S4 used the FFT to efficiently solve linear recurrences and inspired several variants, including S5 [34], which replaced the FFT with parallel scans. Liquid-S4 [14] introduced input-dependent state-transition matrices, moving beyond the static structure, but relied on the FFT. Recent work, such as S6 and Mamba [9], adapted the concept of input dependency and continued to push LSSMs toward more efficient computation with a hardware-aware parallel algorithm.

Parallelizing Non-linear State Space Models (NSSMs). While traditional nonlinear RNNs have been favored for their memory efficiency, their major limitation lies in the lack of parallelizability over the sequence length. This has led to the development of parallelizable alternatives, such as [25, 28, 27, 8].
One notable example is the Linear Recurrent Unit (LRU) [28], which uses complex diagonal state-transition matrices with a stable exponential parameterization, achieving performance comparable to LSSMs. While the LRU work argues that linear recurrence is sufficient, in this work we show that incorporating non-linearity in the transition dynamics can offer significant advantages. Importantly, these approaches achieve parallelism through entirely new architectures, without addressing how to parallelize existing NSSMs. Techniques like DEER [24] and ELK [7] fill this gap by enabling parallel training and inference for arbitrary non-linear recurrent models.

Table 1: Test accuracy comparison of different models across relatively short-horizon datasets (<1,500). The performance of the models marked by † is reported from [32]. The same hyperparameter-tuning protocol and dataset splitting over the same 5 seeds were used.

                 Heartbeat     SelfRegulationSCP1   SelfRegulationSCP2
Sequence length  405           896                  1,152
Input size       61            6                    7
#Classes         2             2                    2
NRDE†            73.9 ± 2.6    76.7 ± 5.6           48.1 ± 11.4
NCDE†            68.1 ± 5.8    80.0 ± 2.0           49.1 ± 6.2
Log-NCDE†        74.2 ± 2.0    82.1 ± 1.4           54.0 ± 2.6
LRU†             78.1 ± 7.6    84.5 ± 4.6           47.4 ± 4.0
S5†              73.9 ± 3.1    87.1 ± 2.1           55.1 ± 3.3
Mamba†           76.2 ± 3.8    80.7 ± 1.4           48.2 ± 3.9
S6†              76.5 ± 8.3    82.8 ± 2.7           49.9 ± 9.4
LinOSS-IMEX†     75.5 ± 4.3    87.5 ± 4.0           58.9 ± 8.1
LinOSS-IM†       75.8 ± 3.7    87.8 ± 2.6           58.2 ± 6.9
LrcSSM (Ours)    72.7 ± 5.7    85.2 ± 2.1           53.9 ± 7.2

Positioning LrcSSM in Recent Advances. Our LrcSSM aligns with the structured state-space duality (SSD) framework introduced by [2], as its main focus is on designing an RNN that behaves almost like an SSM (diagonalizable and parallelizable). In addition, recent work on parallel state-free inference [29] can also be combined with LrcSSMs to further enhance their efficiency.

5 Experiments

We compare LrcSSMs against nine models representing the state of the art on a range of long-sequence tasks: Neural Controlled Differential Equations (NCDE) [20], Neural Rough Differential Equations (NRDE) [26], Log-NCDE [35], the Linear Recurrent Unit (LRU) [28], S5 [34], Mamba [9], S6 [9], and Linear Oscillatory State-Space models with implicit-explicit time integration (LinOSS-IMEX) [32] and with implicit time integration (LinOSS-IM) [32].

We follow the same classification evaluation benchmark proposed in [35] and subsequently used by [32]. These tasks are part of the UEA Multivariate Time Series Classification Archive (UEA-MTSCA). All of these datasets consist of biologically or physiologically grounded time-series data, derived from real-world measurements of dynamic systems, which can be human, animal, or chemical. They capture continuous temporal signals such as neural activity, bodily movements, or spectroscopic readings, making them well suited for benchmarking models that need to learn complex temporal dependencies.

We followed the exact same hyperparameter-tuning protocol, using a grid search over the validation accuracy. More details on these experiments are given in Appendix B.2. After fixing the hyperparameters, we compare the average test-set accuracy over five different random splits of the data. As we report the results of the other models from Rusch et al. [32], we also used the exact same seeds for the dataset splitting. When presenting the results, we highlight the top three performing models.

Short-Horizon Sequence Tasks. In Table 1, we report results on datasets with sequence lengths shorter than 1,500 elements.
These datasets include the Heartbeat dataset [6], which contains heart-sound recordings, as well as SelfRegulationSCP1 and SelfRegulationSCP2 [1], which include data on cortical potentials. We report the test accuracy results. We found that our LrcSSM model performed around the average on these tasks, and we suspect that they lack interesting input correlations.

Long-Horizon Sequence Tasks. Next, we focus on the tasks that require learning long-range interactions, especially those with a sequence length above 1,500, up to 18,000. These include the EthanolConcentration dataset [22], which contains spectroscopic recordings of solutions, the MotorImagery dataset, which captures data from the motor cortex, and the EigenWorms dataset [38] of postural dynamics of the worm C. elegans.

Table 2: Test accuracy comparison of different models across long-horizon datasets (>1,500). The performance of the models marked by † is reported from [32]. Results are averaged over 5 seeds.

                 EthanolConcentration   MotorImagery   EigenWorms
Sequence length  1,751                  3,000          17,984
Input size       2                      63             6
#Classes         4                      2              5
NRDE†            31.4 ± 4.5             54.0 ± 7.8     77.2 ± 7.1
NCDE†            22.0 ± 1.0             51.6 ± 6.2     62.2 ± 2.2
Log-NCDE†        35.9 ± 6.1             57.2 ± 5.6     82.8 ± 2.7
LRU†             23.8 ± 2.8             51.9 ± 8.6     85.0 ± 6.2
S5†              25.6 ± 3.5             53.0 ± 3.9     83.9 ± 4.1
Mamba†           27.9 ± 4.5             47.7 ± 4.5     70.9 ± 15.8
S6†              26.4 ± 6.4             51.3 ± 4.7     85.0 ± 16.1
LinOSS-IMEX†     29.9 ± 1.0             57.9 ± 5.3     80.0 ± 2.7
LinOSS-IM†       29.9 ± 0.6             60.0 ± 7.5     95.0 ± 4.4
LrcSSM (Ours)    36.9 ± 5.3             58.6 ± 3.1     90.6 ± 1.4

As shown in Table 2, our LrcSSM model outperforms all other state-of-the-art methods on the EthanolConcentration task and achieves second-best performance on the MotorImagery and EigenWorms datasets. We believe that EthanolConcentration contains interesting input correlations, which LrcSSMs can capture through their state dependence.

Average Performance Across Datasets. In Table 3, we report the average accuracy across all six datasets considered from the UEA-MTSCA archive. LrcSSM achieved an accuracy of 66.3%, placing it at the forefront alongside the LinOSS-IM model and outperforming all other state-of-the-art models, including LRU, S5, S6, Mamba, and LinOSS-IMEX. The implicit integration scheme of the LinOSS-IM model seems to have played an important role, and we plan to investigate a similar integration scheme for LrcSSM, too. Our current scheme is just a simple explicit Euler.

Table 3: Average test accuracy (%) across all datasets. As before, the performance of the models marked by † is reported from [32]. Results are averaged over 5 seeds.

                 Average Test Accuracy
NRDE†            60.2 ± 17.1
NCDE†            55.5 ± 18.2
Log-NCDE†        64.4 ± 16.9
LRU†             61.8 ± 22.6
S5†              63.1 ± 21.2
Mamba†           58.6 ± 18.8
S6†              62.0 ± 21.2
LinOSS-IMEX†     65.0 ± 19.0
LinOSS-IM†       67.8 ± 21.6
LrcSSM (Ours)    66.3 ± 18.6

6 Discussion

Competitive Long-Horizon Performance. Our experimental evaluations show that the LrcSSM model performs moderately well on short-horizon datasets, as seen in Table 1, while demonstrating highly competitive performance on datasets with long input sequences, as shown in Table 2. On those long-sequence tasks, LrcSSMs outperform LRU, Mamba, and S6, and also achieve better average performance across all datasets, as presented in Table 3.

The only model that LrcSSMs generally do not outperform is the LinOSS-IM model, except on the EthanolConcentration dataset (where we outperform both the LinOSS-IMEX and LinOSS-IM versions) and on MotorImagery and EigenWorms (in the case of LinOSS-IMEX). This may be attributed to the fact that LinOSS is based on forced linear second-order ODEs, whereas LrcSSMs are built upon LRCs, which are nonlinear first-order ODEs.
Another possible reason lies in the integration technique: while we were able to outperform the implicit-explicit (IMEX) integration scheme, we did not surpass the fully implicit one (IM) in average test accuracy. This suggests that more sophisticated integration schemes for LrcSSMs (which currently use explicit Euler) may be worth investigating.

Biological Inspirations in Sequence Modeling. We find it particularly interesting that the LinOSS model also exhibits biological relevance, as it models cortical dynamics through harmonic oscillations. In contrast, our approach models information transmission through chemical synapses, which is a different biological phenomenon. The strong performance of both approaches, despite being grounded in different aspects of neuroscience, highlights the significant potential of biologically inspired models as a foundation for future research in sequence modeling.

Efficient Sequence Modeling with Diagonalized Jacobians. In this paper, we focused on the biologically inspired non-linear LRC model and demonstrated how it can be made more efficient for long-sequence modeling by redesigning its underlying state-transition matrix A and input-transition vector b such that the resulting Jacobian is a diagonal matrix for the state-update iterations. This matrix can then be used directly in the parallelizable ELK method, which gives an exact ELK update rather than an approximation. We believe this approach can also be applied to many other non-linear RNNs of interest.

Limitations. As pointed out in Section A.2, this parallelized version holds good promise for efficient non-linear RNNs compared to sequential computation costs; linear SSMs have the same costs. However, we also have to take into account that LRCs solved by ELK need multiple Newton steps to converge, which linear SSMs do not require. The number of iterations depends on the convergence of the state updates, which stops once the difference between consecutive state updates falls below a defined threshold (see Line 4 of Algorithm 1).

7 Conclusion

In this work, we revisited the potential of nonlinear RNNs in the era of efficient, scalable LSSMs. While LSSMs have seen remarkable success due to their parallelizable structure and computational efficiency, nonlinear RNNs have largely been sidelined due to their inherently sequential nature. However, recent advances, particularly the DEER and ELK methods, have opened the door to parallelizing nonlinear RNNs, thus challenging their long-standing scalability limitation.

Building on these developments, we introduced the liquid-resistance liquid-capacitance nonlinear state-space model (LrcSSM), a novel NSSM architecture that combines the expressive power of bio-inspired nonlinear RNNs with the scalability of modern LSSMs. By adapting the ELK method and carefully redesigning the internal structure of LRCs, we enable efficient parallel computation by inherently learning diagonal Jacobian matrices, while still preserving the dynamic richness of nonlinear state updates in biological neurons. Our design allows for exact ELK updates, rather than relying on quasi-approximations.

Our experiments demonstrate that LrcSSM not only matches but often exceeds the performance of leading LSSMs such as LRU, S5, S6, and Mamba, particularly on long-horizon sequence-modeling tasks. These results suggest that nonlinear-RNN-based SSMs are not only feasible but can also be competitive, offering a promising direction for future research in sequence modeling.
In summary, this work bridges the gap between the expressive flexibility of nonlinear dynamics and the computational advantages of parallelism within the LrcSSM architecture, opening new pathways for scalable, biologically inspired architectures in modern deep learning.

References

[1] Niels Birbaumer, Nimr Ghanayim, Thilo Hinterberger, Iver Iversen, Boris Kotchoubey, Andrea Kübler, Juri Perelmouter, Edward Taub, and Herta Flor. A spelling device for the paralysed. Nature, 398(6725):297–298, 1999.

[2] Tri Dao and Albert Gu. Transformers are SSMs: Generalized models and efficient algorithms through structured state space duality. arXiv preprint arXiv:2405.21060, 2024.

[3] Mónika Farsang, Mathias Lechner, David Lung, Ramin Hasani, Daniela Rus, and Radu Grosu. Learning with chemical versus electrical synapses: does it make a difference? In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 15106–15112. IEEE, 2024.

[4] Mónika Farsang, Sophie A. Neubauer, and Radu Grosu. Liquid resistance liquid capacitance networks. In The First Workshop on NeuroAI @ NeurIPS 2024, 2024.

[5] Daniel Y. Fu, Hermann Kumbong, Eric Nguyen, and Christopher Ré. FlashFFTConv: Efficient convolutions for long sequences with tensor cores. arXiv preprint arXiv:2311.05908, 2023.

[6] Ary L. Goldberger, Luis A. N. Amaral, Leon Glass, Jeffrey M. Hausdorff, Plamen Ch. Ivanov, Roger G. Mark, Joseph E. Mietus, George B. Moody, Chung-Kang Peng, and H. Eugene Stanley. PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals. Circulation, 101(23):e215–e220, 2000.

[7] Xavier Gonzalez, Andrew Warrington, Jimmy Smith, and Scott Linderman. Towards scalable and stable parallelization of nonlinear RNNs. Advances in Neural Information Processing Systems, 37:5817–5849, 2024.

[8] Riccardo Grazzi, Julien Siems, Arber Zela, Jörg K. H. Franke, Frank Hutter, and Massimiliano Pontil. Unlocking state-tracking in linear RNNs through negative eigenvalues, 2025.

[9] Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2023.

[10] Albert Gu, Tri Dao, Stefano Ermon, Atri Rudra, and Christopher Ré. HiPPO: Recurrent memory with optimal polynomial projections. Advances in Neural Information Processing Systems, 33:1474–1487, 2020.

[11] Albert Gu, Karan Goel, Ankit Gupta, and Christopher Ré. On the parameterization and initialization of diagonal state space models. Advances in Neural Information Processing Systems, 35:35971–35983, 2022.

[12] Albert Gu, Karan Goel, and Christopher Ré. Efficiently modeling long sequences with structured state spaces, 2022.

[13] Ramin Hasani, Mathias Lechner, Alexander Amini, Daniela Rus, and Radu Grosu. Liquid time-constant networks. In Proc. of the AAAI Conference on Artificial Intelligence, volume 35(9), pages 7657–7666, 2021.

[14] Ramin Hasani, Mathias Lechner, Tsun-Hsuan Wang, Makram Chahine, Alexander Amini, and Daniela Rus. Liquid structural state-space models. arXiv preprint arXiv:2209.12951, 2022.

[15] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556, 2022.

[16] B. Howell, L. E. Medina, and W. M. Grill. Effects of frequency-dependent membrane capacitance on neural excitability. Neural Engineering, 12(5):56015, October 2015.
[17] Eric R Kandel, James H Schwartz, Thomas M Jessell, Steven Siegelbaum, A James Hudspeth, Sarah Mack, et al. Principles of neural science , volume 4. McGraw-hill New York, 2000. 10 [18] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling Laws for Neural Language Models, 2020. Version Number: 1. [19] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361 , 2020. [20] Patrick Kidger, James Morrill, James Foster, and Terry Lyons. Neural controlled differential equations for irregular time series. Advances in neural information processing systems , 33:6696–6707, 2020. [21] Jitender Kumar, Patrick Das Gupta, and Subhendu Ghosh. Effects of nonlinear membrane capacitance in the hodgkin-huxley model of action potential on the spike train patterns of a single neuron. Europhysics Letters , 142(6):67002, jun 2023. [22] James Large, E Kate Kemsley, Nikolaus Wellner, Ian Goodall, and Anthony Bagnall. Detecting forged alcohol non-invasively through vibrational spectroscopy and machine learning. In Pacific-Asia Conference on Knowledge Discovery and Data Mining , pages 298–309. Springer, 2018. [23] Mathias Lechner, Ramin Hasani, Alexander Amini, Thomas A Henzinger, Daniela Rus, and Radu Grosu. Neural circuit policies enabling auditable autonomy. Nature Machine Intelli- gence , 2(10):642–652, 2020. doi:10.1038/s42256-020-00237-3. [24] Yi Heng Lim, Qi Zhu, Joshua Selfridge, and Muhammad Firmansyah Kasim. Parallelizing non- linear sequential models over the sequence length. In The Twelfth International Conference on
Learning Representations, 2024.
[25] Eric Martin and Chris Cundy. Parallelizing linear recurrent neural nets over sequence length. arXiv preprint arXiv:1709.04057, 2017.
[26] James Morrill, Cristopher Salvi, Patrick Kidger, and James Foster. Neural rough differential equations for long time series. In International Conference on Machine Learning, pages 7829–7838. PMLR, 2021.
[27] Sajad Movahedi, Felix Sarnthein, Nicola Muca Cirone, and Antonio Orvieto. Fixed-point RNNs: From diagonal to dense in a few iterations. arXiv preprint arXiv:2503.10799, 2025.
[28] Antonio Orvieto, Samuel L. Smith, Albert Gu, Anushan Fernando, Caglar Gulcehre, Razvan Pascanu, and Soham De. Resurrecting recurrent neural networks for long sequences. In International Conference on Machine Learning, pages 26670–26698. PMLR, 2023.
[29] Rom N. Parnichkun, Stefano Massaroli, Alessandro Moro, Jimmy T. H. Smith, Ramin Hasani, Mathias Lechner, Qi An, Christopher Ré, Hajime Asama, Stefano Ermon, et al. State-free inference of state-space models: The transfer function approach. arXiv preprint arXiv:2405.06147, 2024.
[30] Michael Poli, Stefano Massaroli, Eric Nguyen, Daniel Y. Fu, Tri Dao, Stephen Baccus, Yoshua Bengio, Stefano Ermon, and Christopher Ré. Hyena hierarchy: Towards larger convolutional language models. In International Conference on Machine Learning, pages 28043–28078. PMLR, 2023.
[31] Michael Poli, Armin W. Thomas, Eric Nguyen, Pragaash Ponnusamy, Björn Deiseroth, Kristian Kersting, Taiji Suzuki, Brian Hie, Stefano Ermon, Christopher Ré, et al. Mechanistic design and scaling of hybrid architectures. arXiv preprint arXiv:2403.17844, 2024.
[32] T. Konstantin Rusch and Daniela Rus. Oscillatory state-space models. arXiv preprint arXiv:2410.03943, 2024.
[33] Daniel Severin, Sofia Shirley, Alfredo Kirkwood, and Jorge Golowasch. Daily and cell type-specific membrane capacitance changes in mouse cortical neurons. bioRxiv, 2022.
[34] Jimmy T. H. Smith, Andrew Warrington, and Scott W. Linderman. Simplified state space layers for sequence modeling. In ICLR, 2023.
[35] Benjamin Walker, Andrew Donald McLeod, Tiexin Qin, Yichuan Cheng, Haoliang Li, and Terry Lyons. Log neural controlled differential equations: The Lie brackets make a difference. In Forty-first International Conference on Machine Learning, 2024.
[36] Stephen R. Wicks, Chris J. Roehrig, and Catharine H. Rankin. A dynamic network simulation of the nematode tap withdrawal circuit: predictions concerning synaptic function using behavioral criteria. Journal of Neuroscience, 16(12):4017–4031, 1996.
[37] Stephen J. Wright. Numerical optimization, 2006.
[38] Eviatar Yemini, Tadas Jucikas, Laura J. Grundy, André E. X. Brown, and William R. Schafer. A database of Caenorhabditis elegans behavioral phenotypes. Nature Methods, 10(9):877–879, 2013.

Technical Appendices and Supplementary Material

A Theoretical Insights

A.1 Stability

We analyze a single hidden dimension: because every recurrence is diagonal, all dimensions behave independently and identically. Recall the discrete-time update:

$x_{t+1} = \lambda_t x_t + b_t, \qquad 0 < \lambda_t \le \rho < 1, \qquad (17)$

where $\rho$ is a user-chosen radius (typically 0.9–0.99) enforced by either the tanh-clamp or the negative-softplus-exponential parametrisation.

One step is contractive.

Lemma 1 ($\rho$-contraction). For any $x, y \in \mathbb{R}^D$ we have $\|x_{t+1} - y_{t+1}\|_2 = \|\lambda_t (x - y)\|_2 \le \rho \|x - y\|_2$.

Proof. $\lambda_t$ is diagonal with all entries $\le \rho$, hence its operator (spectral) norm is $\le \rho$; multiplying by it can only shrink Euclidean distances.

Forward states stay bounded. Iterating Lemma 1 $t$ times yields

$\|x_t\|_2 \le \rho^t \|x_0\|_2 + \frac{1 - \rho^t}{1 - \rho} B, \qquad B := \max_{s \le t} \|b_s\|_2. \qquad (18)$

Therefore the hidden state can never blow up, irrespective of sequence length.

Back-propagated gradients never explode.
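Both claims are easy to check numerically. The following NumPy sketch (dimensions, length, and radius are illustrative assumptions, not the paper's code) simulates a diagonal recurrence and verifies the forward bound (18) together with the geometric decay of back-propagated gradients stated next:

```python
import numpy as np

rng = np.random.default_rng(0)
D, T, rho = 8, 200, 0.95  # assumed width, sequence length, spectral radius

# Diagonal transitions with entries in (0, rho], plus arbitrary bounded inputs.
lam = rho * rng.uniform(0.1, 1.0, size=(T, D))
b = rng.normal(size=(T, D))

# Forward pass: x_{t+1} = lambda_t * x_t + b_t
x = rng.normal(size=D)
x0_norm = np.linalg.norm(x)
for t in range(T):
    x = lam[t] * x + b[t]

# Eq. (18): ||x_T|| <= rho^T ||x_0|| + (1 - rho^T) / (1 - rho) * max_s ||b_s||
B = max(np.linalg.norm(bt) for bt in b)
bound = rho**T * x0_norm + (1 - rho**T) / (1 - rho) * B
assert np.linalg.norm(x) <= bound

# Backward pass: each step multiplies the gradient by diag(lambda_t),
# so its norm shrinks by at least a factor rho per step.
g = rng.normal(size=D)
gT_norm = np.linalg.norm(g)
for k, t in enumerate(reversed(range(T)), start=1):
    g = lam[t] * g
    assert np.linalg.norm(g) <= rho**k * gT_norm + 1e-12
```

Neither assertion ever fires, regardless of the random seed, because the bounds hold deterministically whenever every entry of $\lambda_t$ lies in $(0, \rho]$.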
Theorem 1 (Gradient stability). Let a loss $L$ depend only on the final state $x_T$. Then for any $0 \le \tau < T$,

$\|\nabla_{x_\tau} L\|_2 \le \rho^{\,T-\tau} \|\nabla_{x_T} L\|_2,$

hence the Jacobian product norm is $\le 1$ and cannot explode.

Proof. The Jacobian of one step is $J_t = \lambda_t$, so $\|J_t\|_2 \le \rho$. Back-propagation multiplies $T - \tau$ such Jacobians: $\nabla_{x_\tau} L = J_\tau^\top \cdots J_{T-1}^\top \nabla_{x_T} L$. Sub-multiplicativity of the spectral norm gives the result.

Controlled vanishing. Because $\rho$ is tunable, gradients decay at most geometrically: choosing $\rho \approx 0.99$ keeps long-range signals alive; smaller values add regularisation.

Deep stacks. For $L$ stacked layers with radii $\rho_\ell$ the bound becomes $\|\nabla^{(\text{layer } L)}_{x_\tau} L\|_2 \le \big(\prod_{\ell=1}^{L} \rho_\ell^{\,T-\tau}\big) \|\nabla_{x_T} L\|_2$. Keeping every $\rho_\ell$ close to 1 therefore preserves stability in depth.

How other models handle forward/gradient stability. S4/S6 keep $\mathrm{Re}(A) < 0$ and collapse the recurrence into a single convolution kernel; in this setting, forward activations are bounded and back-propagated Jacobians never appear. Mamba re-introduces recurrence via a gate $\sigma(\cdot) \in [0,1]$; if that gate is clipped, the same $\rho$-Lipschitz bound as ours holds, but no proof is given. LinOSS discretizes a non-negative diagonal ODE with a symplectic IMEX step, proving both state and gradient norms stay $\le 1$. Liquid-S4 adds an input term $B u_t$ without clamping the spectrum, so stability relies on empirical eigenvalue clipping. Thus, among truly recurrent models, only LrcSSM (and LinOSS under its specific integrator) enjoy a formal guarantee that both forward trajectories and full Jacobian chains remain inside the unit ball. LrcSSM has a stronger guarantee than Liquid-S4 or Mamba and, unlike S4-type convolutions, can propagate gradients through actual recurrent steps while remaining provably safe from explosion. This makes training deep, long-sequence stacks straightforward: set $\rho \approx 1$, forget about gradient clipping, and tune $\rho$ itself as a single parameter to trade off memory length versus regularization.

Table 4: Per-layer asymptotic complexity (sequence length $T$, width $D$).
Architecture         F/B FLOPs        Memory        Parallel depth
Mamba [9]            O(TD)            O(D)          O(log T)
LinOSS [32]          O(TD)            O(D)          O(log T)
Liquid-S4 [14]       O(TD)            O(D)          O(T)
S4/Hyena [12, 30]    O(T log T · D)   O(T)          O(log T)
Transformer [19]     O(T²D)           O(T² + TD)    O(1)
LrcSSM (ours)        O(TD)            O(D)          O(log T)

A.2 Scalability

Let $T$ denote the input sequence length and $D$ the state dimension. Sequential methods inherently cannot be parallelized, requiring $O(D)$ memory and $O(TD^2)$ computational work. The DEER method [24] is parallel, but it comes with a major drawback: it requires $O(TD^2)$ memory and $O(TD^3)$ compute. The ELK technique introduced in [7] achieves fast and stable parallelization by incorporating diagonal Jacobian computation, reducing both memory and compute significantly, to $O(TD)$. Our approach achieves the same complexity, $O(TD)$ for both memory and computation, thanks to its inherently diagonal Jacobians.

We now assess formal complexity and compute-optimal scaling laws for LrcSSM.

Compute, throughput, and memory. Let $\text{FLOPs} \approx c_f B T D L$ be the dominant training cost, where $B$ is the batch size, $T$ the sequence length, $D$ the hidden width, $L$ the network depth, and $c_f$ an architecture-specific constant (lower for SSMs, higher for Transformers). The single-GPU throughput (tokens s$^{-1}$ GPU$^{-1}$) is $\text{throughput} \approx \frac{TB}{\text{wall-clock time}}$. The memory footprint is the sum of peak activations and model parameters.

Scaling-law [18, 15]. A scaling law is any
asymptotic or empirical relation of the form

$\text{Loss}(C) = A\,C^{-\beta} + E, \qquad C = \text{compute (FLOPs)}, \qquad \beta > 0, \qquad (19)$

or a closed-form complexity identity such as $\text{FLOPs} \propto TD$. Recent large-scale studies like [31] show that $\beta$ depends on the operator's per-token cost: dense attention, $\beta \approx 0.48$–$0.50$ [18]; linear-time RNN/SSM (Mamba, Hyena), $\beta \approx 0.42$–$0.45$ in 70M–7B runs [9, 30]; hybrid (recurrence + sparse attention), $\beta$ can reach 0.41 (MAD pipeline) [31]. Table 4 summarizes the per-layer cost of the main long-sequence architectures in terms of forward/backward FLOPs, peak activation memory, and parallel depth over the sequence length $T$. Because LrcSSM shares the same $O(TD)$ compute curve as Mamba but with a smaller constant $c_f$ (no low-rank gate, no FFT), we expect it to sit at, or slightly below, the 0.42–0.45 band. The claim is compatible with existing data: Mamba-3B matches a 6B Transformer at the same FLOPs [9], and LinOSS shows 2× lower NLL than Mamba on 50k-token sequences at equal compute [32]. Hence, $\beta \approx 0.42$ is a defensible prior for LrcSSM; a hybrid LrcSSM + local-attention block could plausibly move $\beta$ toward 0.41.

Sequence-length scaling. For single-GPU throughput $K(T)$, LrcSSM inherits the near-perfect linear behaviour $K(T) \propto T$ of the scan primitive, with practical speed-ups obtainable through width-$w$ windowing and double buffering that saturate L2 cache bandwidth. Liquid-S4 degrades linearly in latency because it remains sequential, whereas FFT-based S4/Hyena layers incur $O(T \log T)$ compute and become memory-bound beyond $T \approx 64$k tokens. Hence, for contexts up to 64k, LrcSSM (and Mamba) are the compute winners; at larger $T$ the FFT models may overtake them in raw FLOPs but pay a significant activation cost.

Sequence-length scaling. Let $K(T)$ be the wall-clock time for a single forward pass of length $T$ on one GPU.
LrcSSM: $K(T) \approx c\,T/\text{SMs}$ (linear), but it can drop to $\approx c\,T/(\text{SMs} \cdot w)$ with a width-$w$ scan and double-buffering (near-perfect L2-cache reuse), where SMs is the number of CUDA Streaming Multiprocessors on the GPU and $c$ a hardware- and kernel-dependent constant (e.g., time per token per SM). Mamba [9]: same asymptotic, but its fused CUDA kernel shows roughly 5× higher throughput than a Transformer on 4k tokens; on shorter sequences the constant cost of its scan kernel dominates. S4/Hyena (FFT): $O(T \log T)$; cross-over with linear methods occurs around $T \approx 8$–16k on A100s, and FlashFFTConv reduces the constant 4×–8× [5]. Liquid-S4 [14]: remains sequential; throughput degrades linearly without remedy. Thus, for $T \le 64$k, LrcSSM and Mamba are compute winners; beyond 64k, Hyena/S4 win in pure FLOPs but can be memory-bound.

B Experimental Details

B.1 Training Setup

We used A100 GPUs with 80 GB of memory. Training time ranged from under 1 hour up to 2–3 hours per data split, depending on the dataset and model. Early stopping was used to prevent overfitting, which varies the training time.

B.2 Hyperparameters

We performed a grid search over the following set of hyperparameters:

Table 5: Hyperparameter grid. Same values as in [35, 32].

Parameter name               Values
learning rate                $10^{-5}$, $10^{-4}$, $10^{-3}$
hidden dimension             16, 64, 128
state-space dimension        16, 64, 256
number of blocks (#blocks)   2, 4, 6

Using the grid shown in Table 5, we selected the best configuration for each dataset based on the average validation
accuracy across five data splits. The splits were generated using the same random seeds as in [32] to ensure full comparability. The final hyperparameters used to report the test accuracies are listed in Table 6.

Table 6: Hyperparameters used for LrcSSM per dataset.

Dataset                 lr          hidden dim.   state-space dim.   #blocks
Heartbeat               $10^{-3}$   64            64                 4
SelfRegulationSCP1      $10^{-3}$   64            16                 2
SelfRegulationSCP2      $10^{-3}$   128           64                 2
EthanolConcentration    $10^{-4}$   128           16                 2
MotorImagery            $10^{-4}$   16            16                 4
EigenWorms              $10^{-4}$   16            16                 4

We found that, in general, LrcSSMs benefit from higher learning rates and are not particularly sensitive to the hidden dimension of the encoded input. However, a lower state-space dimension and fewer layers tend to be advantageous.

B.3 Dataset sources

The datasets can be downloaded from the following links:
• Short-horizon tasks:
  – Heartbeat
  – SelfRegulationSCP1
  – SelfRegulationSCP2
• Long-horizon tasks:
  – EthanolConcentration
  – MotorImagery
  – EigenWorms

B.4 Additional Remarks on the Datasets

We used the datasets as they were publicly available (i.e., without an additional time dimension). However, this aspect is treated as an additional hyperparameter in the models reported by [32]. We hypothesize that incorporating this dimension could help our model learn even better dependencies, which might be worth investigating in the future.

B.5 Additional Remarks on the Model Design

Integration Scheme. As pointed out in Section 6, we used the explicit Euler integration scheme. This is a simple and straightforward solution, but it might be worth investigating more sophisticated and computationally expensive integration methods. In fact, we conducted some preliminary experiments with a hybrid explicit-implicit solver but did not observe any performance improvement, although we did not explore it across the full hyperparameter grid.

Integration Timestep. For the integration step, we used a timestep of $\Delta t = 1$ in all our experiments.
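For concreteness, one explicit-Euler step of $\dot{x} = A(x,u)\,x + b(x,u)$ with $\Delta t = 1$ can be sketched as below. The parametrisations of $A$ and $b$ here are hypothetical stand-ins for the paper's learned maps, chosen only so that the discrete multiplier $1 + \Delta t \cdot A$ stays inside $(0, 1)$, consistent with the contraction argument of Appendix A.1:

```python
import numpy as np

def euler_step(x, u, A, b, dt=1.0):
    """One explicit-Euler step of x' = A(x, u) * x + b(x, u), with diagonal A
    represented as a vector of its diagonal entries."""
    return x + dt * (A(x, u) * x + b(x, u))

# Hypothetical stand-ins for the learned state/input-dependent maps (not the
# paper's parametrisation).
rng = np.random.default_rng(0)
D = 4
W_a = rng.normal(size=(D, D))
W_b = rng.normal(size=(D, D))
A = lambda x, u: -1.0 / (1.0 + np.exp(-(W_a @ u)))  # entries in (-1, 0): 1 + dt*A in (0, 1)
b = lambda x, u: np.tanh(W_b @ u + x)               # bounded input term

x = np.zeros(D)
for _ in range(50):  # roll the recurrence forward over a random input sequence
    x = euler_step(x, rng.normal(size=D), A, b)
assert np.all(np.isfinite(x))
```

Because the per-step multiplier lies in $(0, 1)$ and the input term is bounded, the rolled-out state stays bounded, mirroring the forward-stability bound (18).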
Since [32] investigated different $\Delta t$ values across the datasets and observed no substantial gain in performance, they also continued with $\Delta t = 1$ for all their experiments. However, it might still be worth investigating this in our case as well.

C Ablation Studies

Input- and State-dependency. We conducted ablation studies to assess the importance of incorporating state-dependency in the state-transition matrix $A$ and input-transition vector $b$. Given the extensive hyperparameter search required, we fixed the architecture to 6 layers of SSM blocks, each with 64 states, an input encoding dimension of 64, and a learning rate of $10^{-4}$. As shown in Table 7, the average results indicate that learning both input- and state-dependent transitions yields better performance. We also suggest that future work could treat these dependencies as tunable hyperparameters, as some datasets may benefit from both forms of dependency, while others may perform well with input-dependency alone.

Table 7: Experimentation with different input- and state-dependent matrices. Here, we use a fixed configuration with input encoding of 64 and 6 blocks of SSMs with 64 units. We found that excluding state dependency from $A$, and then from $b$ too, downgrades performance on average.

                        LrcSSM       LrcSSM       LrcSSM (default)
$A$ dependence          $A(x,u)$     $A(u)$       $A(u)$
$b$ dependence          $b(x,u)$     $b(x,u)$     $b(u)$
Heartbeat               75.0±2.6     75.0±1.8     73.0±2.7
SelfRegulationSCP1      84.8±2.8     85.0±2.9     83.1±1.4
SelfRegulationSCP2      55.4±7.7     49.6±5.5     51.4±2.9
EthanolConcentration    36.1±1.1     37.6±3.9     34.2±2.9
MotorImagery            55.7±4.1     57.9±2.9     54.3±6.0
EigenWorms
                        85.6±5.4     85.0±5.5     86.7±5.4
Average                 65.4±17.9    65.0±18.0    63.8±18.7

Please note that the results reported here for LrcSSM do not match the results of the previous tables, because we used a fixed setup without hyperparameter tuning in order to focus only on the importance of state-dependency, changing the underlying $A$ and $b$ of $\dot{x} = A(x,u)\,x + b(x,u)$. As a result, the test accuracies reported here for Heartbeat and SelfRegulationSCP2 are even better.

Complex-valued State-Transition Matrix and Input-Transition Vector. We also experimented with complex-valued learnable parameters, focusing on those interacting directly with the state $x$. In particular, we experimented with the parameters $g^{\max,x}_i$ of $f^*_i(x_i, u)$ and $k^{\max,x}_i$ of $z^*_i(x_i, u)$, as defined in Eqs. (8) and (9), respectively, as well as their shared sigmoidal channel parameters $a^x_i$ and $b^x_i$. These were gradually converted to complex values, and experiments were conducted using a fixed configuration of 6 SSM blocks, each with 64 state dimensions and 64-dimensional encoded input, and a learning rate of $10^{-4}$. As shown in Table 8, we found no significant performance gains on average from using complex-valued parameters. As a result, we opted to use real-valued learnable parameters in our main experiments. Nevertheless, we also evaluated the tuned models with their complex-valued counterparts. The only notable improvement occurred on the MotorImagery dataset, where accuracy increased from 54.3±3.1 to 58.6±3.1. Substituting this result into the average accuracy reported in Table 3 would yield 65.6±18.9, which still ranks our model as the second-best overall.

Table 8: Experimentation with complex-valued parameters. Here, we use a fixed configuration with input encoding of 64 and 6 blocks of SSMs with 64 units. We found very similar average performance between real-valued and complex-valued parameters.
                        LrcSSM       LrcSSM       LrcSSM       LrcSSM
                        (default)    with         with         with
$g^{\max,x}_i$          ∈ R          ∈ C          ∈ C          ∈ C
$k^{\max,x}_i$          ∈ R          ∈ C          ∈ C          ∈ C
$a^x_i$                 ∈ R          ∈ R          ∈ C          ∈ C
$b^x_i$                 ∈ R          ∈ R          ∈ R          ∈ C
Heartbeat               75.0±2.6     74.3±5.2     73.75±3.2    73.0±4.0
SelfRegulationSCP1      84.8±2.8     82.9±2.7     83.1±4.2     84.8±2.2
SelfRegulationSCP2      55.4±7.7     50.4±4.4     53.6±3.6     58.6±3.5
EthanolConcentration    36.1±1.1     41.8±2.1     40.0±4.5     42.1±3.6
MotorImagery            55.7±4.1     53.2±2.6     53.9±3.5     52.5±4.3
EigenWorms              85.6±5.4     85.0±6.5     88.3±5.7     86.1±6.3
Average                 65.4±17.9    64.6±16.8    65.4±17.4    66.2±16.4
Responsible Data Stewardship: Generative AI and the Digital Waste Problem

Vanessa Utz
Simon Fraser University
vutz@sfu.ca

Abstract

As generative AI systems become widely adopted, they enable unprecedented levels of synthetic data creation across text, images, audio, and video modalities. While research has addressed the energy consumption of model training and inference, a critical sustainability challenge remains understudied: digital waste. This term refers to stored data that consumes resources without serving a specific (and/or immediate) purpose. This paper presents this terminology in the AI context and introduces digital waste as an ethical imperative within (generative) AI development, positioning environmental sustainability as core to responsible innovation. Drawing from established digital resource management approaches, we examine how other disciplines manage digital waste and identify transferable approaches for the AI community. We propose specific recommendations encompassing research directions, technical interventions, and cultural shifts to mitigate the environmental consequences of indefinite data storage. By expanding AI ethics beyond immediate concerns like bias and privacy to include intergenerational environmental justice, this work contributes to a more comprehensive ethical framework that considers the complete lifecycle impact of generative AI systems.

Introduction

The explosive growth of generative AI has transformed how we create and consume digital content, enabling the rapid production of synthetic data across text, images, audio, and video modalities at record scale and speed. While this technological advancement has the potential to significantly increase human productivity, it also presents significant sustainability challenges that require urgent consideration.
Current discourse on AI sustainability has primarily centered on two resource-intensive processes: (1) the computational demands during the training and fine-tuning of large models (Strubell, Ganesh & McCallum 2019), and (2) the energy required during inference as these systems generate outputs (Utz & DiPaola 2023). Researchers have quantified these impacts through carbon emissions (Lacoste et al. 2019; Luccioni & Hernandez-Garcia 2023) and proposed various strategies to reduce them (Chien et al. 2023). However, a critical dimension of generative AI's environmental footprint remains largely unexplored: the long-term consequences of storing generated data in perpetuity.

This gap in research warrants immediate investigation as generative AI adoption accelerates globally, with an estimated 500 million daily users as of 2024 (Qiang, Liu & Wang 2024). In this paper, we introduce the "digital waste" terminology to the AI community, including developers, academic researchers, and end users, and position environmental sustainability as a core ethical consideration in responsible AI development.

Traditional AI ethics frameworks have predominantly focused on issues such as bias, fairness, privacy, and transparency (Prem 2023). However, as the climate crisis we are currently facing intensifies, the environmental footprint of AI-generated data deserves equal attention as an ethical concern of global significance. By framing digital waste as an ethical challenge with intergenerational implications, we establish sustainability as a core requirement rather than an optional consideration for AI-generated content. Drawing parallels with
https://arxiv.org/abs/2505.21720v1
digital waste management approaches from other fields, we introduce specific recommendations on how to tackle digital waste and aim to catalyze new thinking on how the AI community addresses these challenges. This work contributes to the growing field of sustainable AI research by broadening the scope beyond computational efficiency to encompass the entire lifecycle of AI-generated content and its potential impact on future generations.

Understanding Digital Waste and its Material Reality

Digital waste, also referred to as data waste (Bietti & Vatanparast 2020), refers to stored data that does not serve a function but still negatively affects the environment (i.e., due to emissions or resource extraction). Commonly encountered examples of digital waste include duplicates of files (such as vacation photos that are stored on several different devices, or within several locations on a single device) or old/outdated files that are no longer needed (such as old e-mails). This phenomenon occurs across the digital ecosystem, from individual consumer devices to massive corporate data centers, creating environmental impacts at multiple scales.

The concept extends beyond old emails and vacation photographs. It includes any redundant copies of the same data, outdated versions that remain stored alongside current iterations, temporary files that never get deleted, and data that has outlived its usefulness yet continues to occupy storage space. What makes digital waste particularly insidious is its seeming immateriality: unlike physical waste that accumulates visibly, digital waste accumulates silently, its environmental footprint largely unseen by end users. While this waste accumulation might not be immediately visible, digital waste still has physical substrates which have significant environmental impacts.
Initial Hardware and Infrastructure Requirements

The physical infrastructure enabling data storage constitutes a complex global network of semiconductor chips, memory units, storage devices, and data centers. The environmental footprint of this infrastructure begins with manufacturing processes that require extensive resource extraction and refinement. Semiconductor fabrication facilities produce memory chips, solid-state drives, and processing units that form the backbone of data storage systems. Production relies heavily on precious and rare earth metals including cobalt, gold, copper, and aluminum (Hosseini, Gao & Vivas-Valencia 2025). The extraction of these materials often involves destructive mining practices that lead to habitat destruction, soil erosion, and groundwater contamination.

Furthermore, water usage in semiconductor manufacturing has now reached alarming levels. Clean rooms require ultra-pure water for cleaning silicon wafers and cooling manufacturing equipment. This consumption has more than doubled over the past decade, with recent industry assessments indicating that chip manufacturing now consumes water nearly equivalent to the daily usage of 30 million Americans (Ruberti 2024). This consumption creates significant pressure on local water resources, particularly in already water-stressed communities.

The pollution footprint, however, extends beyond production facilities. Studies have documented heavy metal contamination, including tungsten, copper, and arsenic, in waterways downstream from semiconductor manufacturing facilities (Hsu et al. 2011). These contaminants
can persist in ecosystems for decades. The production of toxic waste associated with chip manufacturing has quadrupled over the last decade, reaching 874 kilotons in 2021 (Ruberti 2024). The energy demands of hardware manufacturing are similarly substantial. A life-cycle analysis of the six leading global chip manufacturers revealed their combined annual energy consumption reached 27,768 GWh. This energy usage translated to approximately 16,369 kilotons of CO₂ equivalent emissions (a unit of measure for the global warming potential of all greenhouse gasses (GHGs)) (Ruberti 2023).

Ongoing Data Center Maintenance

The environmental impact extends far beyond the initial manufacturing phase. Once operational, data centers create ongoing environmental burdens through their continuous operation. These facilities are designed for maximum reliability, which necessitates 24/7 operation with multiple redundancy systems (Monserrate 2022). Modern data centers now consume more energy than some nations, with approximately 300 TWh required in 2021 (Alkrush et al. 2024). Cooling systems represent the largest component of this energy use. Traditional air conditioning systems typically account for over 40% of a data center's total electricity consumption, driven by the significant heat generated by servers operating at capacity (Monserrate 2022).

To reduce energy consumption, many data center operators have begun implementing water-based cooling systems. While this approach significantly reduces electricity usage, it transfers the environmental burden from energy consumption to water usage. A typical small, water-cooled data center uses around 25 million liters of water annually (Mytton 2021). Additionally, pollution also poses a significant concern. Pollution associated with data centers manifests in two forms.
(1) Noise pollution from cooling systems and backup generators affects workers and nearby communities, and (2) the regular replacement cycle for servers and storage equipment generates substantial electronic waste. Enterprise-grade storage systems (i.e., GPUs in data centers) typically operate on 5-year replacement cycles (Horizon Technology 2024), creating a continuous stream of retired equipment containing hazardous materials.

The industry has made progress in some aspects of data center sustainability. Operational efficiency improvements and renewable energy adoption have helped reduce the energy intensity of data storage. However, these advances primarily address operating emissions rather than embodied carbon from manufacturing. As highlighted by Gupta et al. (2021), hardware manufacturing and infrastructure construction continue to lag significantly in sustainability advances compared to the operation phase.

Generative AI and its Impact on Digital Waste

Generative AI represents a paradigm shift in how digital content is created and consumed, enabling the production of synthetic data at unprecedented rates. The relationship between generative AI and digital waste manifests through two interconnected mechanisms: the substantial resources consumed during content generation itself and the ongoing environmental burden of storing the resulting synthetic content indefinitely.

Resource-Intensive Generation

Creating synthetic data through generative AI systems is fundamentally resource-intensive, with computational demands varying based on the generation task and output modality. These
systems rely on complex neural network architectures that perform numerous computationally expensive operations to transform input prompts into coherent outputs. Recent research by Luccioni et al. (2024) has provided insights into the energy requirements of different generative and classification tasks. Generation tasks (such as text-to-image) consistently proved more energy- and carbon-intensive than classification tasks due to the inherently greater complexity of media creation.

The modality of the generated content significantly influences resource requirements. Image-based tasks demand substantially more computational resources than text-only processes. Text-generation operations require a median of 0.042 kWh per 1000 inferences, which is equivalent to charging two smartphones (charging a modern average smartphone takes approximately 0.022 kWh). By contrast, image-generation tasks consume a median of 1.35 kWh per 1000 inferences, equivalent to charging approximately 61 smartphones (Luccioni et al. 2024).

These energy demands intensify with newer multimodal systems. Novel video generation systems like OpenAI's Sora represent the frontier of resource intensity and are therefore understudied in their impact on energy consumption and emissions; initial work has shown that video-based generative AI systems require even more computational resources due to the iterative diffusion denoising process involved in creating moving imagery (Li, Jiang & Tiwari 2024).

Unlike traditional content creation, where human cognitive effort serves as a natural limiting factor, generative AI enables content creation at machine speed, allowing users to generate ever-increasing volumes of data in decreasing time frames.
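The smartphone equivalences quoted above follow directly from the reported figures; a quick sanity check (all numbers taken from the text, not newly measured):

```python
# Figures quoted from Luccioni et al. (2024); smartphone charge ~0.022 kWh.
kwh_per_1000_text = 0.042     # median, text generation, per 1000 inferences
kwh_per_1000_image = 1.35     # median, image generation, per 1000 inferences
kwh_per_phone_charge = 0.022  # one full charge of a modern average smartphone

phones_text = kwh_per_1000_text / kwh_per_phone_charge    # ~1.9 -> "two smartphones"
phones_image = kwh_per_1000_image / kwh_per_phone_charge  # ~61.4 -> "61 smartphones"

assert round(phones_text) == 2
assert round(phones_image) == 61
```

The image-to-text ratio (1.35 / 0.042, roughly 32×) underlines the paper's point that output modality dominates per-inference energy cost.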
Three factors specific to the user interfaces of generative AI systems further encourage large quantities of data generation. First, the iterative nature of interacting with generative systems leads users to create multiple versions while repeatedly refining prompts: a user might generate dozens of images before finding one that meets their requirements. Second, the ease of generation encourages exploratory creation without clear purpose. Third, synthetic content often lacks clear contextual metadata that might help users later determine its relevance, making it less likely to be deliberately assessed during storage cleanup. This process also increases the volume of potentially disposable content, establishing the foundation for unprecedented levels of digital waste.

Perpetual Storage Demands and Generational Debt

The environmental footprint of generative AI extends far beyond the moment of content creation, due to the accumulation of digital waste files. Once generated, synthetic content requires continued storage either locally (i.e., on home computers) or on cloud services, which puts a strain on the physical resources required to manufacture and maintain the needed hardware and infrastructure. This perpetual storage requirement creates a "long tail" of environmental impact: an extended period during which the content continues to consume resources regardless of whether it serves an active purpose.

This persistent storage represents a form of intergenerational burden. When we store data indefinitely, we commit future generations to maintaining the infrastructure needed to support that storage, including ongoing energy consumption, hardware replacement, and resource extraction. The scale
https://arxiv.org/abs/2505.21720v1
of this potential burden becomes particularly concerning when we consider the current growth trajectory of generative AI adoption. Even if individual users only generate modest amounts of content that persists in storage, the cumulative environmental burden still grows significantly (Utz & DiPaola 2023). Unlike physical waste, which eventually degrades, digital waste can theoretically persist indefinitely unless deliberately deleted.

This persistence creates a sustainability debt that compounds over time. Each generation that fails to address the accumulation of digital waste passes on a larger problem to subsequent generations, who must then devote increasingly significant resources to maintaining legacy data. This dynamic parallels other environmental challenges such as rising global temperatures, where delays in addressing root causes make ultimate solutions more difficult and costly. Despite these implications, this aspect of generative AI's environmental impact remains critically understudied.

Digital Waste Management Approaches: Learning from Adjacent Fields

While digital waste presents a significant challenge for generative AI sustainability, it is not an entirely novel phenomenon. Other disciplines have developed frameworks and methodologies that can inform our approach to addressing digital waste in the context of AI. By examining how adjacent fields conceptualize and manage digital waste, we can identify and extract translational principles.

Established Approaches to Digital Resource Management

Several fields have developed systematic approaches to managing digital resources efficiently. For example, Digital Lean Manufacturing (DLM), which evolved from traditional manufacturing practices, offers insights into addressing digital waste.

Lean Manufacturing (LM) emerged in the 1990s as a formalized set of principles derived from the Toyota Production System (Bhamu & Sangwan 2014; Ciarniene & Vienazindiene 2012).
At its core, it represents a systematic approach to identifying and eliminating waste, defined as any resource expenditure that does not create value for the end customer. As manufacturing operations have become increasingly digitalized through Industry 4.0 initiatives, Lean principles have evolved to also address digital resources (Powell & Romero 2021).

In the DLM literature, practitioners distinguish between passive digital waste (missed opportunities to utilize existing data effectively) and active digital waste (issues arising from data overabundance) (Romero et al. 2018). Passive waste includes scenarios where valuable data exists but remains unanalyzed or inaccessible. Active waste encompasses redundant data storage, excessive collection beyond what is needed, and maintaining outdated information. It is this active form of digital waste that we identify as a sustainability challenge for AI.

While digital waste has become a topic of discussion within DLM, for example the issue of storing and maintaining large quantities of generated data (Rossi et al. 2022), only a limited number of works have attempted to actually tackle the inefficiencies and waste associated with these quantities of data. The majority of the literature in this area focuses on how data can be utilized to make processes more efficient. In the DLM and Industry 4.0 literature that does aim to address digital waste, the main challenge that has been identified is the problem of attempting to transfer waste reduction principles used for physical systems (i.e., the manufacturing of a physical product) to digital environments (Yarbrough, Harris & Purdy 2022). One strategy, however, that does appear to translate from the physical to the digital paradigm is Lean Thinking 4.0 (Rossi et al. 2022). Lean Thinking 4.0 emphasizes establishing waste-conscious values and digital literacy before implementing any new technologies (Ciarniene & Vienazindiene 2012).

Information Lifecycle Management (ILM) offers another approach to digital waste management. ILM focuses on managing information through defined stages, from creation through archiving or deletion, based on its changing value over time (Short 2007; Al-Fedaghi 2008). This approach emphasizes appropriate retention periods and automated processes to transition data through storage tiers as utility diminishes. These data governance frameworks from enterprise computing provide insights on establishing organizational responsibilities for data management. They typically include data quality assessment, metadata management, access controls, and systematic disposal protocols.

Translational Principles for Generative AI

Five key principles emerge from DLM and ILM approaches that transfer effectively to generative AI contexts:

1. Value-based assessment evaluates data based on its current and potential future utility rather than accumulating it indiscriminately. Central to both Lean Manufacturing and Information Management, this principle could guide frameworks for distinguishing between AI-generated content worth preserving and content that can be discarded, assessing not just immediate utility but also long-term value relative to storage costs.

2.
Tiered storage architectures move data through different storage systems with varying performance characteristics based on access patterns and utility. Drawing from ILM practices, this could inform systems that automatically compress or archive infrequently accessed outputs, or store generation parameters rather than full outputs for content that could be regenerated when needed.

3. Systematic pruning protocols identify and remove data that no longer serves needs. Derived from Lean Manufacturing's emphasis on eliminating waste and ILM's structured approach to data retirement, this could guide the development of tools that help users identify and remove unnecessary AI-generated content, including automated identification of duplicates and content aging analysis.

4. Resource-conscious design creates systems with efficiency as a core consideration rather than an afterthought. This principle could influence generative AI interfaces that encourage thoughtful content creation rather than unlimited generation, such as interfaces that visualize environmental impact, default settings that limit unnecessary variations, and design patterns emphasizing quality over quantity.

5. Education before implementation establishes cultural foundations for responsible resource management before deploying new technologies. This principle, emphasized in Lean Thinking 4.0, recognizes that technical solutions alone cannot create sustainable practices without corresponding cultural norms, and could inform how organizations introduce generative AI tools.

These five principles provide a first foundation for developing sustainable approaches to generative AI by adapting established frameworks to digital waste management. The success of Lean
Manufacturing in transforming resource management across diverse industries suggests that its core principles, properly adapted, could contribute significantly to addressing digital waste in AI contexts.

Future Directions for Sustainable Data Practices

Having identified translatable principles from ILM and DLM, we now turn to specific recommendations targeting individual stakeholders for mitigating digital waste in generative AI. The environmental context within which we must address this sustainability challenge has acquired an even higher level of urgency. With 2024 recently confirmed as the hottest year on record since temperature record keeping began in the 1880s (Bardan 2025; NOAA 2024), we have now reached 1.5 degrees Celsius above the mid-19th-century average, a threshold established in the Paris Agreement. The main driving factor behind this increase in global temperatures is the emission of GHGs such as CO2, CO, and NO (Ramanathan & Feng 2009), and the effects of climate change are far-reaching: a 2022 report on climate change impacts and vulnerabilities (Parmesan, Morecroft & Trisurat 2022) highlights concerning threats such as longer drought and wildfire seasons, fresh water supply loss, crop loss, extinctions within both fauna and flora, the spread of wildlife diseases, and the loss of settlements and infrastructure due to extreme weather events, among others.

As generative AI adoption accelerates globally, there is a pressing need to develop comprehensive strategies that address the sustainability implications of massive synthetic data generation and storage as ethical imperatives with intergenerational consequences. The responsible development and deployment of AI systems must therefore include consideration of their complete environmental footprint. The extensive timeframe of data storage requires particular attention.
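The ILM-style lifecycle tiering discussed earlier (retention periods, with data moving through storage tiers as its utility diminishes) can be sketched as a simple policy function. The tier names and day thresholds below are illustrative assumptions, not values drawn from any standard or from this paper:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Minimal sketch of an ILM-style retention policy for AI-generated files.
# Thresholds are illustrative: compress after 30 idle days, archive after
# 180, flag as a deletion candidate after 365.

@dataclass
class StoredItem:
    name: str
    last_accessed: datetime

def retention_tier(item: StoredItem, now: datetime) -> str:
    """Map an item's idle time to a storage tier."""
    idle = now - item.last_accessed
    if idle > timedelta(days=365):
        return "delete-candidate"
    if idle > timedelta(days=180):
        return "archive"
    if idle > timedelta(days=30):
        return "compress"
    return "hot"

now = datetime(2025, 1, 1)
for item in [StoredItem("draft_logo_v12.png", datetime(2024, 12, 20)),
             StoredItem("unused_variant_03.png", datetime(2023, 11, 1))]:
    print(item.name, "->", retention_tier(item, now))
```

In a real deployment, the policy would also weigh value-based signals (provenance, reuse counts, regulatory retention requirements) rather than idle time alone.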
Current practices of indefinite data storage effectively commit future generations to maintaining digital infrastructure they had no role in creating and from which they may or may not derive benefit. This creates a form of sustainability debt that continues to grow as more data accumulates.

Academic Research Community

The academic research community must establish the knowledge foundation needed to address digital waste effectively, with several key research directions requiring immediate attention.

1. Researchers should develop comprehensive assessment methodologies for generative AI systems that incorporate long-term projections. New methodologies must account for how the environmental footprint of stored data might evolve over generations as technologies change, energy systems transform, and climate conditions shift. These assessments should model different scenarios for data accumulation and retention to provide policymakers with evidence regarding potential future trajectories. Quantitative research on multi-generational storage impacts is essential. Studies should model the cumulative environmental footprint of storing different types of AI-generated content over periods of 50-200 years, timeframes that better reflect the potential longevity of digital infrastructures. This research should account for projected changes in technologies, energy systems, and climate conditions to provide more accurate assessments of intergenerational burdens.

2. Researchers should establish normative frameworks for evaluating the justifiability of digital waste across generations, exploring questions like: What
responsibilities do we have to future generations regarding the digital infrastructure we create? What constitutes fair distribution of benefits and burdens across generations? How should we balance present convenience against future environmental costs?

3. The psychological and social dimensions of generative AI usage require investigation, particularly regarding how users conceptualize the longevity of their digital content. Other questions that need to be explored are: What factors mitigate data generation volumes among users? How do users (re-)engage with generated data? Are users aware of the environmental impacts of data storage? Understanding these patterns (and others) will enable interventions that help users internalize the long-term impacts of their storage decisions.

4. Researchers should establish intergenerationally fair data lifecycle frameworks for generative AI content, with clear protocols for determining the value of generated content over time and criteria for content retirement, archiving, or deletion. These frameworks should include mechanisms for periodic reassessment that allow future generations to participate in decisions about maintaining digital legacies rather than being bound by perpetual storage commitments made in the past.

Developers

Developers have a direct influence over the technical architecture and user experience of these technologies, positioning them to implement significant interventions that reduce digital waste at scale. These technical interventions represent an opportunity to embed sustainability values directly into system design.

1. File format optimization represents a foundational intervention. Developers should explore the creation of compression algorithms and file formats specifically designed for AI-generated content, recognizing unique characteristics that enable more efficient storage.
Text-to-image systems might store the original prompt and seed value alongside a compressed output, enabling regeneration on demand rather than storing full-resolution images indefinitely.

2. Data lifecycle management should become a core feature rather than an afterthought. Developers should implement expiration protocols that suggest or automate content pruning after specified periods of non-use, potentially with graduated approaches that compress content after initial inactivity before suggesting deletion to users. These systems should provide clear information about storage implications while encouraging more sustainable practices through interface design.

3. Storage efficiency metrics would enable more transparent environmental assessment. Developers should create standardized measurements that quantify the storage efficiency of different models, incorporating these metrics into evaluation frameworks alongside traditional performance measures. Making these metrics transparent would allow users to make informed choices based on environmental preferences.

4. System architecture should increasingly emphasize content reuse rather than regeneration. Developers should build systems capable of efficiently modifying existing content rather than generating entirely new outputs for small changes. These approaches would reduce both the computational resources required for generation and the storage burden of maintaining near-duplicate outputs, exemplifying the ethical principle of sufficiency.

5. Developers should implement "digital environmental impact" notifications within interfaces, providing users with real-time feedback about the environmental consequences of their generation and storage decisions. By making typically invisible impacts visible, such features could promote more conscious content
management decisions among users.

End-Users and Organizations

Sustainable management of generative AI content ultimately requires cultural and operational shifts among end-users and organizations. These shifts represent a move towards responsible digital stewardship, taking accountability for the environmental consequences of technological choices rather than externalizing these costs to society and future generations.

Organizations adopting generative AI should implement principles from established resource management approaches. This begins with comprehensive education to ensure employees understand both the capabilities of generative AI and the environmental implications of digital waste. Organizations should cultivate a culture that values data minimalism, pursuing efficient storage practices rather than indiscriminate accumulation. This cultural development should precede large-scale deployment, establishing sustainable usage patterns from the outset rather than attempting to reform entrenched wasteful practices later.

Formal data governance policies provide essential structure for managing AI-generated content. These policies should establish guidelines for content retention based on utility, purpose, and regulatory requirements, articulating different tiers of storage with corresponding retention periods. They should assign clear responsibility for content management across the organization, ensuring that digital waste reduction becomes an ongoing operational priority rather than a one-time initiative. From an ethical perspective, these policies represent institutional commitments to environmental responsibility.

Regular data auditing practices should complement governance frameworks. Organizations should schedule periodic reviews of stored AI-generated content to identify unnecessary files and remove them from active storage.
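One concrete, easily automated piece of such an audit is exact-duplicate detection, which maps directly onto the "automated identification of duplicates" mentioned among the translational principles. A minimal sketch, using content hashing over a synthetic stand-in for a generated-content store:

```python
import hashlib
from collections import defaultdict

# Sketch of the duplicate-identification step of a storage audit: hash each
# file's bytes and group identical blobs. The paths and contents below are
# synthetic examples, not a real API or dataset.

def find_duplicates(files: dict[str, bytes]) -> list[list[str]]:
    """Group file paths whose byte contents are identical."""
    groups = defaultdict(list)
    for path, blob in files.items():
        groups[hashlib.sha256(blob).hexdigest()].append(path)
    return [paths for paths in groups.values() if len(paths) > 1]

store = {
    "out/poster_v1.png": b"\x89PNG...A",
    "out/poster_v1_copy.png": b"\x89PNG...A",   # byte-identical re-export
    "out/poster_v2.png": b"\x89PNG...B",
}
print(find_duplicates(store))  # [['out/poster_v1.png', 'out/poster_v1_copy.png']]
```

Near-duplicate detection (e.g., perceptual hashing of image variants) is harder and would need domain expertise, which is why the audits described here combine automated tools with manual review.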
These audits might employ both automated tools and manual reviews that incorporate domain expertise regarding content value. Organizations should develop frameworks for distinguishing high-value content from content with rapidly diminishing value. By clearly identifying which content merits long-term preservation, organizations can focus storage resources on maintaining truly valuable assets while allowing less critical content to expire naturally.

For individual end-users, education about the environmental impact of digital storage represents a critical intervention. Accessible educational materials that explain these connections could promote more conscious choices about content generation and retention.

Digital Waste as an Ethical Imperative for Sustainable AI and Future Generations

As generative AI transforms content creation and consumption, we must broaden sustainability considerations beyond model training and inference to include the entire data lifecycle. Digital waste represents a significant environmental and ethical challenge with profound implications for future generations.

The intergenerational dimension requires particular emphasis. Unlike many environmental challenges that manifest immediately, the consequences of unmanaged digital waste accumulate gradually but inevitably. Each generation that fails to address this accumulation passes on an increasing burden to subsequent generations, who inherit not only the data but also the obligation to maintain supporting infrastructure. This creates environmental debt that compounds across generations, potentially becoming unmanageable if left unaddressed.

This paper contributes to AI sustainability research and AI development in several important ways. It expands responsible AI development to explicitly include environmental sustainability as a core consideration, with
particular attention to intergenerational justice. By framing digital waste as an ethical issue with multigenerational implications, we establish that responsible AI development must account for the complete environmental footprint across extended timeframes.

The paper bridges disciplines by introducing concepts from digital resource management, lifecycle assessment, and intergenerational ethics to the discourse on ethical and sustainable AI. This interdisciplinary approach demonstrates that responsible AI development does not need to reinvent sustainability frameworks but can adapt established methodologies from adjacent fields.

We provide a structured foundation for addressing digital waste through specific recommendations that explicitly consider long-term impacts. These recommendations operationalize abstract ethical principles into concrete practices that can be implemented today to prevent the accumulation of an unsustainable digital legacy.

This work expands the scope of AI ethics beyond its traditional focus on immediate harms to include considerations of intergenerational justice and global environmental impact. Just as decisions made by previous generations about industrial production created environmental challenges we face today, our digital practices will shape the world inherited by our descendants.

Addressing digital waste requires coordinated effort across the AI ecosystem. By introducing this concept as an ethical imperative rather than merely a technical challenge, we hope to inspire a more comprehensive approach to AI development that places sustainability at its core. As climate challenges intensify, the stakeholders involved in AI development have both the opportunity and responsibility to lead in sustainable digital practices that minimize environmental harm while maximizing human benefit across generations.

References

Al-Fedaghi, S. 2008. On information lifecycle management.
2008 IEEE Asia-Pacific Services Computing Conference: 335-342. doi.org/10.1109/APSCC.2008.81

Alkrush, A. A.; Salem, M. S.; Abdelrehim, O.; and Hegazi, A. A. 2024. Data centers cooling: A critical review of techniques, challenges, and energy saving solutions. International Journal of Refrigeration, 160: 246-262. doi.org/10.1016/j.ijrefrig.2024.02.007

Bardan, R. 2025. Temperatures rising: NASA confirms 2024 warmest year on record. NOAA.

Bhamu, J. and Sangwan, K. S. 2014. Lean manufacturing: Literature review and research issues. International Journal of Operations & Production Management, 34: 876-940. doi.org/10.1108/IJOPM-08-2012-0315

Bietti, E. and Vatanparast, R. 2020. Data Waste. Harvard International Law Journal, 61: 1-11.

Chien, A. A.; Lin, L.; Nguyen, H.; Rao, V.; Sharma, T.; and Wijayawardana, R. 2023. Reducing the carbon impact of generative AI inference (today and in 2035). HotCarbon '23: Proceedings of the 2nd Workshop on Sustainable Computer Systems. doi.org/10.1145/3604930.3605705

Ciarniene, R. and Vienazindiene, M. 2012. Lean manufacturing: Theory and practice. Economics and Management, 17(2): 726-732. doi.org/10.5755/j01.em.17.2.2205

Gupta, U.; Kim, Y. G.; Lee, S.; Tse, J.; Lee, H. S.; and Wei, G. 2021. Chasing carbon: The elusive environmental footprint of computing. IEEE International Symposium on High-Performance Computer Architecture (HPCA). doi.org/10.1109/HPCA51647.2021.00076

Horizon Technology. 2024. Navigating hardware refresh cycles in the data center. Data Center Decommissioning; Lake Forest, CA.

Hosseini, M.
; Gao, P.; and Vivas-Valencia, C. 2025. A social-environmental impact perspective of generative artificial intelligence. Environmental Science and Ecotechnology, 15(23): 100520. doi.org/10.1016/j.ese.2024.100520

Hsu, S.; Hsieh, H.; Chen, C.; Tseng, C.; Huang, S.; Huang, C.; Huang, Y.; Radashevsky, V.; and Lin, S. 2011. Tungsten and other heavy metal contamination in aquatic environments receiving wastewater from semiconductor manufacturing. Journal of Hazardous Materials, 189(1): 193-202. doi.org/10.1016/j.jhazmat.2011.02.020

Lacoste, A.; Luccioni, A.; Schmidt, V.; and Dandres, T. 2019. Quantifying the carbon emissions of machine learning. arXiv:1910.09700v2.

Li, B.; Jiang, Y.; and Tiwari, D. 2024. Carbon in motion: Characterizing Open-Sora on the sustainability of generative AI for video generation. ACM SIGEnergy Energy Informatics Review, 4(5): 160-165. doi.org/10.1145/3727200.3727224

Luccioni, A. S. and Hernandez-Garcia, A. 2023. Counting Carbon: A survey of factors influencing the emissions of machine learning. arXiv:2302.08476v1.

Luccioni, A. S.; Jernite, Y.; and Strubell, E. 2024. Power hungry processing: Watts driving the cost of AI deployment? ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT '24). doi.org/10.1145/3630106.3658542

Monserrate, S. G. 2022. The cloud is material: On the environmental impacts of computation and data storage. MIT Case Studies in Social and Ethical Responsibilities of Computing, no. Winter 2022 (January). doi.org/10.21428/2c646de5.031d4553

Mytton, D. 2021. Data center water consumption. npj Clean Water, 4: 11. doi.org/10.1038/s41545-021-00101-w

National Oceanic and Atmospheric Administration (NOAA). 2024. Global Climate Report for Annual 2024. NOAA.

Parmesan, C.; Morecroft, M. D.; and Trisurat, Y. 2022. Climate Change 2022: Impacts, Adaptation and Vulnerability. Research Report. GIEC.

Powell, D. J. and Romero, D. 2021.
Digital lean manufacturing: A literature review. IEEE International Conference on Industrial Engineering and Engineering Management (IEEM). doi.org/10.1109/IEEM50564.2021.9673032

Prem, E. 2023. From ethical AI frameworks to tools: a review of approaches. AI and Ethics, 3: 699-716. doi.org/10.1007/s43681-023-00258-9

Qiang, C. Z.; Liu, Y.; and Wang, H. 2024. Who on earth is using generative AI? Policy Research Working Paper; Washington, D.C.: World Bank Group.

Ramanathan, V. and Feng, Y. 2009. Air pollution, greenhouse gases and climate change - Global and regional perspectives. Atmospheric Environment, 43(1): 37-50. doi.org/10.1016/j.atmosenv.2008.09.063

Romero, D.; Gaiardelli, P.; Powell, D.; Wuest, T.; and Thuerer, M. 2018. Digital lean cyber-physical production systems: The emergence of digital lean manufacturing and the significance of digital waste. In Advances in Production Management Systems, edited by I. Moon; G. M. Lee; J. Park; D. Kiritsis; and G. von Cieminski. Cham, Switzerland: Springer, 2018. 11-20. doi.org/10.1007/978-3-319-99704-9_2

Rossi, A. H. G.; Marcondes, G. B.; Pontes, J.; Leitao, P.; Treinta, F. T. et al. 2022. Lean tools in the context of industry 4.0: literature review, implementation and trends. Sustainability, 14(19): 12295. doi.org/10.3390/su141912295

Ruberti,
M. 2023. The chip manufacturing industry: Environmental impacts and eco-efficiency analysis. Science of The Total Environment, 858: 159873. doi.org/10.1016/j.scitotenv.2022.159873

Ruberti, M. 2024. Environmental performance and trends of the world's semiconductor foundry industry. Journal of Industrial Ecology, 28(5): 1183-1197. doi.org/10.1111/jiec.13529

Short, J. 2007. Information lifecycle management concepts, practices, and values. A report for the Society for Information Management Advanced Practices Council.

Strubell, E.; Ganesh, A.; and McCallum, A. 2019. Energy and policy considerations for deep learning in NLP. arXiv:1906.02243v1.

Utz, V. and DiPaola, S. 2023. Climate implications of diffusion-based generative visual AI systems and their mass adoption. Proceedings of the 14th International Conference on Computational Creativity: 264-272.

Yarbrough, A. C.; Harris, G. A.; and Purdy, G. T. 2022. Improving the flow of data and information in manufacturing. Manufacturing Letters, 32: 1. doi.org/10.1016/j.mfglet.2022.01.001
Saddle-To-Saddle Dynamics in Deep ReLU Networks: Low-Rank Bias in the First Saddle Escape

Ioannis Bantzis∗
EPFL, Lausanne, Switzerland
ioannis.bantzis@epfl.ch

James B. Simon
UC Berkeley and Imbue, Berkeley and San Francisco, USA
james.simon@berkeley.edu

Arthur Jacot
Courant Institute, NYU, New York, USA
arthur.jacot@nyu.edu

Abstract

When a deep ReLU network is initialized with small weights, GD is at first dominated by the saddle at the origin in parameter space. We study the so-called escape directions, which play a similar role as the eigenvectors of the Hessian for strict saddles. We show that the optimal escape direction features a low-rank bias in its deeper layers: the first singular value of the ℓ-th layer weight matrix is at least ℓ^{1/4} larger than any other singular value. We also prove a number of related results about these escape directions. We argue that this result is a first step in proving Saddle-to-Saddle dynamics in deep ReLU networks, where GD visits a sequence of saddles with increasing bottleneck rank [24].

1 Introduction

In spite of the groundbreaking success of DNNs, the training dynamics of GD in these models remain ill-understood, especially when the number of hidden layers is large. A significant step in our understanding is the (relatively recent) realization that there exist multiple regimes of training in large neural networks: a kernel or lazy regime, where DNNs simply implement kernel methods w.r.t. the NTK [25, 18, 3], and an active or rich regime, which is characterized by the emergence of feature learning [15, 39, 14] and some form of sparsity, such as a low-rank bias [32, 22, 5].

The kernel regime is significantly simpler than the active one, because the dynamics can be linearized around the initialization [25, 30], and the loss is approximately quadratic/convex in the region traversed by GD [26] (it also satisfies the PL inequality [33]).
This makes it possible to prove strong convergence guarantees [18, 3] and apply generalization bounds from the kernel methods literature almost directly [6, 10]. Our understanding of the kernel regime is today essentially complete, but we also know that there are functions that DNNs cannot learn in the kernel regime but can learn in the active regime [8, 20].

Now there are arguably many active regimes, corresponding to different ways to leave the kernel regime, such as small initialization of the weights [49], large learning rates [31, 17], large-noise SGD [44, 38, 46, 47], late training with the cross-entropy loss [29, 16], or weight decay [19, 36, 24].

∗Ioannis Bantzis was supported by a scholarship for graduate studies from the Onassis Foundation.

Preprint. Under review. arXiv:2505.21722v1 [cs.LG] 27 May 2025

We will focus on the effect of initialization scale, where a phase change from kernel regime to active regime occurs as the variance of the initial weights decays towards zero. Here again we can distinguish two active regimes [34]: the mean-field regime, which lies right at the transition between regimes [15, 39, 35], and the saddle-to-saddle regime [40, 27, 37, 13] for even smaller initialization. The mean-field limit was first described for shallow networks [15, 39, 35], and has more recently been extended
to the deep case [4, 11]. A limitation of these approaches is that the limiting dynamics remain complex, especially in the deep case, where they are described by algorithms that are not only very costly in the worst case [11, 50] but also difficult to interpret and reason about. This high complexity could be explained by the fact that the mean-field limit is critical, i.e. it lies exactly at the transition between kernel and active, and therefore it must capture the complexity of both of those regimes, as well as of the whole spectrum of intermediate dynamics.

1.1 Saddle-to-Saddle dynamics

This motivates the study of the saddle-to-saddle regime for even smaller initializations. As the name suggests, this regime is characterized by GD visiting a number of saddles before reaching a global minimizer. Roughly speaking, because of the small initialization, GD starts in the vicinity of the saddle which lies at the origin in parameter space and remains stuck there for a number of steps until it finds an escape direction, leading to a sudden drop in the loss. This escape direction exhibits a form of approximate sparsity (amongst other properties) that is preserved by GD. At this point, the level of sparsity can either be enough to fit the training data, in which case the loss will drop to zero and training will stop, or, if the network is 'too sparse' to fit the data, GD will approach another saddle at a lower cost (which is locally optimal given the sparsity) before escaping along a less sparse escape direction. GD can visit a sequence of saddles before reaching a final network that fits the data while being as sparse as possible. This has been described as performing a greedy algorithm [32], where one tries to find the best data-fit with a sparsity constraint that is gradually weakened until the training data can be fitted.
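The greedy, plateau-then-drop picture described above can be reproduced in a toy setting. The sketch below is our own illustration, not a construction from the paper: it runs GD on a diagonal linear network w = u ⊙ v with tiny balanced initialization on a noiseless sparse regression problem, the classic setting [40, 41, 21] where coordinates of the effective weight vector activate one at a time and the loss drops in steps between near-saddle plateaus:

```python
import numpy as np

# Toy saddle-to-saddle illustration: diagonal linear network w = u * v,
# tiny init, full-batch GD on least squares with a sparse teacher.

rng = np.random.default_rng(0)
d, n = 5, 200
X = rng.standard_normal((n, d))
w_star = np.array([3.0, 1.0, 0.0, 0.0, 0.0])   # sparse teacher
y = X @ w_star                                  # noiseless targets

u = np.full(d, 1e-3)                            # small balanced initialization
v = np.full(d, 1e-3)
lr = 1e-3
losses = []
for step in range(60000):
    w = u * v                                   # effective weight vector
    r = X @ w - y
    g = X.T @ r / n                             # gradient w.r.t. w
    u, v = u - lr * g * v, v - lr * g * u       # chain rule through w = u * v
    losses.append(0.5 * np.mean(r**2))

print(np.round(u * v, 2))   # approximately the sparse teacher [3, 1, 0, 0, 0]
```

Plotting `losses` shows an initial plateau near the saddle at the origin, a sharp drop when the largest coordinate escapes, and a second drop for the smaller one; the zero coordinates of the teacher stay essentially unlearned. The deep ReLU setting studied in this paper is far richer, but the escape-from-plateau mechanism is the same in spirit.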
Such incremental learning dynamics were first observed in diagonal linear networks [40, 41, 21] (and, by extension, in linear CNNs, which are diagonal nets in Fourier space), before being extended to linear fully-connected networks [5, 32, 27, 45]. These result in coordinate sparsity of the learned vector for diagonal networks and rank sparsity of the learned matrix for fully-connected networks.

For nonlinear networks, the focus has been mainly on shallow networks, where a condensation effect is observed, wherein groups of neurons end up having the same activations (up to scaling). Roughly speaking, in the first escape direction, a first group of hidden neurons comes out, all with the same activation (up to scaling); at the subsequent saddles, new groups can emerge or an existing group can split in two [15] (though sometimes they may fail to split, leading to problems [12]). This condensation effect leads to a form of sparsity, since each group then behaves as a single neuron, thus reducing the effective number of neurons [34, 43]. An additional effect of this condensation is that the weight matrices have rank bounded by the number of groups of neurons, leading to a low-rank bias, though this bias
https://arxiv.org/abs/2505.21722v1
is arguably secondary. To our knowledge, the only prior theoretical analysis of saddle-to-saddle dynamics in deep nonlinear networks is for nonlinearities that are differentiable at zero (such as the arctan), in which case the dynamics around the saddle at the origin can be approximated by those of a linear network (because the pre-activations are small, the nonlinearity can be approximated by its linearization) [9]. This leads to a low-rank bias, where all layers are rank 1 in the first escape direction. Saddle-to-saddle dynamics with multiple steps have been observed empirically in deep ReLU networks trained on supervised [7] and self-supervised [42] tasks, and these empirics motivate our present theoretical study.

1.2 Bottleneck Rank Incremental learning

Surprisingly, we show a more complex rank sparsity structure in deep ReLU networks: the majority of layers are rank 1 (or approximately so), with possibly a few high-rank layers at the beginning of the network, in contrast to linear nets, shallow ReLU networks, and deep nets with differentiable nonlinearity, where all layers are rank 1 in the first escape direction. This fits into the bottleneck structure and the related bottleneck rank (BN-rank) observed in large-depth ReLU networks trained with weight decay [24, 23, 48, 28], where almost all layers share the same low rank, with a few higher-rank layers located close to the input and output layers. Additionally, in the middle low-rank layers ("inside the bottleneck"), the activations are approximately non-negative, so that the ReLU approximates the identity in these layers. The bottleneck rank $\mathrm{Rank}_{BN}(f)$ is a notion of rank for piecewise linear functions $f$, defined as the minimal integer $k^*$ such that $f$ can be decomposed as $f = h \circ g$ with intermediate dimension $k^*$ [24]. For large depths, it is optimal to represent $f$ with a bottleneck structure, where the first few high-dimensional
layers represent $g$, followed by many rank-$k^*$ layers representing the identity on the $k^*$-dimensional intermediate representation, before the last few layers represent $h$. Our results imply that the first escape direction of deep ReLU networks has BN-rank 1, because almost all layers are approximately rank 1, except for a few high-rank layers at the beginning. This is a "half" bottleneck structure, since it lacks high-dimensional layers before the outputs, but it still fits within the BN-rank theory, suggesting that the BN-rank is the correct notion of sparsity in deep ReLU networks (rather than the traditional notion of rank). We conjecture that deep ReLU networks exhibit saddle-to-saddle dynamics similar to those of e.g. linear networks, with the distinction that it is the BN-rank that gradually increases rather than the traditional rank.

1.3 Contributions

In this paper, we give a description of the saddle at the origin in deep ReLU networks and the possible escape directions that GD can take as it escapes this first saddle. As in [27], each escape direction can be assigned an escape speed, and we show that the optimal escape speed is non-decreasing in depth. We then prove that the optimal escape directions feature a low-rank bias that gets stronger in deeper layers (i.e. layers closer to the output layer). More precisely, the weight
matrix $W_\ell$ and the activations $Z^\sigma_\ell$ over the training set, for $\ell = 1, \dots, L$, are $\ell^{-\frac{1}{4}}$-approximately rank 1, in the sense that their second singular value is $O(\ell^{-\frac{1}{4}})$ times smaller than the first. Furthermore, deeper layers are also more linear, i.e. the effect of the ReLU becomes weaker. Finally, we provide an example of a simple dataset whose optimal escape direction has the following structure: the first two weight matrices are rank 2, followed by rank-1 matrices. This shows that the structure of our first result, where the first layers are not approximately rank 1 but the deeper layers are, is not an artifact of our proof technique and reflects real examples. This is interesting in contrast to previous saddle-to-saddle dynamics, where all layers are approximately rank 1 in the first escape direction.

2 Saddle at the Origin

We represent the training dataset $x_1, \dots, x_N \in \mathbb{R}^{d_{in}}$ as a $d_{in} \times N$ matrix $X$. We consider a fully-connected neural network of depth $L$ with widths $n_0 = d_{in}, n_1, \dots, n_L = d_{out}$ and ReLU nonlinearity $\sigma(x) = \max\{x, 0\}$. The $n_\ell \times N$-dimensional matrices of activations $Z^\sigma_\ell$ and preactivations $Z_\ell$ at the $\ell$-th layer are then defined recursively as
$$Z^\sigma_0 = X, \qquad Z_\ell = W_\ell Z^\sigma_{\ell-1}, \qquad Z^\sigma_\ell = \sigma(Z_\ell),$$
for the $n_\ell \times n_{\ell-1}$ weight matrix $W_\ell$, $\ell = 1, \dots, L$. The weight matrices $W_1, \dots, W_L$ are the parameters of the network, and we concatenate them into a single vector of parameters $\theta$ of dimension $P = \sum_\ell n_\ell n_{\ell-1}$. The output of the network is the preactivation of the last layer: $Y_\theta = Z_L$. We then consider a general cost $C : \mathbb{R}^{d_{out} \times N} \to \mathbb{R}$ that takes the outputs of the network $Y_\theta$ and returns the loss $\mathcal{L}(\theta) = C(Y_\theta)$. The parameters $\theta(t)$ are then trained with gradient flow (GF) on the loss $\mathcal{L}$,
$$\partial_t \theta_t = -\nabla \mathcal{L}(\theta_t),$$
starting from a random initialization $\theta_0 \sim \mathcal{N}(0, \sigma_0^2)$ for a small $\sigma_0$. One can easily check that the origin $\theta = 0$ is a critical point of the loss.
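As a concrete sanity check, the recursive definition above is straightforward to implement. The following is a minimal NumPy sketch (our illustration, not the paper's code), with arbitrary toy widths and toy matrices $X$, $G$; it also verifies numerically that the localized loss $\mathcal{L}_0(\theta) = \mathrm{Tr}[G^\top Y_\theta]$ discussed next is positively homogeneous of degree $L$ in the parameters:

```python
import numpy as np

def forward(weights, X):
    """Recursive activations: Z^s_0 = X, Z_l = W_l Z^s_{l-1}, Z^s_l = relu(Z_l);
    the network output Y_theta is the last preactivation Z_L."""
    Z = X
    for i, W in enumerate(weights):
        pre = W @ Z
        Z = pre if i == len(weights) - 1 else np.maximum(pre, 0.0)
    return Z

def localized_loss(weights, X, G):
    """L0(theta) = Tr[G^T Y_theta], computed as an entrywise sum."""
    return float(np.sum(G * forward(weights, X)))

rng = np.random.default_rng(0)
widths = [3, 5, 5, 5, 2]                   # n_0 = d_in, ..., n_L = d_out (toy values)
L = len(widths) - 1                        # depth L = 4
X = rng.standard_normal((widths[0], 7))    # N = 7 toy training inputs
G = rng.standard_normal((widths[-1], 7))   # stand-in for grad C(0)
theta = [0.5 * rng.standard_normal((widths[l + 1], widths[l])) for l in range(L)]

# L0 is positively homogeneous of degree L: L0(lambda * theta) = lambda^L L0(theta),
# since each layer (linear map and ReLU) is positively homogeneous of degree 1.
lam = 3.0
l0 = localized_loss(theta, X, G)
l0_scaled = localized_loss([lam * W for W in theta], X, G)
assert np.isclose(l0_scaled, lam ** L * l0)
```

The homogeneity check passes exactly (up to floating-point rounding) for any positive scaling factor, which is the property exploited throughout Section 2.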
Our analysis will focus on the neighborhood of this saddle. For such small parameters the outputs $Y_\theta$ are small, so we can approximate the loss as
$$\mathcal{L}(\theta) = C(0) + \mathrm{Tr}\!\left[\nabla C(0)^T Y_\theta\right] + O(\|Y_\theta\|^2), \qquad (1)$$
where $\nabla C(0)$ is an $n_L \times N$ matrix. Since we only care about the dynamics of gradient flow, the first term can be dropped. We will therefore mainly focus on the localized loss
$$\mathcal{L}_0(\theta) = \mathrm{Tr}[G^T Y_\theta],$$
writing $G = \nabla C(0)$ for simplicity. The localized loss $\mathcal{L}_0$ can be thought of as resulting from zooming into the origin: it captures the loss in the neighborhood of the origin. Note that since the ReLU is not differentiable, neither is the loss at the origin, so we cannot use the traditional strategy of approximating $\mathcal{L}_0$ by a polynomial. However, this loss has the advantage of being homogeneous of degree $L$, i.e. $\mathcal{L}_0(\lambda\theta) = \lambda^L \mathcal{L}_0(\theta)$ for $\lambda > 0$, which will be key in our analysis.

2.1 Gradient Flow on Homogeneous Losses

On a homogeneous loss, the GF dynamics decompose into dynamics of the norm $\|\theta\|$ and of the normalized parameters $\bar\theta = \theta / \|\theta\|$:
$$\partial_t \|\theta(t)\| = -\bar\theta(t)^T \nabla\mathcal{L}_0(\theta(t)) = -L \|\theta(t)\|^{L-1} \mathcal{L}_0(\bar\theta(t)),$$
$$\partial_t \bar\theta(t) = -\|\theta(t)\|^{L-2} \left(I - \bar\theta(t)\bar\theta(t)^T\right) \nabla\mathcal{L}_0(\bar\theta(t)),$$
where we used Euler's homogeneous function theorem: $\theta^T \nabla\mathcal{L}_0(\theta) = L \mathcal{L}_0(\theta)$. Notice that $(I - \bar\theta\bar\theta^T)$ is the projection onto the tangent space of the sphere at $\bar\theta$, which implies that
the normalized parameters follow a projected GF on the $\mathcal{L}_0$ loss over the unit sphere (up to a prefactor of $\|\theta\|^{L-2}$, which can be interpreted as a speed-up of the dynamics for larger norms). We may therefore reparametrize time as $s(t) = \int_0^t \|\theta(t_1)\|^{L-2} \, dt_1$, which corresponds to switching to a time-dependent learning rate $\eta_s = \|\theta(s)\|^{2-L}$, to obtain the dynamics
$$\partial_s \|\theta(s)\| = -L \|\theta(s)\| \, \mathcal{L}_0(\bar\theta(s)),$$
$$\partial_s \bar\theta(s) = -\left(I - \bar\theta(s)\bar\theta(s)^T\right) \nabla\mathcal{L}_0(\bar\theta(s)).$$
We can therefore solve for $\bar\theta(s)$ on its own, and the norm $\|\theta(s)\|$ then takes the form
$$\|\theta(s)\| = \|\theta(0)\| \exp\left(-L \int_0^s \mathcal{L}_0(\bar\theta(s_1)) \, ds_1\right).$$
If needed, these solutions can be translated back to $t$-time, using the formula
$$t(s) = \int_0^s \|\theta(s_1)\|^{2-L} \, ds_1 = \|\theta(0)\|^{2-L} \int_0^s \exp\left(L(L-2) \int_0^{s_1} \mathcal{L}_0(\bar\theta(s_2)) \, ds_2\right) ds_1.$$

2.2 Escape Directions and their Speeds

Assuming convergence of the projected gradient flow $\bar\theta(s)$, for all initializations $\theta_0$ there will be a time $s_1$ at which $\bar\theta(s_1)$ is close to a critical point of $\mathcal{L}_0$ restricted to the sphere, i.e. a point $\bar\theta^*$ such that
$$\left(I - \bar\theta^* \bar\theta^{*T}\right) \nabla\mathcal{L}_0(\bar\theta^*) = 0.$$
We call these escape directions (assuming $\mathcal{L}_0(\bar\theta^*) < 0$), because once such a direction is reached, $\bar\theta(s)$ remains approximately constant, while the parameter norm grows fast.

Definition 2.1. An escape direction is a vector on the sphere $\rho \in \sqrt{L}\, S^{P-1}$ such that $\nabla\mathcal{L}_0(\rho) = -s\rho$ for some $s \in \mathbb{R}_+$, which we call the escape speed associated with $\rho$. (We switch from the unit sphere to the radius-$\sqrt{L}$ sphere as it leads to cleaner formulas.) An optimal escape direction $\rho^* \in \sqrt{L}\, S^{P-1}$ is an escape direction with the largest speed $s^* > 0$. It is a minimizer of $\mathcal{L}_0$ restricted to $\sqrt{L}\, S^{P-1}$: $\rho^* \in \arg\min_{\rho \in \sqrt{L}\, S^{P-1}} \mathcal{L}_0(\rho)$.

If the parameters start aligned with an escape direction, $\theta_0 \propto \rho$, then GF on the localized loss will diverge towards infinity in a straight line, with a rate determined by the depth $L$ and the escape speed $s$:

Proposition 2.2.
Considering gradient flow on the localized loss $\mathcal{L}_0$, if at some time $t_0$ the parameters satisfy $\theta(t_0) = \rho$ with $\rho \in \sqrt{L}\, S^{P-1}$ and $\nabla\mathcal{L}_0(\rho) = -s\rho$, then for all $t \geq t_0$ the normalized direction remains constant, and the norm $\|\theta(t)\|$ satisfies
$$\|\theta(t)\| = \begin{cases} \left(\|\theta(t_0)\|^{2-L} + (2-L) L s (t - t_0)\right)^{\frac{1}{2-L}}, & \text{if } L \neq 2, \\ \|\theta(t_0)\| \exp\left(2 s (t - t_0)\right), & \text{if } L = 2. \end{cases}$$

Of course, GF on the localized loss $\mathcal{L}_0$ is only a good approximation of GF on the full loss $\mathcal{L}$ as long as the outputs $Y_\theta$ are small. This applies up to some escape time $t_1(r)$, defined as the first time GF attains a parameter norm of $\|\theta\| = r$, which guarantees a bound on the outputs:
$$\|Y_{\theta(t)}\|_F \leq \|W_L\|_{op} \cdots \|W_1\|_{op} \|X\|_F \leq \left(\tfrac{r}{\sqrt{L}}\right)^L \|X\|_F.$$
Proposition 2.2 allows us to approximate this escape time:
$$t_1(r) - t_0 \approx \begin{cases} \frac{1}{(L-2) L s} \left(\|\theta(t_0)\|^{2-L} - r^{2-L}\right), & \text{if } L \neq 2, \\ \frac{1}{2s} \log\frac{r}{\|\theta(t_0)\|}, & \text{if } L = 2. \end{cases}$$
After this escape time, we expect the localized GF to diverge from the true GF: the localized GF diverges towards infinity (in finite time when $L > 2$), while the true GF typically slows down as it approaches another saddle or a minimum. This paper focuses on the dynamics before the escape time. In general, we do not start aligned with an escape direction, but since the normalized parameters $\bar\theta(s)$ follow GF restricted to the sphere, they converge to an escape direction, at which point a similar explosion of the norm takes place. Note that the dynamics of $\bar\theta(s)$ (in reparametrized $s$-time) are unaffected by multiplying the initialization $\theta_0$ by a factor $\alpha > 0$.
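The closed form of Proposition 2.2 (in the $L \neq 2$ case) is the solution of the norm ODE $\partial_t \|\theta\| = L s \|\theta\|^{L-1}$, as can be checked by differentiating the formula. A quick numerical sanity check of this, with arbitrary toy values of $L$, $s$ and the initial norm (our assumptions, not values from the paper):

```python
import numpy as np

# Toy values (assumptions): depth L = 3, escape speed s = 0.7, initial norm r0.
L, s, r0 = 3, 0.7, 0.05

def closed_form(t):
    """Proposition 2.2, case L != 2, with t0 = 0."""
    return (r0 ** (2 - L) + (2 - L) * L * s * t) ** (1.0 / (2 - L))

def norm_ode(r):
    """dr/dt = L * s * r^(L-1): the ODE solved by the closed form."""
    return L * s * r ** (L - 1)

# RK4 integration up to t = 5, well before the finite-time blow-up at
# t* = r0^(2-L) / ((L-2) L s) ~ 9.52 for these values.
r, t, dt = r0, 0.0, 1e-3
while t < 5.0 - 1e-12:
    k1 = norm_ode(r)
    k2 = norm_ode(r + 0.5 * dt * k1)
    k3 = norm_ode(r + 0.5 * dt * k2)
    k4 = norm_ode(r + dt * k3)
    r += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += dt

assert np.isclose(r, closed_form(5.0), rtol=1e-6)
```

The integrated norm matches the closed form to high precision, and (for $L > 2$) both diverge at the finite blow-up time where the bracket in the closed form hits zero.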
Therefore the time $s_1$ of convergence to an escape direction is independent of $\alpha$, and at time $s_1$ the parameter norm depends linearly on $\alpha$: $\|\theta(s_1)\| = C\alpha$ for some $C > 0$. We can therefore always choose a small enough $\alpha$ so that the Taylor approximation (Equation 1) remains valid up to the time of convergence $s_1$.

3 Low-Rank Bias and Approximate Linearity of the Escape Directions

The main result of this paper is that at the optimal escape directions, the deeper layers (i.e. for large $\ell$) are approximately low-rank and have almost no nonlinearity effect:

Theorem 3.1. Consider an optimal escape direction $\theta^\star = \arg\min_{\|\theta\|^2 = L} \mathrm{Tr}[G^\top f_\theta(X)]$ with optimal speed $s^* = -\min_{\|\theta\|^2 = L} \mathrm{Tr}[G^\top f_\theta(X)]$; then for all layers $\ell$ we have
$$\frac{\sum_{i \geq 2} s_i^2(W_\ell)}{\sum_{i=1}^{r} s_i^2(W_\ell)},\ \frac{\sum_{i \geq 2} s_i^2(Z^\sigma_\ell)}{\sum_{i=1}^{r} s_i^2(Z^\sigma_\ell)},\ \frac{\|Z^\sigma_\ell - Z_\ell\|_F^2}{\|Z_\ell\|_F^2} \ \leq\ \frac{8c}{1 - c\,\ell^{-\frac{1}{2}}}\,\ell^{-\frac{1}{2}},$$
where $c = \frac{\|X\|_F \|G\|_F}{s^*} \sqrt{2 \log \frac{\|X\|_F \|G\|_F}{s^*}}$.

In the rest of the section, we will prove a result showing that the optimal escape speed $s^*$ is increasing in depth, thus controlling the constant $c$ in depth. We then present a sketch of proof for the Theorem, stating a few intermediate results along the way that are of independent interest.

3.1 Optimal Speed is Increasing in Depth

The bounds of Theorem 3.1 are strongest when the optimal escape speed $s^*$ is large. Thankfully, the optimal escape speed is increasing in $L$:

Proposition 3.2. Given a depth-$L$ network with $\mathcal{L}_0(\theta) = -s_0$ for $s_0 > 0$ and $\|\theta\|^2 = L$, we can construct a network of depth $L + k$, for any $k \geq 1$, with parameters $\theta'$ satisfying $\|\theta'\|^2 = L + k$ and $\mathcal{L}_0(\theta') \leq \mathcal{L}_0(\theta)$. Therefore, the optimal escape speed $s^*(L)$ is a non-decreasing function. Furthermore, in the deeper network we have $\mathrm{Rank}(Z_{L'}) = \mathrm{Rank}(W_{L'}) = 1$ for all $L' \geq L$, and $Z_{L'} = Z^\sigma_{L'}$ for all $L' > L$.

To construct the deeper network, we first transform the last weights $W_L$ to be rank 1 (this is possible without increasing $\mathcal{L}_0$), and we then add rank-1 weights in the additional layers. Some very similar structures have been used in previous work [24, 9].
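The quantity bounded in Theorem 3.1 is easy to compute in practice. A small NumPy helper (ours, applied to arbitrary toy matrices) illustrates that this ratio is near 0 for exactly or approximately rank-1 matrices and large for generic ones:

```python
import numpy as np

def approx_rank1_ratio(M):
    """Theorem 3.1's rank metric: sum_{i>=2} s_i^2 / sum_i s_i^2,
    i.e. the fraction of squared Frobenius mass outside the top singular value."""
    s = np.linalg.svd(M, compute_uv=False)
    return float((s[1:] ** 2).sum() / (s ** 2).sum())

rng = np.random.default_rng(0)
u, v = rng.standard_normal(20), rng.standard_normal(30)
noise = rng.standard_normal((20, 30))

exact = np.outer(u, v)           # exactly rank 1
near = exact + 1e-2 * noise      # approximately rank 1
full = noise                     # generic full-rank matrix

assert approx_rank1_ratio(exact) < 1e-12
assert approx_rank1_ratio(near) < 1e-2
assert approx_rank1_ratio(full) > 0.5
```

A ratio of $\epsilon^2$ corresponds to a second singular value roughly $\epsilon$ times the first, which is how the $\ell^{-1/4}$ statement in the introduction relates to the $\ell^{-1/2}$ bound of the theorem.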
3.2 Sketch of proof

To prove Theorem 3.1, we first show that if the inputs are approximately rank 1, then the optimal escape direction is approximately rank 1 in all layers:

Proposition 3.3. Consider the minimizer $\theta^* = \arg\min_{\|\theta\|^2 \leq L} \mathrm{Tr}\!\left[G^\top Y_\theta(uv^\top + X)\right]$, where $u, v \in \mathbb{R}^n$ with $u, v \geq 0$ entrywise and $\|X\|_F \leq \epsilon$ for some $\epsilon > 0$. Then for all $\ell$ we have
$$\frac{\sum_{i \geq 2} s_i^2(W_\ell)}{\sum_{i=1}^{r} s_i^2(W_\ell)},\ \frac{\sum_{i \geq 2} s_i^2(Z^\sigma_\ell)}{\sum_{i=1}^{r} s_i^2(Z^\sigma_\ell)},\ \frac{\|Z^\sigma_\ell - Z_\ell\|_F^2}{\|Z_\ell\|_F^2} \ \leq\ \frac{8\|G\|_F}{s^* - \|G\|_F\,\epsilon}\,\epsilon.$$

This also implies that if a hidden representation is approximately rank 1 in one layer $\ell_0$, then it must also be approximately rank 1 in all subsequent layers $\ell \geq \ell_0$. We can prove the existence of many such low-rank layers, assuming the escape speed is large enough:

Proposition 3.4. Assume $\mathrm{Tr}[G^\top Z_L] \leq -s_0$ for some constant $s_0 > 0$ and $\|\theta\|^2 \leq L$. Then for any ratio $p \in (0, 1)$ there are at least $(1-p)L$ layers that are approximately rank 1, in the sense that
$$\frac{\sum_{i \geq 2} s_i^2(Z^\sigma_\ell)}{\sum_{i=1}^{r} s_i^2(Z^\sigma_\ell)} \leq 2 \log\!\left(\frac{\|X\|_F \|G\|_F}{s_0}\right) \frac{1}{pL}.$$

The proof of Theorem 3.1 therefore goes as follows: for any $\ell = pL$, Proposition 3.4 implies that there are at least $(1-p)L = L - \ell$ layers that are approximately rank 1. The earliest such layer $\ell_0$ must satisfy $\ell_0 \leq \ell$. Proposition 3.3
implies that all layers after $\ell_0$ must be approximately rank 1, including the $\ell$-th layer. The two propositions are also of independent interest. Proposition 3.3 gives an example of inputs for which all layers are low-rank, not just the deeper ones. Proposition 3.4 applies to any parameter with a fast enough escape speed, not just the optimal escape direction, and guarantees a similar low-rank structure. Interestingly, in contrast to the other results, it does not say anything about where those low-rank layers are.

3.3 Empirical Results on MNIST

We empirically confirm the presence of low-rank structure in networks trained on the MNIST dataset. Specifically, we train a 6-layer fully connected network without bias terms and with small weight initialization. Figure 1 highlights two distinct saddle points during training. After escaping the first saddle, we observe the emergence of a single dominant singular value in every layer, with this effect being particularly pronounced in the deeper layers (layers 4–6). While our theoretical analysis explains the behavior after the first saddle escape, our experiments reveal that, towards the end of training, a second dominant singular value appears. This suggests that the rank of the weight matrices increases following subsequent saddle escapes. A detailed visualization of the singular value evolution in each layer is provided in Appendix 3.

Figure 1: Deeper layers show a stronger bias toward low-rank structure than earlier layers on MNIST. Left: training loss over training time; vertical lines indicate the specific iterations at which singular values are extracted. Center and Right: top 10 singular values of the weight matrices per layer $\ell$ for layers 1–6 (including input and output layers), after the first saddle escape and at the final iteration, respectively.
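For reference, a compact sketch of this kind of experiment. To keep it self-contained and fast, we substitute a small synthetic dataset and a moderate initialization for the real MNIST setup (both are our simplifications, not the paper's configuration); the relevant part is training a bias-free 6-layer ReLU MLP with plain GD and extracting the per-layer top singular values, as in Figure 1:

```python
import numpy as np

rng = np.random.default_rng(0)
L, width, d_in, N = 6, 16, 10, 64
sizes = [d_in] + [width] * (L - 1) + [1]
X = rng.standard_normal((d_in, N))                # synthetic stand-in for MNIST
Y = np.sign(rng.standard_normal((1, d_in)) @ X)   # +/-1 labels from a rank-1 teacher
W = [np.sqrt(2.0 / sizes[l]) * rng.standard_normal((sizes[l + 1], sizes[l]))
     for l in range(L)]                            # moderate (He-style) init

def forward(W, X):
    Zs = [X]
    for i, Wl in enumerate(W):
        pre = Wl @ Zs[-1]
        Zs.append(pre if i == len(W) - 1 else np.maximum(pre, 0.0))
    return Zs

def gd_step(W, X, Y, lr):
    Zs = forward(W, X)
    delta = (Zs[-1] - Y) / N        # gradient of 0.5 * mean squared error wrt outputs
    grads = [None] * len(W)
    for i in reversed(range(len(W))):
        grads[i] = delta @ Zs[i].T
        if i > 0:
            delta = (W[i].T @ delta) * (Zs[i] > 0)  # backprop through the ReLU mask
    return [Wl - lr * g for Wl, g in zip(W, grads)]

def top_singular_values(W, k=10):
    """Per-layer top-k singular values, as plotted in Figure 1."""
    return [np.linalg.svd(Wl, compute_uv=False)[:k] for Wl in W]

loss0 = 0.5 * np.mean((forward(W, X)[-1] - Y) ** 2)
for _ in range(3000):
    W = gd_step(W, X, Y, lr=0.005)
loss1 = 0.5 * np.mean((forward(W, X)[-1] - Y) ** 2)
svals = top_singular_values(W)
assert loss1 < loss0 and len(svals) == L
```

With a genuinely small initialization (as in the paper) the same loop exhibits long plateaus near the saddle at the origin before the loss drops, at the cost of far more iterations.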
4 The optimal escape direction is not always exactly rank one

Our discussion has thus far consisted of results painting the picture that deep ReLU networks trained from small initialization first escape the origin in a direction that is approximately rank one in each weight matrix. Much of our labor has been in identifying suitable notions of "approximately rank one." Before concluding, it is worth asking: do we actually need such notions? In fact, if one performs straightforward numerical experiments on simple datasets, one will often find that the first escape direction is exactly rank one in each layer. Might we hope that the optimal escape direction is in fact always exactly rank one? In this section, we provide a simple counterexample in which the optimal escape direction is rank two in the first layer. We then give numerical experiments showing that (projected) gradient descent actually finds this rank-two solution.

Example 1 (Rank-two optimal escape direction). Consider the unit circle dataset $(x_j)_{j=1}^N = \left(\sin\frac{2\pi j}{N}, \cos\frac{2\pi j}{N}\right)_{j=1}^N$ with alternating loss gradients $G = ((-1)^j)_{j=1}^N$.² Let $N = 8$. Consider training a depth-three, bias-free ReLU MLP with hidden width at least four from small initialization on this dataset. Then the optimal rank-one escape direction has speed $s_1 = \sqrt{2} - 1 \approx 0.414$, but there exists a
better rank-two escape direction with speed $s_2 = \frac{1}{2}$.

Proof. Our network has weight matrices $W_1, W_2, W_3$, which parameterize the network function as $f_\theta(X) = W_3 \, \sigma \circ W_2 \, \sigma \circ W_1 X$. As discussed in Subsection 2.2, we wish to maximize the escape speed $s = -\mathrm{Tr}[G^\top f_\theta(X)]$ subject to $\sum_\ell \|W_\ell\|_F^2 = 3$. We know from homogeneity that the maximizer will have $\|W_\ell\|_F = 1$ for all $\ell$. If we additionally constrain all three weight matrices to be rank one, then a width-one ReLU network can achieve the same maximal escape speed (a network with only rank-1 layers can only represent 'one-neuron functions' $f_\theta(x) = u\sigma(v^T x)$ for some vectors $u, v$, independently of depth). Taking into account the positivity of the ReLU, we need only study a width-one network with $W_1 = [\cos(\phi), \sin(\phi)]$ for some $\phi \in [0, 2\pi)$, $W_2 = [1]$, and $W_3 = [\pm 1]$. The only remaining degree of freedom is the angle $\phi$ to which $W_1$ is attuned. We solve this 1D optimization problem in Appendix C.1, finding that the optima fall at $\phi = \frac{\pi j}{4}$ for $j \in \mathbb{Z}$, giving speed $s_1 = \sqrt{2} - 1 \approx 0.414$.

Without such a rank-one constraint, we can improve this speed. We use only four neurons in the first hidden layer and one neuron in the second hidden layer (setting all incoming and outgoing weights of the other neurons to zero) and choose the following weights for the active neurons:
$$W_1 = \frac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & 1 \\ -1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad W_2 = \frac{1}{2}\begin{pmatrix} 1 & -1 & 1 & -1 \end{pmatrix}, \qquad W_3 = [1]. \qquad (2)$$
This gives a speed $s_2 = \frac{1}{2}$.

² Such alternating loss gradients can result straightforwardly from, for example, targets $Y = ((-1)^j)_{j=1}^N$ and the usual squared loss.

Figure 2: Depth-3 neural networks find rank-two escape directions on a toy dataset. Left: visualization of the dataset; red and blue points have loss gradient values $G = 1$ and $G = -1$, respectively.
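Both speeds can be checked numerically. The sketch below (our code, not the paper's) evaluates $s = -\mathrm{Tr}[G^\top f_\theta(X)]$ on the unit-circle dataset, scanning the rank-one family over $\phi$ and plugging in the rank-two weights of Equation (2); since the printed sign of $W_3$ depends on the sign convention for $G$, we take the better of $W_3 = [\pm 1]$ in both cases:

```python
import numpy as np

N = 8
j = np.arange(1, N + 1)
X = np.stack([np.sin(2 * np.pi * j / N), np.cos(2 * np.pi * j / N)])  # 2 x 8
G = ((-1.0) ** j)[None, :]                                            # 1 x 8
relu = lambda z: np.maximum(z, 0.0)

def speed(W1, W2, W3):
    """Escape speed s = -Tr[G^T f(X)] for f(X) = W3 relu(W2 relu(W1 X))."""
    return -float(np.sum(G * (W3 @ relu(W2 @ relu(W1 @ X)))))

# Rank-one (width-one) family: W1 = [cos(phi), sin(phi)], W2 = [1], W3 = [+/-1].
phis = np.linspace(0.0, 2.0 * np.pi, 8192, endpoint=False)
s1 = max(speed(np.array([[np.cos(p), np.sin(p)]]), np.array([[1.0]]),
               np.array([[sgn]]))
         for p in phis for sgn in (1.0, -1.0))

# Rank-two direction of Equation (2), up to the sign of W3.
W1 = 0.5 * np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]])
W2 = 0.5 * np.array([[1., -1., 1., -1.]])
s2 = max(speed(W1, W2, np.array([[sgn]])) for sgn in (1.0, -1.0))

assert abs(s1 - (np.sqrt(2) - 1)) < 1e-3   # ~ 0.414
assert abs(s2 - 0.5) < 1e-9
```

All three matrices in both candidates have unit Frobenius norm, so the constraint $\sum_\ell \|W_\ell\|_F^2 = 3$ is satisfied, and the rank-two direction indeed beats the best rank-one one.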
Center: several training runs of projected gradient descent on the first-order loss objective under the parameter norm constraint $\|\theta\|^2 = L$. Runs whose objective exceeds $\sqrt{2} - 1$, the best achievable value for rank-one weights, are colored blue and deemed successful. Right: as width increases, the fraction of successful runs increases. See Figure 6 for a visualization of the training runs at all widths.

This counterexample shows that the optimal escape direction may in fact be non-rank-one, and thus it is reasonable to search for a sense in which, for a sufficiently deep network, the optimal escape direction is approximately rank one.³

4.1 Numerical experiments: wide networks find the optimal escape direction

Of course, the existence of such a non-rank-one optimal escape direction is only interesting if gradient descent actually finds it. In this case, it does. We train networks of varying width with projected gradient descent to minimize the loss on the sphere $\|\theta\|^2 = 3$. As shown in Figure 2, wider networks are more likely to converge to the faster, rank-two escape direction.

5 Discussion: Saddle-to-Saddle dynamics

The results of this paper describe only the very first step of a much more complex training path. They describe the escape from the first saddle at the origin, but it is likely that the full dynamics visit the
neighborhood of multiple saddles, as is the case for linear networks [27, 32] or shallow ReLU networks [2, 1]. We now state a few conjectures/hypotheses, which should be viewed as possible next steps towards the goal of describing the complete Saddle-to-Saddle dynamics:

(1) Large-width GD finds the optimal escape direction: Our numerical experiments suggest that wider networks are able to find the optimal escape direction with GD, even when this optimal escape direction has some higher-rank layers. The intuition is that the more neurons there are, the more likely it is that a subset of neurons implements a 'circuit' similar to an optimal escape direction, and that this group will out-compete the other neurons and end up dominating. Note that even in shallow networks, finding this optimal escape direction is known to be NP-hard [8], which implies that an exponential number of neurons might be required in the worst case.

(2) Exact rank 1 at most layers: Inspired by our illustrating example, we believe it likely that the optimal escape directions have only a finite number of high-rank layers at the beginning, followed by rank-1 identity layers until the outputs. Note that if we assume that the optimal escape speed $s^*(L)$ plateaus after a certain $L_0$, i.e. $s^*(L) = s^*(L_0)$ for all $L \geq L_0$, then Proposition 3.2 already implies that there is an optimal escape direction where all layers $\ell \geq L_0$ are rank 1. Conversely, if there is an optimal escape direction with only rank-1 layers after the $L_0$-th layer, then $s^*(L) = s^*(L_0)$ for all $L \geq L_0$.

³ It is worth noting that there may exist an even faster escape direction than the rank-two solution we identify (though we doubt it; see subsequent numerical experiments), but in any case we may be assured that the fastest escape direction is not rank one.

(3) Rank-1 layers remain rank 1 until the next saddle: Assuming that GD does find the optimal escape direction, it will have approximately rank-1 layers as it escapes the saddle.
The next step is to show that these layers remain approximately rank 1 until reaching a second saddle. In linear networks, this follows from the fact that the optimal escape direction is rank 1 and balanced (i.e. $W_\ell^T W_\ell = W_{\ell-1} W_{\ell-1}^T$ for all layers $\ell$), and that the space of rank-1 and balanced networks is invariant under GF. The ReLU case is more difficult because we only have approximately rank-1 layers. More precisely, to guarantee that there is a layer that is $\epsilon$-approximately rank 1, we need both a small initialization and a large depth, in contrast to linear networks, where a small enough initialization is sufficient. Our second conjecture would help with this aspect. The next difficulty is to show that the approximately rank-1 layers remain so for a sufficient amount of time. The key tool to prove this in linear networks is balancedness. ReLU networks only satisfy weak balancedness in general, i.e. $\mathrm{diag}(W_\ell^T W_\ell) = \mathrm{diag}(W_{\ell-1} W_{\ell-1}^T)$, but the stronger balancedness applies at layers where the pre-activations have non-negative entries: $Z_\ell \geq 0$.

(4) BN-rank incremental learning:
The final goal is to prove that these Saddle-to-Saddle dynamics allow ReLU networks to implement a form of greedy low-BN-rank search, where a minimal-BN-rank interpolator is found greedily by first searching among BN-rank-1 functions and then gradually among higher-rank functions, stopping at the smallest BN-rank sufficient to fit the data. Again, this is inspired by an analogy to linear networks, which implement a greedy low-rank algorithm to minimize the traditional rank. In parameter space, the GD dynamics visit a sequence of saddles of increasing rank: GD starts close to the saddle at the origin (the best rank-0 fit) before escaping along a rank-1 direction until reaching a rank-1 critical point (a locally optimal rank-1 fit). If the loss is zero at this point, the GD dynamics stop; otherwise this best rank-1 fit is a saddle where GD plateaus for some time until escaping along a rank-2 direction, and so on [27]. The so-called bottleneck rank $\mathrm{Rank}_{BN}(f)$ [24] is the smallest integer $k$ such that $f$ can be represented as the composition of two functions $f = h \circ g$ with inner dimension $k$. Several recent papers have shown how the BN-rank plays a central role in deep ReLU networks trained with weight decay / $L_2$-regularization [24, 23, 48, 28]. In particular, these works observe the emergence of a bottleneck structure as the depth grows, where all middle layers of the network share the same low rank (discarding small singular values of $W_\ell$), which equals the BN-rank of the network, with only a finite number of high-rank layers at the beginning and end of the network. Our results can be interpreted as saying that the optimal escape direction of the saddle at the origin exhibits a 'half-bottleneck' (because there are high-dimensional layers only at the beginning of the network, not at the end) with BN-rank 1.
This suggests that the Saddle-to-Saddle dynamics in deep ReLU networks could correspond to a greedy low-BN-rank search, where the BN-rank increases gradually between each plateau/saddle. Interestingly, previous theoretical analyses of the bottleneck structure were only able to prove the existence of low-rank layers, but not necessarily to locate them [23]; our ability to prove that the deeper layers are all approximately low-rank is therefore a significant improvement over the previous proof techniques. It is possible that, in contrast to linear networks, the complete Saddle-to-Saddle dynamics require both a small initialization and a large depth. This matches our numerical experiments in Figure 1 and Figure 4 in the appendix, where we observe more distinct plateaus in a depth-6 network compared to a depth-4 network. This suggests that, in contrast to linear networks, where the plateaus can be made longer and more distinct by taking a smaller initialization, for ReLU networks we also need to increase the depth to achieve the same effect.

References

[1] Emmanuel Abbe, Enric Boix Adsera, and Theodor Misiakiewicz. The merged-staircase property: a necessary and nearly sufficient condition for SGD learning of sparse functions on two-layer neural networks. In Conference on Learning Theory, pages 4782–4887. PMLR, 2022.

[2] Emmanuel Abbe, Enric Boix-Adserà, Matthew Stewart
Brennan, Guy Bresler, and Dheeraj Mysore Nagaraj. The staircase property: How hierarchical structure can guide deep learning. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, 2021.

[3] Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via over-parameterization. In International Conference on Machine Learning, pages 242–252. PMLR, 2019.

[4] Dyego Araújo, Roberto I. Oliveira, and Daniel Yukimura. A mean-field limit for certain deep neural networks. arXiv preprint arXiv:1906.00193, 2019.

[5] Sanjeev Arora, Nadav Cohen, Wei Hu, and Yuping Luo. Implicit regularization in deep matrix factorization. Advances in Neural Information Processing Systems, 32, 2019.

[6] Sanjeev Arora, Simon S. Du, Wei Hu, Zhiyuan Li, Russ R. Salakhutdinov, and Ruosong Wang. On exact computation with an infinitely wide neural net. Advances in Neural Information Processing Systems, 32, 2019.

[7] Alexander Atanasov, Alexandru Meterez, James B. Simon, and Cengiz Pehlevan. The optimization landscape of SGD across the feature learning strength. arXiv preprint arXiv:2410.04642, 2024.

[8] Francis Bach. Breaking the curse of dimensionality with convex neural networks. The Journal of Machine Learning Research, 18(1):629–681, 2017.

[9] Zhiwei Bai, Tao Luo, Zhi-Qin John Xu, and Yaoyu Zhang. Embedding principle in depth for the loss landscape analysis of deep neural networks. arXiv preprint arXiv:2205.13283, 2022.

[10] Blake Bordelon, Abdulkadir Canatar, and Cengiz Pehlevan. Spectrum dependent learning curves in kernel regression and wide neural networks. arXiv preprint arXiv:2002.02561, 2020.

[11] Blake Bordelon and Cengiz Pehlevan. Self-consistent dynamical field theory of kernel evolution in wide neural networks. Advances in Neural Information Processing Systems, 35:32240–32256, 2022.

[12] Etienne Boursier and Nicolas Flammarion.
Early alignment in two-layer networks training is a two-edged sword. arXiv preprint arXiv:2401.10791, 2024.

[13] Etienne Boursier, Loucas Pillaud-Vivien, and Nicolas Flammarion. Gradient flow dynamics of shallow ReLU networks for square loss and orthogonal inputs. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022.

[14] Lenaic Chizat and Francis Bach. A note on lazy training in supervised differentiable programming. arXiv preprint arXiv:1812.07956, 2018.

[15] Lénaïc Chizat and Francis Bach. On the global convergence of gradient descent for over-parameterized models using optimal transport. In Advances in Neural Information Processing Systems 31, pages 3040–3050. Curran Associates, Inc., 2018.

[16] Lénaïc Chizat and Francis Bach. Implicit bias of gradient descent for wide two-layer neural networks trained with the logistic loss. In Jacob Abernethy and Shivani Agarwal, editors, Proceedings of Thirty Third Conference on Learning Theory, volume 125 of Proceedings of Machine Learning Research, pages 1305–1338. PMLR, 09–12 Jul 2020.

[17] Alex Damian, Eshaan Nichani, and Jason D. Lee. Self-stabilization: The implicit bias of gradient descent at the edge of stability. arXiv preprint arXiv:2209.15594, 2022.

[18] Simon S. Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. In International Conference on Learning Representations, 2019.

[19] Weinan E, Chao Ma, and Lei Wu. Barron spaces and the compositional function spaces for neural network models. arXiv preprint arXiv:1906.08039, 2019.

[20]
Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, and Andrea Montanari. When do neural networks outperform kernel methods? Advances in Neural Information Processing Systems, 33:14820–14830, 2020.

[21] Gauthier Gidel, Francis Bach, and Simon Lacoste-Julien. Implicit regularization of discrete gradient dynamics in linear neural networks. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.

[22] Suriya Gunasekar, Blake Woodworth, Srinadh Bhojanapalli, Behnam Neyshabur, and Nathan Srebro. Implicit regularization in matrix factorization. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, pages 6152–6160, Red Hook, NY, USA, 2017. Curran Associates Inc.

[23] Arthur Jacot. Bottleneck structure in learned features: Low-dimension vs regularity tradeoff. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems, volume 36, pages 23607–23629. Curran Associates, Inc., 2023.

[24] Arthur Jacot. Implicit bias of large depth networks: a notion of rank for nonlinear functions. In The Eleventh International Conference on Learning Representations, 2023.

[25] Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. In Advances in Neural Information Processing Systems 31, pages 8580–8589. Curran Associates, Inc., 2018.

[26] Arthur Jacot, Franck Gabriel, and Clément Hongler. The asymptotic spectrum of the hessian of DNN throughout training. In International Conference on Learning Representations, 2020.

[27] Arthur Jacot, François Ged, Berfin Şimşek, Clément Hongler, and Franck Gabriel. Saddle-to-saddle dynamics in deep linear networks: Small initialization training, symmetry, and sparsity, 2022.

[28] Arthur Jacot and Alexandre Kaiser.
Hamiltonian mechanics of feature learning: Bottleneck structure in leaky ResNets, 2024.

[29] Ziwei Ji and Matus Telgarsky. Gradient descent aligns the layers of deep linear networks. CoRR, abs/1810.02032, 2018.

[30] Jaehoon Lee, Lechao Xiao, Samuel Schoenholz, Yasaman Bahri, Roman Novak, Jascha Sohl-Dickstein, and Jeffrey Pennington. Wide neural networks of any depth evolve as linear models under gradient descent. In Advances in Neural Information Processing Systems, pages 8572–8583, 2019.

[31] Aitor Lewkowycz, Yasaman Bahri, Ethan Dyer, Jascha Sohl-Dickstein, and Guy Gur-Ari. The large learning rate phase of deep learning: the catapult mechanism. arXiv preprint arXiv:2003.02218, 2020.

[32] Zhiyuan Li, Yuping Luo, and Kaifeng Lyu. Towards resolving the implicit bias of gradient descent for matrix factorization: Greedy low-rank learning. In International Conference on Learning Representations, 2020.

[33] Chaoyue Liu, Libin Zhu, and Mikhail Belkin. Toward a theory of optimization for over-parameterized systems of non-linear equations: the lessons of deep learning. arXiv preprint arXiv:2003.00307, 2020.

[34] Tao Luo, Zhi-Qin John Xu, Zheng Ma, and Yaoyu Zhang. Phase diagram for two-layer ReLU neural networks at infinite-width limit. Journal of Machine Learning Research, 22(71):1–47, 2021.

[35] Song Mei, Andrea Montanari, and Phan-Minh Nguyen. A mean field view of the landscape of two-layer neural networks. Proceedings of the National Academy of Sciences, 115(33):E7665–E7671, 2018.

[36] Greg Ongie, Rebecca Willett, Daniel Soudry, and Nathan Srebro. A function space view of bounded norm infinite width ReLU
https://arxiv.org/abs/2505.21722v1
A Proofs of Theorems

A.1 Gradient Flow on
Homogeneous Losses

Proposition A.1. On a homogeneous loss, the GF dynamics decompose into dynamics of the norm $\|\theta\|$ and of the normalized parameters $\bar\theta = \theta/\|\theta\|$:
\[
\partial_t \|\theta(t)\| = -\bar\theta(t)^\top \nabla \mathcal{L}_0(\theta(t)) = -L\,\|\theta(t)\|^{L-1}\,\mathcal{L}_0(\bar\theta(t)),
\]
\[
\partial_t \bar\theta(t) = -\|\theta(t)\|^{L-2}\bigl(I - \bar\theta(t)\bar\theta(t)^\top\bigr)\nabla\mathcal{L}_0(\bar\theta(t)).
\]

Proof. Since $\theta$ follows gradient flow on $\mathcal{L}_0$, we have $\frac{d\theta}{dt} = -\nabla\mathcal{L}_0(\theta)$. Because $\mathcal{L}_0$ is $L$-homogeneous, Euler's homogeneous function theorem implies
\[
\theta^\top \nabla \mathcal{L}_0(\theta) = L\,\mathcal{L}_0(\theta).
\]
Now define the normalized parameter $\bar\theta = \frac{\theta}{\|\theta\|}$. Differentiating $\bar\theta$ with respect to time $t$ using the quotient rule yields
\[
\frac{d\bar\theta}{dt} = \frac{d}{dt}\frac{\theta}{\|\theta\|} = \frac{\frac{d\theta}{dt}\|\theta\| - \theta\frac{d\|\theta\|}{dt}}{\|\theta\|^2}.
\]
Substituting $\frac{d\theta}{dt} = -\nabla\mathcal{L}_0(\theta)$ gives
\[
\frac{d\bar\theta}{dt} = \frac{-\nabla\mathcal{L}_0(\theta)\,\|\theta\| - \theta\frac{d\|\theta\|}{dt}}{\|\theta\|^2}.
\]
To compute $\frac{d\|\theta\|}{dt}$, note that $\|\theta\| = (\theta^\top\theta)^{1/2}$. Differentiating, we obtain
\[
\frac{d\|\theta\|}{dt} = \frac{1}{\|\theta\|}\,\theta^\top\frac{d\theta}{dt} = \frac{1}{\|\theta\|}\,\theta^\top\bigl(-\nabla\mathcal{L}_0(\theta)\bigr).
\]
Using the homogeneity property $\theta^\top\nabla\mathcal{L}_0(\theta) = L\,\mathcal{L}_0(\theta)$, this simplifies to
\[
\frac{d\|\theta\|}{dt} = -\frac{L\,\mathcal{L}_0(\theta)}{\|\theta\|}.
\]
Substituting this back into the expression for $\frac{d\bar\theta}{dt}$:
\[
\frac{d\bar\theta}{dt} = \frac{-\nabla\mathcal{L}_0(\theta)\,\|\theta\| + \theta\,\frac{L\,\mathcal{L}_0(\theta)}{\|\theta\|}}{\|\theta\|^2}
= -\frac{\nabla\mathcal{L}_0(\theta)}{\|\theta\|} + \frac{\theta}{\|\theta\|^3}\,L\,\mathcal{L}_0(\theta).
\]
We wish to express the right-hand side in terms of $\bar\theta$. Using the scaling property of the gradient of a homogeneous function, $\nabla\mathcal{L}_0(\theta) = \|\theta\|^{L-1}\nabla\mathcal{L}_0(\bar\theta)$, and recalling that $\theta^\top\nabla\mathcal{L}_0(\theta) = L\,\mathcal{L}_0(\theta)$, we finally obtain
\[
\frac{d\bar\theta}{dt} = -\|\theta\|^{L-2}\nabla\mathcal{L}_0(\bar\theta) + \|\theta\|^{L-4}\,\theta\bigl(\theta^\top\nabla\mathcal{L}_0(\bar\theta)\bigr)
= -\|\theta\|^{L-2}\bigl(I - \bar\theta\bar\theta^\top\bigr)\nabla\mathcal{L}_0(\bar\theta).
\]

A.2 Explosion in Escape Direction

Proposition A.2. If at some time $t_0$ the parameter satisfies $\theta(t_0) = \rho$ with $\rho \in L^{1/2}\mathbb{S}^{P-1}$ and $\nabla\mathcal{L}_0(\rho) = -s\,\rho$, then for all $t \ge t_0$ the normalized direction remains constant, and the norm $\|\theta(t)\|$ satisfies
\[
\|\theta(t)\| =
\begin{cases}
\bigl(\|\theta(t_0)\|^{2-L} + (2-L)\,L\,s\,(t-t_0)\bigr)^{\frac{1}{2-L}}, & \text{if } L \neq 2,\\[4pt]
\|\theta(t_0)\|\exp\bigl(2s(t-t_0)\bigr), & \text{if } L = 2.
\end{cases}
\]

Proof. Using the chain rule we have
\[
\frac{d}{dt}\|\theta(t)\| = \frac{1}{\|\theta(t)\|}\,\theta(t)^\top\frac{d\theta}{dt} = -\frac{1}{\|\theta(t)\|}\,\theta(t)^\top\nabla\mathcal{L}_0\bigl(\theta(t)\bigr).
\]
Using Euler's theorem, $\theta(t)^\top\nabla\mathcal{L}_0(\theta(t)) = L\,\mathcal{L}_0(\theta(t))$, we obtain
\[
\frac{d}{dt}\|\theta(t)\| = -\frac{L}{\|\theta(t)\|}\,\mathcal{L}_0\bigl(\theta(t)\bigr).
\]
Since $\theta(t) = \|\theta(t)\|\,\bar\theta(t)$ and by homogeneity $\mathcal{L}_0(\theta(t)) = \|\theta(t)\|^L\,\mathcal{L}_0(\bar\theta(t))$, and because $\bar\theta(t) = \bar\theta(t_0)$ for all $t \ge t_0$ with $\mathcal{L}_0(\bar\theta(t_0)) = -s$, we deduce
\[
\mathcal{L}_0\bigl(\theta(t)\bigr) = -s\,\|\theta(t)\|^L.
\]
Substituting this back, we have
\[
\frac{d}{dt}\|\theta(t)\| = -\frac{L}{\|\theta(t)\|}\bigl(-s\,\|\theta(t)\|^L\bigr) = L\,s\,\|\theta(t)\|^{L-1}.
\]
Defining $R(t) = \|\theta(t)\|$, the above becomes the separable ordinary differential equation
\[
\frac{dR}{dt} = L\,s\,R^{L-1}, \qquad R(t_0) = \|\theta(t_0)\|.
\]

Case 1: $L \neq 2$. We separate variables: $R^{1-L}\,dR = L\,s\,dt$. Integrating from $t_0$ to $t$:
\[
\int_{R(t_0)}^{R(t)} R^{1-L}\,dR = \int_{t_0}^{t} L\,s\,d\tau.
\]
The left-hand side integrates to
\[
\left[\frac{R^{2-L}}{2-L}\right]_{R(t_0)}^{R(t)} = \frac{R(t)^{2-L} - R(t_0)^{2-L}}{2-L}.
\]
Hence
\[
\frac{R(t)^{2-L} - R(t_0)^{2-L}}{2-L} = L\,s\,(t-t_0).
\]
Solving for $R(t)$ gives
\[
R(t)^{2-L} = R(t_0)^{2-L} + (2-L)\,L\,s\,(t-t_0),
\]
or equivalently,
\[
\|\theta(t)\| = \bigl(\|\theta(t_0)\|^{2-L} + (2-L)\,L\,s\,(t-t_0)\bigr)^{\frac{1}{2-L}}.
\]

Case 2: $L = 2$. The ODE reduces to $\frac{dR}{dt} = 2sR$, which is linear. Its unique solution is $R(t) = R(t_0)\exp(2s(t-t_0))$, that is,
\[
\|\theta(t)\| = \|\theta(t_0)\|\exp\bigl(2s(t-t_0)\bigr).
\]

A.3 Optimal Speed is Increasing in Depth

Proposition A.3. Given a depth-$L$ network with $\mathcal{L}_0(\theta) = -s_0$ for some $s_0 > 0$ and $\|\theta\|^2 = L$, we can construct a network of depth $L+k$ for any $k \ge 1$ with parameters $\theta'$ that satisfies $\|\theta'\|^2 = L+k$ and $\mathcal{L}_0(\theta') \le \mathcal{L}_0(\theta)$. Therefore, the optimal escape speed $s^*(L)$ is a non-decreasing function. Furthermore, in the deeper network, we have $\operatorname{Rank}(Z_{L'}) = \operatorname{Rank}(W_{L'}) = 1$ for all $L' \ge L$ and $Z_{L'} = Z^\sigma_{L'}$ for all $L' > L$.

Proof. We denote by $W_{\ell,\cdot i}$ the $i$-th column of $W_\ell$ and by $W_{\ell,i\cdot}$ the $i$-th row of $W_\ell$. We can decompose the trace using the columns $W_{L,\cdot i}$ and rows $W_{L-1,i\cdot}$ in the following way:
\[
\operatorname{Tr}\bigl(G^\top Z_L\bigr) = \sum_{i=1}^{w_L}\operatorname{Tr}\bigl(G^\top W_{L,\cdot i}\,\sigma(W_{L-1,i\cdot} Z_{L-2})\bigr).
\]
The negative contribution is entirely due to the $W_L$ matrix, as the application of the activation function yields a non-negative matrix. For this sum there exists some $i^* \in [w_L]$ that maximizes the negative contribution, so that for all $i \in [w_L]$:
$\operatorname{Tr}\bigl(G^\top \bar W_{L,\cdot i^*}\,\sigma(\bar W_{L-1,i^*\cdot} Z_{L-2})\bigr) \le \operatorname{Tr}$
$\bigl(G^\top \bar W_{L,\cdot i}\,\sigma(\bar W_{L-1,i\cdot} Z_{L-2})\bigr)$, where $\bar x$ denotes the normalized vector $\bar x = \frac{x}{\|x\|_2}$. We define a new network of depth $L+k$ using the following matrices $\tilde W_\ell$:
\[
\tilde W_{L-1} = \sqrt{\textstyle\sum_i \|W_{L,\cdot i}\|\,\|W_{L-1,i\cdot}\|}\;
\begin{pmatrix} \bar W_{L-1,i^*\cdot} \\ 0 \\ \vdots \\ 0 \end{pmatrix},
\qquad
\tilde W_{L+k} = \sqrt{\textstyle\sum_i \|W_{L,\cdot i}\|\,\|W_{L-1,i\cdot}\|}\;
\begin{pmatrix} \bar W_{L,\cdot i^*} & 0 & \cdots & 0 \end{pmatrix},
\]
\[
\tilde W_\ell = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix}, \quad \ell = L, \ldots, L+k-1,
\qquad
\tilde W_\ell = W_\ell, \quad \ell = 1, 2, \ldots, L-2.
\]
We observe that the trace of the new network is lower than or equal to the trace of the original network:
\[
\begin{aligned}
\operatorname{Tr}\bigl[G^\top \tilde Z_{L+k}\bigr]
&= \operatorname{Tr}\bigl[G^\top \tilde W_{L+k}\,\sigma(\tilde W_{L+k-1,i\cdot} Z_{L+k-2})\bigr]\\
&= \operatorname{Tr}\bigl(G^\top \bar W_{L,\cdot i^*}\,\sigma(\bar W_{L-1,i^*\cdot} Z_{L-2})\bigr)\sum_{i=1}^{w_L}\|W_{L,\cdot i}\|\,\|W_{L-1,i\cdot}\|\\
&\le \sum_{i=1}^{w_L}\|W_{L,\cdot i}\|\,\|W_{L-1,i\cdot}\|\,\operatorname{Tr}\bigl(G^\top \bar W_{L,\cdot i}\,\sigma(\bar W_{L-1,i\cdot} Z_{L-2})\bigr)\\
&= \sum_{i=1}^{w_L}\operatorname{Tr}\bigl(G^\top W_{L,\cdot i}\,\sigma(W_{L-1,i\cdot} Z_{L-2})\bigr)
= \operatorname{Tr}\bigl(G^\top Z_L\bigr).
\end{aligned}
\]
The norm of the new network is
\[
\|\tilde\theta\|^2 = \sum_{\ell=1}^{L-2}\|W_\ell\|^2 + 2\sum_{i=1}^{w_L}\|W_{L,\cdot i}\|\,\|W_{L-1,i\cdot}\| + k
\le \sum_{\ell=1}^{L}\|W_\ell\|^2 + k \le L + k.
\]

A.4 Low Rank Bias

A.4.1 Weak Control

Proposition A.4. Given that $\operatorname{Tr}[G^\top Z_L] \le -c_0$ for some constant $c_0 > 0$ and $\|\theta\|^2 \le L$, then for any ratio $p \in (0,1)$ there are at least $(1-p)L$ layers that are approximately rank 1, in the sense that the singular values $s_i$ of $Z^\sigma_\ell$ satisfy
\[
\frac{\sum_{i\ge 2} s_i^2}{\sum_{i=1}^r s_i^2} \le 2\,\frac{\log\|X\|_F + \log\|G\|_F - \log c_0}{pL}. \tag{3}
\]

Proof. We expand the activations,
\[
\frac{\|Z^\sigma_0\|_F^2}{\|Z_L\|_F^2}
= \frac{\|Z^\sigma_{L-1}\|_{op}^2}{\|Z_L\|_F^2}\,
\prod_{\ell=1}^{L-1}\frac{\|Z^\sigma_{\ell-1}\|_{op}^2}{\|Z^\sigma_\ell\|_F^2}\,
\prod_{\ell=0}^{L-1}\frac{\|Z^\sigma_\ell\|_F^2}{\|Z^\sigma_\ell\|_{op}^2},
\]
where the operator norm of a matrix is its largest singular value. Since $Z_\ell = W_\ell Z^\sigma_{\ell-1}$, we have $\|Z_\ell\|_F^2 \le \|W_\ell\|_F^2\,\|Z^\sigma_{\ell-1}\|_{op}^2$. So by using the above lemma,
\[
\frac{\|Z^\sigma_{L-1}\|_{op}^2}{\|Z_L\|_F^2}\,
\prod_{\ell=1}^{L-1}\frac{\|Z^\sigma_{\ell-1}\|_{op}^2}{\|Z^\sigma_\ell\|_F^2}\,
\prod_{\ell=0}^{L-1}\frac{\|Z^\sigma_\ell\|_F^2}{\|Z^\sigma_\ell\|_{op}^2}
\ge \prod_{\ell=0}^{L-1}\frac{\|Z^\sigma_\ell\|_F^2}{\|Z^\sigma_\ell\|_{op}^2}.
\]
On the other hand, we have
\[
\frac{\|Z^\sigma_0\|_F^2\,\|G\|_F^2}{\operatorname{Tr}[G^\top Z_L]^2} \ge \frac{\|Z^\sigma_0\|_F^2}{\|Z_L\|_F^2},
\]
since the inner product is always lower than the product of the norms. Now, by combining the above, we get
\[
\prod_{\ell=0}^{L-1}\frac{\|Z^\sigma_\ell\|_F^2}{\|Z^\sigma_\ell\|_{op}^2}
\le \frac{\|Z^\sigma_0\|_F^2\,\|G\|_F^2}{\operatorname{Tr}[G^\top Z_L]^2}.
\]
Taking the log on both sides,
\[
\sum_{\ell=1}^{L-1}\Bigl(\log\|Z^\sigma_\ell\|_F^2 - \log\|Z^\sigma_\ell\|_{op}^2\Bigr)
\le \log\frac{\|Z^\sigma_0\|_F^2\,\|G\|_F^2}{\operatorname{Tr}[G^\top Z_L]^2}.
\]
By contradiction, we see that for any ratio $p \in (0,1)$, there can be at most $pL$ layers where
\[
\log\|Z_\ell\|_F - \log\|Z_\ell\|_{op} \ge \frac{\log\|X\|_F + \log\|G\|_F - \log c_0}{pL}.
\]
That is, there are at least $(1-p)L$ layers where
\[
\frac{\sum_{i\ge 2} s_i^2}{\sum_{i=1}^r s_i^2}
= 1 - \frac{\|Z_\ell\|_{op}^2}{\|Z_\ell\|_F^2}
\le 2\log\|Z_\ell\|_F - 2\log\|Z_\ell\|_{op}
\le 2\,\frac{\log\|X\|_F + \log\|G\|_F - \log c_0}{pL}.
\]

A.4.2 Strong Control on Almost Rank 1 Input

The following result shows that if the input of the network is approximately rank 1, here encoded as $uv^\top + X$, where $u, v$ are entrywise non-negative vectors and $X$ is a matrix of small norm $\|X\|_F \le \epsilon$, then all layers are approximately rank 1 too at the optimal escape direction.

Proposition A.5. Consider the minimizer $\theta^\star = \arg\min_{\|\theta\|^2 \le L}\operatorname{Tr}\bigl(G^\top Y_\theta(uv^\top + X)\bigr)$, where $u, v \in \mathbb{R}^n$, $u, v \ge 0$ entrywise, and $\|X\|_F \le \epsilon$ for some $\epsilon > 0$. Then for all $\ell$ we have
\[
\frac{\sum_{i\ge 2} s_i^2(W_\ell)}{\sum_{i=1}^r s_i^2(W_\ell)},\;
\frac{\sum_{i\ge 2} s_i^2(Z^\sigma_\ell)}{\sum_{i=1}^r s_i^2(Z^\sigma_\ell)},\;
\frac{\|Z^\sigma_\ell - Z_\ell\|_F^2}{\|Z_\ell\|_F^2}
\;\le\; \frac{8\|G\|_F}{s^* - \|G\|_F\,\epsilon}\,\epsilon.
\]

Proof. In the case where the input is only the rank-1 matrix $uv^\top$, we can see that
\[
\operatorname{Tr}\bigl(G^\top Y_\theta(uv^\top)\bigr) = \operatorname{Tr}\bigl(v^\top G^\top Y_\theta(u)\bigr) = v^\top G^\top Y_\theta(u),
\]
and therefore the minimum is achieved when the alignment is maximized:
\[
\min_{\|\theta\|^2 \le L} v^\top G^\top Y_\theta(u) = -\|Gv\|\,\|u\|.
\]
When $\|\theta\|^2 \le L$, it is true that
\[
\|Y_\theta(uv^\top) - Y_\theta(uv^\top + X)\|_F \le \prod_{\ell=1}^{L}\|W_\ell\|_F\,\|X\|_F \le \epsilon
\]
as a consequence of the Cauchy-Schwarz inequality. We can also see that
\[
\bigl|\operatorname{Tr}\bigl(G^\top Y_\theta(uv^\top + X)\bigr) - \operatorname{Tr}\bigl(G^\top Y_\theta(uv^\top)\bigr)\bigr| \le \|G\|_F\,\epsilon.
\]
At the minimum $\theta^\star = \arg\min_{\|\theta\|^2 = L}\operatorname{Tr}\bigl(G^\top Y_\theta(uv^\top + X)\bigr)$ we observe that
$\operatorname{Tr}\bigl(G^\top Y_{\theta^\star}(uv^\top + X)\bigr)$
\[
\le \operatorname{Tr}\bigl(G^\top Y_{\hat\theta}(uv^\top + X)\bigr) \le -\|Gv\|\,\|u\| + \|G\|_F\,\epsilon,
\]
where $\hat\theta = \arg\min \operatorname{Tr}\bigl(G^\top Y_\theta(uv^\top)\bigr)$. In the other direction we get
\[
\operatorname{Tr}\bigl(G^\top Y_{\theta^\star}(uv^\top + X)\bigr)
\ge \operatorname{Tr}\bigl(G^\top Y_{\theta^\star}(uv^\top)\bigr) - \|G\|_F\,\epsilon
\ge -\|Gv\|\,\|Y_{\theta^\star}(u)\| - \|G\|_F\,\epsilon, \tag{4}
\]
where we used the Cauchy-Schwarz inequality in the last line. Combining the two, we get
\[
\frac{\|Y_{\theta^\star}(u)\|}{\|u\|} \ge 1 - \frac{2\|G\|_F}{\|Gv\|\,\|u\|}\,\epsilon,
\]
and since $\|\theta\|^2 \le L$ it is also true that the left-hand side of the above inequality is upper bounded by 1. We can also see that
\[
\frac{\|Y_{\theta^\star}(uv^\top + X)\|}{\|uv^\top + X\|}
\ge \frac{\|Y_{\theta^\star}(uv^\top)\| - \epsilon}{\|uv^\top\| + \epsilon}
= \frac{\frac{\|Y_{\theta^\star}(u)\|}{\|u\|} - \frac{\epsilon}{\|u\|\,\|v\|}}{1 + \frac{\epsilon}{\|u\|\,\|v\|}},
\]
and by using the above inequality we get
\[
\frac{\|Y_{\theta^\star}(uv^\top + X)\|}{\|uv^\top + X\|}
\ge 1 - 2\Bigl(\frac{\|G\|_F}{\|Gv\|\,\|u\|} + \frac{1}{\|u\|\,\|v\|}\Bigr)\epsilon.
\]
Now we can expand the activations,
\[
\frac{\|Y_{\theta^\star}(uv^\top + X)\|}{\|uv^\top + X\|}
= \prod_{\ell=1}^{L}\frac{\|Z_\ell\|_F}{\|Z^\sigma_{\ell-1}\|_F}\,\frac{\|Z^\sigma_{\ell-1}\|_F}{\|Z_{\ell-1}\|_F},
\]
and using the fact that $\prod_\ell \|W_\ell\|_F \le 1$,
\[
\prod_{\ell=1}^{L}\frac{\|W_\ell Z^\sigma_{\ell-1}\|_F}{\|W_\ell\|_F\,\|Z^\sigma_{\ell-1}\|_F}\,\frac{\|Z^\sigma_{\ell-1}\|_F}{\|Z_{\ell-1}\|_F}
\ge \prod_{\ell=1}^{L}\frac{\|W_\ell Z^\sigma_{\ell-1}\|_F}{\|Z^\sigma_{\ell-1}\|_F}\,\frac{\|Z^\sigma_{\ell-1}\|_F}{\|Z_{\ell-1}\|_F}. \tag{5}
\]
We split the norm of the activations using the inequalities
\[
\frac{\|W_\ell Z^\sigma_{\ell-1}\|_F}{\|W_\ell\|_F\,\|Z^\sigma_{\ell-1}\|_F}
\le \frac{\|W_\ell\|_{op}\,\|Z^\sigma_{\ell-1}\|_F}{\|W_\ell\|_F\,\|Z^\sigma_{\ell-1}\|_F}
= \frac{\|W_\ell\|_{op}}{\|W_\ell\|_F}
\quad\text{and}\quad
\frac{\|W_\ell Z^\sigma_{\ell-1}\|_F}{\|W_\ell\|_F\,\|Z^\sigma_{\ell-1}\|_F}
\le \frac{\|W_\ell\|_F\,\|Z^\sigma_{\ell-1}\|_{op}}{\|W_\ell\|_F\,\|Z^\sigma_{\ell-1}\|_F}
= \frac{\|Z^\sigma_{\ell-1}\|_{op}}{\|Z^\sigma_{\ell-1}\|_F},
\]
so we get
\[
\prod_{\ell=1}^{L}\min\Bigl\{\frac{\|W_\ell\|_{op}}{\|W_\ell\|_F},\,\frac{\|Z^\sigma_{\ell-1}\|_{op}}{\|Z^\sigma_{\ell-1}\|_F}\Bigr\}\,\frac{\|Z^\sigma_{\ell-1}\|_F}{\|Z_{\ell-1}\|_F}
\ge 1 - 2\Bigl(\frac{\|G\|_F}{\|Gv\|\,\|u\|} + \frac{1}{\|u\|\,\|v\|}\Bigr)\epsilon.
\]
By squaring and rearranging the terms, we get for the first ratio
\[
\frac{\sum_{i\ge 2} s_i^2(W_\ell)}{\sum_{i=1}^r s_i^2(W_\ell)} \le 4\Bigl(\frac{\|G\|_F}{\|Gv\|\,\|u\|} + \frac{1}{\|u\|\,\|v\|}\Bigr)\epsilon,
\]
which further simplifies to
\[
\frac{\sum_{i\ge 2} s_i^2(W_\ell)}{\sum_{i=1}^r s_i^2(W_\ell)} \le \frac{8\|G\|_F}{\|Gv\|\,\|u\|}\,\epsilon.
\]
Using inequality (4) we observe that
\[
\|Gv\|\,\|u\| \ge s^\star - \|G\|_F\,\epsilon,
\]
and hence
\[
\frac{\sum_{i\ge 2} s_i^2(W_\ell)}{\sum_{i=1}^r s_i^2(W_\ell)} \le \frac{8\|G\|_F}{s^\star - \|G\|_F\,\epsilon}\,\epsilon.
\]
We proceed similarly for the singular values of $Z_\ell$. For the matrices $Z_\ell$ and $Z^\sigma_\ell$ we note that the Frobenius inner product of $Z^\sigma_\ell$ and $Z_\ell - Z^\sigma_\ell$ is zero, so
\[
\|Z_\ell\|_F^2 = \|Z^\sigma_\ell\|_F^2 + \|Z_\ell - Z^\sigma_\ell\|_F^2.
\]
Rearranging gives the inequality
\[
\frac{\|Z^\sigma_\ell - Z_\ell\|_F^2}{\|Z_\ell\|_F^2} \le \frac{8\|G\|_F}{s^\star - \|G\|_F\,\epsilon}\,\epsilon.
\]

A.4.3 Strong Control

We can combine the above two statements to show that at the maximum escape speed, the final layers will be almost rank 1. To prove this, we first apply the first result to find a layer $\ell_0$ that is almost rank 1.
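The quantity $\sum_{i\ge 2} s_i^2 / \sum_{i=1}^r s_i^2$ that recurs in the low-rank bounds equals $1 - \|A\|_{op}^2/\|A\|_F^2$ and is cheap to evaluate directly. A minimal NumPy sketch (illustrative; the helper name and the test matrices are ours, not the paper's):

```python
import numpy as np

def rank1_deviation(A):
    """sum_{i>=2} s_i^2 / sum_i s_i^2 = 1 - ||A||_op^2 / ||A||_F^2.

    Zero iff A has rank <= 1; small when A is close to rank 1."""
    s = np.linalg.svd(A, compute_uv=False)  # singular values, in descending order
    return float((s[1:] ** 2).sum() / (s ** 2).sum())

rng = np.random.default_rng(0)
u, v = rng.random(20), rng.random(30)        # entrywise non-negative, as in Prop. A.5
A = np.outer(u, v)                           # exactly rank 1
X = 1e-3 * rng.standard_normal((20, 30))     # small perturbation, ||X||_F <= eps
print(rank1_deviation(A), rank1_deviation(A + X))
```

For the exactly rank-1 input the deviation is numerically zero, and it grows gracefully with the perturbation size, which is the behaviour the strong-control bound quantifies.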
We need an additional lemma ensuring that we can select non-negative singular vectors for the largest singular value of the activation $Z^\sigma_{\ell_0}$. Then we can apply the second result to conclude that all layers $\ell \ge \ell_0$ will be approximately rank 1.

Lemma A.6. For $A \in \mathbb{R}^{m\times n}$ with non-negative entries and $s_1$ its largest singular value, we can find right and left singular vectors $u_1, v_1$ for $s_1$ which are entrywise non-negative.

Proof. The right singular vector for $s_1$ satisfies
\[
A^\top A\,u_1 = s_1^2\,u_1,
\]
and since $A$ is non-negative, $A^\top A$ is also non-negative. We can now apply an extended version of the Perron-Frobenius theorem for non-negative matrices to select the eigenvector $u_1 \ge 0$ entrywise. Now we select
\[
v_1 = \frac{A u_1}{s_1},
\]
which is a left singular vector, as it satisfies $A u_1 = s_1 v_1$, and since $A$ and $u_1$ are non-negative, $v_1$ is also non-negative.

Theorem A.7. Consider an optimal escape direction $\theta^\star = \arg\min_{\|\theta\|^2=L}\operatorname{Tr}\bigl(G^\top f_\theta(X)\bigr)$ with optimal speed $s^* = -\min_{\|\theta\|^2=L}\operatorname{Tr}\bigl(G^\top f_\theta(X)\bigr)$. Then for all layers $\ell$ we have
\[
\frac{\sum_{i\ge 2} s_i^2(W_\ell)}{\sum_{i=1}^r s_i^2(W_\ell)},\;
\frac{\sum_{i\ge 2} s_i^2(Z^\sigma_\ell)}{\sum_{i=1}^r s_i^2(Z^\sigma_\ell)},\;
\frac{\|Z^\sigma_\ell - Z_\ell\|_F^2}{\|Z_\ell\|_F^2}
\;\le\; \frac{8c}{s^* - c\,\ell^{-\frac{1}{2}}}\,\ell^{-\frac{1}{2}},
\]
where $c = \sqrt{2}\,\|X\|_F\,\|G\|_F\,\sqrt{\log\|X\|_F + \log\|G\|_F - \log s^*}$.

Proof. We denote by $f_{\ell_2:\ell_1}(X) = W_{\ell_2}\sigma(W_{\ell_2-1}\cdots\sigma(W_{\ell_1}X)\cdots)$ the network when only the layers from $\ell_1$ to $\ell_2$ are applied. Using Proposition A.4 we can find layers $\ell_0 < \ell_1 < \cdots < \ell_n \le L$ that satisfy (3) and are approximately rank 1. We can select the minimum
of those, $\ell_0$. Because the argument is valid for at least $(1-p)L$ of the $L$ total layers, the earliest layer $\ell_0$ must occur on or before the $pL$-th layer. The matrix $Z^\sigma_{\ell_0}$ is entrywise non-negative, so we can apply Lemma A.6 to find non-negative singular vectors $u_1, v_1$ that additionally satisfy
\[
\|Z^\sigma_{\ell_0} - s_1(Z^\sigma_{\ell_0})\,u_1 v_1^\top\|_F^2
= \sum_{i=2}^{r} s_i^2
\le 2\,\|Z^\sigma_{\ell_0}\|_F^2\,\frac{\log\|X\|_F + \log\|G\|_F - \log s^*}{pL}
\le 2\,\|X\|_F^2\,\frac{\log\|X\|_F + \log\|G\|_F - \log s^*}{pL}.
\]
We now use the fact that for the layers $\ell \ge \ell_0$ we are at the optimal escape direction $\theta^*$:
\[
\theta^* = \arg\min_{\|\theta_{L:\ell_0+1}\|^2 = L-\ell_0}\operatorname{Tr}\bigl(G^\top f_{L:\ell_0+1}(Z^\sigma_{\ell_0})\bigr).
\]
We can do that since
\[
\min_{\|\theta\|^2=L}\operatorname{Tr}\bigl(G^\top f_\theta(X)\bigr)
\le \min_{\|\theta_{L:\ell_0+1}\|^2 = L-\ell_0}\operatorname{Tr}\bigl(G^\top f_{L:\ell_0+1}(Z^\sigma_{\ell_0})\bigr).
\]
We can now apply Proposition 3.3 to the sub-network $f_{L:\ell_0+1}$. For all layers $\ell \ge \ell_0$ we have
\[
\frac{\sum_{i\ge 2} s_i^2(W_\ell)}{\sum_{i=1}^r s_i^2(W_\ell)},\;
\frac{\sum_{i\ge 2} s_i^2(Z^\sigma_\ell)}{\sum_{i=1}^r s_i^2(Z^\sigma_\ell)},\;
\frac{\|Z^\sigma_\ell - Z_\ell\|_F^2}{\|Z_\ell\|_F^2}
\;\le\; \frac{8\|G\|_F\,\frac{c}{\sqrt{pL}}}{s^* - \frac{c\,\|G\|_F}{\sqrt{pL}}},
\]
where $c = \sqrt{2}\,\|X\|_F\,\sqrt{\log\|X\|_F + \log\|G\|_F - \log s^*}$. Since $p \in (0,1)$ was chosen arbitrarily, we can select any $\ell = pL$, so the bound holds for general $\ell$.

B MNIST Training Details

We train a 6-layer fully connected neural network (multilayer perceptron, MLP) without biases on the MNIST dataset, using the cross-entropy loss. The network comprises one input layer, four hidden layers, and one output layer. Each hidden layer contains 1000 neurons. The weight matrices have the following dimensions:

•Input layer: $W_1 \in \mathbb{R}^{784\times 1000}$
•Hidden layers: $W_i \in \mathbb{R}^{1000\times 1000}$ for $i = 2, 3, 4, 5$
•Output layer: $W_6 \in \mathbb{R}^{1000\times 10}$

The weights are initialized from a normal distribution with mean 0 and standard deviation $1/1000$. We train the model for 1000 epochs using a batch size of 32. The learning rate at each step is adjusted dynamically according to
\[
\mathrm{lr}(t) = \frac{10}{\|\theta(t)\|^4},
\qquad\text{where}\qquad
\|\theta(t)\|^2 = \sum_{i=1}^{6}\|W_i(t)\|_F^2
\]
and $\|\cdot\|_F$ denotes the Frobenius norm. Each MNIST image $x$ is normalized using the dataset-wide mean $\mu$ and standard deviation $\sigma$ of the pixel values:
\[
x \mapsto \frac{x/255 - \mu}{\sigma}.
\]
This standardization ensures that the input distribution has approximately zero mean and unit variance, which helps stabilize training.
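The setup above can be mocked up in a few lines. The NumPy sketch below is illustrative only (the actual experiments train with cross-entropy in a standard deep learning framework); it reproduces the architecture, the initialization scale, and the norm-adapted learning rate $\mathrm{lr}(t) = 10/\|\theta(t)\|^4$:

```python
import numpy as np

rng = np.random.default_rng(0)
widths = [784, 1000, 1000, 1000, 1000, 1000, 10]   # six weight matrices, no biases
W = [rng.normal(0.0, 1 / 1000, size=(m, n)) for m, n in zip(widths[:-1], widths[1:])]

def forward(x):
    """ReLU MLP without biases, matching the depth-6 architecture above."""
    for Wi in W[:-1]:
        x = np.maximum(x @ Wi, 0.0)
    return x @ W[-1]

def learning_rate(weights):
    """Norm-adapted step size lr(t) = 10 / ||theta(t)||^4."""
    norm_sq = sum(float((Wi ** 2).sum()) for Wi in weights)  # ||theta(t)||^2
    return 10.0 / norm_sq ** 2

batch = rng.standard_normal((32, 784))   # stands in for a standardized MNIST batch
print(forward(batch).shape, learning_rate(W))
```

Because the learning rate is inversely proportional to $\|\theta(t)\|^4$, the schedule automatically shrinks the step size as the parameter norm grows during the saddle escapes.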
A more complete picture of how the singular values of the weight matrices evolve during training is presented in Figure 3. We repeated the same experiment with a depth-4 fully connected network and report our findings in Figure 4.

Figure 3: Deeper layers show a stronger bias toward low-rank structure than earlier layers on MNIST. Top two rows: top 10 singular values of the weight matrices for layers 1-6 (including the input and output layers) over training time. Bottom: training loss trajectory on MNIST, with the 1st and 2nd saddle escapes marked.

C Supporting material for Section 4

C.1 Finding the maximal rank-one escape speed

Picking up the argument from the proof sketch of Example 1, we have a network function equal to $f(X) = \pm\sigma(W_1 X)$, where $W_1 = [\cos(\phi), \sin(\phi)]$ and the sign is chosen to give a positive escape speed. Applied to the dataset of Example 1, and noting that at most four points will
have nonzero function value at a given time, one finds an escape speed equal to
\[
s = \cos\Bigl(\xi + \frac{\pi}{4}\Bigr) - \cos(\xi) + \cos\Bigl(\xi - \frac{\pi}{4}\Bigr) - \cos\Bigl(\xi - \frac{\pi}{2}\Bigr), \tag{6}
\]
where $\xi = \phi \bmod \frac{\pi}{4}$. See Figure 5 for a depiction of this periodic function. Its maximal value of $s = \sqrt{2} - 1$ falls at multiples of $\frac{\pi}{4}$.

Figure 4: Depth-4 MLP with small initialization on MNIST. Top two rows: top 10 singular values of the weight matrices for layers 1-4 (including the input and output layers) over training time. Bottom: training loss trajectory on MNIST.

Figure 5: Visualization of Equation 6.

Figure 6: Visualization of all training runs of projected gradient descent on Example 1. This plot shows all training runs in the experiment of Figure 2.

NeurIPS Paper Checklist

1. Claims

Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?

Answer: [Yes]

Justification: The abstract and introduction describe our theoretical results at a higher abstraction level, with less technical detail, but the claims reflect the results.

Guidelines:
•The answer NA means that the abstract and introduction do not include the claims made in the paper.
•The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations.
•A No or NA answer to this question will not be perceived well by the reviewers.
•The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
•It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.

2. Limitations

Question: Does the paper discuss the limitations of the work performed by the authors?

Answer: [Yes]

Justification: In the discussion section, we discuss in detail what is still missing for a complete description of the saddle-to-saddle dynamics, and we also motivate some potential improvements.

Guidelines:
•The answer NA means that the paper has no limitation, while the answer No means that the paper has limitations, but those are not discussed in the paper.
•The authors are encouraged to create a separate "Limitations" section in their paper.
•The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.
•The authors should reflect on the scope of the claims made, e.g.,
if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.
•The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.
•The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.
•If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.
•While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.

3. Theory assumptions and proofs

Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?

Answer: [Yes]

Justification: The statements of the theorems/propositions include all assumptions. All proofs are in the appendix.

Guidelines:
•The answer NA means that the paper does not include theoretical results.
•All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
•All assumptions should be clearly stated or referenced in the statement of any theorems.
•The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.
•Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material.
•Theorems and Lemmas that the proof relies upon should be properly referenced.

4. Experimental result reproducibility

Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?

Answer: [Yes]

Justification: The experimental details can be found in the appendix.

Guidelines:
•The answer NA means that the paper does not include experiments.
•If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.
•If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
•Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution
is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.
•While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example:
(a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
(b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.
(c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).
(d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.

5. Open access to data and code

Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?

Answer: [Yes]

Justification: MNIST is easily accessible, and the experimental details are given in the appendix.
Guidelines:
•The answer NA means that the paper does not include experiments requiring code.
•Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
•While we encourage the release of code and data, we understand that this might not be possible, so "No" is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
•The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
•The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.
•The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
•At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
•Providing as much information as possible in supplemental material
(appended to the paper) is recommended, but including URLs to data and code is permitted.

6. Experimental setting/details

Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?

Answer: [Yes]

Justification: The details of the experiments can be found in the appendix.

Guidelines:
•The answer NA means that the paper does not include experiments.
•The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
•The full details can be provided either with the code, in appendix, or as supplemental material.

7. Experiment statistical significance

Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?

Answer: [No]

Justification: The experiments are mostly there for illustration purposes; we therefore favored readability over adding error bars. Also, most plots are not really amenable to error bars.

Guidelines:
•The answer NA means that the paper does not include experiments.
•The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
•The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
•The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.).
•The assumptions made should be given (e.g., Normally distributed errors).
•It should be clear whether the error bar is the standard deviation or the standard error of the mean.
•It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified.
•For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g., negative error rates).
•If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.

8. Experiments compute resources

Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?

Answer: [No]

Justification: Our experiments are small scale, and can easily be reproduced on any GPU-capable computer.

Guidelines:
•The answer NA means that the paper does not include experiments.
•The paper should indicate the type of compute workers (CPU or GPU, internal cluster, or cloud provider), including relevant memory and storage.
•The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
•The paper should disclose whether the full
research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper).

9. Code of ethics

Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics (https://neurips.cc/public/EthicsGuidelines)?

Answer: [Yes]

Justification: We have read the ethics guidelines and made sure that our work conforms to them.

Guidelines:
•The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
•If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
•The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).

10. Broader impacts

Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?

Answer: [NA]

Justification: Our work is theoretical/fundamental; there is therefore no societal impact to meaningfully discuss.

Guidelines:
•The answer NA means that there is no societal impact of the work performed.
•If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
•Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
•The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out.
For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate deepfakes faster.
• The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
• If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).

11. Safeguards

Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?

Answer: [NA]

Justification: No dataset is released.

Guidelines:
• The answer NA means that the paper poses no such risks.
• Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere
to usage guidelines or restrictions to access the model or implementing safety filters.
• Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
• We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best-faith effort.

12. Licenses for existing assets

Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?

Answer: [NA]

Justification: No dataset.

Guidelines:
• The answer NA means that the paper does not use existing assets.
• The authors should cite the original paper that produced the code package or dataset.
• The authors should state which version of the asset is used and, if possible, include a URL.
• The name of the license (e.g., CC-BY 4.0) should be included for each asset.
• For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
• If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
• For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
• If this information is not available online, the authors are encouraged to reach out to the asset's creators.

13. New assets

Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?

Answer: [NA]

Justification: No new assets.

Guidelines:
• The answer NA means that the paper does not release new assets.
• Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
• The paper should discuss whether and how consent was obtained from people whose asset is used.
• At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.

14. Crowdsourcing and research with human subjects

Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?

Answer: [NA]

Justification: No crowdsourcing.

Guidelines:
• The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
• Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
• According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.

15. Institutional review board (IRB) approvals or equivalent for research with human subjects

Question: Does
the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?

Answer: [NA]

Justification: No crowdsourcing.

Guidelines:
• The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
• Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
• We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
• For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.

16. Declaration of LLM usage

Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required.

Answer: [NA]

Justification: No LLM use.

Guidelines:
• The answer NA means that the core method development in this research does not involve LLMs as any important, original, or non-standard components.
• Please refer to our LLM policy (https://neurips.cc/Conferences/2025/LLM) for what should or should not be described.
arXiv:2505.21724v1 [cs.CV] 27 May 2025

OmniResponse: Online Multimodal Conversational Response Generation in Dyadic Interactions

Cheng Luo1, Jianghui Wang1, Bing Li1∗, Siyang Song2, Bernard Ghanem1
1King Abdullah University of Science and Technology, 2University of Exeter
Project Page: https://omniresponse.github.io/

Abstract

In this paper, we introduce Online Multimodal Conversational Response Generation (OMCRG), a novel task that aims to online generate synchronized verbal and non-verbal listener feedback, conditioned on the speaker's multimodal input. OMCRG reflects natural dyadic interactions and poses new challenges in achieving synchronization between the generated audio and facial responses of the listener. To address these challenges, we innovatively introduce text as an intermediate modality to bridge the audio and facial responses. We hence propose OmniResponse, a Multimodal Large Language Model (MLLM) that autoregressively generates high-quality multimodal listener responses. OmniResponse leverages a pretrained LLM enhanced with two novel components: Chrono-Text, which temporally anchors generated text tokens, and TempoVoice, a controllable online TTS module that produces speech synchronized with facial reactions. To support further OMCRG research, we present ResponseNet, a new dataset comprising 696 high-quality dyadic interactions featuring synchronized split-screen videos, multichannel audio, transcripts, and facial behavior annotations. Comprehensive evaluations conducted on ResponseNet demonstrate that OmniResponse significantly outperforms baseline models in terms of semantic speech content, audio-visual synchronization, and generation quality.

1 Introduction

Generating realistic human conversational responses has substantial potential across numerous applications, spanning from human-computer interactions [33], immersive metaverse experiences [23], to mental health interventions [24].
However, human communication is inherently multimodal and complex. In face-to-face interactions, speakers convey their messages not only through spoken language but also through non-verbal cues, such as lip movements and facial expressions. Correspondingly, listeners provide multimodal responses consisting of verbal (e.g., audible affirmations or disapprovals) and non-verbal responses (e.g., subtle head nods). While considerable efforts [8,60] have been dedicated to modeling text dialogue, particularly in language-based interfaces [29], modeling multimodal conversational interactions has been much underexplored. In this paper, we explore a new task: learning to simultaneously generate verbal and non-verbal listener¹ responses in an online dyadic conversation setting, conditioned on the speaker's verbal and non-verbal inputs (see Figure 1). We refer to this task as Online Multimodal Conversational Response Generation. Although various audio-to-video generation methods (e.g., talking head generation [74]) have shown impressive performance, these methods focus on synthesizing visual content aligned

∗Corresponding author.
¹Previous studies [6,17] defined a speaker–listener framework for dyadic interactions, in which the listener both attends to the speaker's utterances and provides verbal and nonverbal feedback.

Preprint. Under review.

Figure 1: Illustration of the new OMCRG task. (a) In offline tasks, the generation model generates the listener's full response only after receiving the entire input sequence from the speaker. (b) In contrast, the OMCRG task requires sequentially processing the speaker's incoming input and generating multimodal responses for the listener on the fly.
with input audio signals, which ignores explicit modeling of multimodal conversational
interactions. Recent studies [37,43,54] propose to generate facial reactions for a listener; however, these methods overlook verbal responses, which are essential to engage fully in dialogue. The OMCRG task is complex and poses major challenges in three aspects. First, it is non-trivial to directly achieve synchronization between the generated audio and facial reactions of the listener for the OMCRG task. As revealed in existing talking-head works [74,59], achieving precise alignment between facial motion and audio is already challenging, even when the entire audio signal is given. In contrast, OMCRG requires generating both audio and facial reactions simultaneously and incrementally. Such online and multimodal generation settings make face–audio synchronization much more difficult, due to the high variability and semantic ambiguity of the audio modality. Second, due to the online setting, the model has to reason over partial speaker input and generate audio-visual responses on the fly, which requires both powerful audio-visual understanding and generation abilities. While powerful pre-trained models have been developed for language and vision, audio modeling remains comparatively underdeveloped, making it more challenging to generate expressive and appropriate audio and facial reactions. Third, the lack of high-quality datasets for dyadic multimodal interaction significantly hinders the development of OMCRG. We address the above challenges by proposing a unified framework, OmniResponse, which autoregressively generates high-quality multimodal listener responses. Rather than directly synchronizing generated audio and facial reactions, our key insight is to introduce text as an intermediate modality for the OMCRG task. Compared with audio, text offers clearer semantics and reduces uncertainty, making it more tractable for learning multimodal reaction generation.
However, text is a static modality without inherent temporal information, posing challenges for synchronizing spoken words with visual frames in an autoregressive generation setting. To overcome this, we introduce a Multimodal Large Language Model augmented with two innovative modules: Chrono-Text and TempoVoice. The Chrono-Text module temporally anchors generated textual tokens by incorporating additional tokens (markers) that explicitly encode time, ensuring alignment between words and visual frames. TempoVoice is a controllable, online text-to-speech module designed to produce synchronized audio from these temporally annotated textual embeddings, ensuring accurate synchronization between audio and facial reactions. In addition, we construct a high-quality dataset named ResponseNet, comprising 696 dyadic conversation pairs. Each pair includes synchronized split-screen video streams of both speaker and listener, multichannel audio recordings, verbatim text transcriptions, and detailed facial-behavior annotations (i.e., facial expressions and head movements). Through extensive retrieval of scarce dyadic video data, rigorous content filtering, meticulous camera-shift alignment, and manual annotation, ResponseNet delivers a unique and valuable resource for benchmarking OMCRG. Our contributions are summarized as follows: (1) we present OmniResponse, the first online model to jointly process and generate synchronized streams of conversational human behavior, establishing a foundation for future work in human–agent interaction; (2) we introduce ResponseNet, an annotated dyadic conversation dataset and benchmark, enabling standardized evaluation of OMCRG models.

2 Related Work

Facial Reaction Generation. Facial Reaction Generation (FRG) [56,75] is a particularly challenging new task, as it requires predicting non-deterministic human facial reactions under different contexts.
Early approaches to FRG [21,22] relied on Generative Adversarial Networks (GANs) [39,16], typically conditioning the generation process on the speaker's visual-speech behaviors. Since FRG is a non-deterministic process (i.e., different facial reactions can be triggered by the same speaker behavior [56]), recent advances have shifted towards more sophisticated generative frameworks. For example, Ng et al. [43] introduced a non-deterministic approach based on Variational Autoencoders (VAEs) [25], which enabled sampling diverse human facial motions. This work was complemented by a novel dataset containing paired recordings of active speakers and silent listeners, providing essential training data for modeling natural reactions. Zhou et al. [75] developed a specialized speaker-listener video dataset for head motion generation, which is somewhat limited by its relatively short clip durations (median length of 9.0 sec) and modest dataset scale (1.58 hours total), constraining their model's ability to learn long-term temporal dependencies. More recent works have attempted to address these limitations through innovative architectural choices or larger-scale datasets [54,55]. Luo et al. [37] and Zhu et al. [76] proposed transformer-based [63] VAE and diffusion models [57,18], respectively, training them on a hybrid collection of videos from three different human-human dyadic interaction datasets [10,52,46]. While such FRG approaches achieved temporal alignment and more diverse facial reactions, they fail to produce multimodal human behavior responses such as voice and spoken words. Furthermore, the datasets they used are multilingual and lack the listener's audio stream, comprehensive multimodal annotations, and precise timestamps, limiting their utility for training online models for multimodal human response generation.

Spoken Dialogue Models.
Spoken dialogue models generate natural speech responses in real time, requiring systems to process both the verbal content and the paralinguistic elements of communication. Early approaches, including AudioPaLM [53], Spectron [42], and SpeechGPT [69], adopted pipelines that combine automatic speech recognition (ASR), text generation, and text-to-speech (TTS) synthesis. However, their requirement to complete the entire response before speech generation makes them unsuitable for live human-computer interactions. Recent developments [40,14,45] have shifted towards end-to-end approaches that directly model speech-to-speech generation. Representative examples include Moshi [14] and dGSLM [45], which operate as full-duplex speech dialogue systems capable of processing continuous speaker input while generating appropriate vocal responses. Although these advances are significant, they focus exclusively on the speech and text modalities, overlooking the crucial visual aspects of human communication. Even recent work by Park et al. [47] that includes visual-speech data is limited to intermittent speaker-listener interactions.

Autoregressive Generative Models. Transformer-based autoregressive models [63] have revolutionized numerous domains in AI, demonstrating remarkable success in language modeling [8,60], multi-modal processing [34,3,28,41], and generative tasks [51,73,67,66,65,58]. Their success can be attributed to their inherent scalability and ability to unify multi-modal training under a single autoregressive objective, enabling seamless integration of different data modalities. The adaptation of transformers to visual tasks was pioneered by approaches such as VQ-VAE [62] and VQGAN [15], which introduced effective methods for quantizing visual information into discrete tokens. They align visual generation with the
successful paradigm of language modeling by employing decoder-only transformers to predict sequences of image tokens. Subsequent research [11] has focused on enhancing both the efficiency of tokenization processes [38,31] and sampling procedures [68], while simultaneously scaling up model architectures to handle increasingly complex tasks. Recent trends have further pushed towards unified architectures capable of handling multiple modalities [41,71] and diverse tasks [73,67] within a single autoregressive framework.

3 Methodology

Problem Definition. Let F^s_t and A^s_t be the speaker's facial and audio cues at time t, respectively. Given the speaker's streaming facial sequence F^s_{1:t} and audio sequence A^s_{1:t} from time 1 to t, the goal of OMCRG is to generate, online, the listener's facial reactions F^l_t and audio feedback A^l_t at time step t.

Figure 2: Overview of the proposed OmniResponse. The model takes textual conversational history and newly arriving multimodal information (e.g., facial cues) from the speaker and listener as input, and generates temporally synchronized facial and textual responses for the listener by leveraging a pre-trained LLM enhanced with our proposed Chrono-Text Markup. The generated text embeddings are converted into audio synchronized with the facial response by the proposed TempoVoice module.

Such multimodal generation has been largely underexplored, in contrast to recent works [75,37,14,69] that mainly focus on single-modal response generation. To provide natural responses, it is crucial to ensure that the generated facial reactions and audio are temporally synchronized and react appropriately to the speaker. However, this is significantly challenging due to the inherent difficulty of online audio-visual understanding and generation.
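To make the online constraint concrete, the following is a minimal sketch of the OMCRG interaction loop (a hypothetical stub, not the authors' implementation): at each step t, the model sees only the speaker's causal prefix up to t plus its own past outputs, and must emit the listener's response for that step.

```python
# Sketch of the OMCRG online setting (illustrative; function names are
# hypothetical). At step t only the prefix [0..t] of the speaker's input
# is visible -- the full sequence is never available in advance.

def omcrg_stream(model_step, speaker_frames, speaker_audio):
    """Online loop: emit one listener response per incoming speaker step."""
    listener_frames, listener_audio = [], []
    for t in range(len(speaker_frames)):
        # Only the causal prefix is passed to the model at step t.
        f_l, a_l = model_step(speaker_frames[: t + 1],
                              speaker_audio[: t + 1],
                              listener_frames, listener_audio)
        listener_frames.append(f_l)
        listener_audio.append(a_l)
    return listener_frames, listener_audio

# Toy "model" standing in for OmniResponse: scale the speaker's latest cue.
def toy_step(fs_prefix, as_prefix, fl_hist, al_hist):
    return fs_prefix[-1] * 0.5, as_prefix[-1] * 0.5
```

The point of the sketch is the data flow, not the model: an offline system would instead receive `speaker_frames` and `speaker_audio` whole before producing any output.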
Instead of direct audio-visual generation, we introduce text as an intermediate modality and decompose OMCRG into two subproblems: (1) text–face response generation: generating temporally aligned facial reactions F^l_t and textual response W^l_t; (2) synchronous text-to-speech synthesis: converting the textual response to audio waveform segments A^l_t aligned with the facial reactions. However, the absence of temporal information in text prevents temporal alignment with facial reactions and audio, challenging both subproblems. We address this issue with two novel modules.

Overview. We present OmniResponse, a novel framework for the OMCRG task (see Figure 2), where OmniResponse is a new MLLM enhanced by two proposed key components: Chrono-Text Markup and TempoVoice. In particular, OmniResponse leverages the capability of a pretrained LLM to understand and interpret the speaker's multimodal inputs and autoregressively generate meaningful responses in terms of text and facial reactions. To address the lack of temporal information in text, the proposed Chrono-Text Markup embeds explicit temporal marks between text tokens, endowing the input and output text with time-aware embeddings and ensuring precise alignment with the generated facial reactions. Furthermore, the proposed TempoVoice generates audio responses temporally synchronized with both the generated textual response and the listener's facial movements.

3.1 OmniResponse

Model Architecture. As shown in Figure 2, OmniResponse processes multiple modalities from the speaker and the listener, temporally aligns the different modalities, and outputs synchronous multimodal responses to the speaker. In particular, at each time step t, OmniResponse consumes: (1) Static text inputs: a task-specific instruction prompt W_instruct and the conversation history
prior to time τ (τ < t), denoted W_history,<τ; and (2) Temporal inputs: the previously generated facial features of the listener F̂^l_{τ:t−1}, the facial features of the speaker F^s_{τ:t−1}, and the accumulated text sequences from both participants (W^s_{τ:t−1}, Ŵ^l_{τ:t−1}) over the interval [τ, t−1]. Using these inputs, OmniResponse predicts the next facial features F̂^l_t, the verbal response Ŵ^l_t, and the corresponding speech segment Â^l_µ in the current frame, ensuring precise temporal alignment across all modalities. Formally, we define this process as:

{F̂^l_t, Â^l_µ, Ŵ^l_t} = M(W_instruct, W_history,<τ, F^s_{τ:t−1}, F̂^l_{τ:t−1}, W^s_{τ:t−1}, Ŵ^l_{τ:t−1}).

Vision Projection. We introduce a vision projection layer to enable the pretrained LLM (Phi-3.5-mini-instruct with 3.8B parameters [1]) to process visual facial features. The layer is implemented as a multilayer perceptron (MLP) that maps the listener's and speaker's past facial features F̂^l_{1:t−1} and F^s_{1:t−1} into embedding features V_{1:t−1} aligned with the LLM token space. During autoregressive generation, the MLLM employs causal self-attention to model temporal dependencies between the next token and previous ones, and outputs the next listener vision embedding V̂^l_t.

Vision Decoder. A learnable vision decoder, comprising transformer layers, converts V̂^l_t back into the original coefficient space to produce the predicted listener facial coefficients F̂^l_t. Subsequently, a pre-trained visual renderer maps these visual coefficients to 2D frames, using a given portrait image. Please refer to the appendix for additional details.

Chrono-Text Markup. Visual frames inherently encode temporal information, whereas text tokens are static and lack any temporal dimension. Additionally, visual frames and textual tokens typically differ in length due to their fundamentally different modalities, making unified autoregressive prediction challenging.
To resolve this mismatch, we propose Chrono-Text Markup, a novel yet straightforward approach that explicitly embeds temporal information into textual data, aligning the textual sequence precisely with the visual frame sequence. Unlike prior approaches such as TimeMarker [12], which inserts timestamps only between visual frames, or the method by Ng et al. [44], which integrates timestamp embeddings into textual tokens, our method employs only two special markers, ensuring that the textual and visual sequences have identical lengths. Specifically, we insert two special tokens into the transcript: [PAUSE] to denote silent intervals between utterances, and [LASTING] to indicate that the previous textual word continues being spoken at the current time. Each text token is placed between pause and lasting tokens.

Multimodal Context Modeling. Our synchronous multimodal LLM integrates both static and dynamic inputs. Static inputs: the instruction prompt and the accumulated conversation history. Dynamic inputs: frame-aligned visual embeddings and timestamped textual tokens for both speaker and listener. All tokens are jointly processed by an omni-attention mechanism that enforces causal, cross-modal interactions. Under this operation, each visual token attends to preceding visual tokens and to text tokens marked by chrono-text markers at earlier timestamps; similarly, each dynamic text token attends to past visual and textual tokens. However, this omni-attention prevents dynamic tokens from looking at future tokens. This ensures the generation adheres to temporal dynamics and cross-modal interactions. Meanwhile, static tokens remain globally accessible, ensuring that
every dynamic update remains guided by the overarching instructions.

Figure 3: Architecture of TempoVoice. TempoVoice transforms textual hidden-state embeddings into audio segments.

TempoVoice. Generating natural speech that is precisely synchronized with text and facial frames poses a significant challenge. To address this, we introduce a dedicated synthesis pipeline, TempoVoice. Our framework begins by combining the listener's voiceprint, extracted via the Spark-TTS global tokenizer [64] to capture speaker identity, with the hidden states of the generated text (see Figure 3). We then apply sinusoidal positional encodings to the merged embeddings. Since audio-token sequences typically differ in length from visual frames and textual tokens, we prepend a series of zero-initialized placeholder tokens, each endowed with positional information. These placeholders serve as queries in a cross-attention module within a Transformer decoder, attending over the fused text–voice representations. This mechanism enables fully synchronous, autoregressive generation of audio tokens in lockstep with visual frames and text tokens. Finally, a linear projection layer maps the decoder outputs to logits over the discrete audio-codec vocabulary. The decoder logits are then quantized into discrete audio semantic tokens Â_µ, as defined by the Spark-TTS audio tokenizer [64]. Conditioned on these semantics and the global speaker-identity embeddings, the tokenizer reconstructs the continuous waveform segment.
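The Chrono-Text Markup introduced earlier can be sketched as a simple tokenization step. The sketch below is illustrative only: it assumes word-level timestamps and a fixed frame rate; [PAUSE] and [LASTING] are the two markers from the paper, while the function and its arguments are hypothetical.

```python
# Sketch of Chrono-Text Markup (illustrative, not the authors' code): align a
# word-level transcript with a fixed-rate frame sequence using two special
# tokens, so the text and video sequences have identical length.

PAUSE, LASTING = "[PAUSE]", "[LASTING]"

def chrono_text(words, fps, num_frames):
    """words: list of (word, start_sec, end_sec); returns num_frames tokens."""
    tokens = [PAUSE] * num_frames          # silence by default
    for word, start, end in words:
        first = int(start * fps)
        last = max(first, int(end * fps) - 1)
        tokens[first] = word               # the word appears at its onset frame
        for i in range(first + 1, min(last + 1, num_frames)):
            tokens[i] = LASTING            # the word is still being spoken
    return tokens
```

Because the output always has exactly `num_frames` entries, one text token can be paired with each visual frame during autoregressive prediction, which is the length-matching property the markup is designed to provide.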
3.2 Training Objectives

To train OmniResponse, the training objective is a weighted combination of a text generation loss L_text, a vision reconstruction loss L_vision, and an audio generation loss L_audio:

L = L_text + λ_vision · L_vision + λ_audio · L_audio,   (1)

where λ_vision and λ_audio are scaling factors balancing the text, vision, and audio loss terms.

Text Generation Loss. The text loss encourages accurate next-token prediction conditioned on both speaker context and past listener states:

L_text = −Σ_t log p_θ(W^l_t | W_instruct, W_history,<τ, F^s_{τ:t−1}, F̂^l_{τ:t−1}, W^s_{τ:t−1}, Ŵ^l_{τ:t−1}).   (2)

Vision Reconstruction Loss. To align predicted and ground-truth facial dynamics, we apply an ℓ2 reconstruction loss on the listener's feature embeddings:

L_vision = Σ_t ‖F̂^l_t − F^l_t‖²₂.   (3)

Audio Generation Loss. The audio loss operates over discrete semantic tokens A^l_µ, indexed by µ, which correspond to frame indices t = µk (k reconciles the higher audio sampling rate with the frame-based rate of visual/text tokens). We maximize the likelihood of each token conditioned on previous audio semantics and the listener's hidden states:

L_audio = −Σ_µ log p_θ(A^l_µ | A^l_{<µ}, H_{t−k+1:t}),   (4)

where H_{t−k+1:t} denotes the model's hidden representations for the corresponding listener text tokens Ŵ^l_{t−k+1:t}. This formulation ensures coherent alignment across modalities throughout generation.

4 Dataset Construction

Existing publicly available dyadic video datasets do not satisfy the requirements of the OMCRG task (Table 1). For example, mono-view talking-head datasets and offline dialogue corpora (e.g., MultiDialog [47]) do not offer split-screen recordings that capture speaker and listener simultaneously. Others, such as IEMOCAP [9], feature predominantly side-profile views recorded in noisy environments and provide only mixed audio channels, thus preventing separate analysis of each participant's speech. Furthermore, datasets such as ViCo [74], ICD [43], and REACT2024 [54] lack comprehensive textual annotations, suffer from low video
resolution [74,9,54], or exhibit inconsistent spoken languages [54]. To fill this dataset gap, we introduce ResponseNet, which comprises 696 temporally synchronized dyadic video pairs, totaling over 14 hours of natural conversational exchanges. Each pair provides high-resolution (1024×1024) frontal-face streams for both speaker and listener, along with separated audio channels to support fine-grained analysis of verbal and nonverbal behavior. Table 1 shows that ResponseNet is the only dataset that satisfies the key requirements: (1) online video streaming, (2) separate audio channels, and (3) textual word-level annotations for both participants. The construction of ResponseNet follows a rigorous workflow that integrates automated tools with extensive human-in-the-loop curation. (1) Initially, split-screen videos featuring simultaneous appearances of speaker and listener are sourced from YouTube according to predefined topic and quality criteria. These clips are then filtered to remove low-resolution, noisy, or frequent camera transitions. (2) Human annotators perform a thorough review to correct camera-view misalignments and ensure precise temporal synchronization between streams. (3) Next, mixed-channel audio tracks are automatically separated into discrete speaker and listener channels using speaker-separation tools such as MossFormer2 [72] and subsequently verified and refined by experts. Finally, word-level transcripts are generated via automatic speech recognition [50] and meticulously proofread to guarantee accuracy.

Table 1: Comparison of conversation datasets. S and L denote speaker and listener data, respectively. ResponseNet provides complete multimodal data (speaker + listener) with separated audio channels.
Dataset          | Video | Audio | Text | Online | Separated Audios | # Dialogues | Total Duration
MultiDialog [47] | S+L   | S+L   | S+L  | ✗      | ✓                | 8,733       | 339.7h
ICD [43]         | S+L   | S+L   | ✗    | ✓      | ✓                | 182,132     | 72h
ViCo [75]        | S+L   | S     | ✗    | ✓      | ✗                | 483         | 1.6h
REACT2024 [55]   | S+L   | S+L   | ✗    | ✓      | ✓                | 5,919       | 71.8h
IEMOCAP [9]      | S+L   | S+L   | S+L  | ✓      | ✗                | 151         | 11.5h
ResponseNet      | S+L   | S+L   | S+L  | ✓      | ✓                | 696         | 14.2h

Figure 4: Statistics of ResponseNet. (a) Distribution of video clip durations. (b) Distribution of dyadic conversation topics. (c) Word cloud of spoken words in dyadic conversations.

By combining automation with meticulous manual oversight across data sourcing, preprocessing, alignment, audio separation, and annotation, this pipeline yields a high-quality, richly annotated dyadic video corpus ideally suited for multimodal conversational response generation. The statistics of ResponseNet are shown in Figure 4. The durations of speaker-listener video clips in ResponseNet range from 27.13 seconds (short conversations) to 863.13 seconds (long conversations). Figure 4(a) shows that the average clip duration in ResponseNet is 73.39 seconds, significantly longer than that of other dyadic datasets such as REACT2024 (30 seconds) and ViCo (9 seconds). This extended duration ensures that each clip captures sufficient conversational exchanges. Figure 4(b) illustrates that the conversations span a diverse range of topics, including professional discussions (e.g., economic interviews, news commentaries), emotionally driven interactions (e.g., intimate conversations), educational settings (e.g., teaching interviews), and interdisciplinary expert discussions. Figure 4(c) presents a word cloud highlighting the most frequent words in the conversations. Such diversity shows that ResponseNet captures rich and varied human-human interactions rather than being restricted to narrow or monotonic conversation patterns.
Words related to personal relationships (e.g., "love," "family," "friends")
https://arxiv.org/abs/2505.21724v1
and broader real-world topics (e.g., "world," "market," "history," "school") are prominent.

5 Experiments

Implementation Details. Our framework was implemented using PyTorch [48] and trained on four NVIDIA Tesla A100 GPUs. Model optimization used the AdamW optimizer [26] with a learning rate of 2×10⁻⁵, β1 = 0.9, β2 = 0.999, and a weight decay of 10⁻⁴, together with a cosine learning-rate scheduler. Training was executed with a batch size of one for 2,000 epochs. Additionally, we fine-tuned the LLM using the LoRA [20] technique. More implementation details are provided in the supplementary material.

Evaluation Metrics. Quantitatively evaluating the quality of multimodal response generation remains non-trivial. We therefore employ comprehensive metrics to evaluate generation results across the text, audio, and visual modalities. For text responses, we use METEOR [7], BERTScore F1 [70], and ROUGE-L [32] to measure how appropriate and natural the generated responses are, based on reference responses from the ResponseNet test set. We also adopt Distinct-2 [30] to evaluate diversity through the ratio of unique bi-grams. For audio responses, we adopt UTMOSv2 [5], a neural MOS predictor that estimates perceptual naturalness, and employ LSE-D (Lip–Speech Error Distance) [49, 13] to evaluate synchronization between generated speech and lip movements. For facial responses, we compute the Fréchet Distance (FD) [4] between real and generated facial-feature distributions, and the Fréchet Video Distance (FVD) [61] to assess the spatial–temporal visual quality of generated video sequences.

Table 2: Quantitative results on the ResponseNet test set.

| Model | METEOR ↑ | BERTScore F1 ↑ | ROUGE-L ↑ | Distinct-2 ↑ | LSE-D ↓ | UTMOSv2 ↑ | FD ↓ | FVD ↓ |
|---|---|---|---|---|---|---|---|---|
| Ground-Truth | – | – | – | 0.835 | 8.96 | 1.56 | – | – |
| Offline Text Dialogue Generation Systems | | | | | | | | |
| GPT-4o [2] | 0.167 | 0.805 | 0.079 | 0.928 | – | – | – | – |
| GPT-4 [2] | 0.163 | 0.822 | 0.082 | 0.960 | – | – | – | – |
| GPT-o1 [2] | 0.189 | 0.822 | 0.113 | 0.948 | – | – | – | – |
| Online Auditory Dialogue Generation System | | | | | | | | |
| Moshi [14] | 0.120 | 0.818 | 0.078 | 0.499 | – | 2.21 | – | – |
| Facial Reaction Generation Systems | | | | | | | | |
| ReactFace [37] | – | – | – | – | – | – | 32.72 | 340.28 |
| ViCo [75] | – | – | – | – | – | – | 57.13 | 325.65 |
| Online Multimodal Conversational Response Generation Baselines | | | | | | | | |
| LSTM [19] | 0.042 | 0.716 | 0.000 | 0.000 | 9.72 | 1.21 | 6.51 | 320.92 |
| Audio-visual LLM | 0.030 | 0.662 | 0.020 | 0.155 | 10.03 | 1.32 | 580.86 | 681.55 |
| OmniResponse (Ours) | 0.141 | 0.806 | 0.081 | 0.882 | 9.56 | 1.41 | 15.46 | 314.94 |

5.1 Quantitative Results

To the best of our knowledge, few prior works have explored the OMCRG task. We therefore build two baselines and compare them in Table 2: (1) an LSTM-based method employing a recurrent neural network [19] for temporal sequence modeling; and (2) an Audio-visual LLM that takes speaker–listener audio and visual inputs and leverages a pre-trained LLM to generate audio–visual frames autoregressively. Table 2 additionally lists the generation performance of representative single-modality methods, including offline text-only dialogue models (e.g., GPT variants), online audio-only generation models (e.g., Moshi), and facial reaction generation approaches. Unlike these methods, which each generate a single modality, our method enables online, synchronized generation across the audio, visual, and textual modalities for modeling human conversation.
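Of the evaluation metrics above, Distinct-2 is simple enough to sketch directly: it is the ratio of unique bigrams to total bigrams in the generated text. The implementation details below (whitespace tokenization, handling of very short outputs) are our assumptions, not taken from the paper.

```python
# Sketch of the Distinct-2 diversity metric: unique bigrams / total bigrams.
# Whitespace tokenization is an assumption; the paper's exact setup may differ.
def distinct_2(text: str) -> float:
    tokens = text.split()
    bigrams = list(zip(tokens, tokens[1:]))
    if not bigrams:  # fewer than two tokens: no bigrams to count
        return 0.0
    return len(set(bigrams)) / len(bigrams)

print(distinct_2("yes yes yes yes"))       # repetitive text -> low diversity
print(distinct_2("how can I help today"))  # all bigrams unique -> 1.0
```

This makes it clear why the degenerate LSTM baseline scores 0.000 on Distinct-2: output that endlessly repeats the same token contributes a single unique bigram.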
Table 2 shows that our OmniResponse achieves the best performance in dialogue speech content (METEOR, BERTScore F1, ROUGE-L, Distinct-2), audio
quality (UTMOSv2), audio–visual synchronization (LSE-D), as well as temporal consistency and visual quality (FVD). Although the LSTM baseline achieves a lower FD owing to its tendency to produce repetitive, static visual output, it fails to generate rich, synchronized multimodal responses. The Audio-visual LLM achieves much lower speech content quality (METEOR and BERTScore F1) and struggles with audio–visual synchronization (LSE-D). Although it leverages a powerful LLM, directly synchronizing generated audio with facial reactions remains challenging, especially in the absence of a strong audio foundation model. Instead, we introduce a novel framework that effectively adapts pre-trained LLMs for audio–visual generation via the proposed Chrono-Text Markup and TempoVoice.

5.2 Qualitative Results

Figure 5 presents a qualitative result. The synthesized listener remains silent while the speaker is speaking, then produces an immediate or slightly delayed response at the end of each speaker turn. This behavior demonstrates that OmniResponse effectively captures the temporal dynamics of online dyadic conversation and generates responses at appropriate timestamps. For example, between 100.97 s and 132.05 s, the listener interjects briefly between 120.13 s and 121.57 s in response to the speaker's ongoing content, reflecting natural human conversational interaction. In contrast, a conventional pipeline that chains ASR, dialogue generation, TTS, and talking-head components waits for a predefined silence threshold before producing an offline multimodal response, thus diminishing conversational behaviors such as interruptions, backchannels, questions, and immediate feedback. OmniResponse instead maintains the continuous flow of dyadic conversation by continuously modeling and generating synchronized time-series streams of textual, visual, and audio outputs.
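The <PAUSE> placeholders visible in Figure 5 suggest how a time-stamped transcript can be unrolled into a fixed-rate token stream whose silent slots are filled with explicit pause markers. The sketch below is our own assumption-laden reconstruction of that idea: the token rate, the marker name, and the helper `chrono_stream` are hypothetical and should not be read as the paper's actual Chrono-Text Markup scheme.

```python
# Hypothetical sketch: unroll (word, start_time) pairs into a token stream
# sampled at a fixed step, emitting "<PAUSE>" whenever no word starts in a
# slot. For simplicity, at most one word is emitted per time slot.
def chrono_stream(words, total_time, step=0.5):
    """words: list of (word, start_seconds); returns one token per time slot."""
    stream = []
    t = 0.0
    idx = 0
    while t < total_time:
        if idx < len(words) and words[idx][1] < t + step:
            stream.append(words[idx][0])  # a word starts in [t, t + step)
            idx += 1
        else:
            stream.append("<PAUSE>")      # silent slot
        t += step
    return stream

print(chrono_stream([("hello", 0.6), ("there", 1.2)], total_time=2.0))
```

A representation of this kind keeps the text channel on the same clock as the audio and video frames, which is what allows a language model to decide, slot by slot, whether to stay silent or interject.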
Figure 5: Qualitative results. Given the speaker's audio and video streams and corresponding utterances (left), OmniResponse autoregressively generates synchronized visual, audio, and textual response streams (right). For clarity, [LASTING] tokens are removed from the generated dialogue.

5.3 Ablation Studies

Effectiveness of Chrono-Text Markup. We construct baselines by removing the proposed Chrono-Text Markup from OmniResponse. In these baselines, each predicted word is assigned a timestamp indicating when it emerges; if this timestamp falls within a temporal window around the current time, the word is retained and appended to the spoken output; otherwise, it is discarded. As shown in the last rows of Table 3, incorporating Chrono-Text Markup significantly improves audio–visual synchronization, reducing the LSE-D score from 11.51 to 9.56. In addition, it enhances the semantic alignment of speech with the conversational context, increasing METEOR from 0.122 to 0.141 and BERTScore F1 from 0.766 to 0.806. Improvements in FD and UTMOSv2 further indicate that Chrono-Text Markup boosts the quality of the generated audio and facial responses. These results demonstrate the effectiveness of Chrono-Text Markup in generating high-quality multimodal responses.

Table 3: Ablation study on the effects of the proposed Chrono-Text Markup and TempoVoice.
| Chrono-Text Markup | TempoVoice | METEOR | BERTScore F1 | LSE-D | UTMOSv2 | FD |
|---|---|---|---|---|---|---|
| ✗ | ✗ | 0.090 | 0.755 | 13.64 | 1.21 | 596.27 |
| ✓ | ✗ | 0.128 | 0.778 | 11.91 | 1.23 | 19.58 |
| ✗ | ✓ | 0.122 | 0.766 | 11.51 | 1.39 | 23.42 |
| ✓ | ✓ | 0.141 | 0.806 | 9.56 | 1.41 | 15.46 |

Effectiveness of TempoVoice. To study the effect of TempoVoice, we remove it from our framework and instead directly feed the hidden states, trimmed or padded to match the target audio length, into a multi-layer perceptron to predict audio token logits. As shown in Table 3, removing TempoVoice degrades audio–visual synchronization and reduces the quality of the generated audio responses: UTMOSv2 drops from 1.41 to 1.23, and LSE-D increases from 9.56 to 11.91. These results highlight the importance of TempoVoice in temporally aligning audio with the other modalities and enhancing the quality of the generated audio.

6 Conclusion

We have presented OmniResponse, an online multimodal generation model that produces verbal and non-verbal listener responses to the multimodal behaviors of a speaker. OmniResponse integrates techniques for processing multimodal inputs, synchronizing across modalities, and aligning responses with the speaker's content. To enable evaluation of this task, Online Multimodal Conversational Response Generation in Dyadic Interactions, we introduced ResponseNet, a dataset containing parallel recordings of speaker and listener streams. Experimental results demonstrate that OmniResponse significantly improves speech semantic content, audio–visual synchronization, and audio and visual quality. Our model and dataset lay the foundation for future research in this emerging field.

Acknowledgments. This work is supported by the KAUST Center of Excellence for Generative AI under
award number 5940. The computational resources are provided by IBEX, which is managed by the KAUST Supercomputing Core Laboratory.

References

[1] Marah Abdin, Jyoti Aneja, Hany Awadalla, Ahmed Awadallah, Ammar Ahmad Awan, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Jianmin Bao, Harkirat Behl, et al. Phi-3 technical report: A highly capable language model locally on your phone. arXiv preprint arXiv:2404.14219, 2024.

[2] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.

[3] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: A visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35:23716–23736, 2022.

[4] Helmut Alt and Michael Godau. Computing the Fréchet distance between two polygonal curves. International Journal of Computational Geometry & Applications, 5(01n02):75–91, 1995.

[5] Kaito Baba, Wataru Nakata, Yuki Saito, and Hiroshi Saruwatari. The T05 system for the VoiceMOS Challenge 2024: Transfer learning from deep image classifier to naturalness MOS prediction of high-quality synthetic speech. In IEEE Spoken Language Technology Workshop, pages 818–824, 2024.

[6] Mikhail Bakhtin. The problem of speech genres. In Modern Genre Theory, pages 82–97. Routledge, 2014.

[7] Satanjeev Banerjee and Alon Lavie. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72, 2005.

[8] Tom B. Brown et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.

[9] Carlos Busso, Murtaza Bulut,
Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette N. Chang, Sungbok Lee, and Shrikanth S. Narayanan. IEMOCAP: Interactive emotional dyadic motion capture database. Language Resources and Evaluation, 42:335–359, 2008.

[10] Angelo Cafaro, Johannes Wagner, Tobias Baur, Soumia Dermouche, Mercedes Torres Torres, Catherine Pelachaud, Elisabeth André, and Michel Valstar. The NoXi database: Multimodal recordings of mediated novice-expert interactions. In Proceedings of the ACM International Conference on Multimodal Interaction, pages 350–359, 2017.

[11] Huiwen Chang, Han Zhang, Lu Jiang, Ce Liu, and William T. Freeman. MaskGIT: Masked generative image transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11315–11325, 2022.

[12] Shimin Chen, Xiaohan Lan, Yitian Yuan, Zequn Jie, and Lin Ma. TimeMarker: A versatile video-LLM for long and short video understanding with superior temporal localization ability. arXiv preprint arXiv:2411.18211, 2024.

[13] Joon Son Chung and Andrew Zisserman. Out of time: Automated lip sync in the wild. In Computer Vision – ACCV 2016 Workshops, pages 251–263. Springer, 2017.

[14] Alexandre Défossez, Laurent Mazaré, Manu Orsini, Amélie Royer, Patrick Pérez, Hervé Jégou, Edouard Grave, and Neil Zeghidour. Moshi: A speech-text foundation model for real-time dialogue. arXiv preprint arXiv:2410.00037, 2024.

[15] Patrick Esser, Robin Rombach, and Björn Ommer. Taming transformers for high-resolution image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12873–12883, 2021.

[16] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. Communications of the ACM, 63(11):139–144, 2020.

[17] Dirk K. J. Heylen. Understanding speaker-listener interaction.
In Annual Conference of the International Speech Communication Association, pages 2151–2154, 2009.

[18] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840–6851, 2020.

[19] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.

[20] Edward J. Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations, 2022.

[21] Yuchi Huang and Saad M. Khan. DyadGAN: Generating facial expressions in dyadic interactions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 11–18, 2017.

[22] Yuchi Huang and Saad M. Khan. Generating photorealistic facial expressions in dyadic interactions. In British Machine Vision Conference, page 201, 2018.

[23] Do Yuon Kim, Ha Kyung Lee, and Kyunghwa Chung. Avatar-mediated experience in the metaverse: The impact of avatar realism on user-avatar relationship. Journal of Retailing and Consumer Services, 73:103382, 2023.

[24] Everlyne Kimani, Timothy Bickmore, Ha Trinh, and Paola Pedrelli. You'll be great: Virtual agent-based cognitive restructuring to reduce public speaking anxiety. In International Conference on Affective Computing and Intelligent Interaction, pages 641–647, 2019.

[25] Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.

[26] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

[27] Raynard S. Kington, Stacey Arnesen, Wen-Ying Sylvia Chou, Susan J. Curry, David Lazer,