a focus on their application in knowledge management systems for digital assistants. The primary objective is to evaluate user preferences and performance when interacting with these interfaces, particularly in scenarios requiring structured information input, such as date handling and content categorization. While the proposed solution utilizes a voice interface for natural conversational updates, a GUI was also developed to enable a comparative study of the two approaches. Both interfaces are designed to serve the same purpose: facilitating the efficient and accurate maintenance of the digital assistant's knowledge base. The central hypothesis posits that as task complexity increases, users will prefer the voice interface over the graphical interface. This assumption is based on the premise that voice interfaces allow for natural, conversational input, reducing the cognitive effort required to structure and enter complex information (Schmidhuber et al. [2021]). The voice system further enhances usability by interpreting input in real time, extracting key details, verifying completeness, and prompting for clarification when necessary. In contrast, following the logic behind our hypothesis, graphical interfaces, while effective for simpler tasks due to their visual feedback and perceived control, may become cumbersome for complex tasks requiring lengthy or complicated inputs.

1 https://www.langchain.com/langgraph
2 https://gemini.google.com/

Figure 2: Voice CMS architecture

The experiment involved nine tasks, categorized into three arbitrary levels of complexity: easy, medium, and complex. Task complexity was determined by two factors: the length of the information to be entered and the level of detail of the content. Easy tasks required short, straightforward inputs, such as reporting a single event or issue. Medium tasks involved more detailed information, including specific dates and descriptions. Complex tasks required lengthy inputs combining multiple aspects, such as price changes, item removals, and the introduction of new items with detailed descriptions. Each task was designed to reflect a real-world scenario. For example, an easy task involved reporting a jacuzzi malfunction valid for the current day. A medium task required announcing a beekeepers' fair with specific dates, times, and details about the event. A complex task involved updating a restaurant menu, including price changes, the removal of an item, and the introduction of a new dish with ingredients and pricing. These tasks were designed to test the system's ability to handle varying levels of complexity and the users' ability to interact effectively with each interface. To approximate a neutral starting point, participants were given equal exposure to both interfaces through detailed instructions and a warm-up task to familiarize themselves with the systems. The warm-up task, which included example solutions for both interfaces, was excluded from the final analysis but served to reduce the learning curve and ensure participants were equally prepared in their understanding of both interfaces. The experiment was conducted using a web-based platform that ensured consistency across all participants. The platform randomized the order in which participants interacted with the two interfaces and ensured that all required data were collected. Participants completed the same nine tasks
in a fixed order ([M, M, E, C, C, E, C, M, E], where E, M, and C denote Easy, Medium, and Complex), which was pre-randomized with respect to task difficulty to minimize learning effects. This controlled setup allowed for a reliable comparison between the two interfaces, isolating the effects of task complexity and interface design on user performance and preferences. After each task, participants rated its difficulty using the Single Ease Question (SEQ) Sauro and Dumas [2009]. Upon completing all tasks for a given interface, participants evaluated its overall usability using the System Usability Scale (SUS) Brooke [1996]. Finally, after completing both sets of tasks, participants indicated their preferred interface (GUI, Voice CMS, or no preference) for each of the tasks and were also given the opportunity to share any additional observations or feedback in an open-ended comment field. In addition to these subjective measures, the study also collected objective metrics to assess interface performance. These included task completion time and the quality of task execution, which was rated on a 5-point scale based on predefined rules (applied manually in post-processing). Ratings of 4 or 5 indicated that the information produced was suitable for guest-facing dialogues, while lower scores reflected varying degrees of inaccuracy or incompleteness, with a score of 1 indicating that the content was essentially unusable. For the Voice CMS, further interaction-specific data were gathered, such as the number of user utterances during the dialogue, the need for corrections after presented summaries, and whether the summaries were utilized. These supplementary metrics provided deeper insights into user interaction with the voice interface.
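As a point of reference for the SUS figures reported below, the questionnaire's standard scoring (Brooke [1996]) maps ten 1-5 responses to a 0-100 scale. A minimal sketch in R, the language used for this paper's statistical models; the data frame sus_raw and its q1..q10 column names are illustrative assumptions, not the study's actual pipeline:

```r
# Standard SUS scoring: odd (positively worded) items contribute (response - 1),
# even (negatively worded) items contribute (5 - response); the sum is scaled by 2.5.
sus_score <- function(r) {
  odd  <- sum(r[c(1, 3, 5, 7, 9)] - 1)
  even <- sum(5 - r[c(2, 4, 6, 8, 10)])
  2.5 * (odd + even)  # yields a score in [0, 100]
}

# Hypothetical per-participant responses, one row each, columns q1..q10.
scores <- apply(sus_raw[, paste0("q", 1:10)], 1, sus_score)
c(mean = mean(scores), sd = sd(scores))
```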
5 Results

This section presents the analysis of data collected from a user study comparing the GUI and the Voice CMS. The study involved 7 participants (2 female, 5 male), aged between 33 and 51 years (M = 42, SD = 6.1). Each participant completed a series of tasks using both interfaces, along with corresponding usability surveys. All participants successfully finished the study, yielding a complete and balanced dataset of 126 task observations (63 for GUI, 63 for Voice CMS).

5.1 Descriptive Statistics

Overall usability assessments indicated a preference for the graphical interface. The mean SUS score for the GUI was 78.6 (SD = 21.9, range [40, 100], N=7), falling in the 'Good' range according to usability benchmarks Bangor et al. [2009] and approaching 'Excellent'. The Voice CMS received a mean SUS score of 67.5 (SD = 22.1, range [32.5, 95], N=7), which is close to the historical average for usability studies Sauro and Lewis [2012], often interpreted as 'Okay' or marginally 'Good'. Examining task-level metrics, objective performance (score, scale 1-5) was generally high for both interfaces, though slightly better for the GUI (M = 4.54, SD = 0.74) than for the Voice CMS (M = 4.40, SD = 0.89). A more distinct difference emerged in subjective ease-of-use (seq_score, scale 1-7, where 7 = Very Easy). Participants rated tasks performed with the GUI as noticeably easier (M = 5.86, SD = 1.27) compared to the Voice CMS (M = 5.10, SD = 1.73). As expected, perceived ease-of-use decreased for both interfaces as nominal task difficulty increased; for instance, mean SEQ scores dropped from 6.52 (GUI) and 6.00 (VCMS) for 'Easy' tasks to 5.05 (GUI) and 4.14 (VCMS) for 'Complex' tasks. Efficiency metrics also favoured the GUI, which yielded shorter average processing times (M = 179 s, SD = 124 s) compared to the Voice CMS (M = 204 s, SD = 134 s). Task completion time predictably increased with task difficulty for both systems. These initial descriptive results suggest potential advantages for the GUI in terms of overall usability, perceived ease-of-use, and efficiency, although objective performance was comparable. The following sections delve deeper into these findings through inferential statistical modeling, exploring how factors such as task difficulty, subjective perceptions, and task completion time influence user interface preference, while accounting for the dependencies inherent in the repeated-measures study design.

5.2 Influence of Task Difficulty on Preference

Figure 3: Preference Distribution by Task Difficulty

The initial hypothesis of the experiment was that user preference would shift towards the voice-controlled interface as task complexity increased. This assumption stemmed from the understanding that voice interfaces, with their natural language input, might reduce cognitive load during complex interactions. However, a preliminary analysis of preference distributions across task difficulty levels, visualized in Figure 3, suggested a potential trend reversal, with users possibly favoring the GUI for more complex tasks. It is important to acknowledge that this initial observation did not account for the repeated-measures design of the study, where each participant interacted with both interfaces across all difficulty levels. To examine the relationship between task complexity and interface preference while accounting for the nested structure of the data (126 task observations nested within 7 participants), a Bayesian multilevel categorical logistic regression model was fitted using the brms package in R. This approach allows the effect of task difficulty on interface preference to be isolated from individual variation in baseline preferences, and tests the hypothesis that the GUI is chosen more frequently as task complexity increases. The model predicted the user's interface preference (a three-level categorical outcome: 'Neutral', 'GUI', 'Voice CMS', with 'Voice CMS' serving as the reference category) from task difficulty (a three-level categorical predictor: 'Easy' as baseline, 'Medium', 'Complex'). Random intercepts were included for each participant (scenarioId) to account for individual differences in baseline preference patterns. The model was fitted using the No-U-Turn Sampler (NUTS) across 4 chains, and convergence diagnostics confirmed the reliability of the obtained estimates (all Rhat ≤ 1.01).
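A sketch of how such a model could be specified with brms is shown below; the data frame tasks and its column names are assumptions for illustration (the paper does not publish its analysis code), but the family, reference category, and random-effects structure follow the description above:

```r
library(brms)

# One row per task observation (126 in total); assumed columns:
#   preference: "Neutral", "GUI", or "VoiceCMS"
#   difficulty: "Easy" (baseline), "Medium", "Complex"
#   scenarioId: participant identifier
tasks$difficulty <- factor(tasks$difficulty,
                           levels = c("Easy", "Medium", "Complex"))

fit <- brm(
  preference ~ difficulty + (1 | scenarioId),  # random intercepts per participant
  data   = tasks,
  family = categorical(refcat = "VoiceCMS"),   # Voice CMS as reference category
  chains = 4, cores = 4, seed = 1              # NUTS sampling across 4 chains
)

summary(fit)  # inspect Rhat (should be <= 1.01) and the fixed-effect CrIs
```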
Significant variability in baseline preferences was observed across participants, confirming the necessity of the multilevel approach. The estimated standard deviation for the random intercepts was substantial both for the log-odds of preferring 'Neutral' versus 'Voice CMS' (SD = 2.41, 95% CrI [1.08, 4.65]) and for preferring 'GUI' versus 'Voice CMS' (SD = 2.19, 95% CrI [0.81, 4.70]). Examining the fixed effects revealed the influence of task difficulty on interface preference at the population level. For the baseline 'Easy' difficulty level, users showed
a strong and statistically credible preference for 'Voice CMS' over 'GUI' (Intercept Log-Odds = -11.92, 95% CrI [-36.27, -3.26]). The credible interval lying entirely below zero indicates lower odds of choosing the GUI compared to the Voice CMS for easy tasks. No significant difference was found between 'Neutral' and 'Voice CMS' preference at this baseline difficulty (Intercept Log-Odds = 0.16, 95% CrI [-2.02, 2.10]).

Figure 4: SEQ Difference (GUI - Voice CMS) by Preference

A key finding is that task difficulty significantly modulated interface preference. Compared to 'Easy' tasks, the log-odds of preferring 'GUI' over 'Voice CMS' increased substantially and credibly for both 'Medium' difficulty tasks (Estimate = 11.21, 95% CrI [2.62, 35.58]) and 'Complex' difficulty tasks (Estimate = 13.69, 95% CrI [4.90, 38.23]). This indicates a strong shift towards preferring the GUI relative to the Voice CMS as tasks become more complex. In contrast, the relative preference between 'Neutral' and 'Voice CMS' did not change significantly with increasing task difficulty (Medium vs. Easy: Log-Odds Change = -0.53, 95% CrI [-1.71, 0.60]; Complex vs. Easy: Log-Odds Change = -0.88, 95% CrI [-2.38, 0.49]). In summary, while the Voice CMS held a clear preference advantage for easy tasks, this advantage was overcome as task difficulty increased, with users becoming significantly more likely to prefer the GUI for medium and complex tasks relative to the Voice CMS. The significant positive coefficients for GUI preference at higher difficulty levels provide strong statistical support for the hypothesis that the GUI is increasingly preferred as tasks become more complex within this experimental context.

5.3 Influence of SEQ Difference on Preference

To investigate how the perceived difference in task difficulty between the interfaces influenced user preference, we analyzed the relationship between seq_diff (SEQ Score GUI - SEQ Score Voice CMS) and the chosen interface (preference). Initial exploration using visualizations (Figure 4) suggested that Voice CMS or Neutral preferences were more common when the subjective difficulty was perceived as similar for both interfaces (low absolute seq_diff), while GUI preference appeared to increase notably when the GUI was rated as substantially easier than the Voice CMS (positive seq_diff). To formally test this relationship while accounting for non-independent observations from the 7 participants (scenarioId), two separate generalized linear mixed-effects models (GLMMs) with a binomial error distribution and logit link function were fitted using the lme4 package in R. Both models included seq_diff as a fixed-effect predictor and a random intercept for scenarioId. The first model predicted the log-odds of preferring the GUI versus the other options (Neutral or Voice CMS), while the second predicted the log-odds of preferring the Voice CMS versus the other options (Neutral or GUI), based on N=63 observations where both scores were available.
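The two GLMMs could be fitted along the following lines; again, the data frame and column names are illustrative assumptions consistent with the description above:

```r
library(lme4)

# Binary indicators of each preference outcome versus the other two options.
m_gui  <- glmer(I(preference == "GUI")      ~ seq_diff + (1 | scenarioId),
                data = tasks, family = binomial(link = "logit"))
m_vcms <- glmer(I(preference == "VoiceCMS") ~ seq_diff + (1 | scenarioId),
                data = tasks, family = binomial(link = "logit"))

# Odds ratios with Wald 95% confidence intervals for the fixed effects.
exp(cbind(OR = fixef(m_gui),
          confint(m_gui, parm = "beta_", method = "Wald")))
```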
The model predicting GUI preference revealed a significant effect of the subjective difficulty difference. When the interfaces were rated similarly difficult (seq_diff = 0), users were significantly less likely to prefer the GUI (Intercept Estimate = -2.25, SE = 0.72, z = -3.13, p = 0.0018). However, there was a strong positive association between seq_diff and choosing the GUI (Estimate = 1.15, SE = 0.41, z = 2.78, p = 0.0054). This indicates that as the GUI was perceived as relatively easier (i.e., seq_diff increased), the odds of preferring the GUI increased; the corresponding odds ratio (OR) was 3.16 (95% CI [1.40, 7.09]). Complementary results were found for the model predicting Voice CMS preference. The baseline preference for the Voice CMS when perceived difficulty was equal (seq_diff = 0) was not significantly different from preferring the other options (Intercept Estimate = -0.28, SE = 0.34, z = -0.82, p = 0.411). There was a significant negative association between seq_diff and choosing the Voice CMS (Estimate = -0.61, SE = 0.23, z = -2.61, p = 0.0091). As the GUI was perceived as relatively easier (seq_diff increased), the odds of preferring the Voice CMS significantly decreased, with the corresponding odds ratio equal to 0.54 (95% CI [0.34, 0.86]). Both models indicated some variability between participants in their baseline preferences, as evidenced by the random intercept variances (Variance = 0.61 for the GUI model, 0.11 for the VCMS model).

Figure 5: Predicted Probability of Preference vs SEQ Difference

The predicted probability plot (Figure 5) derived from these models further illustrates the findings. It shows that for tasks where subjective difficulty ratings were similar (seq_diff near 0), the predicted probability of choosing the Voice CMS was higher than that of choosing the GUI. The crossover point, where the predicted probability of choosing the GUI surpasses that of the Voice CMS, occurred when the SEQ difference was just above 1. In conclusion, the subjective difficulty difference (seq_diff) was a significant predictor of preference, confirming that users generally tended to favour the interface they perceived as easier for a given task, even after accounting for user variability. This relationship, however, displayed an interesting asymmetry: the Voice CMS held an advantage when perceived difficulty was comparable (the GUI was less preferred at seq_diff ≈ 0). Consequently, the GUI required a clear perceived ease-of-use advantage, specifically being rated more than ≈1 point easier on the 7-point SEQ scale, before it consistently surpassed the Voice CMS in user preference according to model predictions.

5.4 Influence of Processing Time Difference on Interface Preference

To further explore factors influencing user choice, the relationship between the difference in task completion time (time_diff = GUI Time - Voice CMS Time) and interface preference (preference) was examined. Analysis of the distribution of completion time by preference (Figure 6) suggested differences in how time influenced the choice between interfaces. To test the hypothesis that faster task completion favours an interface's preference, while accounting for user variability (scenarioId), two binomial generalized linear mixed-effects models (GLMMs) were fitted (N=63 observations, 7 users), predicting the preference for the GUI and the Voice CMS respectively, using time_diff as the predictor. Analysis of the random effects indicated moderate variability
between participants in their baseline tendency to prefer the GUI (Intercept SD = 0.78), but considerably less variability in their baseline tendency to prefer the Voice CMS (Intercept SD = 0.24) after accounting for the time difference. Examining the fixed effects, the model for GUI preference showed that when processing times were equal (time_diff = 0), users were significantly less likely to prefer the GUI (Intercept Estimate = -1.23, SE = 0.47, z = -2.62, p = 0.009). A negative trend was observed for the time_diff coefficient (Estimate = -0.0060, SE = 0.0033, z = -1.81, p = 0.071, OR = 0.994), indicating that the odds of preferring the GUI decreased as its relative completion time increased; this trend approached statistical significance. Conversely, the model for Voice CMS preference revealed a statistically significant positive effect of time_diff (Estimate = 0.0059, SE = 0.0029, z = 2.08, p = 0.038). As the GUI took relatively longer (positive time_diff), the odds of preferring the Voice CMS significantly increased (OR = 1.006). The baseline preference for the Voice CMS when times were equal showed only a non-significant tendency to be disfavoured (Intercept Estimate = -0.55, SE = 0.30, z = -1.85, p = 0.064).

Figure 6: Processing Time Difference (GUI - Voice CMS) by Preference

Figure 7: Predicted Probability of Preference vs Processing Time Difference

Visualization of the predicted probabilities from these models in Figure 7 revealed an interesting crossover point. The predicted probability of choosing the GUI only surpassed that of choosing the Voice CMS when completion in the GUI was noticeably faster. This intersection occurred at a time difference of approximately -57 seconds.
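That crossover can be recovered directly from the reported fixed effects: since the logit is monotone in probability, the two models' predicted probabilities are equal where their linear predictors coincide. A quick check in R, using the estimates quoted above:

```r
# GUI-preference model:       logit = -1.23 - 0.0060 * time_diff
# Voice-CMS-preference model: logit = -0.55 + 0.0059 * time_diff
b0_gui <- -1.23; b1_gui <- -0.0060
b0_vc  <- -0.55; b1_vc  <-  0.0059

# Solve b0_gui + b1_gui * t == b0_vc + b1_vc * t for t:
t_cross <- (b0_vc - b0_gui) / (b1_gui - b1_vc)
t_cross  # about -57: the GUI must be roughly 57 s faster to become more probable
```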
In conclusion, the analysis indicates that while preference generally shifted towards the faster interface, the effect was statistically robust for favouring the Voice CMS when the GUI was slower (p = 0.038) and showed a strong trend towards favouring the GUI when it was faster (p = 0.071). This dynamic, supporting the "faster is better" hypothesis, must be viewed alongside the baseline finding: the GUI was significantly less preferred at equal speeds and had to complete tasks approximately 57 seconds faster to become the more probable choice.

Figure 8: Trend of Mean Summary Count over Tasks

5.5 Voice CMS Interaction Metrics

This subsection details the analysis of user interaction patterns specifically with the Voice CMS interface, focusing on the number of messages exchanged between experiment participants and the digital assistant, the number of summaries required before the user accepted the correctness of the gathered information, and the influence of task characteristics such as difficulty and length. The analysis is based on the 63 observations collected during Voice CMS interactions. Statistics from the experiment data revealed the nature of user interactions with the Voice CMS. The number of summaries per task (at least one for each task) was relatively low, averaging 1.43 (SD = 1.00, Median = 1, Range [1, 7]). Notably, the analysis showed that partial summaries, part of a user feedback mechanism triggered after each new piece of information received by the assistant, were consistently enabled during message exchanges in this dataset. The average total number of messages exchanged per task was 4.3 (SD = 2.76, Median = 4, Range [2, 16]), identical to the number of messages sent with summaries on. Predefined task difficulty and objective task length (measured in characters) were strongly related, with task length increasing significantly across the 'Easy' (M = 94.7), 'Medium' (M = 284.0), and 'Complex' (M = 425.0) categories (LMM fixed effects for difficulty: p < 0.01). Task difficulty significantly influenced interaction volume. The mean number of total messages increased progressively from 'Easy' (M = 2.76) through 'Medium' (M = 4.19) to 'Complex' (M = 5.95) tasks. A Generalized Linear Mixed Model (GLMM) confirmed that difficulty level was a significant positive predictor of the number of messages exchanged (Poisson GLMM, linear effect of difficulty: Estimate = 0.54, p < 0.001), after accounting for random intercepts per user. A similar, though less pronounced, increasing trend was observed for the mean number of summaries generated across difficulty levels (Easy = 1.05, Medium = 1.48, Complex = 1.76). Furthermore, task length independently predicted interaction metrics, even after controlling for the categorical difficulty level and user variability. Longer tasks were significantly associated with longer processing times (LMM, Estimate = 0.87 s/char, p < 0.001) and a higher number of total messages exchanged (Poisson GLMM, Estimate = 0.0027 log-count/char, p = 0.033). The tasks were executed in a randomized but fixed order; the plot in Figure 8 displays them grouped by difficulty but without changing their relative order. Visual inspection of interaction metrics did not reveal consistent learning effects or clear trends indicating systematic changes in message counts (plot not provided) or summary counts as users progressed through the experiment; the study was not designed to measure the impact of learning the interface or gaining experience. Regarding performance outcomes, the number of summaries generated was not a significant predictor of the objective task score when controlling for task difficulty and user effects in a Linear Mixed Model (LMM, p = 0.97). Finally, confirming the intuitive relationship between interaction volume and duration, the total number of messages exchanged was a strong, significant positive predictor of task completion time (LMM, Estimate = 40.6 s/message, p < 0.001), even when accounting for task difficulty and user variability. In summary, interactions with the Voice CMS were strongly influenced by task characteristics. Both increasing task difficulty and objective task length led to significantly longer interactions (more messages) and increased processing times. Task difficulty was linked to the number of required summaries, but this number did not significantly impact task score in this analysis.
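The count models reported in this subsection could be specified as follows; the data frame vcms and its column names are illustrative assumptions mirroring the variables described above, and the split into two models is our reading of the text rather than the authors' published code:

```r
library(lme4)

# Assumed columns: n_messages (total messages per task), difficulty
# ("Easy" < "Medium" < "Complex"), task_length_chars, scenarioId.
vcms$difficulty <- factor(vcms$difficulty,
                          levels = c("Easy", "Medium", "Complex"))

# Linear effect of difficulty on message counts (Poisson GLMM, log link).
m_diff <- glmer(n_messages ~ as.numeric(difficulty) + (1 | scenarioId),
                data = vcms, family = poisson(link = "log"))

# Task length effect, controlling for categorical difficulty.
m_len <- glmer(n_messages ~ difficulty + task_length_chars + (1 | scenarioId),
               data = vcms, family = poisson(link = "log"))

exp(fixef(m_diff))  # e.g., exp(0.54) ~ 1.7x more messages per difficulty step
```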
6 Discussion

The comparative analysis revealed distinct performance and usability profiles for the GUI and the Voice CMS in the context of knowledge management tasks. Although objective accuracy was similar, some differences emerged in perceived usability, task completion speed, and, notably, in user preference patterns. These differences were not uniform but were influenced by the complexity of the tasks performed, prompting a closer look at how task demands interacted with interface choice. Our central hypothesis posited that users would prefer the Voice CMS over the GUI for more complex tasks, assuming that natural language interaction would alleviate the cognitive load associated with structuring complex information, as is typical in GUI forms. However, our statistical analysis contradicted this initial assumption. The multilevel categorical logistic regression model showed that while the Voice CMS was indeed preferred over the GUI for 'Easy' tasks (where the GUI was never chosen as the preferred option), preference shifted significantly towards the GUI as task difficulty increased from 'Easy' to 'Medium' and 'Complex'. This suggests that, within the context of this study, factors potentially favouring the GUI — such as visual oversight, direct manipulation capabilities for precise data entry (dates, specific entities mentioned by users), ease of error correction, or potentially greater user familiarity with graphical paradigms — outweighed the hypothesised benefits of conversational input for managing increased task complexity. User comments align with this, noting difficulties in conveying complex, nuanced information via voice and challenges in correcting ASR/NLU misinterpretations ("conveying intricacies... can be difficult and time-consuming", "hard to correct afterwards"). Despite the overall trend favouring the GUI for harder tasks, the analyses of the subjective ease-of-use difference and the processing time difference revealed a more nuanced picture of user preference. While users generally preferred the interface they perceived as easier or faster, the Voice CMS demonstrated a degree of resilience. The analysis showed that the GUI needed to be perceived as more than one point easier on the 7-point SEQ scale before it became the statistically more probable choice. Similarly, the GUI had to be substantially faster, completing tasks approximately 57 seconds quicker than the Voice CMS, to overcome a baseline disadvantage and surpass the voice interface in predicted preference probability. This suggests that users might tolerate a degree of perceived inefficiency or difficulty with the Voice CMS. This tolerance could stem from the perceived benefit of reduced physical typing effort, from novelty, or from the more natural interaction, as hinted by comments describing the voice interface as an "attractive alternative" that "reduces the effort associated with typing". A deeper investigation into the factors driving this tolerance would be a valuable direction for future work to fully understand the adoption potential of the Voice CMS. Analysis specific to the Voice CMS interactions highlighted the importance of feedback and the impact of task characteristics on dialogue patterns. The number of messages exchanged and overall processing time increased with both predefined task difficulty and objective task length, confirming that more complex or longer tasks required more extensive interaction. Notably, users consistently utilised the partial summary feature provided after each piece of information was processed by the assistant. This universal usage underscores the perceived necessity of feedback for verification and control in
a voice-only interaction, a point strongly echoed in user comments ("wanted to be sure I entered it correctly", "wouldn't turn off summary, not because of AI itself"). However, while essential for user confidence, the number of summaries generated was not significantly associated with objective task success, suggesting that feedback alone did not guarantee perfect outcomes, perhaps due to difficulties in catching subtle errors ("harder to catch errors like 8.20 instead of 8-20") or correcting them effectively via voice. Several limitations should be considered when interpreting these findings. The study involved a small sample size (N=7), limiting the generalizability of the results. The interaction was short-term, preventing conclusions about long-term usability, learning effects, or adaptation to the Voice CMS, which users commented might improve their experience ("would require 'training'", "if I knew... my rating would be higher"). Although tasks were presented textually, their inherent structure might have unintentionally favoured GUI interaction patterns if users mentally mapped them to form-filling, potentially underutilizing the conversational capabilities of the Voice CMS, as one user speculated ("impression might change if information wasn't provided so concisely"). Furthermore, the Voice CMS prototype exhibited technical limitations noted by users (ASR/NLU errors, interruptions), which likely affected usability scores and preferences. Finally, while not measured, participants likely had significantly more experience with GUIs, potentially influencing their baseline preferences and performance. Future research should involve a larger participant pool, incorporate improvements addressing the reported limitations, and examine the long-term adoption dynamics and learning curves associated with Voice CMS use. Despite these limitations, the study highlights the potential of voice interfaces as an "attractive alternative" for data entry tasks, particularly simpler ones — this attractiveness becomes even more relevant in a hotel setting, where GUI deployment may be less feasible. The strong user reliance on summaries, coupled with comments about the burden of auditory verification and the desire for visual confirmation, points towards a promising avenue for future research and development: hybrid interfaces. As suggested by multiple participants, combining voice input, for its naturalness and efficiency in capturing information, with a synchronized visual display for real-time verification and summaries could mitigate the core weaknesses observed in the voice-only modality. Such a hybrid system might significantly reduce interaction time (by lessening the need for lengthy spoken summaries) and enhance user confidence, potentially shifting preference towards voice-centric workflows even for more complex tasks. Future research should also explore these hybrid designs.

7 Conclusions

This paper presents a nuanced view of user preference between GUI and Voice CMS interfaces. While the GUI held advantages in overall usability ratings, subjective ease-of-use, and efficiency, particularly as task difficulty increased, the Voice CMS enjoyed a strong user preference for simpler tasks and a degree of user tolerance for moderate inefficiencies. The need for robust feedback in voice interactions was evident.
The results suggest that while voice interfaces hold significant potential as an alternative for data entry, overcoming challenges related to handling complexity and ensuring user
confidence through effective, potentially visual, feedback mechanisms is key to broader adoption. Hybrid voice-visual interfaces represent a compelling direction for harnessing the strengths of conversational input while providing the visual assurance that users desire, and appear to be a particularly promising avenue for future development and research.

References

Anis Koubaa, Wadii Boulila, Lahouari Ghouti, Ayyub Alzahem, and Shahid Latif. Exploring ChatGPT capabilities and limitations: A survey. IEEE Access, 11:118698–118721, 2023.
Yoonsu Kim, Jueon Lee, Seoyoung Kim, Jaehyuk Park, and Yuho Kim. Understanding users' dissatisfaction with ChatGPT responses: Types, resolving tactics, and the effect of knowledge level. In Proceedings of the 29th International Conference on Intelligent User Interfaces, pages 385–404, 2024. doi:10.1145/3640543.3645148. URL https://dl.acm.org/doi/pdf/10.1145/3640543.3645148.
Richard Oelschlager. Evaluating the impact of hallucinations on user trust and satisfaction in LLM-based systems. Master's thesis, Linnaeus University, Sweden, 2024.
S. M. Towhidul Islam Tonmoy, S. M. Mehedi Zaman, Vinija Jain, Anku Rani, Vipula Rawte, Aman Chadha, and Amitava Das. A comprehensive survey of hallucination mitigation techniques in large language models, 2024.
Emilia Lesiak, Grzegorz Wolny, Bartosz Przybył, and Michał K. Szczerbak. Digital assistant in a point of sales. In Proceedings of the 20th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 1: GRAPP, HUCAPP and IVAPP, HUCAPP '25, pages 439–451, 2025. doi:10.5220/0013042500003912. URL https://www.scitepress.org/Papers/2025/130425/130425.pdf.
Lucrezia Grassi, Carmine Tommaso Recchiuto, and Antonio Sgorbissa. Knowledge triggering, extraction and storage via human–robot verbal interaction, 2021.
Juan C. Olivares-Rojas, J. Gabriel González-Serna, J. Guadalupe Ramos-Díaz, Noe A. Castro-Sánchez, and Johan W. González-Murueta. From GUI to VUI: A natural language approach to multimodal medical system. Computación y Sistemas, 29(1):295–309, 2025.
Leon Reicherts, Yvonne Rogers, Licia Capra, Ethan Wood, Tu Dinh Duang, and Neil Sebire. It's good to talk: A comparison of using voice versus screen-based interactions for agent-assisted tasks. ACM Transactions on Computer-Human Interaction, 29(3):1–41, 2022.
Fermin Chavez-Sanchez and Lucila Mercado Colin. Apples and oranges: A framework for the usability evaluation of voice vs graphical user interfaces: A command-event based method proposal. In Proceedings of the 2nd Conference on Conversational User Interfaces, pages 1–3, 2020. doi:10.1145/3405755.3406169. URL https://dl.acm.org/doi/pdf/10.1145/3405755.3406169.
Rui Zhang, Stephen North, and Eleftherios Koutsofios. A comparison of speech and GUI input for navigation in complex visualizations on mobile devices. In Proceedings of the 12th International Conference on Human Computer Interaction with Mobile Devices and Services, pages 357–360, 2010. doi:10.1145/1851600.1851665. URL https://dl.acm.org/doi/pdf/10.1145/1851600.1851665.
Priyanka Chandel, Devanuj Kanta Balkrishan, and Pankaj Doke. A comparative study of voice and graphical user interfaces with respect to literacy levels. In Proceedings of the 3rd ACM Symposium on Computing for Development, pages 1–2, 2013. doi:10.1145/2442882.2442921. URL https://dl.acm.org/doi/pdf/10.1145/2442882.2442921.
Laurie Damianos, Dan Loehr, Carl Burke, Steve Hansen, and Michael Viszmeg. The MSIIA experiment: Using speech to enhance human performance on a cognitive task. International Journal of Speech Technology, 6(2):133–144, 2003.
Min Chul Cha and Yong Gu Ji. Context matters: Understanding the effect of usage contexts on users' modality selection
in multimodal systems. International Journal of Human–Computer Interaction, 40(20):6287–6302, 2024.
Hannah Limerick, James W. Moore, and David Coyle. Empirical evidence for a diminished sense of agency in speech interfaces. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pages 3967–3970, 2015. doi:10.1145/2702123.2702379. URL https://dl.acm.org/doi/pdf/10.1145/2702123.2702379.
Ludovic Le Bigot, Patrice Terrier, Virginie Amiel, Gérard Poulain, Eric Jamet, and Jean-François Rouet. Effect of modality on collaboration with a dialogue system. International Journal of Human-Computer Studies, 65(12):983–991, 2007.
Pernilla Qvarfordt, Arne Jönsson, and Nils Dahlbäck. The role of spoken feedback in experiencing multimodal interfaces as human-like. In Proceedings of the 5th International Conference on Multimodal Interfaces, pages 250–257, 2003. doi:10.1145/958432.958478. URL https://dl.acm.org/doi/pdf/10.1145/958432.958478.
Akshay Madhav Deshmukh and Ricardo Chalmeta. User experience and usability of voice user interfaces: A systematic literature review. Information, 15(9):579, 2024.
Faruk Lawal Ibrahim Dutsinma, Debajyoti Pal, Suree Funilkul, and Jonathan H. Chan. A systematic review of voice assistant usability: An ISO 9241-11 approach. Computer Science, 3(4):267, 2022.
Qian Chen and Yeming Gong. User experience of digital voice assistant: Conceptualization and measurement. ACM Transactions on Computer-Human Interaction, 31(1):1–35, 2024.
Caroline Nowacki, Anna Gordeeva, and Anne-Hélène Lizé. Improving the usability of voice user interfaces: A new set of ergonomic criteria. In Proceedings of the 9th International Conference on Design, User Experience, and Usability: Design for Contemporary Interactive Environments, 2020.
Christine Murad, Heloisa Candello, and Cosmin Munteanu. What's the talk on VUI guidelines? A meta-analysis of guidelines for voice user interface design. In Proceedings of the 5th International Conference on Conversational User Interfaces, pages 1–16, 2023. doi:10.1145/3571884.3597129. URL https://dl.acm.org/doi/pdf/10.1145/3571884.3597129.
Johanna Schmidhuber, Stephan Schlögl, and Christian Ploder. Cognitive load and productivity implications in human-chatbot interaction. In 2021 IEEE 2nd International Conference on Human-Machine Systems (ICHMS), pages 1–6, 2021. doi:10.1109/ICHMS53169.2021.9582445.
Jeff Sauro and Joseph S. Dumas. Comparison of three one-question, post-task usability questionnaires. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '09, pages 1599–1608. ACM, 2009. doi:10.1145/1518701.1518946. URL https://dl.acm.org/doi/10.1145/1518701.1518946.
John Brooke. SUS: A quick and dirty usability scale. Usability Evaluation in Industry, 189(194):4–7, 1996.
Aaron Bangor, Philip Kortum, and James Miller. Determining what individual SUS scores mean: Adding an adjective rating scale. Journal of Usability Studies, 4(3):114–123, May 2009.
Jeff Sauro and James R. Lewis. Quantifying the User Experience: Practical Statistics for User Research. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1st edition, 2012. ISBN 9780123849687.
arXiv:2505.22306v1 [cs.LG] 28 May 2025

Versatile Cardiovascular Signal Generation with a Unified Diffusion Transformer

Zehua Chen1†, Yuyang Miao1,2†, Liyuan Wang1*†, Luyun Fan3, Danilo P. Mandic2 and Jun Zhu1*
1 Department of Computer Science & Technology, Institute for AI, BNRist Center, THBI Lab, Tsinghua-Bosch Joint Center for ML, Tsinghua University, Beijing, China.
2 Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom.
3 Beijing Anzhen Hospital of Capital Medical University, Beijing Institute of Heart Lung and Blood Vessel Diseases, Chinese Institutes for Medical Research, Beijing, China.
* Corresponding author(s). E-mail(s): wly19@tsinghua.org.cn; dcszj@tsinghua.edu.cn; Contributing authors: zhc23thuml@tsinghua.edu.cn; ym520@ic.ac.uk; katevan@163.com; d.mandic@imperial.ac.uk;
† These authors contributed equally to this work.

Abstract

Cardiovascular signals such as photoplethysmography (PPG), electrocardiography (ECG), and blood pressure (BP) are inherently correlated and complementary, together reflecting the health of the cardiovascular system. However, their joint utilization in real-time monitoring is severely limited by diverse acquisition challenges, from noisy wearable recordings to burdensome invasive procedures. Here we propose UniCardio, a multi-modal diffusion transformer that reconstructs low-quality signals and synthesizes unrecorded signals in a unified generative framework. Its key innovations include a specialized model architecture to manage the signal modalities involved in generation tasks and a continual learning paradigm to incorporate varying modality combinations. By exploiting the complementary nature of cardiovascular signals, UniCardio clearly outperforms recent task-specific baselines in signal denoising, imputation, and translation. The generated signals match the performance of ground-truth signals in detecting abnormal health conditions and estimating vital signs, even in unseen domains, while ensuring interpretability for human experts. These advantages position UniCardio as a promising avenue for advancing AI-assisted healthcare.

Keywords: diffusion model, transformer, cardiovascular signal, continual learning, multi-modal generative modeling

1 Introduction

Cardiovascular diseases account for nearly 18 million deaths annually, representing 32% of global mortality [1]. This immense burden underscores the urgent need for effective real-time monitoring to reduce mortality and healthcare costs. Various cardiovascular signals, such as photoplethysmography (PPG) [2], electrocardiography (ECG) [3], and arterial blood pressure (ABP) [4], are commonly used to assess health conditions and detect abnormalities (Fig. 1a). PPG signals track blood volume changes within the skin's microvascular tissue, typically recorded at the wrist or fingertips using light-based sensors in wearable devices [5, 6]. ECG signals monitor the heart's electrical activity by detecting voltage changes during cardiac muscle depolarization and repolarization [7, 8], often requiring precise electrode placement and expert calibration at a sacrifice in wearability. ABP signals are considered the gold standard for blood pressure (BP) assessment, often measured via invasive transducers inserted into arteries [9, 10] with risks of bleeding, infection, and complications. The diverse challenges of acquiring these signals result in compromised data quality and availability (Fig.
1b), which severely limit their joint utilization in healthcare applications. For individuals with relatively normal health conditions, monitoring mainly involves signals obtained from wearable devices, as non-wearable or invasive methods are impractical for routine use. Even in severe health conditions where non-wearable and invasive methods are necessary, prolonged monitoring imposes significant discomfort and patient burden. Furthermore, the individually recorded signals, particularly those from
wearable devices, are susceptible to noise and interruptions, complicating their interpretation by human experts and automated algorithms. Recent efforts have sought to address these challenges by generating cardiovascular signals from recorded ones (Sec. 4.1), focusing on individual tasks such as denoising raw recordings [11, 12], reconstructing intermittent signals [13, 14], or translating specific signal pairs (e.g., PPG to ECG) [15, 16]. While promising, such task-specific designs fail to exploit the complementary information inherent across distinct signals, resulting in limited efficacy and applicability.

Fig. 1: Real-time monitoring and diagnosis of cardiovascular signals. a, Cardiovascular signals, such as PPG, ECG, and BP, are widely used to monitor cardiovascular health and detect abnormalities. b, These signals face compromised data quality and availability, which can be addressed by signal restoration and modality translation. Here we take ECG signals as an example; the same applies to the other modalities. c, We propose a multi-modal diffusion transformer for versatile cardiovascular signal generation. Our model maps these signals into a unified latent space for flexible use, leveraging task-specific attention masks to regulate modality interactions and a continual learning paradigm to incorporate generation tasks with an increasing number of condition modalities.

Given the highly correlated physiological activities of the cardiovascular system, cardiovascular signals correspond to different modalities of a shared underlying process. Modeling their multi-modal conditional distributions can capture the relationships between available recorded signals (condition modalities) and desired generated signals (target modalities), thereby unifying potential generation tasks involving arbitrary signal modalities. In this work, we propose UniCardio, a specialized diffusion transformer designed to leverage multi-modal relationships among cardiovascular signals, ensuring effective and versatile signal generation (Fig. 1c). UniCardio adopts a transformer-based architecture with modality-specific encoders and decoders, alongside task-specific attention masks to regulate modality interactions. Different from other deep generative models [17, 18], UniCardio captures multi-modal conditional distributions with a unified noise-to-data generative framework, mapping intra- and inter-modal relationships into a unified latent space for flexible use. To cope with varying combinations of condition and target modalities, UniCardio introduces a continual learning paradigm that incorporates generation tasks involving an increasing number of condition modalities, allowing the model to accommodate progressively complex relationships while explicitly balancing their contributions.
We pre-train UniCardio on the Cuffless BP dataset [19], which comprises 339 hours of trimodal recordings of PPG, ECG, and BP signals collected from ICU patients with diverse abnormal health conditions. Through the joint utilization of multiple cardiovascular signals, UniCardio demonstrates state-of-the-art performance across a range of generation tasks, including denoising, imputation, and translation, outperforming recent task-specific baselines. The generated signals are used in a tuning-free manner to support cardiovascular applications across multiple unseen domains, including detection of abnormal health conditions and estimation of vital signs, achieving comparable or
even better performance than the ground-truth signals. The generated signals further ensure interpretability by displaying diagnostic characteristics of typical abnormalities, validated by clinician assessments. To our knowledge, UniCardio represents the first unified framework for cardiovascular signal generation, offering both technical innovations and practical insights to advance AI-assisted healthcare.

2 Results

2.1 Unified multi-modal generative modeling for cardiovascular signals

To capitalize on the interrelated nature of cardiovascular signals for versatile generation, UniCardio fits their multi-modal conditional distributions to model the many-to-any relationships between the condition modalities and the target modality, thereby unifying signal restoration and modality translation within a shared framework (Fig. 1b, Sec. 4.1). We achieve this objective with an advanced design of conditional diffusion models [20, 21], which offer key advantages for generative modeling. These models operate through a forward process that gradually adds noise to the data, transforming it into a simple prior distribution (typically Gaussian), and a reverse process that learns to reconstruct the data by iteratively removing the noise. UniCardio develops an unconditional forward process that transforms different signal modalities into a unified prior, and a conditional reverse process that guides the iterative reconstruction of desired signals using conditional information alongside a unified diffusion step, allowing for flexible handling of diverse condition modalities
in generation tasks (Fig. 1c, Sec. 4.2).
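For concreteness, the following is the generic denoising-diffusion formulation this family of models builds on, written in standard DDPM notation rather than UniCardio's exact parameterization: the forward process noises a signal x_0 over K steps into a shared Gaussian prior, and the learned reverse process denoises step by step, here conditioned on the recorded modalities c:

```latex
% Forward (unconditional) noising process over diffusion steps k = 1..K:
q(x_k \mid x_{k-1}) = \mathcal{N}\!\left(x_k;\ \sqrt{1-\beta_k}\,x_{k-1},\ \beta_k \mathbf{I}\right),
\qquad q(x_K) \approx \mathcal{N}(\mathbf{0}, \mathbf{I}).

% Reverse (conditional) denoising process, guided by condition modalities c:
p_\theta(x_{k-1} \mid x_k, c) = \mathcal{N}\!\left(x_{k-1};\ \mu_\theta(x_k, k, c),\ \sigma_k^2 \mathbf{I}\right).
```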
The multi-modal conditional distributions are learned with transformer-based architectures [22–24] to accommodate various input-output configurations (Sec. 4.3, Supplementary Fig. F1), with key innovations including modality-specific encoders with multi-scale convolution, customized transformer modules with task-specific attention masks, and modality-specific decoders.

Fig. 2: Model architecture and training paradigm. a, UniCardio comprises modality-specific encoders, customized transformer modules with task-specific attention masks, and modality-specific decoders. hs, concatenated signal representations. h′s, generated signal representations. h, module-wise inputs. h′, module-wise outputs. hd, diffusion embeddings. ht, time embeddings. The key hK, query hQ, and value hV are obtained from h with the corresponding learnable matrices WK, WQ, and WV. b, The continual learning paradigm of generation tasks with an increasing number of condition modalities (1-Con, 2-Con, 3-Con, etc.), achieved by learning rate scheduling (LRS), training batch composition (TBC), and task-specific attention masks (TAM). c-e, Ablation studies of LRS, TBC, and TAM, respectively. The evaluation metric is the RMSE ratio after and before ablation, where values above 1 indicate worse performance. The quantification results are averaged over 256 independent trials. The error bars represent the standard error of the mean. f, Phase-wise visualization of catastrophic forgetting, using PPG-to-ECG translation under TBC ablation as an example.

As shown in Fig. 2a, the input signals for each modality are processed by the modality-specific encoders, which consist of multiple one-dimensional (1D) convolutional neural networks (CNNs) with various kernel sizes to extract representations across different time scales. The extracted representations of all modalities are concatenated and fed into the customized transformer modules, which further receive learnable diffusion embeddings to encode the current diffusion step and time embeddings to encode the timestamp of each signal point. Through self-attention mechanisms, the transformer modules enable signal points from different timestamps and modalities to share relevant information. To control the modalities involved in specific generation tasks, we implement task-specific attention masks that block the information flow of irrelevant tokens other than the condition-to-target ones, as sketched below. The output representations of the multiple customized transformer modules are combined in a residual manner, split by modality, and then projected into 1D generated signals via the modality-specific decoders, which are implemented as standard multi-layer perceptrons (MLPs).
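The task-specific masking can be folded into the standard scaled dot-product attention; the sketch below uses the notation of Fig. 2a, while the per-task mask construction is our reading of the description rather than the paper's exact formula:

```latex
% Masked self-attention over the concatenated token representations h:
h' = \mathrm{softmax}\!\left(\frac{h_Q h_K^{\top}}{\sqrt{d_K}} + M\right) h_V,
\qquad h_Q = h W_Q,\quad h_K = h W_K,\quad h_V = h W_V,

% with the task-specific mask admitting only the relevant modality interactions:
M_{ij} =
\begin{cases}
0 & \text{if attention from token } i \text{ to token } j \text{ is relevant to the task,} \\
-\infty & \text{otherwise (irrelevant tokens are blocked).}
\end{cases}
```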
Given multiple cardiovascular signals as the modalities of interest, arbitrary combinations of condition and target modalities can create massive numbers of generation tasks with complex relationships, making it highly nontrivial to assign sufficient training samples to each task and properly balance the task-specific losses (Sec. 4.1). To address this, we propose a continual learning paradigm that incorporates generation tasks with an increasing number of condition modalities in multiple phases, ensuring sufficient training samples and balanced loss weights in each phase. We employ a combination of simple yet effective strategies (Fig. 2b, Sec. 4.4) to overcome catastrophic forgetting, a well-known issue in continual learning [25, 26]. These strategies include (1) learning rate scheduling, where the learning rate starts high for effective initialization and decreases gradually in later phases to stabilize knowledge; (2) training batch composition, where the training batches of the current phase are supplemented with a portion of tasks from previous phases to reinforce earlier learning; and (3) task-specific attention masks, which naturally support continual learning by preventing inter-task interference.

Ablation studies validate the efficacy of these strategies (Fig. 2c-f, Supplementary Table F1). Removing either the learning rate scheduling (Fig. 2c), the training batch composition (Fig. 2d), or the task-specific attention masks (Fig. 2e) results in a significant increase in the root mean squared error (RMSE) across a variety of generation tasks. In particular, removing the task-specific attention masks destroys almost the entire model performance, highlighting their critical role in capturing conditional distributions. Detailed visualization examples further illustrate the effect of alleviating catastrophic forgetting (Fig. 2f). The generated signals with the full strategies become progressively aligned with the ground-truth signals as the model learns from additional modalities during continual learning. In contrast, the generated signals under an ablation become progressively less informative as more phases are introduced. Interestingly, these strategies correspond to common continual learning methods in terms of regularization [27, 28], replay [29, 30], and architecture [31, 32], suggesting a natural fit between UniCardio's model design and training paradigm.
Fig. 3: Overall performance of versatile generation tasks. a-c, Denoising tasks, restoring clean signals from noisy raw recordings in each modality. d-f, Imputation tasks, reconstructing missing segments (i.e., the masked regions) from intermittent signals. g-i, Translation tasks, synthesizing signals of a target modality from one or more condition modalities. For visualization, we present the original values for BP and the normalized values for PPG and ECG. The quantification results are averaged over 256 independent trials. The error bars represent the standard error of the mean.

2.2 Versatile high-quality cardiovascular signal generation

To demonstrate the advantages of unifying multiple signal modalities within a shared framework, we first evaluate UniCardio on three representative generation tasks of practical significance: denoising, imputation, and translation. These tasks involve diverse input-output configurations and simulate varying degrees of signal degradation, reflecting common challenges in healthcare applications. Denoising tasks restore clean signals from noisy raw recordings, which may be affected by powerline interference and muscle contractions [35]. Imputation tasks reconstruct missing segments from intermittent signals, such as filling gaps caused by temporary sensor disconnections or interruptions during long-term monitoring [13]. Translation tasks synthesize signals of a target modality from one or more condition modalities, enabling non-invasive or wearable alternatives to traditionally invasive or non-wearable measurements [15, 16]. From denoising and imputation to translation, the generation tasks become progressively more challenging as the degree of signal degradation increases, requiring greater reliance on complementary information from other modalities. In Fig. 3, we employ PPG, ECG, and BP as the target modality during the testing stage, respectively. We present visualization results of the generated signals alongside the ground-truth signals, as well as quantitative results evaluating the average difference between them.

Table 1: Comparison of UniCardio against task-specific baselines. We benchmark UniCardio on four well-studied yet challenging tasks: PPG imputation, ECG imputation, PPG-to-ECG translation, and PPG-to-BP translation. We report the performance of UniCardio after the multi-modal pre-training, alongside two enhanced variants: UniCardio-F, further fine-tuned on the task of interest, and UniCardio-M, which incorporates more condition modalities during the testing stage. Performance metrics include RMSE, MAE, and KS-Test (lower values indicate better performance), averaged over 256 independent trials. The error bars represent the standard error of the mean. "-I" denotes intermittent signals.
Table 1: Comparison of UniCardio against task-specific baselines. We benchmark UniCardio on four well-studied yet challenging tasks: PPG imputation, ECG imputation, PPG-to-ECG translation, and PPG-to-BP translation. We report the performance of UniCardio after the multi-modal pre-training, alongside two enhanced variants: UniCardio-F, which is further fine-tuned on the task of interest, and UniCardio-M, which incorporates more condition modalities during the testing stage. Performance metrics include RMSE, MAE, and KS-Test (lower values indicate better performance), averaged over 256 independent trials. The error bars represent the standard error of the mean. "-I" denotes the intermittent signals.

Method | Input Signal | RMSE (↓) | MAE (↓) | KS-Test (↓) | Model Size (↓)

PPG Imputation:
PulseImpute [13] | PPG-I | 0.0763 ± 0.0036 | 0.0571 ± 0.0023 | 0.1085 ± 0.0034 | 3.06M
DeepMVI [14] | PPG-I | 0.1745 ± 0.0063 | 0.1268 ± 0.0044 | 0.1395 ± 0.0042 | 2.78M
UniCardio | PPG-I | 0.1146 ± 0.0063 | 0.0797 ± 0.0043 | 0.1084 ± 0.0039 | 4.94M
UniCardio-F | PPG-I | 0.0710 ± 0.0043 | 0.0515 ± 0.0028 | 0.0918 ± 0.0030 | 4.94M
UniCardio-M | PPG-I, ECG, BP | 0.0443 ± 0.0016 | 0.0347 ± 0.0012 | 0.0907 ± 0.0031 | 4.95M

ECG Imputation:
PulseImpute [13] | ECG-I | 0.1391 ± 0.0033 | 0.1224 ± 0.0027 | 0.4679 ± 0.0089 | 3.06M
DeepMVI [14] | ECG-I | 0.2420 ± 0.0031 | 0.1395 ± 0.0022 | 0.2400 ± 0.0061 | 2.78M
UniCardio | ECG-I | 0.1756 ± 0.0042 | 0.0750 ± 0.0026 | 0.1486 ± 0.0055 | 4.94M
UniCardio-F | ECG-I | 0.0938 ± 0.0045 | 0.0448 ± 0.0024 | 0.1199 ± 0.0037 | 4.94M
UniCardio-M | PPG, ECG-I, BP | 0.0385 ± 0.0027 | 0.0241 ± 0.0017 | 0.1024 ± 0.0032 | 4.95M
PPG-to-ECG Translation:
RDDM [15] | PPG | 0.5710 ± 0.0140 | 0.5155 ± 0.0153 | 0.7706 ± 0.0137 | 138.77M
CardioGAN [16] | PPG | 0.4313 ± 0.0100 | 0.3226 ± 0.0104 | 0.5208 ± 0.0146 | 5.97M
UniCardio | PPG | 0.2747 ± 0.0067 | 0.1937 ± 0.0070 | 0.4407 ± 0.0154 | 4.94M
UniCardio-F | PPG | 0.1960 ± 0.0062 | 0.1165 ± 0.0059 | 0.2698 ± 0.0131 | 4.94M
UniCardio-M | PPG, BP | 0.1663 ± 0.0076 | 0.1199 ± 0.0070 | 0.3302 ± 0.0147 | 4.95M

PPG-to-BP Translation:
ABPNet [33] | PPG | 7.2994 ± 0.2725 | 5.6647 ± 0.2434 | 0.1745 ± 0.0071 | 1.59M
PPG2ABP [34] | PPG | 6.1882 ± 0.2584 | 4.7835 ± 0.2076 | 0.1621 ± 0.0060 | 19.44M
UniCardio | PPG | 10.1538 ± 0.4213 | 8.3721 ± 0.3889 | 0.2668 ± 0.0097 | 4.94M
UniCardio-F | PPG | 5.7924 ± 0.3250 | 4.5893 ± 0.2897 | 0.1567 ± 0.0079 | 4.94M
UniCardio-M | PPG, ECG | 6.5066 ± 0.3443 | 5.4908 ± 0.3077 | 0.2004 ± 0.0086 | 4.95M

Denoising and imputation tasks (Fig. 3a-c and d-f) involve partial signal degradation, with the target modality also included as a condition modality. The generated signals closely match the ground-truth signals and achieve strong performance with only one condition modality (i.e., the degraded version of the target modality). Incorporating additional condition modalities further reduces the average differences, underscoring the benefit of multi-modal relationships. Translation tasks (Fig. 3g-i) involve complete signal degradation and are inherently more challenging. These tasks rely entirely on additional condition modalities to generate the target modality. Despite the difficulty, UniCardio produces high-quality signals with only minor deviations from the ground-truth signals. Again, incorporating more condition modalities further improves the performance.

While UniCardio supports versatile generation tasks across all modalities of pre-training data, we benchmark its performance on four particularly challenging tasks well studied in the literature: PPG imputation, ECG imputation, PPG-to-ECG translation, and PPG-to-BP translation. These tasks vary in difficulty, with more complex tasks (e.g., translation compared to imputation) demanding stronger task-specific customizations. This trend is reflected in recent strong baselines. For example, PulseImpute [13] and DeepMVI [14] are designed for PPG/ECG imputation, RDDM [15] and CardioGAN [16] are designed for PPG-to-ECG translation, and ABPNet [33] and PPG2ABP [34] are designed for PPG-to-BP translation.

As shown in Table 1, UniCardio after multi-modal pre-training already achieves competitive or even better performance across all tasks compared to the corresponding baselines. This explicitly demonstrates the benefits of leveraging multi-modal relationships during the pre-training stage. We further evaluate two variants of UniCardio to explore additional enhancements. The first variant, UniCardio-F, fine-tunes the model using previous pre-training data for a specific generation task, making it more tailored to the task of interest. The second variant, UniCardio-M, employs more available condition modalities during the testing stage to leverage their complementary information. Both variants deliver substantial improvements, outperforming all baselines by a large margin. Moreover, UniCardio performs all tasks with a comparably small number of parameters, where the use of different modalities only requires adding the corresponding encoders and decoders (around 0.004M parameters for each modality), making it applicable to wearable devices. In contrast, implementing these functions with task-specific methods would require combining multiple proprietary models, leading to tens of times greater parameter overhead.
Such efficiency underscores the particular benefits of incorporating multi-modal relationships within a unified framework, as UniCardio does.

2.3 Robust real-time health monitoring with UniCardio

UniCardio's integrated functions of signal restoration and modality translation provide a comprehensive enhancement to cardiovascular
signal processing (Fig. 1). To demonstrate its practical effectiveness, we apply UniCardio to publicly available datasets from unseen domains and explore representative applications spanning two major areas: detecting abnormal health conditions and estimating vital signs. Depending on the specific characteristics of each dataset (e.g., availability of multi-modal signals or disease annotations) as well as the unique demands of each application, we assess UniCardio's generative capabilities in a targeted manner, centering on ECG as a typical example in cardiovascular diagnostics. All evaluations are conducted in a tuning-free manner, enabling the pre-trained generative model to be directly applied to diverse healthcare scenarios of unseen domains without additional fine-tuning.

We first evaluate the detection of multiple cardiovascular abnormalities with the PTBXL dataset [36], which includes only ECG signals annotated with abnormal health conditions. Of these, we adopt the ST change (Fig. 4a-c) and hypertrophy (Fig. 4d-f) as representative examples. This scenario focuses on UniCardio's denoising capability, which mitigates noise-induced degradation during recording and transmission. The noisy signals exhibit low accuracy, extremely low specificity, and inflated sensitivity due to excessive false-positive classifications. In contrast, the denoised signals generated by UniCardio resolve these issues, achieving performance comparable to the ground-truth signals.

Fig. 4: Versatile generation-assisted cardiovascular applications. a-c, Detection of ST change using the PTBXL dataset [36]. d-f, Detection of hypertrophy using the PTBXL dataset [36]. g-i, Detection of atrial fibrillation (AF) using the MIMIC PERform AF dataset [37]. j, Estimation of heart rate (HR) using the WESAD dataset [38]. k-l, Estimation of diastolic BP and systolic BP using unseen data of the Cuffless BP dataset [19]. "-N" indicates noisy signals. "-I" indicates intermittent signals. "-G" indicates generated signals.
The generated signals are produced by UniCardio in a tuning-free manner. Here we do not include error bars, since most cases are averaged over the entire test set without repeated sampling.

Additionally, missing segments that may result from sensor disconnections can moderately affect the detection of both abnormalities; this degradation is addressed by UniCardio's imputation capability.

We further assess the detection of atrial fibrillation (AF) using the MIMIC PERform AF dataset [37], which includes both PPG and ECG signals annotated with AF (Fig. 4g-i). AF is a common cardiac arrhythmia associated with an elevated risk of stroke and heart failure, demanding accurate and timely detection. UniCardio addresses two parallel challenges in this scenario: translating high-quality ECG signals from wearable PPG signals and imputing intermittent ECG signals to restore missing segments.
Fig. 5: Visualization of typical ECG abnormalities. We visualize a range of ECG signals exhibiting typical abnormalities, alongside our generated signals from PPG-to-ECG translation. ECG grids are added to facilitate recognizing the corresponding abnormalities. The diagnostic characteristics are further validated by clinician assessments.

The generated ECG signals significantly outperform both PPG and intermittent ECG signals in terms of accuracy, sensitivity, and specificity.

Next, we evaluate the estimation of vital signs, specifically heart rate (HR) using the WESAD dataset [38] (Fig. 4j), and systolic and diastolic BP using unseen data from the Cuffless BP dataset [19] (Fig. 4k-l). Both datasets encompass multi-modal cardiovascular signals, allowing us to validate the complementary benefits of generating non-wearable signals from wearable devices. For HR estimation, UniCardio can effectively translate high-quality ECG signals from wearable PPG signals and impute intermittent ECG signals to restore missing segments. The generated ECG signals achieve significantly more accurate HR estimation compared to the original PPG signals and intermittent ECG signals, substantially reducing the mean absolute error (MAE). For BP estimation, we focus on PPG-to-ECG translation, as intermittent signals have limited impact on systolic and diastolic BP. Combining the original PPG signals with the generated ECG signals markedly improves BP estimation, with a notable reduction in MAE for both systolic and diastolic values.

To demonstrate UniCardio's applicability in concrete terms, we visualize a range of ECG signals exhibiting typical abnormalities, such as ST changes, biphasic P waves, T-wave inversion, peaked T waves, atrial premature contraction (APC), and AF, alongside our generated signals from PPG-to-ECG translation (Fig. 5). The generated signals align closely with the ground-truth signals, faithfully reproducing the diagnostic characteristics of the corresponding abnormalities. For example, APC is characterized by an early-occurring P wave, usually followed by a shortened PR interval and a compensatory pause before the subsequent beat, whereas AF often presents a complete absence of discrete P waves, replaced by rapid, irregular fibrillatory waves [39-41]. Significant ST changes are critical electrocardiographic findings that may indicate acute coronary syndrome, which requires careful evaluation in conjunction with patient symptoms (e.g., chest pain), cardiac biomarkers (e.g., troponin I and troponin T), and risk factors (e.g., smoking, obesity, and family history) [42-44]. The diagnostic characteristics of these ECG signals are validated by clinician assessments (Sec. 4.4), underscoring UniCardio's validity and interpretability. The tuning-free denoising and imputation results demonstrate similarly strong performance, with the ST change of the PTBXL dataset [36] as a typical example (Supplementary Fig. F2 and Fig. F3). Besides, the diffusion process produces step-wise intermediate results that allow human experts to analyze how signals evolve throughout generation (Supplementary Fig.
F4 and Fig. F5), which further enhances interpretability.

To address the efficiency concerns commonly raised with diffusion models, we validate that UniCardio maintains comparable performance even with significantly reduced sampling steps (Supplementary Sec. D and Table F2). Such efficient sampling incurs an average delay of only 0.4 seconds, making UniCardio well-suited for real-time monitoring.
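For intuition on reduced-step sampling, the sketch below subsamples a full T-step schedule to S denoising steps in the style of DDIM [77]; this is a generic illustration under assumed T and S, not the first-order ODE sampler detailed in Supplementary Sec. D.

```python
import numpy as np

def subsample_steps(T=1000, S=50):
    """Evenly subsample a T-step diffusion schedule down to S denoising steps,
    returned in reverse (sampling) order. A generic DDIM-style illustration,
    not the exact efficient sampler used by UniCardio."""
    steps = np.round(np.linspace(0, T - 1, S)).astype(int)
    return steps[::-1]

print(subsample_steps()[:5])  # the five largest of the S selected time steps
```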
Together, these advantages establish UniCardio as a practical, robust, and interpretable tool for advancing cardiovascular healthcare.

3 Discussion

UniCardio introduces a unified framework for multi-modal cardiovascular signal generation. Departing from traditional task-specific methods in cardiovascular signal processing, UniCardio develops a specialized diffusion transformer to capture the multi-modal conditional distributions of cardiovascular signals, including PPG, ECG, and BP, enabling versatile signal restoration and modality translation. By integrating modality-specific encoders and decoders, task-specific attention masks, and a continual learning paradigm, UniCardio achieves superior performance across a broad spectrum of generation tasks with diverse input-output configurations. These capabilities offer considerable benefits in improving monitoring applicability and diagnostic performance for healthcare applications. The model's generative nature further allows human experts to validate the generated signals, analyze their evolution, and ensure alignment with clinical expectations. UniCardio also demonstrates remarkable efficiency in parameter costs and inference times, making it promising for deployment in wearable devices.

Cardiovascular diseases often develop silently over extended periods, with sudden acute events posing significant health risks. Continuous monitoring in daily life is essential to detect early warning signs and facilitate timely medical intervention. UniCardio enables accurate data collection via adaptive signal restoration, especially for wearable signals that are susceptible to noise and interruptions. Additionally, certain cardiovascular signals remain inaccessible to wearable sensors; these can be synthesized by UniCardio to provide more comprehensive assessments. For patients with severe health conditions, prolonged clinical monitoring that involves non-wearable and invasive procedures can cause considerable discomfort. In such cases, modality translation offers an effective alternative for real-time alerts that prompt necessary clinical examinations. Additionally, integrating generated data with AI-driven diagnostic models further enhances automated diagnoses and provides clinicians with valuable insights, supporting proactive healthcare management.

UniCardio is also expected to facilitate research in fields such as psychological and cognitive sciences, where physiological signals are pivotal for analyzing stress [45, 46], cognitive workload [47, 48], emotion recognition [49, 50], etc. In such non-critical care scenarios, where wearable devices serve as the primary means of monitoring, the ability to generate high-fidelity non-wearable signals or even invasive signals from wearable inputs is expected to provide more comprehensive assessments. Compared to the complex ICU data targeted in this work, generating physiological signals for non-critical care scenarios is technically less challenging. As such, UniCardio represents a promising solution for harnessing underutilized physiological signals and expanding their applicability to broader scientific domains.

As a contribution to AI-generated content (AIGC) [51, 52], UniCardio exemplifies the powerful synergy between diffusion models and transformer-based architectures in multi-modal generation, with its specialized designs effectively coordinating multi-modal relationships among versatile generation tasks.
Furthermore, UniCardio highlights the importance of continual learning [25, 26] in generative and multi-modal contexts. Traditionally, continual learning has been applied to reduce training overheads during model updates, albeit at the cost of overall performance. Here, the proposed continual learning paradigm has proven to be a necessary approach to accommodate intricate multi-modal relationships. The proposed strategies for overcoming catastrophic forgetting allow UniCardio to continually integrate newly collected
data, modalities, and tasks, ensuring its scalability and adaptability.

We emphasize that UniCardio's full potential can be further unleashed by collecting more pre-training data covering a variety of signal modalities with both normal and abnormal conditions. Due to the limited availability of public datasets, our demonstration uses a modest amount of ICU patient data of three modalities, yet achieves compelling results. Future improvements in model performance are expected by increasing the amount of pre-training data for the currently involved signal modalities, incorporating additional relevant signal modalities, or introducing supervised information for abnormal conditions. Notably, UniCardio's model architecture and training paradigm inherently support its continual refinement.

Looking ahead, UniCardio holds the promise of establishing a foundational framework for AI-assisted healthcare, enabling real-time generation of high-fidelity signals for diverse applications, continually integrating knowledge from heterogeneous datasets, and personalizing solutions to individual needs. We expect subsequent work to further refine these capabilities and broaden the scope of multi-modal generation, driving translational breakthroughs in personalized healthcare, next-generation diagnostics, and population-level health monitoring.

4 Methods

4.1 Problem Formulation

Given multi-modal cardiovascular signals $\mathbf{s}_1, \mathbf{s}_2, \ldots, \mathbf{s}_k$ with a joint distribution $q(\mathbf{s}_1, \mathbf{s}_2, \ldots, \mathbf{s}_k)$, our objective is to capture all conditional distributions among these signals, which naturally covers the two main categories of cardiovascular signal processing: signal restoration and modality translation. Both categories may receive one or more cardiovascular signals as the condition modalities. For the sake of clarity, here we specify $\mathbf{s}_i$ for $i \in [k]$ as $\mathbf{x}, \mathbf{y}, \mathbf{z}$ under $k = 3$, corresponding to PPG, ECG, and BP, and their joint distribution becomes $q(\mathbf{x}, \mathbf{y}, \mathbf{z})$.

Signal Restoration involves two typical tasks. One is to remove undesired information from observed recordings, such as denoising the background noise that affects the signal quality index (SQI) [53]. The other is synthesizing the desired information, such as the imputation of observed intermittent signals [13, 14]. Both tasks involve transforming the low-quality data $\tilde{\mathbf{x}}$ into its high-quality counterpart, modeled by the conditional distribution $p(\mathbf{x}|\tilde{\mathbf{x}})$. When multi-modal condition information is available for signal restoration, the conditional distribution changes to $p(\mathbf{x}|\tilde{\mathbf{x}}, \mathbf{y})$ or $p(\mathbf{x}|\tilde{\mathbf{x}}, \mathbf{y}, \mathbf{z})$.

Modality Translation refers to synthesizing signals of a target modality from off-the-shelf one(s). For example, PPG-to-ECG generation [15, 16] and PPG-to-BP estimation [33, 34] have been separately explored to capture the conditional distributions $p(\mathbf{y}|\mathbf{x})$ and $p(\mathbf{z}|\mathbf{x})$, respectively. Similarly, when multi-modal condition information is available for modality translation, such as BP estimation from a joint observation of PPG and ECG signals [15], the conditional distribution changes to $p(\mathbf{z}|\mathbf{x}, \mathbf{y})$.

In summary, the restoration and translation of multi-modal cardiovascular signals can be formulated as an assembly of conditional distributions at both the condition and modality levels. Without loss of generality, we first assume the signal of the target modality is $\mathbf{x}$ and the available cardiovascular signals are sampled from $q(\mathbf{x}, \mathbf{y}, \mathbf{z})$.
At the condition level, we then aim to capture the modality-specific conditional distributions $p(\mathbf{x}|\mathbf{c}_x)$, where $\mathbf{c}_x$ includes $2^k - 1$ possibilities: (1) signal restoration $p(\mathbf{x}|\tilde{\mathbf{x}})$; (2) cross-modality translation $p(\mathbf{x}|\mathbf{y})$ and $p(\mathbf{x}|\mathbf{z})$; (3) multi-modal signal restoration $p(\mathbf{x}|\tilde{\mathbf{x}}, \mathbf{y})$, $p(\mathbf{x}|\tilde{\mathbf{x}}, \mathbf{z})$, and $p(\mathbf{x}|\tilde{\mathbf{x}}, \mathbf{y}, \mathbf{z})$; and (4) multi-modal signal translation $p(\mathbf{x}|\mathbf{y}, \mathbf{z})$. At the modality level, each signal of the target modality (e.g., $\mathbf{x}$, $\mathbf{y}$, or $\mathbf{z}$) corresponds to the $2^k - 1$ possibilities of conditional distributions, expanding the total number of tasks to $k \times (2^k - 1)$ with $p(\mathbf{x}|\mathbf{c}_x)$, $p(\mathbf{y}|\mathbf{c}_y)$, and $p(\mathbf{z}|\mathbf{c}_z)$.
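This counting can be checked mechanically. In the minimal sketch below (names are ours), the condition set for each target is any non-empty subset of the pool formed by the degraded target and the other modalities, giving $2^k - 1$ sets per target and $k \times (2^k - 1) = 21$ tasks for $k = 3$.

```python
from itertools import combinations

modalities = ["PPG", "ECG", "BP"]  # k = 3

def condition_sets(target):
    # Pool: the degraded version of the target plus every other modality.
    pool = [target + "-degraded"] + [m for m in modalities if m != target]
    # All non-empty subsets of the pool: 2^k - 1 condition sets.
    return [c for r in range(1, len(pool) + 1) for c in combinations(pool, r)]

tasks = {t: condition_sets(t) for t in modalities}
assert all(len(v) == 2 ** len(modalities) - 1 for v in tasks.values())
print(sum(len(v) for v in tasks.values()))  # k * (2^k - 1) = 21
```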
4.2 Generative Framework

Diffusion Models. To capture the multi-modal conditional distributions inherent in cardiovascular signals, we adopt diffusion models [20, 21], which offer distinct advantages over conventional mapping-based neural networks [54-56]. Diffusion models are composed of two processes. In model training, a forward process gradually transforms the data distribution $p(\mathbf{x}_0)$ into a known prior distribution $p(\mathbf{x}_T)$, typically a standard Gaussian distribution $\mathcal{N}(\mathbf{0}, \mathbf{I})$, with a sufficiently large number of forward time steps $T$. Between the data $\mathbf{x}_0$ and the noise prior $\mathbf{x}_T$, the time-dependent intermediate representations $\mathbf{x}_t$ are noisy versions of the data with increasing noise scales, which can be constructed with the unconditional Gaussian transition kernel $q(\mathbf{x}_t|\mathbf{x}_{t-1})$:

$$q(\mathbf{x}_{1:T}|\mathbf{x}_0) = \prod_{t=1}^{T} q(\mathbf{x}_t|\mathbf{x}_{t-1}), \quad p_0(\mathbf{x}_0) \sim p_{\mathrm{data}}(\mathbf{x}). \quad (1)$$

In model inference, a reverse process starts from the prior distribution $p(\mathbf{x}_T)$ and reconstructs the data distribution $p(\mathbf{x}_0)$ with iterative denoising steps. Each inference step $p(\mathbf{x}_{t-1}|\mathbf{x}_t)$ is a mirror of the corresponding forward step $q(\mathbf{x}_t|\mathbf{x}_{t-1})$, and can be guided with provided condition information $\mathbf{c}$:

$$p_\theta(\mathbf{x}_{0:T-1}|\mathbf{x}_T, \mathbf{c}) = p(\mathbf{x}_T) \prod_{t=1}^{T} p_\theta(\mathbf{x}_{t-1}|\mathbf{x}_t, t, \mathbf{c}), \quad p_T(\mathbf{x}_T) \sim \mathcal{N}(\mathbf{0}, \mathbf{I}). \quad (2)$$

The generative framework of diffusion models provides two key advantages for UniCardio. First, because the forward process is unconditional, it enables generating the distributions of different signal modalities $p(\mathbf{x})$, $p(\mathbf{y})$, and $p(\mathbf{z})$ from the same prior distribution $\mathcal{N}(\mathbf{0}, \mathbf{I})$, which supports the modeling of multiple cardiovascular signals with a unified noise-to-data generation process. Second, because the reverse process can be conditional, it allows the model to perform specific generation tasks from different conditional distributions, such as $p(\mathbf{x}|\mathbf{c}_x)$. Collectively, the unconditional forward process and the conditional reverse process can capture the distributions at both the condition and modality levels, unifying multi-modal signal restoration and translation within a single framework. Further details are provided in Supplementary Sec. A.

Multi-Modal Generative Modeling. Unlike previous customized diffusion models designed for specific tasks in cardiovascular signal generation, such as RDDM [15] for PPG-to-ECG translation, UniCardio captures a variety of conditional distributions at both the condition and modality levels. While recent methods [22] demonstrate success in fitting conditional distributions for text and image data, they often rely on modality-specific time steps $t_x$ for the diffusion models shown in Eq. (1). For example, PPG-to-ECG generation would require learning the distribution $p_\theta(\mathbf{y}_{t-1}|\mathbf{x}_{t_x}, t_x, \mathbf{y}_{t_y}, t_y, \mathbf{z}_{t_z}, t_z)$, where $t_x = 0$, $\mathbf{x}_{t_x} = \mathbf{x}_0$, $t_z = T$, and $\mathbf{z}_{t_z} = \mathbf{z}_T$, in order to define the condition information as the clean PPG signal $\mathbf{x}_0 \sim p(\mathbf{x})$ without the BP signal $\mathbf{z}_T \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$. Moreover, UniCardio scales to more signal modalities, e.g., at least three modalities (PPG, ECG, and BP) versus two modalities (text and image). Providing modality-specific time steps to diffusion models combinatorially increases the complexity of generation tasks, resulting in limited training efficiency and overall performance. To address this, we propose a novel design that simplifies the several modality-specific time steps [22] to a unified time step for all modalities, e.g., from $p_\theta(\mathbf{y}_{t-1}|\mathbf{x}_{t_x}, t_x, \mathbf{y}_{t_y}, t_y, \mathbf{z}_{t_z}, t_z)$ to $p_\theta(\mathbf{y}_{t-1}|\mathbf{x}_t, \mathbf{y}_t, \mathbf{z}_t, t)$. This unified time step enables a seamless integration of multi-modal cardiovascular signal restoration and translation, as shown in Eq. (1) and Eq. (2). Further details are provided in Supplementary Sec. B.
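As a minimal sketch of the unified time step, the snippet below diffuses all modalities with one shared step $t$, matching the notation $p_\theta(\mathbf{y}_{t-1}|\mathbf{x}_t, \mathbf{y}_t, \mathbf{z}_t, t)$. The linear noise schedule and tensor shapes are our assumptions, and the exact conditioning protocol is specified in Supplementary Sec. B rather than reproduced here.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # assumed linear schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

def diffuse(s0, t):
    """Forward-diffuse a signal to step t using the closed-form marginal."""
    eps = torch.randn_like(s0)
    return alpha_bar[t].sqrt() * s0 + (1.0 - alpha_bar[t]).sqrt() * eps, eps

B, L = 8, 256
ppg, ecg, bp = torch.randn(B, L), torch.randn(B, L), torch.randn(B, L)
t = int(torch.randint(0, T, (1,)))               # one shared t, not (t_x, t_y, t_z)
ppg_t, _ = diffuse(ppg, t)
ecg_t, eps = diffuse(ecg, t)
bp_t, _ = diffuse(bp, t)
model_inputs = (ppg_t, ecg_t, bp_t, t)           # a single time step for all modalities
```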
4.3 Model Architecture

Modality-Specific Encoders with Multi-Scale Convolution. Cardiovascular signals such as PPG, ECG, and BP are inherently complex and exhibit diverse temporal patterns that vary across different physiological states. When generating one of them from the others, the correspondence between these modalities essentially spans multiple time scales. To handle the multi-frequency components, we design a multi-scale convolutional encoder for each signal modality, capturing features at multiple levels of temporal granularity.

Specifically, the input signals $\mathbf{s}_1, \mathbf{s}_2, \ldots, \mathbf{s}_k$ correspond to $k$ distinct modalities, where each signal $\mathbf{s}_i$ for $i \in [k]$ is a 1D time series of shape $(B, L, 1)$. Here, $B$ denotes the batch size and $L$ denotes the signal length. These signals are processed by modality-specific encoders $E_i(\cdot)$ for $i \in [k]$, which consist of multiple 1D CNNs with various kernel sizes $\{1, 3, 5, 7, 9, 11\}$ to extract features at different temporal scales. The outputs from the kernels are concatenated along the feature dimension, producing a feature vector of shape $(B, L, C)$, where $C$ represents the number of channels. The feature vectors from all modalities are concatenated along the temporal dimension into a joint feature vector of shape $(B, kL, C)$:

$$\mathbf{h}_s = [\mathbf{h}_{s,1}; \mathbf{h}_{s,2}; \ldots; \mathbf{h}_{s,k}], \quad (3)$$

where $\mathbf{h}_{s,i} = E_i(\mathbf{s}_i)$ for $i \in [k]$. We further introduce an auxiliary modality (AM) as a placeholder for signal restoration tasks where the target modality is occupied as a condition modality. The input signals for AM are non-informative, serving as the start of the diffusion process to facilitate the generative modeling, as will be detailed later. For notational clarity, we update $k \leftarrow k + 1$ in subsequent descriptions, with $k = 4$ corresponding to PPG, ECG, BP, and AM.
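A minimal PyTorch sketch of one modality-specific encoder follows; the per-kernel channel count is a placeholder of ours, since the value of $C$ is not fixed in the text.

```python
import torch
import torch.nn as nn

class MultiScaleEncoder(nn.Module):
    """Modality-specific encoder: parallel 1D convolutions with kernel sizes
    {1, 3, 5, 7, 9, 11}, concatenated along the feature dimension (a sketch;
    the per-kernel channel count is our assumption)."""
    def __init__(self, channels_per_kernel=8):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(1, channels_per_kernel, k, padding=k // 2)
            for k in (1, 3, 5, 7, 9, 11)
        )

    def forward(self, s):                 # s: (B, L, 1)
        s = s.transpose(1, 2)             # -> (B, 1, L) for Conv1d
        h = torch.cat([conv(s) for conv in self.convs], dim=1)  # (B, 6*c, L)
        return h.transpose(1, 2)          # -> (B, L, C) with C = 6*c

enc = MultiScaleEncoder()
h = enc(torch.randn(4, 256, 1))           # (B, L, C) = (4, 256, 48)
```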
Customized Transformer Modules. The joint feature vector $\mathbf{h}_s$ is processed through multiple customized transformer modules (Supplementary Fig. F1), which facilitate intra- and inter-modal interactions among signal points at different timestamps. For clarity, we denote the input to a module as $\mathbf{h}$ ($\mathbf{h} = \mathbf{h}_s$ for the first module) and its output as $\mathbf{h}'$, omitting the module identity where unnecessary. In each module, the diffusion embedding $\mathbf{h}_d$ is first added to the input feature vector, updating $\mathbf{h} \leftarrow \mathbf{h} + \mathbf{h}_d$. The updated vector $\mathbf{h}$ is then passed to a multi-head self-attention (MSA) layer. In order to control the involved modalities in different generation tasks, each MSA layer employs a task-specific attention mask $M$ of shape $(kL, kL)$, which constrains the information flow between condition and target modalities. Specifically, the key $\mathbf{h}_K$, query $\mathbf{h}_Q$, and value $\mathbf{h}_V$ are computed as linear transformations of $\mathbf{h}$:

$$\mathbf{h}_K = \mathbf{h} W_K, \quad \mathbf{h}_Q = \mathbf{h} W_Q, \quad \mathbf{h}_V = \mathbf{h} W_V, \quad (4)$$

where $W_K \in \mathbb{R}^{C \times d_K}$, $W_Q \in \mathbb{R}^{C \times d_Q}$, and $W_V \in \mathbb{R}^{C \times d_V}$ are learnable matrices. Here, $d_K$, $d_Q$, and $d_V$ denote the dimensions of the key, query, and value, respectively, with $d_K = d_Q$. Then, we implement $M$ in the self-attention operation as follows:

$$\mathbf{h}' = \mathrm{MSA}(\mathbf{h}, M) = \mathrm{Softmax}\left(\frac{\mathbf{h}_Q \mathbf{h}_K^\top}{\sqrt{d_K}} + M\right)\mathbf{h}_V, \quad (5)$$

where $\mathbf{h}'$ and $\mathbf{h}$ keep the same shape $(B, kL, C)$.

The task-specific attention mask $M$ controls the information flow with pre-defined values. Each modality $i \in [k]$ maps to the token range $[(i-1)L : iL]$ of $\mathbf{h}$ (Supplementary Table F3). For translation tasks from modality $i$ to modality $j$, $M$ permits intra-modal interactions within $i$, intra-modal interactions within $j$, and inter-modal interactions from $i$ to $j$. The elements of $M$ corresponding to these interactions are set to zero, while other elements are assigned large negative values to block irrelevant interactions during the softmax operation. For denoising or imputation tasks, where AM acts as the target modality ($j = k$), $M$ similarly ensures information flow of intra-modal interactions within $i$, intra-modal interactions within $k$, and inter-modal interactions from $i$ to $k$, while blocking other irrelevant interactions. This unified masking mechanism enables consistent handling of generative tasks involving different modalities and input-output configurations.
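The masking rule can be sketched directly. The snippet below builds $M$ for a translation task under the stated convention (zeros for permitted interactions, large negative values elsewhere); the 0-based modality indexing and the blocking constant are our assumptions.

```python
import torch

def build_mask(k, L, cond, target, neg=-1e9):
    """Task-specific attention mask M of shape (kL, kL): allow intra-modal
    attention within each condition modality and within the target, plus
    information flow from conditions to the target; block everything else.
    `cond` and `target` are modality indices in [0, k)."""
    M = torch.full((k * L, k * L), neg)
    for i in list(cond) + [target]:
        M[i * L:(i + 1) * L, i * L:(i + 1) * L] = 0.0            # intra-modal
    for i in cond:
        M[target * L:(target + 1) * L, i * L:(i + 1) * L] = 0.0  # cond -> target
    return M

# PPG (0) -> ECG (1) translation with k = 4 modalities (PPG, ECG, BP, AM):
M = build_mask(k=4, L=256, cond=[0], target=1)
```

Note that an entry $M[q, v] = 0$ lets query position $q$ attend to key position $v$, so zeroing the target rows against the condition columns is what realizes the "from $i$ to $j$" information flow in Eq. (5).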
The output $\mathbf{h}'$ from the MSA layer is refined by a fully connected (FC) layer $F_1(\cdot)$ and combined with the time embedding $\mathbf{h}_t$, updating $\mathbf{h}' \leftarrow F_1(\mathbf{h}') + \mathbf{h}_t$. This operation expands the feature dimension, resulting in a feature vector of shape $(B, kL, 2C)$. We then implement a gated activation unit [57, 58] to capture the complex conditional distributions of signal modalities given the updated feature vector $\mathbf{h}'$. Specifically, we split $\mathbf{h}'$ along the feature dimension into two parts, i.e., $\mathbf{h}' = [\mathbf{h}'_1; \mathbf{h}'_2]$, each of shape $(B, kL, C)$. These two parts are processed separately through $\tanh$ and $\sigma$ activations, respectively, and combined element-wise as $\mathbf{h}' \leftarrow \tanh(\mathbf{h}'_1) \odot \sigma(\mathbf{h}'_2)$. This operation produces a transformed feature vector $\mathbf{h}'$ with a reduced dimension $(B, kL, C)$.

We further implement residual and skip connections [59, 60] across the customized transformer modules to facilitate convergence of the entire model. Specifically, the transformed feature vector $\mathbf{h}'$ is projected back to an expanded dimension of $(B, kL, 2C)$ through another FC layer $F_2(\cdot)$ and split again into two parts, $\mathbf{h}' = [\mathbf{h}'_1; \mathbf{h}'_2]$, each of shape $(B, kL, C)$. The first part $\mathbf{h}'_1$ is added to the input of the current transformer module, serving as the input to the next transformer module. The second part $\mathbf{h}'_2$ is accumulated across all transformer modules and passed through an additional FC layer $F_3(\cdot)$. This produces the final feature vector $\mathbf{h}'_s = F_3(\sum \mathbf{h}'_2)$, retaining the shape $(B, kL, C)$.

Modality-Specific Decoders. The final feature vector $\mathbf{h}'_s$ is split into modality-specific components $\mathbf{h}'_{s,1}, \mathbf{h}'_{s,2}, \ldots, \mathbf{h}'_{s,k}$, each of shape $(B, L, C)$. These components are processed through modality-specific decoders $D_i(\cdot)$ for $i \in [k]$, where each decoder is implemented as a two-layer MLP with ReLU activation. The decoders project the split feature vectors into their respective generated signals of shape $(B, L, 1)$:

$$\hat{\mathbf{s}}_i = D_i(\mathbf{h}'_{s,i}), \quad i \in [k]. \quad (6)$$

In particular, the first $k - 1$ decoders $D_i$, $i \in [k-1]$, are optimized through network training, whereas the last decoder $D_k$ of AM inherits weights from one of the $k - 1$ decoders depending on the generation task. For example, when performing ECG imputation with $k = 4$ representing PPG, ECG, BP, and AM, the decoder $D_k$ reuses the weights of $D_2$ to generate ECG signals. This strategy ensures consistency across tasks and reduces the need for additional training of the final decoder.
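A minimal sketch of the module tail described above, with layer names F1 and F2 following the text and the channel count C as a placeholder:

```python
import torch
import torch.nn as nn

class GatedModuleTail(nn.Module):
    """Tail of one customized transformer module (a sketch): feature expansion
    plus time embedding, the tanh-sigmoid gated activation, and the split of a
    second expansion into residual and skip paths."""
    def __init__(self, C):
        super().__init__()
        self.F1 = nn.Linear(C, 2 * C)
        self.F2 = nn.Linear(C, 2 * C)

    def forward(self, h, h_time):                # h: (B, kL, C), h_time: (B, kL, 2C)
        h = self.F1(h) + h_time                  # expand to (B, kL, 2C)
        h1, h2 = h.chunk(2, dim=-1)
        h = torch.tanh(h1) * torch.sigmoid(h2)   # gated activation, back to (B, kL, C)
        res, skip = self.F2(h).chunk(2, dim=-1)  # residual and skip paths, each (B, kL, C)
        return res, skip

tail = GatedModuleTail(C=48)
res, skip = tail(torch.randn(2, 1024, 48), torch.randn(2, 1024, 96))
```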
4.4 Experimental Setup

Dataset. UniCardio is pre-trained with the Cuffless BP dataset [19], which contains 339 hours of trimodal recordings (PPG, ECG, and BP) collected from ICU patients. PPG and ECG signals are processed using a 0.5 Hz high-pass Butterworth filter (order = 5) and a 50 Hz pulse filter, followed by z-score normalization [61]. BP signals are left unfiltered and unnormalized to preserve their absolute magnitude. To ensure data quality, we address recording errors such as sensor displacements and device disconnections by filtering out noisy signals based on sample entropy. This metric quantifies the regularity and complexity of a time series by measuring the likelihood that a given pattern remains consistent in subsequent points. We set thresholds of 0.2, 0.3, and 0.2 for PPG, ECG, and BP signals, respectively, to exclude highly irregular or complex signals. ECG signals are more prone to corruption due to their high-frequency components and therefore require additional pre-processing. We first detect and rectify inverted ECG signals caused by sensor displacements, and then apply SQI-based selection [61] for further refinement. Finally, PPG and ECG signals are min-max normalized, while BP signals are z-scored using the mean and standard deviation calculated from the training set. The processed signals are segmented into 4-second pairs and split into training, validation, and test sets in an 80%-10%-10% ratio.

Training Regime. For continual learning, the entire training process (a total of $e$ epochs) is divided into $k$ phases ($e/k$ epochs per phase), incorporating generation tasks conditioned on an increasing number of modalities. Corresponding to the three cardiovascular signals, i.e., PPG, ECG, and BP, we have a total of four phases ($k = 4$). The first three phases are used to learn the generation tasks conditioned on one, two, and three modalities, respectively, while the last one is used to balance the task distribution. In the first phase, the model is trained exclusively on one-condition tasks, with training batches equally allocated between translation and imputation for a single condition modality. In the second phase, two-condition tasks are introduced. The training batches for one- and two-condition tasks are 50% each, equally allocated between translation and imputation. In the third phase, three-condition tasks are incorporated. The training batches allocate 25% to one- and two-condition tasks of translation and imputation, and 50% to three-condition tasks of imputation only. In the final phase, the model is fine-tuned to balance the one-, two-, and three-condition tasks, which are allocated equal proportions of training batches. The task-specific attention masks control the condition modalities for each generation task throughout the training process. The model is optimized using an SGD optimizer over the total $e$ epochs. The learning rate starts at $1 \times 10^{-3}$ at the beginning of the first $e/k$ epochs to provide a strong initialization for one-condition tasks. It is then reduced to $1 \times 10^{-4}$ at epoch $0.7 \times e/k$ as the number of condition modalities increases. In the final $e/k$ epochs, the learning rate is further decreased to $1 \times 10^{-5}$ to enable precise fine-tuning and ensure balanced performance across all tasks. We empirically find that $e = 800$ already achieves superior performance. Further details are provided in Supplementary Sec. C, and the pseudo code is provided in Supplementary Sec. E.
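The phase index and learning-rate schedule can be written compactly. The sketch below encodes the breakpoints stated above for $e = 800$ and $k = 4$; the function signature is ours.

```python
def phase_and_lr(epoch, e=800, k=4):
    """Continual-learning curriculum from the text: k = 4 phases of e/k epochs
    (one-, two-, three-condition tasks, then balancing), with learning-rate
    drops at epoch 0.7 * e/k and at the start of the final e/k epochs."""
    phase = min(epoch // (e // k), k - 1) + 1
    if epoch < 0.7 * e / k:
        lr = 1e-3
    elif epoch < e - e // k:
        lr = 1e-4
    else:
        lr = 1e-5
    return phase, lr

assert phase_and_lr(0) == (1, 1e-3)     # strong initialization for one-condition tasks
assert phase_and_lr(200) == (2, 1e-4)   # two-condition tasks introduced
assert phase_and_lr(700) == (4, 1e-5)   # final balancing phase
```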
Downstream Application. After the pre-training stage, we consider a variety of downstream applications on different datasets, including the PTBXL dataset [36] for the detection of abnormal health conditions, the MIMIC PERform AF dataset [37] for the detection of AF, the WESAD dataset [38] for HR estimation, and the unseen data from the Cuffless BP dataset [19] for BP estimation. These datasets are processed with a similar pipeline as the pre-training dataset. The detection of AF, ST change, and hypertrophy is performed by training respective classification models based on 1D VGG-16 architectures [62]. HR estimation is conducted by detecting heartbeat peaks with common algorithms [63]. BP estimation is achieved by training a regression model based on a CNN-LSTM architecture [64].

Evaluation Metric. We evaluate the quality of generated signals with three common metrics: the root mean squared error (RMSE), the mean absolute error (MAE), and the Kolmogorov-Smirnov test (KS-Test); a minimal computation sketch follows the list below. Additionally, we assess the ability to detect cardiovascular abnormalities with accuracy, specificity, and sensitivity, which reflect different proportions of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN).

• RMSE quantifies the square root of the mean of the squared differences between the generated signal $\hat{\mathbf{x}}$ and the ground-truth signal $\mathbf{x}$:

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(\hat{x}_i - x_i)^2}, \quad (7)$$

where $N$ is the total number of samples.

• MAE evaluates the average absolute difference between the generated signal $\hat{\mathbf{x}}$ and the ground-truth signal $\mathbf{x}$:

$$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}|\hat{x}_i - x_i|. \quad (8)$$

• KS-Test evaluates the maximum distance between the cumulative distribution functions of the generated signal $F_{\hat{x}}(a)$ and the ground-truth signal $F_x(a)$, with the supremum taken over all possible values of $a$:

$$\mathrm{KS} = \sup_a |F_{\hat{x}}(a) - F_x(a)|. \quad (9)$$

• Accuracy measures the rate of correctly classified samples, defined as:

$$\mathrm{Accuracy} = \frac{\mathrm{TP} + \mathrm{TN}}{\mathrm{TP} + \mathrm{TN} + \mathrm{FP} + \mathrm{FN}}. \quad (10)$$

• Specificity quantifies the true negative rate, defined as:

$$\mathrm{Specificity} = \frac{\mathrm{TN}}{\mathrm{TN} + \mathrm{FP}}. \quad (11)$$

• Sensitivity quantifies the true positive rate, defined as:

$$\mathrm{Sensitivity} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}}. \quad (12)$$
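A minimal numpy sketch of the three signal-quality metrics in Eqs. (7)-(9), with the KS statistic evaluated empirically over the pooled sample values:

```python
import numpy as np

def rmse(x_hat, x):
    return np.sqrt(np.mean((x_hat - x) ** 2))            # Eq. (7)

def mae(x_hat, x):
    return np.mean(np.abs(x_hat - x))                    # Eq. (8)

def ks_stat(x_hat, x):
    """Empirical KS statistic: sup_a |F_xhat(a) - F_x(a)| (Eq. 9). The supremum
    of the difference of two step functions is attained at the sample values,
    so evaluating over the pooled samples is exact."""
    grid = np.sort(np.concatenate([x_hat, x]))
    F_hat = np.searchsorted(np.sort(x_hat), grid, side="right") / len(x_hat)
    F = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    return np.max(np.abs(F_hat - F))

x = np.random.randn(256)
print(rmse(x + 0.1, x), mae(x + 0.1, x), ks_stat(x + 0.1, x))
```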
Clinician Assessment. The diagnostic characteristics of ground-truth and generated ECG signals are validated by two cardiology clinicians in a back-to-back manner (a third clinician is asked to verify in case of inconsistency) for routine quality assurance. The employed datasets [19] are fully anonymized, de-identified, and have been previously released for research purposes. We confirmed with the Tsinghua University Ethics Committee that such routine analysis and processing of publicly available data does not require special approval.

Data Availability

All benchmark datasets used in this paper are publicly available, including the Cuffless BP dataset [19], the PTBXL dataset [36], the MIMIC PERform AF dataset [37], and the WESAD dataset [38].

Code Availability

Our code will be released at https://github.com/thu-ml/UniCardio.

Acknowledgments

This work was supported by the NSFC Projects (62406160, 62350080, 92270001, U24A20342), Tsinghua Institute for Guo Qiang, and the High Performance Computing Center, Tsinghua University. J.Z. is also supported by the XPlorer Prize.

Author Contributions Statement

Z.C., Y.M., L.W. and J.Z. conceived the project. Z.C., Y.M. and L.W. designed the computational model. Y.M. performed the main experiments, assisted by Z.C. and L.W.. Z.C., Y.M. and L.W. analyzed the data. L.W. wrote the paper, assisted by Z.C., Y.M. and L.F.. All authors revised the paper. L.W. and J.Z. supervised the project.

Competing Interests Statement

The authors declare no competing interests.

References

[1] Organization, W.H.: Cardiovascular Diseases (CVDs). https://www.who.int/health-topics/cardiovascular-diseases (2021)

[2] Alian, A.A., Shelley, K.H.: Photoplethysmography. Best Practice & Research Clinical Anaesthesiology 28(4), 395–406 (2014)

[3] Mirvis, D.M., Goldberger, A.L.: Electrocardiography. Heart Disease 1, 82–128 (2001)

[4] McGhee, B.H., Bridges, E.J.: Monitoring
arterial blood pressure: what you may not know. Critical Care Nurse 22(2), 60–79 (2002)

[5] Elgendi, M., Haugg, F., Fletcher, R.R., Allen, J., Shin, H., Alian, A., Menon, C.: Recommendations for evaluating photoplethysmography-based algorithms for blood pressure assessment. Communications Medicine 4(1), 1–7 (2024)

[6] Tamura, T., Maeda, Y., Sekine, M., Yoshida, M.: Wearable photoplethysmographic sensors—past and present. Electronics 3(2), 282–302 (2014)

[7] Kligfield, P., Gettes, L.S., Bailey, J.J., Childers, R., Deal, B.J., Hancock, E.W., Van Herpen, G., Kors, J.A., Macfarlane, P., Mirvis, D.M., et al.: Recommendations for the standardization and interpretation of the electrocardiogram. Circulation 115(10), 1306–1324 (2007)

[8] Trobec, R., Tomašić, I., Rashkovska, A., Depolli, M., Avbelj, V.: Body sensors and electrocardiography (2018)

[9] Pascual, J.L., Horak, J., Gracias, V.H., Neligan, P.J.: Chapter 19 - volume status and cardiac function. In: Le Roux, P.D., Levine, J.M., Kofke, W.A. (eds.) Monitoring in Neurocritical Care, pp. 176–188. W.B. Saunders, Philadelphia (2013)

[10] Saugel, B., Dueck, R., Wagner, J.Y.: Measurement of blood pressure. Best Practice & Research Clinical Anaesthesiology 28(4), 309–322 (2014)

[11] Chiang, H.-T., Hsieh, Y.-Y., Fu, S.-W., Hung, K.-H., Tsao, Y., Chien, S.-Y.: Noise reduction in ecg signals using fully convolutional denoising autoencoders. IEEE Access 7, 60806–60813 (2019)

[12] Ahmed, R., Mehmood, A., Rahman, M.M.U., Dobre, O.A.: A deep learning and fast wavelet transform-based hybrid approach for denoising of ppg signals. IEEE Sensors Letters 7(7), 1–4 (2023)

[13] Xu, M., Moreno, A., Nagesh, S., Aydemir, V., Wetter, D., Kumar, S., Rehg, J.M.: Pulseimpute: A novel benchmark task for pulsative physiological signal imputation. Advances in Neural Information Processing Systems 35, 26874–26888 (2022)

[14] Bansal, P., Deshpande, P., Sarawagi, S.: Missing value imputation on multidimensional time series. Proceedings of the VLDB Endowment 14(11), 2533–2545 (2021)

[15] Shome, D., Sarkar, P., Etemad, A.: Region-disentangled diffusion model for high-fidelity ppg-to-ecg translation. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, pp. 15009–15019 (2024)

[16] Sarkar, P., Etemad, A.: Cardiogan: Attentive generative adversarial network with dual discriminators for synthesis of ecg from ppg. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 488–496 (2021)

[17] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020)

[18] De Bortoli, V., Thornton, J., Heng, J., Doucet, A.: Diffusion schrödinger bridge with applications to score-based generative modeling. Advances in Neural Information Processing Systems 34, 17695–17709 (2021)

[19] Kachuee, M., Kiani, M.M., Mohammadzade, H., Shabany, M.: Cuffless blood pressure estimation algorithms for continuous health-care monitoring. IEEE Transactions on Biomedical Engineering 64(4), 859–869 (2016)

[20] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Advances in Neural Information Processing Systems (2020)

[21] Song, Y., Sohl-Dickstein, J., Kingma, D.P., Kumar, A., Ermon, S.: Score-based generative modeling through stochastic differential equations.
In: Proceedings of the International Conference on Learning Representations (2021)

[22] Bao, F., Nie, S., Xue, K., Li, C., Pu, S., Wang, Y., Yue, G., Cao, Y., Su, H., Zhu, J.: One transformer fits all
distributions in multi-modal diffusion at scale. In: Proceedings of the International Conference on Machine Learning, pp. 1692–1717 (2023). PMLR

[23] Peebles, W., Xie, S.: Scalable diffusion models with transformers. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4195–4205 (2023)

[24] Bao, F., Nie, S., Xue, K., Cao, Y., Li, C., Su, H., Zhu, J.: All are worth words: A vit backbone for diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (2023)

[25] Wang, L., Zhang, X., Su, H., Zhu, J.: A comprehensive survey of continual learning: theory, method and application. IEEE Transactions on Pattern Analysis and Machine Intelligence (2024)

[26] Parisi, G.I., Kemker, R., Part, J.L., Kanan, C., Wermter, S.: Continual lifelong learning with neural networks: A review. Neural Networks 113, 54–71 (2019)

[27] Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A.A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., et al.: Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences 114(13), 3521–3526 (2017)

[28] Wang, L., Zhang, M., Jia, Z., Li, Q., Bao, C., Ma, K., Zhu, J., Zhong, Y.: Afec: Active forgetting of negative transfer in continual learning. Advances in Neural Information Processing Systems 34, 22379–22391 (2021)

[29] Rebuffi, S.-A., Kolesnikov, A., Sperl, G., Lampert, C.H.: Icarl: Incremental classifier and representation learning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2001–2010 (2017)

[30] Wang, L., Zhang, X., Yang, K., Yu, L., Li, C., Hong, L., Zhang, S., Li, Z., Zhong, Y., Zhu, J.: Memory replay with data compression for continual learning. In: Proceedings of the International Conference on Learning Representations (2021)

[31] Serra, J., Suris, D., Miron, M., Karatzoglou, A.: Overcoming catastrophic forgetting with hard attention to the task. In: Proceedings of the International Conference on Machine Learning, pp. 4548–4557 (2018)

[32] Wang, L., Zhang, X., Li, Q., Zhang, M., Su, H., Zhu, J., Zhong, Y.: Incorporating neuro-inspired adaptability for continual learning in artificial intelligence. Nature Machine Intelligence 5(12), 1356–1368 (2023)

[33] Paviglianiti, A., Randazzo, V., Pasero, E., Vallan, A.: Noninvasive arterial blood pressure estimation using abpnet and vital-ecg. In: IEEE International Instrumentation and Measurement Technology Conference, pp. 1–5 (2020). IEEE

[34] Ibtehaz, N., Mahmud, S., Chowdhury, M.E., Khandakar, A., Salman Khan, M., Ayari, M.A., Tahir, A.M., Rahman, M.S.: Ppg2abp: Translating photoplethysmogram (ppg) signals to arterial blood pressure (abp) waveforms. Bioengineering 9(11), 692 (2022)

[35] Bing, P., Liu, W., Zhai, Z., Li, J., Guo, Z., Xiang, Y., He, B., Zhu, L.: A novel approach for denoising electrocardiogram signals to detect cardiovascular diseases using an efficient hybrid scheme. Frontiers in Cardiovascular Medicine 11, 1277123 (2024)

[36] Wagner, P., Strodthoff, N., Bousseljot, R.-D., Kreiseler, D., Lunze, F.I., Samek, W., Schaeffter, T.: Ptb-xl, a large publicly available electrocardiography dataset. Scientific Data 7(1), 1–15 (2020)

[37] Charlton, P.H., Kotzen, K., Mejía-Mejía, E., Aston, P.J., Budidha, K., Mant, J., Pettit, C., Behar, J.A., Kyriacou, P.A.: Detecting beats in the photoplethysmogram: benchmarking open-source algorithms. Physiological Measurement 43(8), 085007 (2022)

[38] Schmidt, P.,
Reiss, A., Duerichen, R., Marberger, C., Van Laerhoven, K.: Introducing wesad, a multimodal dataset for wearable stress and affect detection. In: Proceedings of the ACM International Conference on Multimodal Interaction, pp. 400–408 (2018)

[39] Kirchhof, P., Benussi, S., Kotecha, D., Ahlsson, A., Atar, D., Casadei, B., Castella, M., Diener, H.-C., Heidbuchel, H., Hendriks, J., et al.: 2016 esc guidelines for the management of atrial fibrillation developed in collaboration with eacts. Polish Heart Journal (Kardiologia Polska) 74(12), 1359–1469 (2016)

[40] January, C.T., Wann, L.S., Alpert, J.S., Calkins, H., Cigarroa, J.E., Cleveland Jr, J.C., Conti, J.B., Ellinor, P.T., Ezekowitz, M.D., Field, M.E., et al.: 2014 aha/acc/hrs guideline for the management of patients with atrial fibrillation: executive summary: a report of the american college of cardiology/american heart association task force on practice guidelines and the heart rhythm society. Circulation 130(23), 2071–2104 (2014)

[41] Zipes, D.P., Jalife, J., Stevenson, W.G.: Cardiac Electrophysiology: from Cell to Bedside E-book. Elsevier Health Sciences (2017)

[42] Thygesen, K., Alpert, J.S., Jaffe, A.S., Chaitman, B.R., Bax, J.J., Morrow, D.A., White, H.D., on behalf of the Joint European Society of Cardiology (ESC)/American College of Cardiology (ACC)/American Heart Association (AHA)/World Heart Federation (WHF) Task Force for the Universal Definition of Myocardial Infarction, E.G.: Fourth universal definition of myocardial infarction (2018). Circulation 138(20), 618–651 (2018)

[43] Antman, E.M., Cohen, M., Bernink, P.J., McCabe, C.H., Horacek, T., Papuchis, G., Mautner, B., Corbalan, R., Radley, D., Braunwald, E.: The timi risk score for unstable angina/non–st elevation mi: a method for prognostication and therapeutic decision making. JAMA 284(7), 835–842 (2000)

[44] Amsterdam, E.A., Wenger, N.K., Brindis, R.G., Casey, D.E., Ganiats, T.G., Holmes, D.R., Jaffe, A.S., Jneid, H., Kelly, R.F., Kontos, M.C., et al.: 2014 aha/acc guideline for the management of patients with non–st-elevation acute coronary syndromes: a report of the american college of cardiology/american heart association task force on practice guidelines. Journal of the American College of Cardiology 64(24), 139–228 (2014)

[45] Karthikeyan, P., Murugappan, M., Yaacob, S.: A review on stress inducement stimuli for assessing human stress using physiological signals. In: IEEE International Colloquium on Signal Processing and Its Applications, pp. 420–425 (2011). IEEE

[46] de Santos Sierra, A., Ávila, C.S., Casanova, J.G., Del Pozo, G.B.: A stress-detection system based on physiological signals and fuzzy logic. IEEE Transactions on Industrial Electronics 58(10), 4857–4865 (2011)

[47] Momeni, N., Dell'Agnola, F., Arza, A., Atienza, D.: Real-time cognitive workload monitoring based on machine learning using physiological signals in rescue missions. In: Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 3779–3785 (2019). IEEE

[48] Ranchet, M., Morgan, J.C., Akinwuntan, A.E., Devos, H.: Cognitive workload across the spectrum of cognitive impairments: A systematic review of physiological measures. Neuroscience & Biobehavioral Reviews 80, 516–537 (2017)

[49] Shu, L., Xie, J., Yang, M., Li, Z., Li, Z., Liao, D., Xu, X., Yang, X.: A review of emotion recognition using physiological signals.
Sensors 18(7), 2074 (2018)

[50] Ayata, D., Yaslan, Y., Kamasak, M.E.: Emotion recognition from multimodal physiological signals for emotion aware healthcare systems. Journal of
Medical and Biological Engineering 40, 149–157 (2020)

[51] Cao, Y., Li, S., Liu, Y., Yan, Z., Dai, Y., Yu, P.S., Sun, L.: A comprehensive survey of ai-generated content (aigc): A history of generative ai from gan to chatgpt. arXiv preprint arXiv:2303.04226 (2023)

[52] Mai, W., Zhang, J., Fang, P., Zhang, Z.: Brain-conditional multimodal synthesis: A survey and taxonomy. IEEE Transactions on Artificial Intelligence (2024)

[53] Elgendi, M.: Optimal signal quality index for photoplethysmogram signals. Bioengineering 3(4), 21 (2016)

[54] Zhu, F., Ye, F., Fu, Y., Liu, Q., Shen, B.: Electrocardiogram generation with a bidirectional lstm-cnn generative adversarial network. Scientific Reports 9(1), 6734 (2019)

[55] Mazumder, O., Banerjee, R., Roy, D., Bhattacharya, S., Ghose, A., Sinha, A.: Synthetic ppg signal generation to improve coronary artery disease classification: Study with physical model of cardiovascular system. IEEE Journal of Biomedical and Health Informatics 26(5), 2136–2146 (2022)

[56] Ezzat, A., Omer, O.A., Mohamed, U.S., Mubarak, A.S.: Ecg signal reconstruction from ppg using a hybrid attention-based deep learning network. EURASIP Journal on Advances in Signal Processing 2024(1), 95 (2024)

[57] Van den Oord, A., Kalchbrenner, N., Espeholt, L., Vinyals, O., Graves, A., et al.: Conditional image generation with pixelcnn decoders. Advances in Neural Information Processing Systems 29 (2016)

[58] Van Den Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A., Kavukcuoglu, K., et al.: Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499 (2016)

[59] Kong, Z., Ping, W., Huang, J., Zhao, K., Catanzaro, B.: Diffwave: A versatile diffusion model for audio synthesis. In: Proceedings of the International Conference on Learning Representations (2021)

[60] Tashiro, Y., Song, J., Song, Y., Ermon, S.: Csdi: Conditional score-based diffusion models for probabilistic time series imputation. Advances in Neural Information Processing Systems 34, 24804–24816 (2021)

[61] Makowski, D., Pham, T., Lau, Z.J., Brammer, J.C., Lespinasse, F., Pham, H., Schölzel, C., Chen, S.A.: Neurokit2: A python toolbox for neurophysiological signal processing. Behavior Research Methods, 1–8 (2021)

[62] Liu, Z., Zhou, B., Jiang, Z., Chen, X., Li, Y., Tang, M., Miao, F.: Multiclass arrhythmia detection and classification from photoplethysmography signals using a deep convolutional neural network. Journal of the American Heart Association 11(7), 023555 (2022)

[63] Nabian, M., Yin, Y., Wormwood, J., Quigley, K.S., Barrett, L.F., Ostadabbas, S.: An open-source feature extraction tool for the analysis of peripheral physiological data. IEEE Journal of Translational Engineering in Health and Medicine 6, 1–11 (2018)

[64] Jeong, D.U., Lim, K.M.: Combined deep cnn–lstm network-based multitasking learning architecture for noninvasive continuous blood pressure estimation using difference in ecg-ppg features. Scientific Reports 11(1), 13539 (2021)

[65] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (2022)

[66] Esser, P., Kulal, S., Blattmann, A., Entezari, R., Müller, J., Saini, H., Levi, Y., Lorenz, D., Sauer, A., Boesel, F., Podell, D., Dockhorn, T., English, Z., Rombach, R.: Scaling rectified flow transformers for high-resolution image synthesis.
In: Proceedings of the International Conference on
Machine Learning (2024)

[67] Chen, Z., Tan, X., Wang, K., Pan, S., Mandic, D.P., He, L., Zhao, S.: Infergrad: Improving diffusion models for vocoder by considering inference in training. In: IEEE International Conference on Acoustics, Speech and Signal Processing (2023)

[68] Liu, H., Chen, Z., Yuan, Y., Mei, X., Liu, X., Mandic, D.P., Wang, W., Plumbley, M.D.: Audioldm: Text-to-audio generation with latent diffusion models. In: Proceedings of the International Conference on Machine Learning (2023)

[69] Mo, S., Chen, Z., Bao, F., Zhu, J.: Diffgap: A lightweight diffusion module in contrastive space for bridging cross-model gap. In: IEEE International Conference on Acoustics, Speech and Signal Processing (2025)

[70] Wang, X., Wang, Y., Wu, Y., Song, R., Tan, X., Chen, Z., Xu, H., Sui, G.: Tiva: Time-aligned video-to-audio generation. In: Proceedings of the ACM International Conference on Multimedia (2024)

[71] Wang, Z., Lu, C., Wang, Y., Bao, F., Li, C., Su, H., Zhu, J.: Prolificdreamer: High-fidelity and diverse text-to-3d generation with variational score distillation. In: Advances in Neural Information Processing Systems (2023)

[72] Bao, F., Xiang, C., Yue, G., He, G., Zhu, H., Zheng, K., Zhao, M., Liu, S., Wang, Y., Zhu, J.: Vidu: a highly consistent, dynamic and skilled text-to-video generator with diffusion models. arXiv preprint arXiv:2405.04233 (2024)

[73] Wang, Y., Chen, Z., Xiaoyu, C., Wei, Y., Zhu, J., Chen, J.: Framebridge: Improving image-to-video generation with bridge models. In: Proceedings of the International Conference on Machine Learning (2025)

[74] Miao, Y., Chen, Z., Li, C., Mandic, D.P.: Respdiff: An end-to-end multi-scale RNN diffusion model for respiratory waveform estimation from PPG signals. In: IEEE International Conference on Acoustics, Speech and Signal Processing (2025)

[75] Jenkins, A., Chen, Z., Ng, F.S., Mandic, D.P.: Improving diffusion models for ECG imputation with an augmented template prior. arXiv preprint arXiv:2310.15742 (2023)

[76] Chang, P., Li, H., Quan, S.F., Lu, S., Wung, S., Roveda, J., Li, A.: A transformer-based diffusion probabilistic model for heart rate and blood pressure forecasting in intensive care unit. Computer Methods and Programs in Biomedicine 246, 108060 (2024)

[77] Song, J., Meng, C., Ermon, S.: Denoising diffusion implicit models. In: Proceedings of the International Conference on Learning Representations (2021)

[78] Ma, C., Guo, L., Zhang, H., Liu, Z., Zhang, G.: Diffcnbp: Lightweight diffusion model for iomt-based continuous cuffless blood pressure waveform monitoring using PPG. IEEE Internet of Things Journal 12(1), 61–80 (2025)

[79] Zhou, L., Lou, A., Khanna, S., Ermon, S.: Denoising diffusion bridge models. In: Proceedings of the International Conference on Learning Representations (2024)

[80] Hang, T., Gu, S., Li, C., Bao, J., Chen, D., Hu, H., Geng, X., Guo, B.: Efficient diffusion training via min-snr weighting strategy. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (2023)

[81] Popov, V., Vovk, I., Gogoryan, V., Sadekova, T., Kudinov, M.A.: Grad-tts: A diffusion probabilistic model for text-to-speech. In: Proceedings of the International Conference on Machine Learning (2021)

[82] Bao, F., Li, C., Sun, J., Zhu, J.: Why are conditional generative models better than unconditional ones? arXiv preprint arXiv:2212.00362 (2022)

[83] Chen, Z., Dees, B.S., Mandic, D.P.: A probabilistic beat-to-beat
Contents

1 Introduction
2 Results
  2.1 Unified multi-modal generative modeling for cardiovascular signals
  2.2 Versatile high-quality cardiovascular signal generation
  2.3 Robust real-time health monitoring with UniCardio
3 Discussion
4 Methods
  4.1 Problem Formulation
  4.2 Generative Framework
  4.3 Model Architecture
  4.4 Experimental Setup
A Diffusion Models
  A.1 Unconditional Diffusion Models
  A.2 Conditional Diffusion Models
B Unified Generation Prior
C Unified Training Process
  C.1 Conditional Learning
  C.2 Progressive Training of UniCardio
D Efficient Sampling Process
  D.1 First-Order ODE Sampler
  D.2 Quantitative Results
  D.3 Case Study
E Pseudo Code of Training and Sampling
F Additional Results

Appendix A  Diffusion Models

A.1  Unconditional Diffusion Models

Diffusion models, known as denoising diffusion probabilistic models [20] or score-based generative models [21], approximate data distributions by learning time-dependent score functions that reverse a predefined data-to-noise forward process. During sampling, they iteratively remove Gaussian noise and refine the generation results into structured data through a coarse-to-fine trajectory, enabling high-fidelity generation across diverse modalities such as image [22, 24, 65, 66], audio [67-70], 3D shape [71], and video [72, 73]. Owing to their strong generative performance, diffusion models have become foundational components in modern data generation systems. Recent studies have extended diffusion models to bio-electrical signal processing [74], including applications in pulsative signal imputation and
forecasting [13, 75, 76], as well as modality translation such as PPG-to-ECG synthesis [15]. However, most previous studies remain task-specific, lacking a unified framework for multi-modal signal generation. Despite the growing importance of cardiovascular monitoring, a diffusion model capable of handling multiple signal modalities within a unified system remains underexplored.

Diffusion models comprise two coupled Markov chains: a forward process that progressively transforms the data distribution into a Gaussian noise distribution, and a reverse process that iteratively reconstructs data from the noise. The forward process defines a transformation from a data sample x_0 ~ p_data(x) to a noisy latent x_T, governed by a predefined noise schedule over T time steps:

q(x_{1:T} \mid x_0) = \prod_{t=1}^{T} q(x_t \mid x_{t-1}), \qquad x_0 \sim p_{\mathrm{data}}(x).   (A1)

The transition kernel q(x_t \mid x_{t-1}) is typically defined as

q(x_t \mid x_{t-1}) = \mathcal{N}\big(x_t;\ \sqrt{1-\beta_t}\, x_{t-1},\ \beta_t \mathbf{I}\big),

where \beta_t is a small positive constant. For efficiency, the marginal distribution at time step t can be computed directly as

q(x_t \mid x_0) = \mathcal{N}\big(x_t;\ \sqrt{\bar{\alpha}_t}\, x_0,\ (1-\bar{\alpha}_t)\,\mathbf{I}\big),   (A2)

where \alpha_t = 1 - \beta_t and \bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s.

The reverse process starts from a standard Gaussian prior p(x_T) and aims to reconstruct the data distribution with iterative sampling steps:

p_\theta(x_{0:T-1} \mid x_T) = \prod_{t=1}^{T} p_\theta(x_{t-1} \mid x_t), \qquad p(x_T) = \mathcal{N}(\mathbf{0}, \mathbf{I}).   (A3)

Each reverse transition is parameterized as

p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\big(x_{t-1};\ \mu_\theta(x_t, t),\ \sigma_\theta^2(x_t, t)\,\mathbf{I}\big),   (A4)

where \mu_\theta and \sigma_\theta^2 are the learned mean and variance functions. In practice, the variance is often fixed as \sigma_t^2 = \frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_t}\beta_t, or simply \sigma_t^2 = \beta_t, which yields similar performance while simplifying training [20, 59, 77]. The mean function \mu_\theta can be derived from various parameterizations such as score, noise, or data prediction. Without loss of generality, we adopt noise prediction and therefore have

\mu_\theta(x_t, t) = \frac{1}{\sqrt{\alpha_t}} \Big( x_t - \frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}}\, \epsilon_\theta(x_t, t) \Big).   (A5)

The training objective of diffusion models then minimizes the prediction error of the noise added in the forward process:

\mathcal{L}_{\mathrm{udm}}(\theta) = \mathbb{E}_{x_0, \epsilon, t} \Big[ \big\| \epsilon - \epsilon_\theta\big(\sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon,\ t\big) \big\|_2^2 \Big].   (A6)

A.2  Conditional Diffusion Models

In many tasks, such as signal restoration or modality translation, condition information c is available and can be leveraged to guide the generative process. For instance, PPG-to-BP estimation uses observed PPG signals as the conditional input to guide the generation of continuous BP waveforms [78]. Given conditioning inputs c, the training objective of conditional diffusion models becomes

\mathcal{L}_{\mathrm{cdm}}(\theta) = \mathbb{E}_{x_0, \epsilon, c, t} \big[ \| \epsilon - \epsilon_\theta(x_t, t, c) \|_2^2 \big],   (A7)

where c is usually taken as a model input. The corresponding sampling process becomes

p_\theta(x_{0:T-1} \mid x_T, c) = \prod_{t=1}^{T} p_\theta(x_{t-1} \mid x_t, c), \qquad p(x_T) = \mathcal{N}(\mathbf{0}, \mathbf{I}),   (A8)

p_\theta(x_{t-1} \mid x_t, c) = \mathcal{N}\big(x_{t-1};\ \mu_\theta(x_t, t, c),\ \sigma_t^2\,\mathbf{I}\big),   (A9)

where the mean function is guided by the learned conditional noise estimate:

\mu_\theta(x_t, t, c) = \frac{1}{\sqrt{\alpha_t}} \Big( x_t - \frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}}\, \epsilon_\theta(x_t, t, c) \Big).   (A10)

When multiple condition modalities c_1, \ldots, c_k are available (e.g., ECG and PPG used jointly for BP estimation), they can be incorporated into a multi-conditional generative process. The training objective and conditional sampling process become

\mathcal{L}_{\mathrm{mcdm}}(\theta) = \mathbb{E}_{x_0, \epsilon, c_1, \ldots, c_k, t} \big[ \| \epsilon - \epsilon_\theta(x_t, t, c_1, \ldots, c_k) \|_2^2 \big],   (A11)

p_\theta(x_{0:T-1} \mid x_T, c_1, \ldots, c_k) = \prod_{t=1}^{T} p_\theta(x_{t-1} \mid x_t, c_1, \ldots, c_k), \qquad p(x_T) = \mathcal{N}(\mathbf{0}, \mathbf{I}).   (A12)

Here, k denotes the number of condition signals, and p(x_T) remains the unconditional Gaussian prior.
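To make the noise-prediction objective concrete, the following is a minimal PyTorch sketch of Eqs. (A2) and (A6)-(A7). The linear beta schedule and the `eps_model(x_t, t, cond)` interface are illustrative assumptions, not details taken from the paper; only T = 50 matches the stated default.

```python
import torch

T = 50                                     # diffusion steps (T = 50 is the stated default)
betas = torch.linspace(1e-4, 0.02, T)      # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)  # cumulative products \bar{alpha}_t

def diffusion_loss(eps_model, x0, cond=None):
    """One step of the (conditional) noise-prediction loss of Eqs. (A6)-(A7):
    corrupt x0 via the closed-form marginal q(x_t | x_0) of Eq. (A2), then
    regress the noise that was injected."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))                           # uniform timestep per sample
    a_bar = alpha_bars[t].view(b, *([1] * (x0.dim() - 1)))  # broadcast over signal dims
    eps = torch.randn_like(x0)                              # eps ~ N(0, I)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps    # Eq. (A2)
    eps_hat = eps_model(x_t, t, cond)                       # predicted noise
    return ((eps - eps_hat) ** 2).mean()                    # squared-error objective
```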
Appendix B  Unified Generation Prior

UniCardio is designed to accommodate generation tasks across multiple cardiovascular signals within a single conditional diffusion framework. These modalities exhibit substantial differences in temporal dynamics, amplitude scaling,
and physiological semantics. To enable unified modeling under such heterogeneity, UniCardio employs a shared, uninformative Gaussian prior derived from an unconditional forward process. This prior remains agnostic to both the generation target and the conditioning configuration. As a result, UniCardio supports versatile generation of each target modality under various conditioning settings. All tasks are performed with a unified model architecture, with task-specific sampling trajectories guided by conditional inputs and initialized from the shared latent distribution.

Unconditional Forward Process. In UniCardio, the forward process is formulated to be unconditional across all tasks. Given a clean signal x_0, the process perturbs it into a noisy latent variable x_t as follows:

x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon, \qquad \epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{I}),

where \bar{\alpha}_t is a monotonically decreasing function of the diffusion time t. As t \to T, the contribution from the original data diminishes, and x_t converges toward Gaussian noise. The resulting terminal distribution p_T(x) \approx \mathcal{N}(\mathbf{0}, \mathbf{I}) serves as a shared latent prior across all generation tasks. All generative trajectories, regardless of the target modality or conditioning settings, originate from this common latent distribution defined by the unconditional forward process.

Conditional Sampling Trajectories. UniCardio models different signal generation tasks through task-specific conditional reverse processes. Given the same unconditional latent prior, each restoration or translation task is realized as a distinct conditional sampling trajectory defined by the observed inputs. The conditioning signals c guide the denoising trajectory from x_T to x_0, directing generation toward the target modality while the noise-to-data generative framework remains invariant across tasks.

This separation between a task-invariant forward process and a task-specific reverse process is central to the flexibility of UniCardio. It enables the model to accommodate diverse input combinations without retraining or modifying the architecture. More importantly, the use of a shared, unconditional prior allows UniCardio to model all target modalities (e.g., PPG, ECG, and BP) within a single diffusion model, despite their distinct signal structures and semantic differences. During inference, various conditioning settings, including partially observed or missing signal modalities, can be seamlessly supported by adjusting the conditioning function alone, without altering the generative mechanism.

Comparison with Alternative Approaches. This formulation contrasts with encoder-decoder and mapping-based architectures, where latent representations are typically entangled with the structure of the condition or target modality. Such designs often require architectural adaptation or retraining to accommodate new generation tasks or modalities. Recent bridge models [73, 79] construct task-specific priors from degraded observations to improve performance in structured domains. While effective in settings with known alignment between source and target signals, these methods rely on assumptions of structural similarity that do not hold for cardiovascular signals: PPG, ECG, and BP vary significantly in waveform morphology, sampling scale, and physiological semantics. By maintaining an unconditional forward process and defining task-specific behavior through conditional sampling, UniCardio preserves a coherent latent space while enabling highly flexible multi-task modeling.
In total, UniCardio supports 33 distinct generation tasks over 3 signal modalities, encompassing both signal restoration and modality translation. For restoration, two forms of degraded input are considered: additive noise for denoising and randomly masked observations for imputation. All tasks across PPG, ECG, and BP, and across varying conditioning
configurations are unified within a single generative framework without requiring task-specific retraining or model adaptation.

Appendix C  Unified Training Process

UniCardio aims to model conditional distributions among PPG, ECG, and BP signals using a single model architecture within a unified generative framework. In diffusion models, training is inherently a form of multi-task learning [80], as the model must approximate data distributions conditioned on the diffusion time step t \sim (0, T]. Consequently, diffusion model training typically demands millions of iterations [20, 21], extensive computational resources [65, 66], and careful optimization across varying noise scales [81]. Extending this complexity, UniCardio is designed to simultaneously learn across multiple target modalities and conditioning configurations. Specifically, it learns a family of multi-modal, multi-condition, and time-dependent score functions \epsilon_\theta(x_t, c, t), where x_0 denotes the target signal, c represents the observed signal(s) serving as the condition(s), and t is the diffusion time step. Efficient and high-fidelity multi-modal generation without task-specific specialization necessitates a careful integration of conditional learning principles and a systematically structured training strategy. To this end, we first revisit the foundational concepts of conditional learning in diffusion models, and subsequently describe the progressive training methodology adopted by UniCardio.

C.1  Conditional Learning

The training of diffusion models can be categorized into different modes depending on the availability of auxiliary information. Following the framework outlined in SCDM [82], we distinguish between unconditional, conditional, and multi-conditional learning settings.

Unconditional Learning. In the unconditional setting, the model is trained to fit the full data distribution without access to conditioning information. The training objective is given by

\min_{\theta \in \Theta} D\big(q(x) \,\|\, p_\theta(x)\big),   (C13)

where D denotes a divergence metric (e.g., the KL divergence), q(x) is the empirical data distribution, and \theta denotes the model parameters. Under this setting, the model is required to capture the full variability inherent in the data, such as modality-specific structures and inter-subject differences. For cardiovascular signals, this involves representing critical characteristics, such as the dicrotic notch in PPG waveforms, the QRS complex in ECG signals, and the systolic and diastolic peaks in BP waveforms, while accommodating typical temporal dynamics and variability across individuals.

Conditional Learning. Conditional learning introduces auxiliary information c into the modeling process. An embedding function \phi \in \Phi transforms the condition into a feature representation, leading to the following training objective:

\min_{\theta \in \Theta, \phi \in \Phi} \mathbb{E}_{q(c)} \big[ D\big(q(x \mid c) \,\|\, p_{\theta,\phi}(x \mid c)\big) \big].   (C14)

By conditioning on c, the generative task is restricted to a subset of the data distribution, typically resulting in a narrower and less complex target. For instance, generating BP signals conditioned on observed PPG signals focuses the model's capacity on more homogeneous temporal and morphological patterns, enabling more efficient learning and higher-quality synthesis relative to the unconditional case.

Multi-Conditional Learning. In clinical applications, multiple sources of conditional information may be available simultaneously. The training objective generalizes accordingly:

\min_{\theta \in \Theta, \phi \in \Phi} \mathbb{E}_{q(c_1, \ldots, c_k)} \big[ D\big(q(x \mid c_1, \ldots, c_k) \,\|\, p_{\theta,\phi}(x \mid c_1, \ldots, c_k)\big) \big].   (C15)
Leveraging multiple conditions further constrains the generative space, thereby simplifying the training process and improving generation fidelity. For example, BP estimation conditioned jointly on PPG and ECG signals yields superior performance compared to using a single modality alone [19, 83].
C.2  Progressive Training of UniCardio

Building on the principles of conditional learning, UniCardio adopts a structured training strategy to facilitate efficient multi-modal, multi-condition generation of cardiovascular signals. Rather than training task-specific models, UniCardio progressively handles increasingly constrained conditional distributions within a unified framework. Motivated by the observation that additional conditions simplify the generative target, we design a continual learning paradigm with adaptive training batch composition and learning rate scheduling (detailed pseudo-code is provided in Algorithm 1). The entire training process is divided into k = 4 sequential phases over a total of e = 800 epochs, with each phase lasting e/k epochs:

Phase 1: The model is trained exclusively on one-condition tasks, with training batches equally split between translation and imputation.

Phase 2: Two-condition tasks are introduced. The training batches allocate 50% each to one- and two-condition tasks, equally distributed between translation and imputation.

Phase 3: Three-condition tasks are introduced. The training batches allocate 25% each to one- and two-condition tasks of both translation and imputation, and 50% to three-condition tasks of imputation only.

Phase 4: A balanced fine-tuning stage. The training batches allocate equal proportions to one-, two-, and three-condition tasks.

Throughout training, task-specific attention masks control the visibility of condition and target modalities, enabling dynamic cross-modal learning. The learning rate starts at 1×10⁻³ to provide a strong initialization for one-condition tasks, is reduced to 1×10⁻⁴ at epoch 0.7·e/k as the condition modalities increase, and is further decreased to 1×10⁻⁵ in the final e/k epochs to enable precise fine-tuning and ensure balanced performance across all tasks.
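The following sketch spells out one consistent reading of this schedule in plain Python. The exact drop point of the learning rate and the per-phase condition-count probabilities are a reconstruction from the prose above, not the released configuration.

```python
e, k = 800, 4        # total epochs and number of training phases
phase_len = e // k   # each phase lasts e/k = 200 epochs

def learning_rate(epoch):
    """Piecewise-constant schedule, under the reading that the first drop
    occurs at epoch 0.7 * e/k and the last drop at the final phase."""
    if epoch < 0.7 * phase_len:
        return 1e-3   # strong initialization for one-condition tasks
    if epoch < e - phase_len:
        return 1e-4   # reduced as the condition modalities increase
    return 1e-5       # precise fine-tuning in the final e/k epochs

# Per-phase probabilities over the number of condition modalities
# (one-, two-, three-condition), as read from Phases 1-4 above.
cond_probs = {
    1: (1.00, 0.00, 0.00),
    2: (0.50, 0.50, 0.00),
    3: (0.25, 0.25, 0.50),  # three-condition tasks are imputation only
    4: (1/3, 1/3, 1/3),
}
```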
Appendix D  Efficient Sampling Process

D.1  First-Order ODE Sampler

Despite their strong generative capabilities, diffusion models suffer from inherently slow inference. Traditional samplers based on Langevin dynamics require a large number of denoising steps to gradually transform noise into a high-quality sample. While reducing the number of steps can accelerate sampling, it often leads to severe degradation in synthesis quality. Therefore, designing an efficient sampling algorithm that preserves generation fidelity while reducing inference cost is critical for the practical deployment of diffusion models, particularly in modeling cardiovascular signals, where long-duration waveforms such as PPG, ECG, and BP traces demand both high temporal resolution and morphological accuracy.

To address this limitation, UniCardio employs Denoising Diffusion Implicit Models (DDIM) [77], a training-free first-order ordinary differential equation (ODE) sampler. DDIM reinterprets the reverse diffusion process as a deterministic mapping, allowing high-fidelity sample generation with substantially fewer steps than stochastic Langevin-based approaches. Without loss of generality, we take conditional sampling with a single condition c as an example. At each sampling step t, given the noisy representation x_t and the estimated noise \epsilon_\theta predicted by the denoising network, DDIM [77] updates the posterior sampling from timestep t to timestep t-1 with

x_{t-1} = \sqrt{\bar{\alpha}_{t-1}}\, \frac{x_t - \sqrt{1-\bar{\alpha}_t}\, \epsilon_\theta(x_t, t, c)}{\sqrt{\bar{\alpha}_t}} + \sqrt{1-\bar{\alpha}_{t-1}}\, \epsilon_\theta(x_t, t, c).   (D16)

DDIM also admits a generalized form that controls the stochasticity of the generation process:

x_{t-1} = \sqrt{\bar{\alpha}_{t-1}}\, \frac{x_t - \sqrt{1-\bar{\alpha}_t}\, \epsilon_\theta(x_t, t, c)}{\sqrt{\bar{\alpha}_t}} + \sqrt{1-\bar{\alpha}_{t-1}-\eta^2}\, \epsilon_\theta(x_t, t, c) + \eta z,   (D17)

where z \sim \mathcal{N}(\mathbf{0}, \mathbf{I}) is isotropic Gaussian noise introducing stochasticity and \eta \geq 0 is a hyperparameter controlling it. When \eta = 0, DDIM yields a fully deterministic
trajectory, in which Eq. (D17) naturally recovers Eq. (D16). This formulation allows DDIM to synthesize samples by progressively denoising the input through a deterministic mapping guided by the learned noise estimates at each timestep. By eliminating the need for stochastic perturbations during sampling, DDIM substantially accelerates inference while preserving the fidelity of generated data.

The use of DDIM provides multiple advantages for modeling cardiovascular signals. First, it substantially reduces the number of sampling steps, enabling rapid generation of PPG, ECG, and BP waveforms without compromising the fidelity of essential morphological features. Furthermore, the deterministic sampling path ensures that quasiperiodic structures (e.g., the QRS complex in ECG signals or the systolic and diastolic peaks in BP waveforms) are consistently preserved in generation, maintaining both temporal coherence and physiological plausibility.

D.2  Quantitative Results

We present a comparison of sampling efficiency between the DDIM sampler [77] and the original DDPM sampler [20] in Table F2. Across a variety of generation tasks, DDIM achieves synthesis quality comparable to DDPM while requiring significantly fewer sampling steps. Specifically, DDIM with only 6 sampling steps matches the performance of DDPM operating with 50 steps, resulting in nearly a 10-fold improvement in inference speed (reducing the generation time from over 2.5 seconds to less than 0.4 seconds). This acceleration is particularly critical for real-time monitoring of cardiovascular signals via timely generation of morphologically accurate waveforms.

D.3  Case Study

We conduct case studies to further demonstrate the sampling efficiency of UniCardio. In Fig. F4, we present a sampling trajectory of UniCardio on the PPG imputation task, where missing values are generated conditioned on observed PPG segments. Starting from Gaussian noise x_T \sim \mathcal{N}(\mathbf{0}, \mathbf{I}), UniCardio rapidly reconstructs the large-scale structure of the PPG waveform within the first 4 sampling steps, and progressively refines small-scale morphological details during the final steps approaching t = 0. At each intermediate timestep, the denoising and refinement process is clearly visible, highlighting the capability of UniCardio to generate physiologically plausible waveforms with high efficiency in a small number of steps.

We further present additional generation examples of UniCardio across PPG imputation, ECG imputation, PPG-to-ECG translation, and PPG-to-BP translation (Fig. F5). Remarkably, UniCardio employs a single network trained in a unified manner to reconstruct diverse target signals from the same Gaussian noise prior, completing each generation within only 6 sampling steps. These results demonstrate the versatility of UniCardio in adaptively modeling a wide range of conditional distributions across different cardiovascular signals without task-specific modification or retraining.

Appendix E  Pseudo Code of Training and Sampling

To capture the numerous conditional distributions among PPG, ECG, and BP signals, UniCardio is pre-trained under a unified regime. The model progressively learns from tasks involving one, two, or three observed condition modalities through a phased curriculum. In each training phase, we dynamically sample generation targets and condition sets, ensuring balanced coverage of signal restoration and modality translation tasks. The training process is detailed in Algorithm 1.
During inference, UniCardio employs a subset-based sampling strategy [77] to accelerate signal generation. Given the selected target modality
and the observed condition modalities, the model progressively refines an initial Gaussian noise through deterministic updates. The subset of time steps is selected by linearly spacing the desired number of steps across the diffusion range [0, T]. The sampling procedure follows the deterministic rule described in Eq. (D16) and is summarized in Algorithm 2.

Algorithm 1  Unified Training Process of UniCardio

Input: total training epochs e, number of training phases k = 4, batch size B
Output: trained UniCardio model \epsilon_\theta

Initialize model parameters \theta and learning rate \eta \leftarrow 10^{-3}
for phase i = 1 to k do
    Define phase-specific sampling probabilities for the number of conditions
    Define a phase-specific threshold \tau between 0 and 1
    for epoch j = (i-1)·e/k to i·e/k do
        Update learning rate \eta according to the phase-specific schedule
        for each training iteration do
            Sample one batch of clean PPG, ECG, and BP signals
            Construct the noisy versions and the signals with missing values
            Sample the number of conditions for this batch from the phase-specific sampling probabilities
            Sample u \sim U(0, 1) and compare with the threshold \tau to determine whether the current batch is a restoration or a translation task
            Determine the combination of condition and target modalities from the number of conditions and the task type, and set up the task-specific attention mask
            Combine the sampled clean signals, their noisy versions, and the masked signals to create the condition c and target x_0
            Sample timestep t \sim U(0, T]
            Sample noise \epsilon_t \sim \mathcal{N}(\mathbf{0}, \mathbf{I})
            Corrupt the target x_0 using \epsilon_t to obtain the noisy input x_t
            Compute the batch loss: L = (1/B) \sum_{(x_t, c, t, \epsilon_t)} \| \epsilon_\theta(x_t, c, t) - \epsilon_t \|^2
            Update model parameters \theta using SGD with learning rate \eta
        end
    end
end
return trained UniCardio model \epsilon_\theta

Algorithm 2  Efficient Sampling Process of UniCardio

Input: trained UniCardio model \epsilon_\theta, selected target modality, observed condition inputs c, noise schedule \{\bar{\alpha}_t\}_{t=1}^{T}, a set of discrete timesteps \{t\} linearly spaced from T to 0
Output: generated target signal \hat{x}_0

Select the target modality to be generated (PPG, ECG, or BP)
Assemble the condition set c from the observed modalities
Sample the initial noise x_T from the standard Gaussian distribution \mathcal{N}(\mathbf{0}, \mathbf{I})
for t = T, T-1, \ldots, 0 do
    Predict the noise component: \hat{\epsilon} = \epsilon_\theta(x_t, t, c)
    If t > 0, update the next representation deterministically using Eq. (D16):
        x_{t-1} = \sqrt{\bar{\alpha}_{t-1}}\, (x_t - \sqrt{1-\bar{\alpha}_t}\, \hat{\epsilon}) / \sqrt{\bar{\alpha}_t} + \sqrt{1-\bar{\alpha}_{t-1}}\, \hat{\epsilon}
end
return generated target signal \hat{x}_0
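A minimal PyTorch sketch of the deterministic DDIM update in Algorithm 2 follows. The `eps_model` interface, the step subset, and the handling of the final step are illustrative assumptions, not the released implementation.

```python
import torch

@torch.no_grad()
def ddim_sample(eps_model, cond, shape, alpha_bars, num_steps=6):
    """Generate a target signal from Gaussian noise in `num_steps` network
    evaluations, following the deterministic update of Eq. (D16)."""
    T = len(alpha_bars)
    steps = torch.linspace(T - 1, 0, num_steps).long()  # linearly spaced subset
    x = torch.randn(shape)                              # x_T ~ N(0, I)
    for i, t in enumerate(steps):
        a_t = alpha_bars[t]
        a_prev = alpha_bars[steps[i + 1]] if i + 1 < num_steps else torch.tensor(1.0)
        eps_hat = eps_model(x, t.expand(shape[0]), cond)
        x0_hat = (x - (1 - a_t).sqrt() * eps_hat) / a_t.sqrt()      # predicted x_0
        x = a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps_hat  # Eq. (D16)
    return x
```

With num_steps = 6 this matches the NFE budget reported in Table F2 for DDIM sampling.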
Appendix F  Additional Results

[Figure F1: UniCardio architecture diagram with modality-specific encoders and decoders, customized transformer blocks with residual and skip connections, and the task-specific activation mask.]

Fig. F1: Detailed model architecture of UniCardio. Each modality-specific encoder consists of six consecutive 1D CNNs with kernel sizes {1, 3, 5, 7, 9, 11}. The joint feature vector h_s is processed through five consecutive customized transformer modules with residual and skip connections, resulting in the final feature vector h'_s. Each modality-specific decoder is implemented as a two-layer MLP with ReLU activation.

Table F1: Quantification results of phase-wise performance. LRS: learning rate scheduling; TBC: training batch composition; TAM: task-specific attention mask. RMSE: root mean squared error between generated and ground-truth signals (lower is better). Results are averaged over 256 independent trials; error bars are the standard error of the mean.

Task                    Method               Phase-1 RMSE      Phase-2 RMSE      Phase-3 RMSE      Phase-4 RMSE
PPG Imputation          UniCardio            0.1160±0.0066     0.1144±0.0064     0.1180±0.0065     0.1120±0.0063
                        UniCardio w/o LRS    0.1461±0.0063     0.1856±0.0070     0.3834±0.0060     -
                        UniCardio w/o TBC    0.1093±0.0062     0.3791±0.0056     0.4125±0.0037     -
                        UniCardio w/o TAM    1.9365±0.0198     60.7303±0.7026    52.8700±0.3954    -
ECG Imputation          UniCardio            0.1755±0.0043     0.1700±0.0047     0.1882±0.0040     0.1717±0.0048
                        UniCardio w/o LRS    0.2350±0.0039     0.2205±0.0039     0.2907±0.0089     -
                        UniCardio w/o TBC    0.1783±0.0045     0.2916±0.0034     0.2944±0.0034     -
                        UniCardio w/o TAM    3.0258±0.0321     9.3516±0.2086     34.6563±0.6607    -
PPG-to-ECG Translation  UniCardio            0.2945±0.0063     0.2873±0.0071     0.2871±0.0068     0.2759±0.0067
                        UniCardio w/o LRS    0.3581±0.0097     0.3397±0.0054     0.4916±0.0089     -
                        UniCardio w/o TBC    0.2832±0.0065     0.3537±0.0049     0.3642±0.0042     -
                        UniCardio w/o TAM    28.4177±0.0547    29.0761±0.0410    20.5246±0.1508    -
PPG-to-BP Translation   UniCardio            11.4886±0.4465    10.6768±0.4491    10.7759±0.4433    10.1675±0.4153
                        UniCardio w/o LRS    15.6685±0.4075    11.9641±0.4083    13.4415±0.4052    -
                        UniCardio w/o TBC    10.7032±0.4103    15.0380±0.4539    20.0815±0.4638    -
                        UniCardio w/o TAM    2601.45±25.5475   3046.11±11.7325   467.99±8.7286     -

Table F2: Comparison of sampling efficiency. We evaluate the time required to generate each 4-second signal segment on a single A800 GPU. NFE: number of function evaluations of the diffusion model. RMSE: root mean squared error between generated and ground-truth signals (lower is better).

          PPG Imputation         ECG Imputation         PPG-to-ECG             PPG-to-BP
Sampler   NFE  RMSE    Time      NFE  RMSE    Time      NFE  RMSE    Time      NFE  RMSE   Time
DDPM      50   0.1119  2.81s     50   0.1732  2.74s     50   0.2753  2.96s     50   10.27  2.85s
DDIM      6    0.1205  0.376s    6    0.1755  0.393s    6    0.2786  0.381s    6    10.26  0.398s
DDIM      4    0.1203  0.288s    4    0.1747  0.278s    4    0.2802  0.267s    4    10.43  0.280s

Table F3: Task-specific attention mask. We present the token ranges that are set to zero for different condition and target modalities.
Modality     PPG           ECG            BP             AM
Condition    M[:, 0:L]     M[:, L:2L]     M[:, 2L:3L]    M[:, 3L:4L]
Target       M[0:L, :]     M[L:2L, :]     M[2L:3L, :]    M[3L:4L, :]
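A sketch of how such a mask could be constructed follows. It assumes the concatenated token sequence [PPG | ECG | BP | AM] of length 4L (AM being the auxiliary modality from Fig. F1) and a multiplicative mask whose zeroed ranges mirror Table F3; the precise mask semantics inside the attention layer is our assumption.

```python
import torch

MODALITIES = ("PPG", "ECG", "BP", "AM")  # AM: auxiliary modality (Fig. F1)

def task_attention_mask(L, condition_mods, target_mod):
    """Build the (4L x 4L) mask of Table F3 by zeroing the column range of
    each condition modality and the row range of the target modality."""
    span = {m: slice(i * L, (i + 1) * L) for i, m in enumerate(MODALITIES)}
    M = torch.ones(4 * L, 4 * L)
    for m in condition_mods:
        M[:, span[m]] = 0.0       # condition modality: zero M[:, iL:(i+1)L]
    M[span[target_mod], :] = 0.0  # target modality: zero M[iL:(i+1)L, :]
    return M

# Example: PPG-to-BP translation with PPG as the sole condition.
mask = task_attention_mask(L=100, condition_mods=["PPG"], target_mod="BP")
```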
[Figure F2 panels: ground-truth ECG vs. generated signal ("ECG" vs. "Ours"), normalized signal over 0-4 s.]

Fig. F2: Visualization of denoising results with ST change. The ground-truth ECG signals are randomly selected from the PTBXL dataset [36]. The generated signals are produced by UniCardio in a tuning-free manner.

[Figure F3 panels: ground-truth ECG vs. generated signal ("ECG" vs. "Ours"), normalized signal over 0-4 s.]

Fig. F3: Visualization of imputation results with ST change. The ground-truth ECG signals are randomly selected from the PTBXL dataset [36]. The generated signals are produced by UniCardio in a tuning-free manner. The masked regions denote the missing segments.

Fig. F4: Step-wise outputs of the diffusion model. We use 1/6 of all diffusion steps T as the display interval, with T = 50 as the default implementation. Larger diffusion steps are closer to noise, while smaller steps are closer to the clean signal. PPG imputation is shown as the example.

[Figure F5 panels: PPG imputation, ECG imputation, PPG-to-ECG translation, and PPG-to-BP translation; ground truth vs. generated signal at steps t = 0, 8, 16, 24, 32, 40, 50.]

Fig. F5: Visualization of step-wise outputs. We present visualization results of PPG imputation, ECG imputation, PPG-to-ECG translation, and PPG-to-BP translation tasks, with steps t = {0, 8, 16, 24, 32, 40, 50}.
arXiv:2505.22310v1 [cs.LG] 28 May 2025

From Dormant to Deleted: Tamper-Resistant Unlearning Through Weight-Space Regularization

Shoaib Ahmed Siddiqui* (University of Cambridge), Adrian Weller (University of Cambridge; The Alan Turing Institute), David Krueger (Mila), Gintare Karolina Dziugaite (Google DeepMind; Mila), Michael C. Mozer (Google DeepMind), Eleni Triantafillou (Google DeepMind)

Abstract

Recent unlearning methods for LLMs are vulnerable to relearning attacks: knowledge believed to be unlearned re-emerges by fine-tuning on a small set of (even seemingly unrelated) examples. We study this phenomenon in a controlled setting for example-level unlearning in vision classifiers. We make the surprising discovery that forget-set accuracy can recover from around 50% post-unlearning to nearly 100% with fine-tuning on just the retain set, i.e., zero examples of the forget set. We observe this effect across a wide variety of unlearning methods, whereas for a model retrained from scratch excluding the forget set (the gold standard), the accuracy remains at 50%. We observe that resistance to relearning attacks can be predicted by weight-space properties, specifically the L2-distance and linear mode connectivity between the original and the unlearned model. Leveraging this insight, we propose a new class of methods that achieve state-of-the-art resistance to relearning attacks.²

1 Introduction

[Figure 1: forget-set accuracy vs. test-set accuracy for unlearned and relearned models; methods shown: Retrain from scratch, SCRUB, NegGrad+, Circuit Breakers.]

Figure 1: Fine-tuning an unlearned model on just the retain set recovers performance on the forget set! Results on CIFAR-10 using a forget set of atypical examples from class 'airplane'.

Machine unlearning is the problem of removing the influence of specific training datapoints, the forget set, from a pretrained model. This was initially motivated from the perspective of privacy, the right-to-be-forgotten [3, 29], and data protection policies [28], and has recently been applied to a range of problems, including removing harmful knowledge [25, 40, 7].

Exact unlearning refers to completely eliminating the influence of the forget set. This objective can be achieved by retraining the model without those datapoints (i.e., retrain from scratch) [2]. However, such a method is computationally prohibitive, as it requires retraining the model for every unlearning request [2]. This issue motivated the development of approximate unlearning methods, where the aim instead is to only approximately remove the influence of the given datapoints [5, 10, 23, 25, 40], in exchange for greater efficiency. While such methods may seemingly succeed at matching the retrained-from-scratch model on some simple metrics, like accuracy on the forget set, it is unclear whether they permanently remove the influence of these datapoints [15]. In fact, evaluating whether they do is a research problem in its own right [15, 36].

In this work, we take a different perspective, focusing on relearning attacks on unlearned models. We consider tampering attacks [4, 34], where the attacker is able to fine-tune the model weights. This direction is inspired by a growing set of observations in the context of unlearning "knowledge" in LLMs: fine-tuning an unlearned model even on seemingly benign data may cause the believed-to-be-unlearned knowledge to re-emerge [18, 27, 6, 4].

*Correspondence to msas3@cam.ac.uk.
²Code to reproduce our experiments: https://github.com/shoaibahmed/vision_relearning

Preprint. Under review.
However, these studies are carried out under conditions
that make it difficult to draw clear conclusions. First, these works study unlearning knowledge, capabilities, or topics, where the problem is inherently underspecified. For example, given a dataset containing knowledge necessary for making bioweapons, the goal may be to fully remove the capability of constructing bioweapons while preserving general knowledge of biology. In this setting, it is hard to draw a clear line between forbidden and permissible knowledge and to pinpoint all training examples responsible for acquiring different types of knowledge. Furthermore, knowledge is hard to measure, especially due to the nuances of natural language, requiring question-answer evaluation that may be sensitive to the particular phrasing. Finally, because LLMs can make complex inferences beyond the training set, not all knowledge that is extractable should be attributed to a failure of unlearning [32]. In many cases, the acquisition of knowledge is natural and unavoidable, making it difficult to distinguish between the two in these problem settings.

To address these issues, we study relearning attacks in a setting that allows for controlled experimentation: unlearning specific training examples from (small) vision classification models, a problem for which a plethora of approximate unlearning methods have been developed and tested [13, 14, 12, 23, 10, 35, 38, 31]. In example-level unlearning, the gold standard is clear, namely, to retrain the model without the forget set. As our models are small enough, we can compute the gold-standard solution and use it as the reference point for comparison. Because we consider classification, we can also use accuracy as a simple and well-understood measure of performance. These properties combined allow us to compare the tamper resistance of different unlearning algorithms against the correct reference point, and therefore draw conclusions about their quality.

We evaluate a range of increasingly complex unlearning algorithms in this setting and discover a surprising finding: for numerous unlearning algorithms, the accuracy on the forget set jumps from around 50% post-unlearning to nearly 100% after fine-tuning the unlearned models on only the retain set, which is disjoint from the forget set. Fig. 1 shows this phenomenon on CIFAR-10 using ResNet-18, after having attempted to unlearn a subset of atypical instances of class 'airplane'. We note that a model retrained from scratch without the forget set does not exhibit this behaviour, with the accuracy remaining at 50%. Therefore, the recovery of forget-set accuracy can be safely interpreted as a failure of these algorithms to fully remove the influence of the datapoints in the forget set.

Our extensive analyses lead to multiple insights. First, unlearning and relearning of typical examples is trivial, and unlearning methods behave similarly to the gold standard. However, we see a stark contrast in their respective patterns of behaviour on a forget set of atypical examples. Furthermore, taking a weight-space view [11], we discover a key characteristic of unlearning algorithms that are better at resisting these attacks: they yield an unlearned model that is distant from the pretrained model in weight space. Based on this insight, we propose a new class of unlearning algorithms that are superior in terms of resisting relearning attacks
by incorporating terms in their objective that encourage the unlearned model to move far away from the pretrained model in weight space. To summarize, we make the following contributions in this work:

• We show that unlearning algorithms fail to delete the influence of the forget set, which stays dormant and can resurface by fine-tuning even on just the retain set.
• We identify a key characteristic of methods that are more robust against relearning attacks, namely: the unlearned model is distant from the pretrained model in weight space.
• Leveraging this insight, we propose a new class of unlearning methods that attempt to push the unlearned model far away from the pretrained model. These methods are significantly more robust against relearning attacks compared to unlearning methods that operate only at the output level [23] or the representation level [25, 40].

2 Background and Related Work

Unlearning. The problem of machine unlearning was introduced by [3]. The goal is to remove the influence of a "forget set" from a model that was trained on a dataset including that set. This was motivated by privacy and right-to-be-forgotten policies [28]. The perfect unlearning method, from the perspective of fully erasing the influence of the forget set, is to simply retrain the model excluding that set. However, the computationally prohibitive training costs make such an approach infeasible in most practical cases. [2] propose to shard the dataset and train an ensemble model over it, allowing selective retraining of only the affected parameters. However, the computational cost is still prohibitive in the worst case, while also leading to poorer performance in some cases due to the use of specialized architectures. These issues motivated the development of approximate methods that accept imperfect unlearning in exchange for greater efficiency; this is the family of methods we focus on in this work. The goal in approximate unlearning is to post-process the trained model as efficiently as possible in order to closely match the model retrained from scratch, using only a small amount of model fine-tuning [19, 5, 10, 23, 25, 40, 39, 36]. This is a challenging problem, as imperfect attempts to erase the influence of the forget set post hoc may have a number of unwanted side effects, such as harming the overall utility of the model [36].

Unlearning quality metrics. Since most approximate unlearning methods that are applicable to deep models do not come with theoretical guarantees about the quality of their approximation, we must estimate empirically how well they approximate retraining from scratch. This is a research problem in and of itself, and current rigorous metrics are very computationally expensive [36, 15]. Furthermore, unlearning entails fundamental trade-offs, such as between forgetting and maintaining the model's utility. This requires a multifaceted evaluation that captures relevant factors aside from forgetting quality. Commonly, in vision classification, the accuracy on the retain set and the accuracy on the test set are used to measure model utility. In a similar spirit to our work, time to relearn [13] quantifies the strength of unlearning by the number of optimization steps required to reacquire forgotten information
by directly fine-tuning on it. We instead show that we can restore forget-set accuracy even when fine-tuning only on a subset of it, or even only on the retain set.

Re-emergence of attempted-to-be-unlearned knowledge via fine-tuning. Recent work on language models showed that believed-to-be-unlearned knowledge can re-emerge by fine-tuning on a small subset of the forget set or even on seemingly unrelated data [18, 27, 6, 4]. Relatedly, it has also been shown that fine-tuning a language model on benign inputs can reverse the safety tuning of the model [30, 24]. A key distinction sets our work apart from all prior efforts: they study unlearning knowledge, or capabilities, rather than specific training examples. Their goal is to remove unwanted knowledge beyond the specific instances in the forget set, e.g., to fully remove a dangerous capability (such as bioweapon construction) after having unlearned on a specific dataset containing related knowledge [25]. This problem is inherently less well-specified compared to unlearning specific examples, where we have a clear definition of the ideal solution, namely, retraining from scratch without the specific examples. In LLMs, measuring knowledge is also nuanced, requiring question-answering tools, for instance, where the success of extracting knowledge may depend on the phrasing [41]. We study relearning attacks for example-level unlearning in vision classifiers, a setting where the forget set is well specified and the goal is well defined and simple to measure.

3 Problem Formulation

Let D_tr denote a training set and A a learning algorithm. Let M_P = A(M_I, D_tr) denote the "pretrained model", obtained by training on D_tr starting from a random initialization M_I. Now, let D_F ⊂ D_tr denote a forget set that we want to unlearn, and let D_R = D_tr \ D_F denote the retain set. The goal of an unlearning algorithm U is to post-process the pretrained model M_P to remove the influence of D_F. Specifically, we denote an unlearning algorithm by U: M × D_R × D_F ↦ M, which takes in a model, the retain set D_R, and the forget set D_F, and returns an unlearned model M_U = U(M_P, D_R, D_F). Ideally, the unlearned model M_U should match the gold-standard "retrained-from-scratch" model M_RS = A(M_I, D_R), which starts from a random initialization and trains on only the retain set, fully eliminating the influence of D_F. We desire unlearning algorithms that approximate that solution but are much more efficient than retraining from scratch.

In this work, we study relearning attacks, which apply a further fine-tuning phase attempting to reintroduce the forget set. Such attacks, which are able to modify the model's weights, are referred to in the literature as tampering attacks [4, 34]. We carry out these attacks by fine-tuning the model on the union of D_R and a subset of "relearning examples" D_Fre ⊂ D_F. We denote the relearned model as M_RL = A'(M, D_R ∪ D_Fre), where A' denotes a fine-tuning algorithm used for relearning (which might be similar to A with slightly different hyperparameters) and M can be either M_U or M_RS. We measure performance on the held-out portion of the forget set (held out in the sense that it was not used during relearning), denoted D_Fho = D_F \ D_Fre. We vary the size of D_Fre and measure the effect on relearning.
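The whole protocol can be summarized in a few lines. The sketch below is schematic: every argument is a placeholder callable or set introduced only to mirror the notation above, not code from the paper.

```python
def tamper_resistance_eval(A, A_prime, U, acc, M_I, D_tr, D_F, D_Fre):
    """Schematic of the evaluation protocol: compare forget-set accuracy
    after the relearning attack for an unlearned model vs. the gold
    standard. A trains, A_prime fine-tunes, U unlearns, acc measures
    accuracy; datasets are assumed to behave like Python sets."""
    D_R = D_tr - D_F                    # retain set D_R = D_tr \ D_F
    D_Fho = D_F - D_Fre                 # held-out portion of the forget set
    M_U = U(A(M_I, D_tr), D_R, D_F)     # pretrain, then unlearn
    M_RS = A(M_I, D_R)                  # gold-standard retrain-from-scratch
    relearn = lambda M: A_prime(M, D_R | D_Fre)  # the relearning attack
    return acc(relearn(M_U), D_Fho), acc(relearn(M_RS), D_Fho)
```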
We also measure performance on a test set D_te to measure utility. An ideal unlearning algorithm is one that is tamper resistant: upon relearning, its accuracy on the forget set does not increase more than it would by learning the relearning set anew. In other words, the forget-set accuracy of A'(M_U, D_R ∪ D_Fre) should not be higher than that of A'(M_RS, D_R ∪ D_Fre). At the same time, an ideal unlearning algorithm would not sacrifice test accuracy.

Threat model. Similar to tampering attacks considered for LLMs [34, 27, 18, 6, 4], we assume that the defender has access to a pretrained model and performs unlearning using any algorithm of their choice. Furthermore, we assume that the attacker has white-box access to the unlearned model provided by the defender, the retain set D_R, and limited access to the forget set (i.e., the relearning set D_Fre). The goal of the attacker is to recover performance on the full forget set D_F while minimizing the number of unlearned examples needed in D_Fre (as relearning becomes trivial if D_Fre = D_F). We also consider an extreme, and perhaps more realistic, case of access to only D_R.

4 Experimental Setup

Models and Datasets. We use two models from the ResNet family [16], namely ResNet-18 and ResNet-34. For datasets, we use CIFAR-10 and CIFAR-100 [22], with 10 and 100 classes respectively and a total of 50k training instances in each case (5k instances per class for CIFAR-10, and 500 instances per class for CIFAR-100).

Evaluation. All models are evaluated in terms of accuracy on the held-out part of the forget set D_Fho (the same subset for all models), as well as the test set D_te. While we could report accuracy on the full forget set D_F for the unlearned model, we instead report accuracy on the remaining subset to enable a direct comparison of the impact of the relearning attack. The line plots show accuracy every 10 optimization steps. When reporting results using a scatter plot, we average the test-set accuracy and the forget-set accuracy over the last 50 steps reported in the line plots.

Pretraining. We pretrain the model for 300 epochs using the Adam optimizer [21] with a learning rate of 1e-4, cosine learning rate decay with a decay factor of 0.1, a batch size of 128, and a weight decay of 1e-4 in all configurations.

Unlearning. We consider two unlearning settings: sub-class unlearning, where the forget set consists of 10% of the class instances (here, sub-class means a subset of the complete class), and class-agnostic unlearning, where we select 1% of the dataset regardless of class labels. This ensures that we use the same number of forget-set examples in both settings on CIFAR-10 (we only evaluate sub-class unlearning on CIFAR-100). We use a smaller learning rate of 1e-5 without any weight decay and optimize the model for 100 epochs during the unlearning phase.
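For reference, the reported pretraining and unlearning hyperparameters can be consolidated as below. Mapping the "cosine decay with factor 0.1" onto CosineAnnealingLR with eta_min = 0.1 * lr, and reusing Adam for the unlearning phase, are our assumptions; the numbers themselves come from the text.

```python
import torch

def make_optimizer(model, stage):
    """Optimizer settings as reported in this section (sketch)."""
    hp = {
        "pretrain": dict(epochs=300, lr=1e-4, weight_decay=1e-4),
        "unlearn":  dict(epochs=100, lr=1e-5, weight_decay=0.0),
    }[stage]
    opt = torch.optim.Adam(model.parameters(), lr=hp["lr"],
                           weight_decay=hp["weight_decay"])
    sched = torch.optim.lr_scheduler.CosineAnnealingLR(
        opt, T_max=hp["epochs"], eta_min=0.1 * hp["lr"])
    return opt, sched, hp["epochs"]
```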
Relearning attack. During this phase, we fine-tune on a combination of the retain set D_R and a subset of the forget set for relearning (D_Fre). We explore the impact of different choices for relearning examples in Appendix F. We again use a small learning rate of 1e-5 without any weight decay, and optimize the model for just 10 epochs (except in Fig. 1, where we optimized the model for 300 epochs). As in the pretraining stage, we use cosine learning rate decay with a decay factor of 0.1.

4.1 Baseline Unlearning Methods

We consider a range of baseline unlearning methods. Each method has its own set of hyperparameters, which we selected to achieve a good trade-off between test accuracy and forget-set accuracy for each method.

[Figure 2 panels: unlearned model; 0, 10, and 100 relearning examples; test-set accuracy vs. forget-set accuracy for all compared methods, for sub-class (top) and class-agnostic (bottom) unlearning.]

Figure 2: Scatter plots with test-set accuracy on the abscissa and accuracy on the held-out portion of the forget set, D_Fho, on the ordinate. The left-most subplot shows performance immediately after unlearning. The next three subplots follow a relearning attack with instances of the retain set D_R and a varying number of instances of the forget set D_Fre (0, 10, and 100, respectively). Each point is the average performance over the last 50 steps (see Fig. 10 for the whole trajectory for sub-class unlearning and Fig. 11 for class-agnostic unlearning). The forget set comprises atypical examples in CIFAR-10 (from the 'airplane' class, i.e., sub-class unlearning, in the top row; from all classes, i.e., class-agnostic unlearning, in the bottom row). The figure indicates that many methods achieve near-perfect recovery of unlearned knowledge with only a small amount of model fine-tuning, even with 0 relearning examples (fine-tuning on only the retain set). Weight Distortion, CBFT, and Weight Dist Reg are introduced in Section 6.

SCRUB [23] uses a two-phase training procedure that interleaves iterations on the forget set and the retain set. The loss minimizes the KL-divergence between the output distributions of the pretrained and unlearned models on the retain set, along with cross-entropy on the true labels. For unlearning, it maximizes the KL-divergence between the two models' distributions on the forget set.

Circuit Breakers [40, 25] was proposed to unlearn knowledge in language models. The training procedure pushes representations apart by minimizing cosine similarity with the pretrained model on the forget set, while minimizing the Euclidean distance of the representations on the retain set to avoid model collapse. We apply the circuit-breaker loss on layers 4 and 7 of our models, motivated by the fractional depth considered in the original work.

NegGrad+ [23] maximizes the cross-entropy loss on the forget set, while minimizing the loss on the retain
set. We used the alternating variant (similar to SCRUB) instead of joint optimization of the two losses, as it resulted in better test accuracy as well as lower forget-set accuracy (a minimal sketch of this alternating scheme is given after this list of methods).

Catastrophic Forgetting [36] uses repeated fine-tuning on the retain set with weight decay, which naturally leads to a decay in the magnitude of the parameters that are unimportant for the forget set. We use a weight decay of 0.001 for all our models.

L1-Sparse [19] is similar to Catastrophic Forgetting in our case, except that it minimizes the L1-norm instead of the L2-norm employed in weight decay.

Selective Synaptic Dampening (SSD) [10] identifies model weights to dampen based on their importance for the retain set and forget set, quantified using the Fisher information matrix. In contrast to the original paper, we follow this process with fine-tuning on the retain set, to be consistent with our other baselines and to give SSD a better shot at repairing test accuracy.

[Figure 3 panels: SCRUB, Circuit Breakers, Weight Distortion as initial safeguards; test-set accuracy vs. forget-set accuracy for Retrain from scratch, TAR, CBFT, and Weight Dist Reg.]

Figure 3: Comparison between test-set accuracy and accuracy on the held-out part of the forget set D_Fho after relearning, for sub-class unlearning of atypical examples in CIFAR-10. We consider two-phase unlearning methods: first, an initial safeguard (unlearning phase) is applied, with the unlearning algorithm given in the subplot title; then each of TAR, CBFT, and Weight Dist Reg is applied as a second phase to increase tamper-resistance. The '+' symbol shows the performance of the initial safeguard for reference. We observe that TAR fails to add any tamper-resistance beyond that of the initial safeguard, despite being designed for this.

Random Relabeling [14] relabels every example in the forget set with a random label. Note that we used an aggressive version of random relabeling that re-assigns a new label at every fine-tuning step; this can be considered analogous to minimizing divergence to a uniform distribution.

Weight Attenuation attenuates all model weights with a fixed attenuation factor of 0.5, followed by simple fine-tuning on the retain set.

Weight Dropout performs random (unstructured) pruning with a dropout factor of 0.2 (i.e., zeroing out 20% of the model weights), followed by simple fine-tuning on the retain set.

Tampering Attack Resistance (TAR) [34], originally proposed for LLM unlearning, defines a bi-level optimization, starting from an already unlearned model, that aims to make it resistant to tampering attacks. It uses a first-order approximation of inner adversaries and attempts to maximize the entropy of the model's predictions after fine-tuning of the unlearned model. TAR also uses a representation-alignment loss that minimizes the Euclidean distance between the representations of the initially unlearned model and the current model to avoid model collapse. As with Circuit Breakers, we apply the representation-alignment loss on layers 4 and 7 of our models. Note that TAR relies on an unlearned model as a starting point, making it a two-phase approach.
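As referenced in the NegGrad+ description, the following is a minimal PyTorch sketch of one alternating round: descend the cross-entropy loss on a retain batch, then ascend it on a forget batch. The batches and optimizer settings are placeholders, not the paper's exact configuration.

```python
import torch.nn.functional as F

def neggrad_plus_round(model, opt, retain_batch, forget_batch):
    """One alternating NegGrad+ round (sketch)."""
    x_r, y_r = retain_batch
    x_f, y_f = forget_batch
    opt.zero_grad()
    F.cross_entropy(model(x_r), y_r).backward()     # minimize loss on retain set
    opt.step()
    opt.zero_grad()
    (-F.cross_entropy(model(x_f), y_f)).backward()  # maximize loss on forget set
    opt.step()
```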
5 Recent Unlearning Algorithms are not Tamper-Resistant

We begin by investigating the tamper-resistance of current state-of-the-art unlearning methods. We use CIFAR-10 and CIFAR-100 with ResNet-18, and attempt to unlearn instances of a single class ('airplane' for CIFAR-10 and 'apple' for CIFAR-100), or across classes (class-agnostic).

The role of typicality for learning and unlearning. Not all examples are equally easy to unlearn, and not all examples are equally easy to (re)learn. Typical examples can be particularly easy to unlearn, as even a no-op unlearning already yields predictions on these examples similar to those of the retrained-from-scratch model [19]. This is because they are, by definition, easy to predict regardless of whether or not they are part of the training set [9, 20, 1, 33]. Generally, because typical examples are more likely to be predicted correctly, they are easier to relearn as well. Based on this, we hypothesize that the typicality of examples is a key property that determines relearning patterns and differences between unlearning algorithms compared to retraining from scratch.

To investigate this, we study three settings, characterized by different typicality levels, where the forget set contains instances of the 'airplane' class that are: (i) most likely to be typical, (ii) randomly selected, and (iii) most likely to be atypical. We leverage pre-computed consistency scores from [20] to separate typical and atypical instances, treating instances with the highest consistency scores as typical and instances with the lowest consistency scores as atypical. Other scoring schemes are equally applicable [9, 33, 1].

When the forget set consists of typical examples, relearning is trivial and uninteresting: the accuracy of the retrain-from-scratch model is essentially perfect on the forget set even before relearning is applied, and all unlearning methods exhibit similar behavior (for completeness, we show this case in Fig. 6, Appendix A). Similarly, when the forget set consists of randomly selected examples, performance of retrain-from-scratch is nearly perfect, because randomly selected examples are predominantly typical [9, 8, 1, 33] (again shown for completeness in Fig. 7, Appendix B). Consequently, we focus on atypical forget-set items for the remainder of the paper; for these items, retrain-from-scratch will not predict correctly prior to relearning, so we can measure the effectiveness of a relearning attack.

Relearning attacks succeed against several unlearning baselines. We present the results for sub-class unlearning with a forget set of atypical examples in Fig. 2. In this and subsequent figures, existing methods are indicated by pastel colors; new methods, to be introduced shortly, are represented by saturated colors. For the most part, the existing methods behave similarly, and the reader need not attend to the individual methods. The accuracy of retrain-from-scratch is less than 50% on that forget set, and remains almost exactly the same when subjected to the relearning attack of fine-tuning only on the retain set (i.e., 0 relearning examples). The accuracy of this model shifts up slightly as we increase the number of relearning examples from the forget set (going from left to right). In stark contrast,
https://arxiv.org/abs/2505.22310v1
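The following sketch shows one way to realize the three forget-set variants from per-example consistency scores (the split rule and array names are illustrative; the scores themselves would come from [20] or a comparable scheme):

import numpy as np

def build_forget_sets(consistency, labels, target_class, k=500, seed=0):
    # consistency: (N,) scores, higher = more typical; labels: (N,) class ids.
    idx = np.where(labels == target_class)[0]
    order = idx[np.argsort(consistency[idx])]  # ascending: most atypical first
    atypical = order[:k]                       # lowest consistency scores
    typical = order[-k:]                       # highest consistency scores
    rng = np.random.default_rng(seed)
    random_subset = rng.choice(idx, size=k, replace=False)
    return typical, random_subset, atypical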
Relearning attacks succeed against several unlearning baselines. We present the results for sub-class unlearning with a forget set of atypical examples in Fig. 2. In this and subsequent figures, existing methods are shown in pastel colors, while the new methods, introduced shortly, are shown in saturated colors. For the most part, the existing methods behave similarly, and the reader need not attend to the individual methods. The accuracy of retrain-from-scratch is below 50% on this forget set, and it remains almost exactly the same when subjected to the relearning attack of fine-tuning only on the retain set (i.e., 0 relearning examples). The accuracy of this model shifts up only slightly as we increase the number of relearning examples from the forget set (going from left to right). In stark contrast, the unlearning methods we evaluate show a qualitatively different trend. Strikingly, some methods (such as Circuit Breakers, SCRUB, and Random Relabeling) are very susceptible to relearning attacks. For these methods, forget-set accuracy drops after unlearning to near the desired reference point of the retrained model; yet upon relearning, even on just the retain set, the model achieves near-perfect accuracy on the forget set: a jump from near 50% post-unlearning to nearly 100% after relearning.

Sub-class vs. class-agnostic unlearning. We next examine whether such differences persist in a class-agnostic forget-set setting (with the same number of forget-set examples, i.e., 500), where the forget set comprises atypical examples drawn from all classes. Fig. 2 (bottom) shows that class-agnostic unlearning differentiates among methods better than sub-class unlearning: we see a wider spread among methods even in the first subplot (post-unlearning, pre-relearning), since class-agnostic unlearning is harder. We also observe that while for sub-class unlearning the performance of retrain-from-scratch (black star) on D_F^ho shifts upwards as more relearning examples are used, this shift does not occur with class-agnostic unlearning. This is expected, since relearning on a larger number of atypical examples does not improve performance on a disjoint set of other atypical examples (by the definition of atypicality [9]). There is again a stark contrast between the retrain-from-scratch model, whose post-relearning accuracy on D_F^ho remains very low, and many unlearning algorithms, whose post-relearning accuracy again reaches nearly 100%. Importantly, the relative ranking of methods is consistent between the sub-class and class-agnostic cases.

Relearning attacks succeed even against methods designed for tamper-resistance. We further compare two-phase methods that assume an initial unlearning round, followed by a subsequent training round intended to reduce susceptibility to relearning attacks. This is inspired by the methodology of TAR [34], which is explicitly designed to increase the resistance of the unlearned model against fine-tuning-based relearning attacks. Despite TAR's success in the case of language models, Fig. 3 shows that it fails to provide any resistance against relearning attacks in our setting.

6 A Weight-Space View on Understanding and Improving Tamper-Resistance

In the previous section, we demonstrated the susceptibility of existing prominent unlearning methods to tampering (Fig. 2). However, it is unclear what makes a method vulnerable or robust to these relearning attacks. In this section, we shed light on this question through a weight-space view. Specifically, we hypothesize that the susceptibility of an unlearned model to relearning may be associated with failing to move 'far enough' from the pretrained model in weight-space. We explore this hypothesis from two perspectives: (i) by measuring distances in weight-space, and (ii) through Linear Mode Connectivity analysis [11]. We then use these tools to interpret why some previously-proposed unlearning algorithms (Fig. 2) show better tamper-resistance (such as Catastrophic Forgetting and L1 Sparse) than others (such as Random Relabeling, Circuit Breakers, and SCRUB).

Figure 4: Linear mode connectivity analysis on CIFAR-10, where the forget set is comprised of atypical examples. We construct a linear path between the pretrained and the unlearned (or retrained-from-scratch) model by interpolating the model parameters and batch-norm statistics using different mixing weights (shown on the x-axis); the y-axis reports accuracy on the test, retain, and forget sets. 0 on the x-axis represents the pretrained model, while 1 represents the unlearned or retrained model. Retrain-from-scratch is not linearly connected to the pretrained model, whereas for many unlearning algorithms the resulting unlearned model is still linearly connected to the pretrained one.
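For reference, a minimal version of the relearning attack evaluated throughout Section 5 might look as follows; it fine-tunes the unlearned model on the retain set plus k forget examples and reports accuracy on the held-out forget split D_F^ho. All names, including the accuracy() helper, are hypothetical placeholders, not the authors' evaluation code.

import torch
import torch.nn.functional as F
from torch.utils.data import ConcatDataset, DataLoader, Subset

def accuracy(model, dataset, device="cpu"):
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in DataLoader(dataset, batch_size=256):
            pred = model(x.to(device)).argmax(dim=1).cpu()
            correct += (pred == y).sum().item()
            total += len(y)
    return correct / total

def relearning_attack(model, retain_set, forget_set, heldout_forget,
                      k=10, steps=100, lr=1e-3, device="cpu"):
    # The attacker sees the retain set and (optionally) k forget examples.
    data = retain_set if k == 0 else ConcatDataset(
        [retain_set, Subset(forget_set, list(range(k)))])
    loader = DataLoader(data, batch_size=128, shuffle=True)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train().to(device)
    done, it = 0, iter(loader)
    while done < steps:
        try:
            x, y = next(it)
        except StopIteration:
            it = iter(loader)
            continue
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
        done += 1
    return accuracy(model, heldout_forget, device)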
Figure 5: L2 norm of the difference between the parameters of the pretrained and the unlearned models, ||θ_P − θ_U||_2, induced by the different methods (log scale). We consider only the difference in the parameters, ignoring the batch-norm statistics, for ResNet-18 trained on CIFAR-10, where the forget set is comprised of atypical examples.

Weight-space distance. Fig. 5 plots the L2 distance between the pretrained model and the unlearned model. The existing methods that we previously showed to be susceptible to relearning (pastel colors) induce only small movement in parameter space, as indicated by a small L2 distance. More generally, methods with a higher L2 distance exhibit higher robustness against the relearning attacks of Fig. 2. Notably, two methods based on these insights, which we describe shortly, Weight Distortion and Weight Dist Reg, have the highest L2 norm and higher tamper-resistance. Among previous methods, those with increased tamper-resistance (such as Catastrophic Forgetting and L1 Sparse) have a comparatively higher distance norm than methods with very poor tamper-resistance (such as Random Relabeling, Circuit Breakers, and SCRUB). We note that both Catastrophic Forgetting and L1 Sparse use weight-space regularizers (L2 and L1, respectively); fine-tuning without any regularizer failed to unlearn the forget set in our evaluations, which aligns with our weight-space interpretation.

Linear Mode Connectivity (LMC). We further investigate the weight-space relationship between the unlearned and the retrained-from-scratch model through the lens of LMC [11]. We plot the accuracy when interpolating between two models along a linear path, interpolating both model parameters and batch-norm statistics. In Fig. 4 we compare the pretrained and retrained-from-scratch models (which are trained from the same initialization), as well as the pretrained model and the different unlearned models. Looking at the retrain-from-scratch plot, we see a high-loss barrier between the two models, meaning they do not lie in linearly connected modes. The same holds for methods with some tamper-resistance (like Catastrophic Forgetting and L1 Sparse) but not for those vulnerable to tampering attacks, like SCRUB, where we observe no such barrier.
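Both diagnostics are straightforward to compute; the sketch below shows the L2 distance over parameters and the interpolation of parameters and batch-norm buffers used for the LMC analysis. It is a hedged illustration with placeholder names, not the exact evaluation code.

import copy
import torch

def l2_distance(model_a, model_b):
    # ||theta_A - theta_B||_2 over parameters only (batch-norm buffers ignored).
    sq = 0.0
    for pa, pb in zip(model_a.parameters(), model_b.parameters()):
        sq += (pa - pb).pow(2).sum().item()
    return sq ** 0.5

def interpolate(model_a, model_b, alpha):
    # Mix parameters AND buffers (incl. batch-norm running stats) at weight alpha.
    mixed = copy.deepcopy(model_a)
    sd = {}
    for (name, va), vb in zip(model_a.state_dict().items(),
                              model_b.state_dict().values()):
        sd[name] = ((1 - alpha) * va.float() + alpha * vb.float()).to(va.dtype)
    mixed.load_state_dict(sd)
    return mixed

# LMC sweep: evaluate each point on the path (alpha=0: pretrained, 1: unlearned).
# for alpha in torch.linspace(0, 1, 11):
#     print(alpha.item(), accuracy(interpolate(pretrained, unlearned, alpha), test_set))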
6.1 A New Class of Tamper-Resistant Unlearning Methods

We now leverage these insights to propose a class of unlearning methods designed with tamper-resistance in mind. We achieve this through objectives that aim to induce either a large distance in weight-space, or a loss barrier, between the pretrained and unlearned models. Hence, any method that directly or indirectly attempts to separate the pretrained model from the unlearned model is an instantiation of this framework.

Weight Distortion. This very simple method adds isotropic zero-mean Gaussian noise with a fixed standard deviation of 0.02 to all model weights, followed by simple fine-tuning on the retain set. We hypothesize that the added noise facilitates moving away from the pretrained model.

Weight Dist Reg. We directly aim to maximize the distance between the pretrained and unlearned models by explicitly adding a term that quantifies the Euclidean distance between the two models. We maximize this term during training while minimizing the loss on the retain set. A sketch of both methods follows below.
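In the sketch below, the noise standard deviation (0.02) is taken from the text, while the optimizer, the regularization weight reg, and all names are illustrative assumptions rather than the authors' settings.

import torch
import torch.nn.functional as F

def weight_distortion(model, std=0.02):
    # Add isotropic zero-mean Gaussian noise to every parameter; the caller then
    # fine-tunes on the retain set (e.g., with finetune_on_retain above).
    with torch.no_grad():
        for p in model.parameters():
            p.add_(torch.randn_like(p) * std)
    return model

def weight_dist_reg_step(model, pretrained, batch, opt, reg=0.01, device="cpu"):
    # One step of Weight Dist Reg: fit the retain set while *maximizing* the
    # Euclidean distance to the frozen pretrained model (note the minus sign).
    x, y = (t.to(device) for t in batch)
    dist_sq = sum((p - q.detach()).pow(2).sum()
                  for p, q in zip(model.parameters(), pretrained.parameters()))
    loss = F.cross_entropy(model(x), y) - reg * (dist_sq + 1e-12).sqrt()
    opt.zero_grad()
    loss.backward()
    opt.step()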
Connectivity-Based Fine-Tuning (CBFT). We employ the method of [26] (originally proposed to obtain models that rely on distinct recognition mechanisms). We maximize the loss of the midpoint between the pretrained model and the current unlearned model on examples from both the retain set and the forget set, while minimizing the loss of the unlearned model on the retain set only. This attempts to place a high-loss barrier between the two models while retaining the final unlearned model's utility on the retain set. We use a small weighting factor of 0.001 on the loss-maximization term and, following [26], drop that term whenever its loss magnitude exceeds 50.
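The midpoint objective can be written compactly with torch.func.functional_call (PyTorch 2.0+); the sketch below is our hedged reading of the procedure, with the 0.001 weighting and the loss cap of 50 taken from the text and everything else assumed.

import torch
import torch.nn.functional as F
from torch.func import functional_call

def cbft_step(model, pretrained, retain_batch, forget_batch, opt,
              barrier_coeff=1e-3, loss_cap=50.0, device="cpu"):
    xr, yr = (t.to(device) for t in retain_batch)
    xf, yf = (t.to(device) for t in forget_batch)
    # Midpoint weights, expressed as a differentiable function of the current model.
    pre = dict(pretrained.named_parameters())
    mid = {k: 0.5 * (v + pre[k].detach()) for k, v in model.named_parameters()}
    x_all, y_all = torch.cat([xr, xf]), torch.cat([yr, yf])
    mid_loss = F.cross_entropy(functional_call(model, mid, (x_all,)), y_all)
    loss = F.cross_entropy(model(xr), yr)       # utility term on the retain set
    if mid_loss.item() <= loss_cap:             # drop the term once it explodes
        loss = loss - barrier_coeff * mid_loss  # maximize the midpoint loss
    opt.zero_grad()
    loss.backward()
    opt.step()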
Findings. Fig. 2 shows that Weight Distortion and Weight Dist Reg are significantly more tamper-resistant than prior approaches. However, CBFT is less effective than Weight Dist Reg and Weight Distortion across the board. We hypothesize that this is because CBFT acts on model outputs and only indirectly influences the weight-space: while CBFT does create a larger loss barrier than other methods (Fig. 4), the L2 norm between the pretrained and unlearned parameters remains relatively low (Fig. 5). Overall, of the two weight-space diagnostic tools, we find the L2 norm of the difference in model parameters to be the more reliable predictor of tamper-resistance. Fig. 3 shows that both Weight Dist Reg and CBFT, when applied as a second phase on top of an initial safeguard, can substantially improve its tamper-resistance, unlike TAR. The only exception is when the initial safeguard is Weight Distortion, which already has sufficient tamper-resistance.

7 Discussion and Conclusion

On the trade-off between tamper-resistance and test accuracy. As discussed previously, unlearning involves an inherent trade-off between forgetting the specified examples and maintaining utility, measured via test accuracy [23, 6, 36]. Here, we uncover a different trade-off, between resisting relearning attacks and test accuracy: Fig. 2 shows that the methods that best defend against these attacks are also the ones with the lowest test accuracy. Having surfaced this fundamental tension, we hope future work improves on the current Pareto frontier formed by our new methods.

Findings hold across datasets and architectures. The results presented in the paper are consistent across models, i.e., ResNet-34 on CIFAR-10 (Fig. 9 – Appendix D), and across datasets, i.e., CIFAR-100 (Fig. 8 – Appendix C).

The role of the retain set for relearning. Given the surprising finding that we can recover forget set accuracy while fine-tuning on only the retain set, we ask: are the retain set examples necessary for this to occur, or could any other examples from the same distribution serve? We investigate this in Appendix F, where we replace the retain set with different sets and fine-tune the unlearned model on them (combined, as before, with the 'relearning set', a subset of the forget set). Notably, replacing the retain set with test examples (which the model was not trained on) makes the relearning effect somewhat less pronounced, especially when zero or few relearning examples are used, highlighting the importance of using training data rather than held-out data for inducing relearning. This observation relates to recent findings on anticipatory knowledge reawakening when a model is exposed to a repeated sequence of documents: as the model processes documents in a fixed order, it unexpectedly begins to recover an increasing amount of information about a previously seen example even before encountering that example again [37].

Summary and take-aways. We showed that unlearning methods are susceptible to relearning attacks, in which forget set accuracy can be recovered simply by fine-tuning on the retain set. For atypical examples in particular, there is a stark contrast between the relearning patterns of unlearning methods and those of retraining from scratch. Based on weight-space analysis, we suggested two diagnostic tools for understanding tamper-resistance and proposed simple methods that yield state-of-the-art tamper-resistance, revealing new pathways for improving unlearning. The authors of [31] argued that unlearning algorithms that operate on representations rather than on outputs may be more robust at defending against some types of attacks. Our findings take this discussion further: unlearning methods that operate at the level of model outputs or representations, without any constraint on model weights (including methods such as SCRUB, Circuit Breakers, and Gradient Ascent), should struggle to resist relearning attacks. On the other hand, methods that directly or indirectly push the pretrained and unlearned models apart by any intervention (distorting model weights, regularizing parameter magnitudes toward decay, or directly pushing the models apart with an explicit loss term) should be significantly more robust to relearning attacks. Our proposed methods for increasing tamper-resistance exemplify this, and we hope future work builds on them further.

Acknowledgements

The authors would like to acknowledge useful discussions with Yanzhi Chen and Ilia Shumailov regarding unlearning and susceptibility to relearning. AW acknowledges support from a Turing AI Fellowship under grant EP/V025279/1, the Alan Turing Institute, and the Leverhulme Trust via CFI.

References

[1] Robert Baldock, Hartmut Maennel, and Behnam Neyshabur. Deep learning through the lens of example difficulty. Advances in Neural Information Processing Systems, 34:10876–10889, 2021.
[2] Lucas Bourtoule, Varun Chandrasekaran, Christopher A Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas Papernot. Machine unlearning. In 2021 IEEE Symposium on Security and Privacy (SP), pages 141–159. IEEE, 2021.
[3] Yinzhi Cao and Junfeng Yang. Towards making systems forget with machine unlearning. In 2015 IEEE Symposium on Security and Privacy, pages 463–480. IEEE, 2015.
[4] Zora Che, Stephen Casper, Robert Kirk, Anirudh Satheesh, Stewart Slocum, Lev E McKinney, Rohit Gandikota, Aidan Ewart, Domenic Rosati, Zichu Wu, et al. Model tampering attacks enable more rigorous evaluations of LLM capabilities. arXiv preprint arXiv:2502.05209, 2025.
[5] Vikram S Chundawat, Ayush K Tarun, Murari Mandal, and Mohan Kankanhalli. Can bad teaching induce forgetting? Unlearning in deep networks using an incompetent teacher. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 7210–7217, 2023.
[6] Aghyad Deeb and Fabien Roger. Do unlearning methods remove information from language model weights? arXiv preprint arXiv:2410.08827, 2024.
[7] Ronen Eldan and Mark Russinovich. Who's Harry Potter? Approximate unlearning in LLMs. arXiv preprint arXiv:2310.02238, 2023.
[8] Vitaly Feldman. Does learning require memorization? A short tale about a long tail. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, pages 954–959, 2020.
[9] Vitaly Feldman and Chiyuan Zhang. What neural networks memorize and why: Discovering the long tail via influence estimation. Advances in Neural Information Processing Systems, 33:2881–2891, 2020.
[10] Jack Foster, Stefan Schoepf, and Alexandra Brintrup. Fast machine unlearning without retraining through selective synaptic dampening. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 12043–12051, 2024.
[11] Jonathan Frankle, Gintare Karolina Dziugaite, Daniel Roy, and Michael Carbin. Linear mode connectivity and the lottery ticket hypothesis. In International Conference on Machine Learning, pages 3259–3269. PMLR, 2020.
[12] Shashwat Goel, Ameya Prabhu, Amartya Sanyal, Ser-Nam Lim, Philip Torr, and Ponnurangam Kumaraguru. Towards adversarial evaluations for inexact machine unlearning. arXiv preprint arXiv:2201.06640, 2022.
[13] Aditya Golatkar, Alessandro Achille, Avinash Ravichandran, Marzia Polito, and Stefano Soatto. Mixed-privacy forgetting in deep networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 792–801, 2021.
[14] Laura Graves, Vineel Nagisetty, and Vijay Ganesh. Amnesiac machine learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 11516–11524, 2021.
[15] Jamie Hayes, Ilia Shumailov, Eleni Triantafillou, Amr Khalifa, and Nicolas Papernot. Inexact unlearning needs more careful evaluations to avoid a false sense of privacy. arXiv preprint arXiv:2403.01218, 2024.
[16] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[17] Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. arXiv preprint arXiv:1903.12261, 2019.
[18] Shengyuan Hu, Yiwei Fu, Zhiwei Steven Wu, and Virginia Smith. Jogging the memory of unlearned model through targeted relearning attack. arXiv preprint arXiv:2406.13356, 2024.
[19] Jinghan Jia, Jiancheng Liu, Parikshit Ram, Yuguang Yao, Gaowen Liu, Yang Liu, Pranay Sharma, and Sijia Liu. Model sparsity can simplify machine unlearning. Advances in Neural Information Processing Systems, 36:51584–51605, 2023.
[20] Ziheng Jiang, Chiyuan Zhang, Kunal Talwar, and Michael C Mozer. Characterizing structural regularities of labeled data in overparameterized models. arXiv preprint arXiv:2002.03206, 2020.
[21] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[22] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
[23] Meghdad Kurmanji, Peter Triantafillou, Jamie Hayes, and Eleni Triantafillou. Towards unbounded machine unlearning. Advances in Neural Information Processing Systems, 36, 2024.
[24] Simon Lermen, Charlie Rogers-Smith, and Jeffrey Ladish. LoRA fine-tuning efficiently undoes safety training in Llama 2-Chat 70B. arXiv preprint arXiv:2310.20624, 2023.
[25] Nathaniel Li, Alexander Pan, Anjali Gopal, Summer Yue, Daniel Berrios, Alice Gatti, Justin D Li, Ann-Kathrin Dombrowski, Shashwat Goel, Long Phan, et al. The WMDP benchmark: Measuring and reducing malicious use with unlearning. arXiv preprint arXiv:2403.03218, 2024.
[26] Ekdeep Singh Lubana, Eric J Bigelow, Robert P Dick, David Krueger, and Hidenori Tanaka. Mechanistic mode connectivity. In International Conference on Machine Learning, pages 22965–23004. PMLR, 2023.
[27] Jakub Łucki, Boyi Wei, Yangsibo Huang, Peter Henderson, Florian Tramèr, and Javier Rando. An adversarial perspective on machine unlearning for AI safety. arXiv preprint arXiv:2409.18025, 2024.
[28] Alessandro Mantelero. The EU proposal for a general data protection regulation and the roots of the 'right to be forgotten'. Computer Law & Security Review, 29(3):229–235, 2013.
[29] Seth Neel, Aaron Roth, and Saeed Sharifi-Malvajerdi. Descent-to-delete: Gradient-based methods for machine unlearning. In Algorithmic Learning Theory, pages 931–962. PMLR, 2021.
[30] Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, and Peter Henderson. Fine-tuning aligned language models compromises safety, even when users do not intend to! arXiv preprint arXiv:2310.03693, 2023.
[31] Nazanin Mohammadi Sepahvand, Eleni Triantafillou, Hugo Larochelle, Doina Precup, James J Clark, Daniel M Roy, and Gintare Karolina Dziugaite. Selective unlearning via representation erasure using domain adversarial training. In The Thirteenth International Conference on Learning Representations.
[32] Ilia Shumailov, Jamie Hayes, Eleni Triantafillou, Guillermo Ortiz-Jimenez, Nicolas Papernot, Matthew Jagielski, Itay Yona, Heidi Howard, and Eugene Bagdasaryan. UnUnlearning: Unlearning is not sufficient for content regulation in advanced generative AI. arXiv preprint arXiv:2407.00106, 2024.
[33] Shoaib Ahmed Siddiqui, Nitarshan Rajkumar, Tegan Maharaj, David Krueger, and Sara Hooker. Metadata archaeology: Unearthing data subsets by leveraging training dynamics. arXiv preprint arXiv:2209.10015, 2022.
[34] Rishub Tamirisa, Bhrugu Bharathi, Long Phan, Andy Zhou, Alice Gatti, Tarun Suresh, Maxwell Lin, Justin Wang, Rowan Wang, Ron Arel, et al. Tamper-resistant safeguards for open-weight LLMs, 2024. URL https://arxiv.org/abs/2408.00761.
[35] Reihaneh Torkzadehmahani, Reza Nasirigerdeh, Georgios Kaissis, Daniel Rueckert, Gintare Karolina Dziugaite, and Eleni Triantafillou. Improved localized machine unlearning through the lens of memorization. arXiv preprint arXiv:2412.02432, 2024.
[36] Eleni Triantafillou, Peter Kairouz, Fabian Pedregosa, Jamie Hayes, Meghdad Kurmanji, Kairan Zhao, Vincent Dumoulin, Julio Jacques Junior, Ioannis Mitliagkas, Jun Wan, et al. Are we making progress in unlearning? Findings from the first NeurIPS unlearning competition. arXiv preprint arXiv:2406.09073, 2024.
[37] Yanlai Yang, Matt Jones, Michael C Mozer, and Mengye Ren. Reawakening knowledge: Anticipatory recovery from catastrophic interference via structured training. arXiv preprint arXiv:2403.09613, 2024.
[38] Kairan Zhao, Meghdad Kurmanji, George-Octavian Bărbulescu, Eleni Triantafillou, and Peter Triantafillou. What makes unlearning hard and what to do about it. Advances in Neural Information Processing Systems, 37:12293–12333, 2024.
[39] Kairan Zhao, Meghdad Kurmanji, George-Octavian Bărbulescu, Eleni Triantafillou, and Peter Triantafillou. What makes unlearning hard and what to do about it. arXiv preprint arXiv:2406.01257, 2024.
[40] Andy Zou, Long Phan, Justin Wang, Derek Duenas, Maxwell Lin, Maksym Andriushchenko, J Zico Kolter, Matt Fredrikson, and Dan Hendrycks. Improving alignment and robustness with circuit breakers. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.
[41] Andy Zou, Zifan Wang, Nicholas Carlini, Milad Nasr, J Zico Kolter, and Matt Fredrikson. Universal and transferable adversarial attacks on aligned language models. arXiv preprint arXiv:2307.15043, 2023.
A Tamper-Resistance with Typical Examples for the Forget Set

As highlighted in Section 5, typical examples can be particularly easy to unlearn, as simply ignoring them during training (i.e., a no-op) yields predictions similar to those of the retrained-from-scratch model [19]. This is because they are, by definition, easy to predict regardless of whether or not they are part of the training set [9, 20, 1, 33]. And because typical examples are more likely to be predicted correctly, they are also easier to relearn. For completeness, we include the results for typical examples in Fig. 6. As evident from the figure, even the retrained-from-scratch model already achieves perfect accuracy on the forget set, highlighting that unlearning, and in turn relearning, are both trivial for typical examples; we therefore primarily focused on atypical examples in Section 5. We also see that methods such as Random Relabeling, while completely forgetting the forget set, deviate very significantly from the retrain-from-scratch baseline, which is what we compare against.

B Tamper-Resistance with a Random Set of Examples for the Forget Set

We further evaluate performance when the forget set is selected at random, rather than restricted to typical examples as in Appendix A. We visualize these results in Fig. 7. Since randomly selected examples are predominantly typical [9, 8, 1, 33], retrain-from-scratch already achieves near-perfect accuracy, and we observe only minor differences between methods. Consequently, in the main paper (Section 5) we focused on atypical forget sets: these examples are hard to predict without being part of the training set, which clearly marks the impact of relearning attacks beyond mere generalization.

C Tamper-Resistance on a More Complex Dataset (CIFAR-100)

The main paper (Section 5) predominantly focuses on CIFAR-10. To understand the impact of the dataset, we evaluate susceptibility on the more complex CIFAR-100 dataset in Fig. 8, looking only at sub-class unlearning of the 'apple' class.
Due to the higher complexity of the dataset, we observe lower test accuracies, as evident from the scale of the x-axis, and the retrain-from-scratch baseline achieves a low forget set accuracy. Furthermore, we see a wider spread of methods on this more complex dataset, with a trend similar to the class-agnostic results in Fig. 2 (bottom), as this setting likewise provides a more complex forget set comprising the most atypical examples in the dataset. Interestingly, CBFT [26], which was not tamper-resistant on CIFAR-10, demonstrated significant resistance on CIFAR-100.

D Tamper-Resistance of a Larger Model

All our prior results focused on the smaller model, ResNet-18 [16]. To understand the impact of model size, we evaluate the susceptibility to relearning attacks of ResNet-34, which has almost double the number of parameters (from ~11M to ~21M). The results are visualized in Fig. 9. Note that we directly transfer the hyperparameter settings from our ResNet-18 experiments; as a result, Weight Dist Reg, which was the most competitive method in terms of robustness, became susceptible to relearning. Its test set and forget set accuracies in this case also deviate significantly from all prior results, where it demonstrated consistently lower accuracies, pointing to a deficiency of the transferred hyperparameters. Catastrophic Forgetting and Weight Distortion, both instantiations of our framework, exhibit high robustness against relearning.

E Evolution of Forget Set Accuracy During Relearning

To complement the results in Fig. 2 (top) and Fig. 3, we visualize the evolution of forget set accuracy in Fig. 10. The most stable methods are nearly unaffected by increased training time, suggesting that more training time alone may not increase their susceptibility further. Similarly, Fig. 11 complements the class-agnostic results of Fig. 2 (bottom), showing both the line plots and the results of the two-phase training strategies; we visually observe the best separation between methods on the line plot in this class-agnostic case.

F Relearning with Examples from the Same Distribution, Distinct from the Retain Set

While we demonstrated that unlearned knowledge can be recovered through repeated fine-tuning on the retain set, it remains unclear whether performance on the forget set can also be recovered by fine-tuning on new examples from the same data distribution that the model has never seen. The question we ask is: are the very examples the model was trained on particularly important for relearning, or are other examples from the same distribution equally effective? To simulate this scenario, we use examples from the test set (unseen by the model) for relearning instead of the retain set. We additionally evaluate a corrupted version of the test set from CIFAR-10-C [17], using the JPEG-corrupted examples at the highest severity level of 5. Note that we still use examples from the relearning set in scenarios where the number of relearning examples is greater than 0. A sketch of these 'reminder set' variants is given below.
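As an illustrative sketch (placeholder names; dataset objects assumed to share the same label space), the 'reminder' data can be assembled as follows and then plugged into the same fine-tuning loop used for the relearning attack:

from torch.utils.data import ConcatDataset, Subset

def reminder_dataset(reminder_set, forget_set, k):
    # reminder_set is one of: retain set [R], test set [te], corrupted test set [cte].
    if k == 0:
        return reminder_set
    relearn = Subset(forget_set, list(range(k)))  # k relearning examples
    return ConcatDataset([reminder_set, relearn])

# e.g. fine-tune the unlearned model on:
#   reminder_dataset(retain_set, forget_set, k)        # [R]
#   reminder_dataset(test_set, forget_set, k)          # [te]
#   reminder_dataset(cifar10c_jpeg_s5, forget_set, k)  # [cte]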
The results are presented in Fig. 12. We observe that using the retain set D_R achieves higher accuracy on the forget set than the other choices. However, these differences diminish as the number of relearning examples increases, since those examples directly represent the unlearned knowledge and may become the dominant factor in recovery. When the test set is used for relearning (instead of the retain set), we unsurprisingly observe perfect accuracy on the test set, as it serves as the training set in this scenario. These results indicate that relearning is more effective with examples that were seen during training than with a held-out set. They also relate to recent findings on anticipatory knowledge recovery when a model is exposed to a repeated sequence of documents: as the model processes documents in a fixed order, it unexpectedly begins to recover an increasing amount of information about a previously seen example even before encountering that example again [37].

G Computational Resources

We used an NVIDIA RTX 3090 with 24GB of memory for each of our experiments, of which we use only a tiny fraction, as we train small ResNet models on CIFAR-10/100. Pretraining takes about 3 hours. Each unlearning method takes about 2.5 hours, except TAR, which takes about 9 hours. Each setting required training 21 unlearned models (including the 3 x 3 combinations for two-phase training, as we evaluate three methods with three different initial safeguards), which equates to about 18 x 2.5 + 9 x 3 = 72 hours per setting. We evaluated 6 main settings: ResNet-18 on CIFAR-10 (typical, atypical, and random forget sets), ResNet-34 on CIFAR-10 (atypical), ResNet-18 on CIFAR-10 with class-agnostic unlearning (atypical), and ResNet-18 on CIFAR-100 (atypical), for a total of about 450 hours. Grid evaluation of each model takes a further hour, adding about 126 hours (21 x 6 models). Finally, accounting for all further evaluations as well as failed experiments, we estimate about 1000 GPU hours for the project.

H Societal Impact

We highlight limitations of existing unlearning techniques in the context of visual recognition, showing that such techniques are prone to relearning attacks, in which unlearned knowledge can easily be recovered with access to only retained knowledge. Deploying such methods therefore poses risks to privacy and model safety. We further propose a new class of methods that are more robust to such attacks; developing better tamper-resistance of unlearned models against relearning attacks aims to reduce the potential harm from these existing systems.
Figure 6: Comparison between test set accuracy and accuracy on the held-out part of the forget set D_F^ho on CIFAR-10 and ResNet-18, where the forget set is comprised of typical examples from the 'airplane' class. All methods achieve perfect recovery of the unlearned knowledge, consistent with the retrain-from-scratch baseline, as the model can by definition generalize directly to these examples without having them in the training set.

Figure 7: Comparison between test set accuracy and accuracy on the held-out part of the forget set D_F^ho on CIFAR-10 and ResNet-18, where the forget set is comprised of a random subset of the examples from the 'airplane' class. The retrain-from-scratch baseline already achieves near-perfect accuracy, as the examples in a class are predominantly typical.

Figure 8: Comparison between test set accuracy and accuracy on the held-out part of the forget set D_F^ho on CIFAR-100 and ResNet-18, where the forget set is comprised of atypical examples from the 'apple' class. We see significant variation among methods on the more complex CIFAR-100 dataset, comparable to the class-agnostic results on CIFAR-10 in Fig. 2 (bottom).
Figure 9: Comparison between test set accuracy and accuracy on the held-out part of the forget set D_F^ho on CIFAR-10 and ResNet-34, where the forget set is comprised of atypical examples from the 'airplane' class. We still see the distinction between robust and non-robust methods; however, the relative ranking of methods changes, since we reuse the ResNet-18 hyperparameters, which should be retuned for the larger number of parameters. Using ResNet-34 instead of ResNet-18 thus induces a slight shift in relative method ranking, particularly for methods that require careful hyperparameter tuning.

Figure 10: Comparison between relearning time and accuracy on the held-out part of the forget set D_F^ho on CIFAR-10 and ResNet-18, where the forget set is comprised of atypical examples from the 'airplane' class. Many methods achieve near-perfect recovery of the unlearned knowledge with only a small amount of fine-tuning, without even assuming access to the unlearned examples. This figure extends the results in Fig. 2 (top).

Figure 11: Comparison between test set accuracy and accuracy on the held-out part of the forget set D_F^ho on CIFAR-10 and ResNet-18, where the forget set is comprised of atypical examples from all classes. This figure extends the results in Fig. 2 (bottom). Atypical examples selected from all classes are significantly harder than examples selected from a single class, and hence yield better separation between methods than sub-class unlearning in Fig. 2 (top).

Figure 12: Comparison between training time and accuracy on the held-out part of the forget set D_F^ho on CIFAR-10 and ResNet-18, where the forget set is comprised of atypical examples and SCRUB is the unlearning method. We evaluate different sets for reminding the model, alongside the selected number of relearning examples: the retain set D_R, the test set D_te, and the corrupted test set D_cte taken from CIFAR-10-C [17] (JPEG compression at severity level 5). There can be significant differences in relearned model accuracy depending on the examples used for reminding the model.
arXiv:2505.22311v1 [cs.AI] 28 May 2025

From Large AI Models to Agentic AI: A Tutorial on Future Intelligent Communications

Feibo Jiang, Senior Member, IEEE, Cunhua Pan, Senior Member, IEEE, Li Dong, Kezhi Wang, Senior Member, IEEE, Octavia A. Dobre, Fellow, IEEE, and Merouane Debbah, Fellow, IEEE

Abstract—With the advent of 6G communications, intelligent communication systems face multiple challenges, including constrained perception and response capabilities, limited scalability, and low adaptability in dynamic environments. This tutorial provides a systematic introduction to the principles, design, and applications of Large Artificial Intelligence Models (LAMs) and Agentic AI technologies in intelligent communication systems, aiming to offer researchers a comprehensive overview of cutting-edge technologies and practical guidance. First, we outline the background of 6G communications, review the technological evolution from LAMs to Agentic AI, and clarify the tutorial's motivation and main contributions. Subsequently, we present a comprehensive review of the key components required for constructing LAMs, including Transformers, Vision Transformers (ViTs), Variational AutoEncoders (VAEs), diffusion models, Diffusion Transformers (DiTs), and Mixtures of Experts (MoEs). We further categorize LAMs and analyze their applicability, covering Large Language Models (LLMs), Large Vision Models (LVMs), Large Multimodal Models (LMMs), Large Reasoning Models (LRMs), and lightweight LAMs. Next, we propose a LAM-centric design paradigm tailored for communications, encompassing dataset construction and both internal and external learning approaches. Building upon this, we develop a LAM-based Agentic AI system for intelligent communications, clarifying its core components, such as planners, knowledge bases, tools, and memory modules, as well as its interaction mechanisms, including both single-agent and multi-agent interactions. We also introduce a multi-agent framework with data retrieval, collaborative planning, and reflective evaluation for 6G. Subsequently, we provide a detailed overview of the applications of LAMs and Agentic AI in communication scenarios. Finally, we summarize the research challenges and future directions of current studies, aiming to support the development of efficient, secure, and sustainable next-generation intelligent communication systems.

Index Terms—Large AI Model; Large Language Model; Agentic AI; Communication; 6G.

Feibo Jiang (jiangfb@hunnu.edu.cn) is with the Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, China. Cunhua Pan (cpan@seu.edu.cn) is with the National Mobile Communications Research Laboratory, Southeast University, Nanjing, China. Li Dong (Dlj2017@hunnu.edu.cn) is with the Changsha Social Laboratory of Artificial Intelligence, Hunan University of Technology and Business, Changsha, China. Kezhi Wang (Kezhi.Wang@brunel.ac.uk) is with the Department of Computer Science, Brunel University London, UK. Octavia A. Dobre (odobre@mun.ca) is with the Faculty of Engineering and Applied Science, Memorial University, St. John's, NL A1B 3X5, Canada. Merouane Debbah (merouane.debbah@ku.ac.ae) is with the 6G Research Center, Khalifa University of Science and Technology, Abu Dhabi 127788, UAE.

GitHub link: https://github.com/jiangfeibo/ComAgent
I. INTRODUCTION

With the continuous evolution of 6G communication, intelligence has become a core direction for the development of future wireless networks. Traditional communication systems, which rely on static rules and predefined algorithms, struggle to cope with rapidly changing network topologies and dynamic environments. In this context, Large Artificial Intelligence Models (LAMs) have achieved remarkable success in
communications, owing to their advantages in cognitive decision-making and data generation. Meanwhile, Agentic AI, a more advanced technology built on LAMs, can actively make decisions and self-optimize, offering novel solutions for intelligent resource management and optimization in 6G networks. The technological transition from LAMs to Agentic AI is therefore significant for supporting the evolution of intelligent communication systems from model-driven to agent-driven paradigms.

A. Background

The goal of 6G is to build an intelligent world of ubiquitous connectivity, delivering unprecedented information experiences to human society. The International Mobile Telecommunications for 2030 (IMT-2030) framework proposed by ITU-R defines six key capability modules to support the comprehensive development of the future wireless communication ecosystem: Integrated Sensing And Communication (ISAC), which deeply fuses environmental sensing with communication functions to endow networks with human-like perception, enabling applications such as intelligent transportation and smart grids; massive communication, which supports the concurrent access of densely distributed devices to meet the real-time communication demands of massive terminals in smart cities and the industrial Internet of Things (IoT); integrated AI and communication, which embeds LAMs into communication systems to realize network self-adaptation, self-optimization, and intelligent decision-making, significantly enhancing resource allocation and Quality of Service (QoS); immersive communications, which offers low-latency, high-bandwidth experiences such as holography and virtual-real fusion, driving new forms of interaction including the metaverse and AR/VR; ubiquitous connectivity, which constructs an all-domain communication network integrating space, air, sea, and land to eliminate geographical and spatial limitations; and hyper-reliable and low-latency communication, which meets the stringent ultra-low-latency and high-reliability requirements of critical applications like remote healthcare and autonomous vehicles. As illustrated in Fig. 1, 6G will rely on these six core capabilities to build an intelligent, ubiquitous, and highly efficient future wireless communication ecosystem, in which communication, sensing, computing, AI, and security are deeply integrated to deliver advanced communication services to users [1].

Fig. 1: LAMs and Agentic AI empowered 6G.

B. Historical Development

The development of AI has progressed from early, simple discriminative models to generative models, and further to LAMs and highly intelligent Agentic AI. This process not only demonstrates continuous innovation in model architectures and learning algorithms but also reflects significant advances in AI's capabilities across multiple domains, including comprehension, generation, reasoning, and decision-making. The rise of LAMs has laid a solid foundation for the development of Agentic AI, while the emergence of Agentic AI marks a substantial leap in autonomous decision-making and complex task handling. The progression from LAMs to Agentic AI can be divided into the following stages:

1) Emergence Stage: The development of LAMs began in 2018, marked by significant works such as Google's Bidirectional Encoder Representations from Transformers (BERT) [2].
BERT is a bidirectional transformer model that achieved remarkable results on various Natural Language Processing (NLP) tasks through pre-training and fine-tuning; its bidirectional nature allows it to excel at tasks such as sentence understanding and text classification.
At the same time, OpenAI introduced GPT-1 [3], a unidirectional transformer model focused on generative tasks. GPT-1 introduced the pre-training and fine-tuning paradigm and demonstrated the potential of large-scale pre-trained language models for NLP tasks. Subsequently, in 2019, GPT-2 [4] further expanded the language model's scale and capabilities, showcasing strong text-generation potential with approximately 1.5 billion parameters.

2) Initial Stage: In 2020, OpenAI released the colossal language model GPT-3 [5], with 175 billion parameters, marking the entry of Large Language Models (LLMs) into their initial stage. With its massive parameter scale and complex training methods, GPT-3 excelled at multiple NLP tasks, such as text generation, translation, question answering, and code generation, and demonstrated the potential of large pre-trained models for multi-task and zero-shot learning. Meanwhile, Google released the T5 model [6], which introduced a unified text-to-text transformation framework, allowing various NLP tasks to be converted into text-to-text formats and thereby simplifying the task-processing workflow; T5 could handle translation, summarization, question answering, and text classification within a single model architecture.

3) Mature Stage: In 2022, GPT-3.5 [7] was released. Built on further optimizations of GPT-3, it enhanced the model's performance and efficiency while improving response speed and accuracy in practical applications. In the same year, Anthropic released Claude, an LLM focused on enhancing model safety and transparency, aimed at reducing bias and misleading information. Additionally, Facebook AI Research introduced the Segment Anything Model (SAM) [8], a Large Vision Model (LVM) specializing in image segmentation, which made significant progress in image processing through extensive data pre-training.

4) Multimodal Stage: In 2023, OpenAI launched the Large Multimodal Model (LMM) GPT-4 [9], capable of processing both text and image data, further expanding the application scope of LMMs; GPT-4 combined visual and language understanding to achieve richer and more complex interaction capabilities. Concurrently, Google DeepMind released Gemini [10], an LMM capable of simultaneously recognizing text, images, video, and code. Gemini demonstrated outstanding performance across tasks and application scenarios, generating high-quality code in mainstream programming languages and providing comprehensive safety assessments.

5) Reasoning Stage: In 2024, OpenAI released OpenAI o1 [11], a model with enhanced reasoning capabilities that combines powerful cognitive processing with complex environmental modeling and prediction, advancing AI applications in decision-making and problem-solving while improving the logical reasoning performance of LAMs. In 2025, DeepSeek launched DeepSeek R1 [12], which introduces advanced logical reasoning algorithms and demonstrates exceptional performance on complex tasks and in dynamic environments, marking the official transition of LAMs into the era of complex reasoning.

6) Agentic Stage: With the continuous maturation of LAM technology, agent system frameworks built on LAMs began to emerge. Early-generation agent frameworks, represented by AutoGPT [13] and BabyAGI [14], demonstrated the potential of language understanding for task planning and execution.
Concurrently, frameworks such as Microsoft's OpenAgents [15] advanced multi-agent collaboration, role specialization, and environmental awareness, endowing agent systems with greater adaptability and generalization. By 2025, the emergence of LRMs like DeepSeek R1 had substantially improved agent system performance, enabling more complex workflows for multi-task, multi-tool, and multi-agent collaboration, officially ushering in the era of Agentic AI [12].
Overall, the Agentic stage propelled LAMs from information understanding to task execution and behavioral control, laying a crucial foundation for embodied intelligence and higher-level general intelligence.

C. Related Survey Work

Table I presents a comparative analysis between this tutorial and existing related surveys. While current research primarily focuses on the role of LAMs in communication systems and partially explores their application potential for specific communication tasks, it remains deficient in the detailed classification of LAMs' learning mechanisms as well as in the construction and application of Agentic AI systems. Although these surveys have made valuable contributions to exploring the applications of LAMs and Agentic AI in communications, the following areas remain open for improvement:

1) Lack of a Detailed Taxonomy of LAMs and Their Training Paradigms: Although existing studies have partially explored the applications of LAMs in communications, they lack a detailed taxonomy of model types and training paradigms. Regarding model types, most research focuses on LLMs, while the applications of LVMs, LMMs, LRMs, and lightweight LAMs in communications remain understudied, with no comprehensive classification or application-adaptation framework established. Regarding training methodologies, while some studies discuss internal learning mechanisms (e.g., pretraining, fine-tuning, and alignment), they rarely address external learning mechanisms, such as Retrieval-Augmented Generation (RAG) and structured knowledge learning (e.g., Knowledge Graphs (KGs)). The applicability, differences, and synergies of these learning strategies in communications still lack systematic comparison and analysis.

2) Lack of a Systematic Review of Agentic AI in Communications: Current research primarily focuses on the perception and generation capabilities of LAMs in communication scenarios, whereas systematic discussion of Agentic AI equipped with long-term planning, autonomous decision-making, and tool invocation remains insufficient. Agentic AI holds broad application potential in communication systems, particularly in complex interactive scenarios such as semantic communications, federated learning, network management and optimization, and Unmanned Aerial Vehicle (UAV) communication. However, most existing studies fail to clearly define its system architecture, core modules (e.g., planners, tools, memory modules), or integration pathways with communication knowledge. There is also a lack of modeling of multi-agent collaboration mechanisms tailored to communication tasks, hindering a comprehensive understanding of Agentic AI's potential and value in advancing intelligent communication.

D. Motivations and Contributions

As wireless communication systems advance toward 6G, they face unprecedented challenges in the required level of intelligence, dynamic adaptability, and system efficiency [24]. Leveraging their massive parameter scales (typically billions or even trillions of parameters), emergent capabilities, and powerful cognitive reasoning, LAMs are progressively transforming the application landscape of AI in communications. These models demonstrate effective support for complex communication tasks, including semantic communication, resource scheduling, and network self-optimization [19].
In 6G networks, LAMs can fulfill the following functional roles [25]: 1)Data Generator: As generative AI models with strong generalization and representation capabilities, LAMs can 4 TABLE I: Comparison of previous works with our tutorial. Year Ref.LAMs Agentic AI RemarksComponents (C1)Classification (C2)Training (C3)Application (C4)Challenges (C5)Components (C6)Interactions (C7)Application (C8)Challenges (C9) 2025 [16]No No No
https://arxiv.org/abs/2505.22311v1
Limit No Yes Limit Limit Limit-For C1 to C3 and C5, it is not mentioned. -For C4, it briefly mentions applications without detailed LAM use cases. -For C7 to C9, it touches on agent frameworks and future directions but lacks depth and interaction details. 2025 [17]No No No Limit No Yes Limit Limit Yes-For C1 to C3, C5, it is not mentioned. -For C4, it mentions applications but lacks depth. -For C7 and C8, it gives high-level interaction ideas without implementation. -For C9, it focuses on ethics, not technical aspects. 2025 [18]No Limit No Limit No Yes Limit Limit Yes-For C1, C3, C5, it is not mentioned. -For C2 and C4, it is mentioned, but the coverage is not extensive enough. -For C7 and C8, it introduces collaboration but lacks detail on interactions and roles. 2025 [19]Yes Yes Yes Yes Yes No No Limit No-For C6, C7, and C9, it is not discussed. -For C8, it is only briefly mentioned without detailed scenarios. 2024 [20]Limit Limit Limit Limit Limit No No No No-For C1 to C5, it covers LLMs broadly but lacks a systematic description of other LAMs. -For C6 to C9, Agentic AI is not included. 2025 [21]Limit Limit No Limit Yes No No Limit No-For C1, C2, C4, and C8, it gives partial coverage but lacks detailed context. -For C3, C6, C7 and C9, it is not discussed. 2024 [22]Limit No Limit Limit Yes No No No No-For C1, C3, and C4, it discusses LAMs for 6G and deployment ideas, but lacks detailed classification and training methods. -For C2, classification of LAMs is not covered. -For C6 to C9, Agentic AI is not included. 2024 [23]Limit Limit Limit Limit Limit No No No No-For C1 to C5, it provides a high-level overview of LLM in networking but there is a lack of description of other LAMs. -For C6 to C9, Agentic AI is not included. Our Tutorial Yes Yes Yes Yes Yes Yes Yes Yes-For C1 to C9, this work offers a comprehensive and up-to- date overview of LAMs and Agentic AI, presenting a systematic framework and clear guidance for their development and application. efficiently generate various types of communication data based on domain knowledge. By incorporating advanced generative architectures (e.g., autoregressive decoders, diffusion models), LAMs can synthesize high-quality Channel State Information (CSI) data to support critical tasks such as positioning estimation, bandwidth alloca- tion, and network architecture design. Such synthetic data exhibits realism while preserving user anonymity, providing cost-effective and reliable data support for 6G network modeling, optimization, and deployment without compromising privacy [19]. 2)Knowledge Organizer: With powerful cross-modal se- mantic reasoning and knowledge integration capabilities, LAMs can structurally process and deeply mine raw communication data to enable automated knowledge extraction and reorganization. In semantic communi- cation systems, LAMs can serve as knowledge bases to assist semantic encoding processes. Leveraging their extensive world knowledge and communication exper- tise, they effectively reduce ambiguity, improve semantic alignment quality, and enhance information transmission accuracy and contextual adaptability, thereby supporting the development of intelligent semantic representation and understanding frameworks [26]. 3)Resource Manager: Through
real-time perception and modeling of network environmental states, user behavior patterns, and resource utilization efficiency, LAMs facilitate intelligent scheduling and optimal allocation of communication resources. Integrated with Reinforcement Learning (RL) or long-chain reasoning frameworks, LAMs can dynamically formulate management decisions such as spectrum allocation, power control, and access strategies in multi-user, multi-service scenarios, thereby enhancing overall system efficiency and fairness. Additionally, LAMs can predict future resource demand trends, enabling proactive network planning and QoS assurance [21].

The key distinction between LAMs and Agentic AI lies in their working methods and intelligent decision-making capabilities. LAMs typically respond within fixed input patterns and generate outputs through known knowledge, but lack autonomy and adaptability. In contrast, Agentic AI possesses the ability for proactive decision-making and self-optimization, enabling it to make independent decisions in complex and dynamic environments while actively learning and adjusting during task execution. This allows Agentic AI to handle more complex communication tasks, particularly in dynamic environments, where it can respond in real-time and continuously optimize its behavior. In 6G communications, Agentic AI can play the following roles:

1) Task Scheduler: Agentic AI possesses the capability to comprehend complex instructions, allocate subtasks, and coordinate multi-module execution, enabling it to serve as the core task scheduler in complex communication scenarios. It can dynamically deploy different algorithmic modules and collaboratively generate solutions that meet task objectives. For instance, in multi-UAV cooperative communication scenarios, Agentic AI can autonomously plan service areas and flight paths, avoid obstacles, and allocate communication resources to establish stable links and provide computational support in emergency situations, significantly enhancing system autonomy and response efficiency [27].

2) System Designer: Leveraging its strong task comprehension and complex system logic modeling capabilities, Agentic AI can automatically design system architectures and configure modules based on the functional requirements of communication systems. In AI-integrated communication tasks, Agentic AI can combine knowledge from federated learning, resource scheduling, protocol stacks, and other domains to understand the design intent and operational mechanisms of algorithms such as FedAvg, autonomously completing system-level design and structural optimization. Through prompt tuning and policy feedback, it iteratively optimizes communication system performance, demonstrating highly intelligent system design potential [28].

3) Decision Executor: Agentic AI not only exhibits rapid learning and adaptation in uncertain environments, showcasing autonomous strategic capabilities, but can also invoke traditional algorithms and external tools to demonstrate robust decision execution. In wireless network slicing management, Agentic AI can integrate multi-dimensional inputs (e.g., QoS, user demands, task priorities) and employ RL, causal reasoning, meta-learning, or game-theoretic strategies to make optimal decisions balancing performance, energy efficiency, and fairness.
Additionally, Agentic AI can invoke external network simulation tools (e.g., NS-3 or OMNeT++) for simulation and validation, as well as call Software Defined Network (SDN) controllers to execute slice creation and scheduling [29].

This tutorial, set against the backdrop of the intelligent evolution of communication systems toward the 6G era, systematically reviews and thoroughly examines the pivotal roles of LAMs and Agentic AI in future intelligent communication systems. It offers a comprehensive overview from multiple
perspectives, including model classification, training methodologies, Agentic AI system design, application scenarios, and research challenges. The main contributions of this work are summarized in the following five aspects.

1) Systematic Review of Core Components and Model Classification in LAMs: A comprehensive synthesis of core components, including Transformer, Vision Transformer (ViT), Variational AutoEncoder (VAE), Diffusion models, Diffusion Transformer (DiT), and Mixture of Experts (MoE), is provided, along with a classification and comparative analysis of mainstream model types such as LLMs, LVMs, LMMs, LRMs, and lightweight LAMs. This synthesis clarifies the respective applicability and technical potential of each model category in communication systems.

2) Design of Datasets and Learning Mechanisms for Communication-specific LAMs: Addressing challenges such as scarce domain knowledge, high task complexity, and diverse application requirements in communications, we propose a design framework for communication-specific LAMs. This framework features methodologies for constructing communication-relevant datasets, internal learning mechanisms encompassing pre-training, fine-tuning, and alignment, as well as external learning mechanisms including RAG and KG. These components collectively ensure model efficacy and usability in communication scenarios.

3) Construction of LAM-based Agentic AI Framework From Communication Perspective: A systematic integration of LAMs with Agentic AI technologies to establish a communication-oriented Agentic AI architecture. First, we identify core system components including LAMs, planners, knowledge bases, tools, and memory modules. Then, we examine interaction patterns for single-agent and multi-agent systems. Finally, we propose an integrated framework featuring multi-agent data retrieval, Multi-agent Collaborative Planning (MCP), and multi-agent evaluative reflection to support the intelligent processing of complex communication tasks.

4) Exploration of LAM and Agentic AI Applications Across Communication Scenarios: A systematic examination of LAM applications is conducted across critical domains such as semantic communication, IoT, edge intelligence, network design and management, security and privacy, and resource allocation. In parallel, we explore the potential of Agentic AI in wireless communication, semantic communication, network management and optimization, network security, and UAV communication, aiming to enhance overall system intelligence and operational efficiency.

5) Identification of Challenges and Future Directions for LAM and Agentic AI in Communications: For LAMs, we address data scarcity, inadequate reasoning, poor interpretability, and deployment difficulties, proposing solutions through autonomous continual learning, RL-driven reasoning training, model visualization, and model compression/distillation. For Agentic AI, we examine communication knowledge gaps, scalability limitations, complex control mechanisms, and evaluation challenges, suggesting future research on dynamic knowledge-guided Agentic RAG, distributed control architectures, unified control protocols, and process-oriented evaluation frameworks to enable the evolution from "LAM-driven" to "Agentic AI-driven" intelligent communication systems.

Fig. 2 presents the organization of this tutorial.
To ensure consistency in formulation and model representation, this tutorial adopts unified notation throughout; the acronyms used are defined in Table II.

Fig. 2: Overall organization of the tutorial.

II. KEY CONCEPTS

A. Components

1) Transformer: The Transformer is a novel neural network architecture proposed by Google in 2017 [30]. Its core innovation relies entirely on the self-attention mechanism to capture dependencies within the input sequence, and the cross-attention mechanism to
connect the encoder and decoder.

TABLE II: Acronyms and descriptions.

Acronym | Description
A2A | Agent-to-Agent Protocol
ACP | Agent Communication Protocol
AGI | Artificial General Intelligence
AI | Artificial Intelligence
CNN | Convolutional Neural Network
CoDi | Composable Diffusion
CoT | Chain of Thought
DiT | Diffusion Transformer
DPO | Direct Preference Optimization
FM | Foundation Model
FFN | Feed-Forward Network
GAN | Generative Adversarial Network
GQA | Grouped-Query Attention
GPT | Generative Pretrained Transformer
ICL | In-context Learning
KD | Knowledge Distillation
KG | Knowledge Graph
LAM | Large AI Model
LLM | Large Language Model
LLaMA | Large Language Model Meta AI
LMM | Large Multimodal Model
LLaVA | Large Language and Vision Assistant
LoRA | Low-Rank Adaptation
LRM | Large Reasoning Model
LVM | Large Vision Model
MAS | Multi-Agent System
MCP | Model Context Protocol
MHA | Multi-Head Attention
MoE | Mixture of Experts
MQA | Multi-Query Attention
PEFT | Parameter-Efficient Fine-Tuning
PPO | Proximal Policy Optimization
RAG | Retrieval-Augmented Generation
RL | Reinforcement Learning
RLHF | Reinforcement Learning from Human Feedback
SAM | Segment Anything Model
SFT | Supervised Fine-Tuning
SLM | Small Language Model
UAV | Unmanned Aerial Vehicle
VAE | Variational Autoencoder
ViT | Vision Transformer

Self-attention is a key technique in the Transformer architecture. It enables the model to consider all other words (or tokens) in the sequence when processing a particular word, computing weighted representations based on relevance. Additionally, Google introduced multi-head self-attention to compute attention in parallel, allowing the model to learn information from different representation subspaces [30]. The computation of self-attention is given by the following formula:

Attention(Q, K, V) = softmax(QK^T / √d_k) V, (1)

where Q, K, and V denote the Query, Key, and Value matrices, respectively; d_k represents the dimensionality of the Key vectors; and softmax(·) is the normalization function.

In addition, each layer within the encoder-decoder architecture of the Transformer includes a Feed-Forward Network (FFN), Layer Normalization (LayerNorm), and residual connections. These design choices significantly enhance the Transformer's performance in modeling long-range dependencies, facilitating gradient propagation, and enabling efficient parallel training, outperforming traditional neural network architectures in these aspects.

The Transformer offers strong capabilities in modeling long-range dependencies and supports highly parallelized computation. However, a key limitation lies in the quadratic computational and memory complexity of the self-attention mechanism with respect to the sequence length n, i.e., O(n²), which constrains its ability to handle long-sequence data efficiently.

The Transformer is the cornerstone of LLMs. Its exceptional parallel computation capabilities and ability to capture long-range dependencies have enabled the scaling of models to tens or even hundreds of billions of parameters, as seen in models such as GPT and the LLaMA series. In communications, the Transformer has been widely applied to a range of tasks, including semantic communication [31], signal processing [32], multimodal perception [33], and resource management [34], significantly advancing the level of intelligence in communication systems.
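To make Eq. (1) concrete, the following minimal NumPy sketch implements single-head scaled dot-product attention; the toy dimensions and random inputs are hypothetical, and learned projections, masking, and multiple heads are omitted:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V as in Eq. (1)."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                # pairwise token relevance
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over keys
    return weights @ V                             # weighted sum of values

# Toy example: a sequence of 4 tokens with d_k = d_v = 8.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```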
2) ViT: ViT was the first to demonstrate that a pure Transformer architecture can be directly applied to image recognition in 2020, achieving performance on large-scale datasets that matches or even surpasses that of state-of-the-art Convolutional Neural Networks (CNNs) [35]. ViT first divides an image into fixed-size patches, linearly embeds these patches, adds positional encodings, and then feeds them into a standard Transformer encoder in the same way as word sequences are processed. Various visual tasks are then performed through different output layers, as illustrated below:

y = Encoder(Concat(z_cls, Flatten(Patch(I))) + E_pos), (2)

where I denotes the input image, Patch(·) and Flatten(·) refer to the patching and flattening operations, respectively, while Concat(·) represents the concatenation operation. z_cls and E_pos denote the CLS token and positional encoding, respectively, and Encoder(·) refers to the Transformer encoder.

The advantages of ViT lie in its strong capability for global information modeling and scalability, which aligns well with the extensibility of the Transformer architecture. However, ViT lacks certain inherent visual inductive biases present in traditional models, such as locality and translational invariance, which often necessitates pretraining on large-scale datasets to achieve competitive performance. Moreover, when handling high-resolution images, the increased sequence length leads to substantial computational overhead.

ViT has become one of the foundational architectures for LVMs (e.g., SAM, DINO) and serves as a critical component in many LMMs. In communications, ViT has been widely applied to tasks such as semantic communication [36], line-of-sight blockage prediction [37], and automatic modulation recognition [38]. Leveraging its powerful modeling capabilities, ViT enhances the system's perception accuracy of spatially structured data and improves communication efficiency.
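As a rough illustration of Eq. (2), the sketch below patchifies a toy image, linearly embeds the flattened patches, and prepends a CLS token plus positional encodings; all dimensions and parameter matrices are hypothetical stand-ins, and the Transformer encoder itself is omitted:

```python
import numpy as np

def patchify(image, patch_size):
    """Split an (H, W, C) image into flattened non-overlapping patches."""
    H, W, C = image.shape
    ph = pw = patch_size
    patches = image.reshape(H // ph, ph, W // pw, pw, C)
    patches = patches.transpose(0, 2, 1, 3, 4)   # (H/ph, W/pw, ph, pw, C)
    return patches.reshape(-1, ph * pw * C)      # Flatten(Patch(I))

rng = np.random.default_rng(0)
image = rng.normal(size=(32, 32, 3))             # toy input I
patches = patchify(image, patch_size=8)          # 16 patches of length 192

d_model = 64
W_embed = rng.normal(size=(patches.shape[1], d_model))
tokens = patches @ W_embed                       # linear patch embedding
z_cls = np.zeros((1, d_model))                   # CLS token (learned in practice)
E_pos = rng.normal(size=(tokens.shape[0] + 1, d_model))  # positional encoding
encoder_input = np.vstack([z_cls, tokens]) + E_pos       # Concat(...) + E_pos
print(encoder_input.shape)                       # (17, 64), fed to Encoder(.)
```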
3) VAE: The VAE is a deep generative model based on variational Bayesian methods, integrating the architecture of autoencoders with the principles of probabilistic graphical models [39]. The VAE learns to encode input data into a low-dimensional latent space and enables sampling from this space to generate new data.

Unlike standard autoencoders, VAEs learn a probabilistic distribution over the latent space, which facilitates the generation of diverse outputs. Specifically, the VAE encodes an input x into a latent distribution q_φ(z|x), typically assumed to be a multivariate Gaussian distribution, samples z from this distribution, and then reconstructs an approximation of x through p_θ(x|z). The training objective is to maximize the Evidence Lower Bound (ELBO):

L = E_{q_φ(z|x)}[log p_θ(x|z)] − KL(q_φ(z|x) ∥ p(z)), (3)

where E denotes the expectation, the reconstruction term E_{q_φ(z|x)}[log p_θ(x|z)] ensures the quality of the generated outputs, while the Kullback-Leibler term KL(q_φ(z|x) ∥ p(z)) guides the latent space to align with the prior distribution p(z), typically assumed to be a standard normal distribution.

The VAE offers a generative framework with a solid theoretical foundation. Its learned latent space is typically smooth, making it amenable to interpolation and interpretation. However, a common limitation is that the generated samples may appear blurry, and the ELBO represents only a lower bound of the true data likelihood.

VAE is commonly used for discrete image encoding and data compression, forming the core conceptual foundation of LVMs such as DALL-E and Stable Diffusion. As a powerful generative model, VAE has been widely applied in communication tasks such as CSI feedback [40], semantic communication [41], and Multiple-Input Multiple-Output (MIMO) detection [42]. By leveraging its latent variable modeling capability, VAE effectively enhances the operational efficiency of communication systems.
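The ELBO of Eq. (3) can be sketched numerically for a diagonal-Gaussian encoder and a standard normal prior, using the reparameterization trick; the tiny linear encoder/decoder below is a hypothetical stand-in, and the unit-variance Gaussian decoder makes the reconstruction term a squared error up to constants:

```python
import numpy as np

rng = np.random.default_rng(0)
x_dim, z_dim = 16, 4
x = rng.normal(size=(x_dim,))

# Hypothetical linear encoder producing mu and log(sigma^2) of q_phi(z|x).
W_mu = rng.normal(size=(z_dim, x_dim))
W_logvar = rng.normal(size=(z_dim, x_dim)) * 0.1
mu, logvar = W_mu @ x, W_logvar @ x

# Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
eps = rng.normal(size=(z_dim,))
z = mu + np.exp(0.5 * logvar) * eps

# Hypothetical linear decoder p_theta(x|z) = N(x_hat, I): the log-likelihood
# reduces to a negative squared reconstruction error up to constants.
W_dec = rng.normal(size=(x_dim, z_dim))
x_hat = W_dec @ z
recon_log_lik = -0.5 * np.sum((x - x_hat) ** 2)

# Closed-form KL(q_phi(z|x) || N(0, I)) for diagonal Gaussians.
kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

elbo = recon_log_lik - kl   # Eq. (3); training maximizes this bound
print(f"ELBO = {elbo:.3f}")
```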
4) Diffusion: Diffusion models are probabilistic generative models based on Markov processes, proposed in 2020 [43]. Their core idea is to model data distributions through a "noise addition–denoising" process. Diffusion models operate through two key processes: the forward diffusion process, which gradually adds Gaussian noise to the data (e.g., images, videos) until it becomes pure noise; and the reverse denoising process, which starts from pure noise and progressively removes the noise using a neural network to generate clear data samples.

Forward Diffusion Process: Noise is gradually added to the original sample x_0 over multiple steps until it becomes nearly Gaussian white noise. The t-th step can be expressed as:

q(x_t | x_{t−1}) = N(√(1 − β_t) x_{t−1}, β_t I), (4)

where q(x_t | x_{t−1}) is a conditional probability distribution that defines the probability of the current noisy sample x_t given the previous step's noisy sample x_{t−1}, β_t is a noise variance parameter initialized to a small value, I is the identity matrix, and x_t and x_{t−1} denote the images at step t and step t−1, respectively.

Reverse Denoising Process: A neural network ε_θ(x_t, t) is trained to estimate the noise component, and the image at step t−1 is updated using a fixed variance and the learned mean:

x_{t−1} = (1 / √(1 − β_t)) (x_t − β_t ε_θ(x_t, t)) + σ_t z, z ∼ N(0, I), (5)

where σ_t z represents noise with variance σ_t, which is used to maintain the diversity and stochasticity of the generated samples.

Diffusion models offer the advantages of generating high-quality and diverse samples, along with relatively stable training. However, their main drawbacks include slow generation speed due to the need for multi-step iterative sampling and relatively complex theoretical derivation.

Diffusion models have emerged as the dominant technology for high-quality image and video generation. LVMs such as Stable Diffusion [44], DALL-E 2/3 [45] [46], and Imagen [47] are all based on the principles of diffusion models. In communications, diffusion models have been widely applied to tasks such as channel estimation [48], semantic communication [49], channel enhancement [50], and signal enhancement [51], demonstrating high-fidelity generation and strong robustness under complex channel conditions.
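A minimal sketch of the forward process in Eq. (4), assuming a hypothetical linear noise schedule: applying the per-step transition repeatedly drives a toy sample toward white noise. (In practice, the well-known closed form x_t = √(ᾱ_t) x_0 + √(1 − ᾱ_t) ε with ᾱ_t = Π_s (1 − β_s) lets x_t be sampled in a single step.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear noise schedule beta_t over T steps.
T = 1000
betas = np.linspace(1e-4, 0.02, T)

def forward_diffusion(x0, t, betas, rng):
    """Sample x_t by applying Eq. (4) step by step:
    q(x_t | x_{t-1}) = N(sqrt(1 - beta_t) x_{t-1}, beta_t I)."""
    x = x0
    for step in range(t):
        noise = rng.normal(size=x.shape)
        x = np.sqrt(1.0 - betas[step]) * x + np.sqrt(betas[step]) * noise
    return x

x0 = rng.normal(size=(8, 8))          # toy "image"
x_half = forward_diffusion(x0, T // 2, betas, rng)
x_final = forward_diffusion(x0, T, betas, rng)
# As t grows, the sample's statistics approach N(0, I) white noise.
print(x0.std(), x_half.std(), x_final.std())
```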
5) DiT: DiT, proposed in 2022, is a specialized design that applies the Transformer architecture to diffusion models [52]. It typically replaces the previously common U-Net structure with a Transformer during the reverse denoising process of the diffusion model to predict noise [52]. DiT maps the latent image representation z_t, the timestep encoding t, and an optional condition y into a sequence of embeddings, which are then fed into a standard Transformer for global self-attention and feedforward processing. The final output is either the predicted noise ε_θ(x_t, t, y) or a direct prediction of the denoised image sample, as illustrated below:

h = Encoder(Embed(z_t, t, y)), (6)
ε_θ(x_t, t, y) = Project(h), (7)

where Embed(·) denotes the embedding matrix, Encoder(·) represents the Transformer encoder, Project(·) is the projection matrix used to output the predicted noise, and x_t denotes the noisy image at the current timestep during the denoising process.

DiT offers excellent scalability, enabling significant improvements in generation quality by increasing model size. However, it remains constrained by the inherently slow sampling speed of diffusion models, and the computational overhead of Transformers for high-dimensional inputs remains substantial.

The introduction of DiT represents a significant milestone in the development of diffusion models, demonstrating the powerful potential of the Transformer architecture in generative tasks. It has inspired the design of numerous subsequent large generative models, particularly world models such as OpenAI's Sora [53], which also adopt the DiT architecture at their core to process spatiotemporal latent representation blocks. DiT has proven that the Transformer can serve as a universal and scalable backbone for a wide range of complex generative modeling tasks.

6) MoE: MoE is a model architecture paradigm designed to enhance model capacity through conditional computation while maintaining manageable computational costs [54]. It is not a standalone model but is typically integrated within specific LAMs.

In a standard Transformer module, the original FFN sublayer typically consists of two linear transformations and a non-linear activation function, applied independently to the representation of each position (token) in the output of the self-attention layer. To replace it with an MoE layer, one must first instantiate N independent "expert" networks, each of which is itself an FFN with the same architecture as the original but with its own set of parameters. Additionally, a gating network is introduced to compute a probability score for each expert. Based on these scores, a sparse routing strategy is usually employed to select the top-K scoring experts (with K typically being small, such as 1 or 2) to process the current token. Finally, the outputs of the selected K experts are combined through a weighted summation to produce the final MoE output for that token, as illustrated below:

g = softmax(W_g x + b_g), (8)
y_i = Expert_i(x), ∀i ∈ I_K, (9)
output = Σ_{i ∈ I_K} g_i · y_i, (10)

where W_g and b_g denote the weight matrix and bias vector of the gating network, respectively, and softmax(·) is the normalization function. I_K represents the index set of the top-K experts with the highest scores; Expert_i(·) denotes the i-th expert network; y_i is its corresponding output; and g_i is the score assigned to the i-th expert.

The key advantage of MoE lies in its effective decoupling of parameter scale from computational cost, enabling the training and deployment of LAMs that significantly exceed the size of dense models with comparable computational budgets. However, a major drawback is its substantial memory requirement: despite the sparsity in computation, all expert parameters must still be loaded into memory during inference. As a result, MoE models typically consume significantly more memory than dense models with equivalent computational complexity.

MoE is one of the key technologies enabling the development of state-of-the-art LLMs. Architectures such as Google's GLaM [55], Mistral AI's Mixtral 8x7B [56], and GPT-4 [9] have all adopted the MoE framework. By leveraging MoE, these models can scale to larger parameter sizes while maintaining acceptable training and inference costs. In communications, MoE has been widely applied to tasks such as communication security [57], satellite communications [58], and signal processing [59], where the expert activation and parallel processing mechanisms contribute to enhanced system intelligence, improved inference efficiency, and greater robustness.
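The routing of Eqs. (8)–(10) can be sketched as follows; the expert FFNs, gating parameters, and dimensions are hypothetical toy values, and load balancing, capacity limits, and batching are ignored:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 4, 2

# Hypothetical expert FFNs (two linear maps each) and gating parameters.
experts = [(rng.normal(size=(32, d_model)), rng.normal(size=(d_model, 32)))
           for _ in range(n_experts)]
W_g, b_g = rng.normal(size=(n_experts, d_model)), np.zeros(n_experts)

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def moe_layer(x):
    g = softmax(W_g @ x + b_g)              # Eq. (8): expert scores
    top = np.argsort(g)[-top_k:]            # sparse routing: index set I_K
    out = np.zeros_like(x)
    for i in top:                           # Eq. (9): only K experts run
        W1, W2 = experts[i]
        y_i = W2 @ np.maximum(W1 @ x, 0.0)  # each expert is a small FFN
        out += g[i] * y_i                   # Eq. (10): weighted combination
    return out

token = rng.normal(size=(d_model,))
print(moe_layer(token).shape)               # (16,)
```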
B. Classification

1) LLM: LLMs represent a significant branch in the field of deep learning, referring specifically to neural network models that are pretrained on massive text corpora and contain an extremely large number of parameters, typically in the tens or even hundreds of billions. Their core capability lies in understanding and generating human-like natural language, enabling them to perform a wide range of language-related tasks with remarkable generalization and adaptability. By learning grammar, semantics, and commonsense knowledge from large-scale corpora, LLMs have acquired human-level cognitive and reasoning abilities, establishing themselves as a foundational technology driving advancements in NLP and the broader field of AI.

Structurally, most state-of-the-art LLMs are built upon the Transformer decoder-only architecture, particularly leveraging its self-attention mechanism. This mechanism enables the model to effectively capture long-range dependencies within the input sequence. A typical Transformer building block consists of a multi-head self-attention layer and an FFN layer, combined with residual connections and LayerNorm to stabilize the training process. By stacking multiple such blocks, LLMs are capable of constructing deep networks that learn complex patterns and representations in language data, ranging from low-level features to high-level abstractions.

The GPT series developed by OpenAI (e.g., GPT-3 [5], ChatGPT [9]), the Gemma series by Google [60] [61], the LLaMA series by Meta AI [62] [63] [64], and Claude by Anthropic are all prominent representatives of LLMs. In communications, LLMs have been widely applied to tasks such as semantic communication [65], network management [21], multi-agent systems [25], and communication security [66]. Leveraging their powerful language understanding and generation capabilities, LLMs significantly enhance the intelligence, adaptability, and interaction efficiency of communication systems.

2) LVM: LVMs refer to deep neural network models trained on massive visual datasets consisting of billions of images and videos, and characterized by an extremely large number of parameters. These models are designed to learn general and powerful visual representations, enabling them to understand complex image and video content and generalize across a wide range of downstream vision tasks. LVMs significantly expand the performance ceiling and application scope of computer vision systems.

LVMs typically adopt CNNs or ViTs as their backbone architectures. These architectures extract visual features in a hierarchical manner by stacking multiple processing layers, progressively capturing representations ranging from low-level features (such as edges and textures) to high-level semantics (such as object parts and complete objects). In addition to pure Transformer-based designs, hybrid architectures that combine CNNs and Transformers are also common in LVM development, aiming to leverage the local feature extraction strength of CNNs and the long-range dependency modeling capabilities of Transformers.

Representative LVMs include Masked Autoencoders (MAE) [67], DINO [68], and SAM [8]. Their applications in communications are primarily focused on semantic communication [26] [69] [70]. By incorporating LVMs such as SAM and MAE, lightweight knowledge bases and efficient semantic encoders/decoders are constructed, enabling the compression and sharing of image semantic information. This significantly improves communication efficiency and image reconstruction quality.

3) LMM: LMMs are designed to jointly process and understand data from multiple distinct modalities, such as text, images, audio, video, and potentially even code, point clouds, and sensor data [23].
The goal of LMMs is to enable comprehensive modality fusion and interaction, thereby more effectively simulating how humans perceive and understand the world through various inputs.
These models are capable of performing complex cross-modal reasoning, content generation, and seamless interaction. LMMs are widely regarded as a critical step toward achieving more general forms of AI.

In terms of architectural design, LMMs are typically built upon powerful unimodal backbones and incorporate more sophisticated cross-modal fusion and alignment mechanisms. Their core architecture often includes: modality-specific encoders for different input types; one or more projection modules that map modality-specific information into a shared representation space and perform deep alignment, potentially using multi-layer cross-attention, modality gating mechanisms, or dedicated fusion networks; and a central processing unit, usually based on LLMs, responsible for semantic understanding, reasoning, and task execution. To support multimodal output generation, the LMM may also integrate corresponding decoders.

OpenAI's GPT-4 is a prominent representative of LMMs, natively supporting flexible combinations of text and image modalities for both input and output. It significantly reduces interaction latency and enhances performance on cross-modal tasks, demonstrating truly real-time multimodal dialogue capabilities. Similarly, Google's Gemini series [60] [61] and the LLaVA series [71] [72] [73] are also powerful LMMs. By integrating a broader range of sensory inputs, these models enable unprecedented levels of complex cross-modal understanding and generation, further blurring the boundary between digital intelligence and physical-world perception.

LMMs have been widely applied in communication scenarios such as semantic communication [65] [74], wireless network intent management [75], and multimodal task-oriented dialogue systems [76]. By integrating multimodal information such as images, text, and sensor data, these models significantly enhance the communication system's capabilities in understanding, adaptability, and intelligent interaction.

4) LRM: LRMs are AI models focused on enhancing systematic reasoning capabilities in complex tasks. Their primary goal is to solve complex logical problems, such as those encountered in mathematics, programming, and scientific domains, through explicit multi-step logical reasoning. Compared to conventional LLMs, LRMs significantly improve their performance in planning, problem decomposition, and dynamic knowledge integration by incorporating techniques such as RL [77], Supervised Fine-Tuning (SFT) [78], and Chain-of-Thought (CoT) [79].

Structurally, LRMs typically adopt a multi-stage training framework. Starting from a base pretrained model, they optimize the generation of reasoning chains through RL or hybrid training strategies (e.g., SFT + RL), and dynamically augment external knowledge using RAG mechanisms [80]. For instance, DeepSeek R1 [12] employs the Group Relative Policy Optimization (GRPO) algorithm [81], which balances answer accuracy and reasoning format consistency through a reward function. Additionally, it incorporates cold-start data and language consistency constraints, enabling the model to maintain high-quality reasoning while reducing redundant computation and optimizing resource consumption.

Representative LRMs include models such as OpenAI-o1, DeepSeek R1, and Qwen-QwQ [82]. As an early benchmark, OpenAI-o1 demonstrated complex reasoning capabilities through Monte Carlo Tree Search (MCTS) [83] and Procedural Reward Modeling (PRM).
However, its closed-source nature and high computational cost have limited its widespread adoption. In contrast, DeepSeek R1 [12] stands out for its open-source availability, offering reasoning performance comparable to o1 but at a significantly lower cost, thereby greatly advancing the open-source community. Additionally, LRMs such as Qwen-QwQ
[82] have exhibited reasoning abilities on par with o1-like models in domain-specific tasks such as code generation, further enriching the diversity of the LRM ecosystem. In communications, LRMs are primarily applied to enhance system intelligence, adaptability, and security [84] by leveraging their powerful multi-step reasoning and abstraction capabilities.

5) Lightweight LAM: Lightweight LAMs refer to those LAMs that are specifically designed and optimized to reduce model complexity, minimize storage size, lower computational resource consumption, and accelerate inference speed. The core value of such models lies in their ability to operate efficiently in resource-constrained environments, such as mobile devices, embedded systems, IoT devices, and edge computing nodes. Additionally, they contribute to reducing the cost and latency of large-scale cloud deployments, thereby enabling LAMs to be more broadly and economically applied in various real-time or power-sensitive scenarios.

Lightweight LAMs require carefully optimized architectural designs and trade-offs to maximize performance within constrained resource "budgets." Structural lightweighting is typically reflected in several aspects: first, by directly reducing model depth (number of layers) and width (hidden dimensions, number of attention heads); second, by adopting more efficient component variants, such as Grouped-Query Attention (GQA) [85] or Multi-Query Attention (MQA) [86] in place of standard Multi-Head Attention (MHA), which significantly reduces the required cache size during inference and improves efficiency.

A series of representative lightweight LAMs have recently demonstrated their potential and value. For example, the TinyLLaMA project aims to replicate the architecture and training pipeline of LLaMA 2 with approximately 1.1 billion parameters, incorporating optimizations such as GQA to provide a foundational model for extremely resource-constrained research and development [87]. LiteLLaMA typically refers to official or community-optimized versions of the LLaMA series with fewer parameters, emphasizing a balance between performance and resource consumption [88]. MiniCPM is a multimodal model designed for edge deployment, with around 2 billion parameters, serving as a representative model tailored for mobile and edge devices [89]. In addition, Microsoft's Phi series (e.g., Phi-2 [90], Phi-3-mini/small/medium [91]) is known for its "small size, high performance" characteristics, achieving results that surpass models of similar scale through training on high-quality datasets. The continual advancement of these lightweight LAMs is enabling powerful AI capabilities to be brought to a wider range of terminal devices and application scenarios. In communications, lightweight LAMs play a critical role in semantic communications [70], where their low computational overhead and strong inference capabilities allow for efficient semantic information extraction, compression, and reconstruction directly on edge devices, significantly enhancing both communication efficiency and robustness.

The classification of LAMs and their corresponding application scenarios in communications are presented in Table III.

C. Summary and Lessons Learned

1) Summary: This chapter provides a systematic summary of the core components and model types involved in LAMs for communications.
We detail the fundamental principles, architectural characteristics, and application scenarios in communications for typical modules such as Transformer, ViT, VAE, Diffusion, DiT, and MoE. Meanwhile, we review the definitions, structural features, and representative works of different categories of LAMs, including LLM, LVM, LMM, LRM, and lightweight LAMs. This lays a comprehensive foundation
for readers to understand the current technology system of LAMs.

TABLE III: Classification of LAMs and their applications in communications.

LAM Category | Components | Specific Models | Application Scenarios
Large Language Model | Transformer, MoE | GPT series, Gemma series, LLaMA series | Semantic Communication [65], Network Management [21], Multi-agent Systems [25], Communication Security [66]
Large Vision Model | ViT, Diffusion, DiT, MoE | SAM series, DINOv2, MAE | Semantic Communication [26]
Large Multimodal Model | Transformer, ViT, VAE, Diffusion, DiT, MoE | GPT-4o, Gemini series, LLaVA series | Semantic Communication [65], Intent Network Management [75], Multimodal Task Dialogue [76]
Large Reasoning Model | Transformer, MoE | OpenAI o1, DeepSeek R1, Qwen-QwQ | Network Management [84]
Lightweight LAM | Transformer, ViT | TinyLlama, LiteLlama, MiniCPM, Phi series | Semantic Communication [70]

2) Lessons Learned: Although significant progress has been made in the model architectures, classification mechanisms, and communication applications of LAMs, multiple challenges remain. On one hand, models like Transformer, ViT, and Diffusion require heavy computation when processing long sequences and high-dimensional data, diffusion models have slow generation speeds, and MoE structures consume high memory, all of which affect the real-time performance and deployment efficiency of LAMs in communication scenarios. On the other hand, the stability and consistency of LAMs in multimodal fusion and complex reasoning tasks still need improvement. Future research should focus on optimizing model computation structures, enhancing sampling and reasoning efficiency, constructing unified and scalable multimodal model architectures, and promoting efficient deployment of lightweight LAMs in edge environments [22].

III. HOW TO DESIGN LARGE AI MODELS FOR COMMUNICATIONS

In the context of learning communication knowledge, LAMs primarily adopt two approaches. The first approach involves embedding communication knowledge directly into the model parameters through pre-training, fine-tuning, and alignment. However, this method is time-consuming and less suitable for knowledge that requires frequent updates. The second approach combines RAG and KG, leveraging external vector databases and graph databases to provide contextualized knowledge to LAMs without modifying their parameters. This approach is more adaptive to the demands of rapidly evolving knowledge. In the following sections, we first introduce the methodologies for constructing communication datasets, followed by a detailed discussion of both learning paradigms.

A. Communication Datasets

The construction of communication datasets serves as the foundation for training LAMs in communications. It encompasses three key components: pre-training, fine-tuning, and alignment datasets. The primary goal is to support the transition and enhancement of model capabilities from general-purpose intelligence to task-specific competence in communication tasks.

1) Communication Content Filtering: Currently, general-purpose datasets used for training LAMs contain substantial communication-related content. Representative datasets include Common Crawl1, Pile2, Dolma3, and RedPajama-Data4. Accordingly, domain-specific communication datasets can be constructed by extracting relevant content from these sources.

1http://commoncrawl.org/the-data/get-started/
2https://github.com/togethercomputer/
3https://huggingface.co/datasets/allenai/dolma
4https://github.com/togethercomputer/RedPajama-Data
This process involves identifying a set of commonly used communication-related keywords, filtering the datasets based on these keywords to retain communication-relevant content, and applying deduplication techniques to remove redundant or repetitive entries. These steps aim to enhance the training efficiency of LAMs while maintaining data diversity; a minimal sketch of this filtering-and-deduplication step follows the criteria list below. During content filtering, communication-specific keywords can be precisely selected based on the following criteria [92]:

• Technical Relevance: Keywords should be closely associated with core communication theories. For example, "6G" represents the latest generation of mobile communication technologies, and "VoIP" refers to voice communication over IP networks, both indicating clear communication-specific contexts.

• High Frequency: Keywords should be commonly used terms in professional communication literature. For instance, "Broadband" describes high-speed internet access technologies, and "LTE," an abbreviation for long-term evolution, frequently appears in mobile communication scenarios.

• Uniqueness: Keywords should possess distinctiveness and specificity in communications. Terms like "spectrum allocation," a key concept in wireless communication, and "fiber-optic communication," the foundation of modern high-speed transmission, exemplify such uniqueness.

• Authoritativeness: Keywords should originate from core communication standards and carry authoritative significance. For example, "3GPP" defines global mobile communication standards, and "IEEE 802.11" specifies wireless LAN standards, both accurately referencing formal communication technologies.

• Timeliness: Keywords should reflect cutting-edge trends in communications. "Network slicing" allows flexible allocation of network resources, while "quantum encryption," based on quantum mechanics, enables secure communication, both representing advanced and emerging concepts.

• Clarity: Priority should be given to keywords that precisely describe frontier communication technologies, avoiding vague or overly generic terms. Examples include "VoLTE" for high-definition voice services over LTE networks, and "Non-Terrestrial Networks" for satellite communications and other non-ground-based networking technologies.
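A minimal sketch of the keyword-based filtering and deduplication step described above; the keyword list is a hypothetical sample drawn from the criteria, and the exact-match deduplication is a simple stand-in for real near-duplicate detection such as MinHash over n-grams:

```python
import re

# Hypothetical keyword list assembled from the criteria above.
COMM_KEYWORDS = ["6G", "VoIP", "Broadband", "LTE", "spectrum allocation",
                 "fiber-optic communication", "3GPP", "IEEE 802.11",
                 "network slicing", "quantum encryption", "VoLTE",
                 "non-terrestrial networks"]
pattern = re.compile("|".join(re.escape(k) for k in COMM_KEYWORDS),
                     re.IGNORECASE)

def filter_communication_docs(docs):
    """Keep documents matching at least one communication keyword,
    then drop exact duplicates after whitespace normalization."""
    kept, seen = [], set()
    for doc in docs:
        if not pattern.search(doc):
            continue                                 # not communication-relevant
        fingerprint = " ".join(doc.lower().split())  # naive normalization
        if fingerprint in seen:
            continue                                 # redundant entry
        seen.add(fingerprint)
        kept.append(doc)
    return kept

corpus = ["3GPP defines the 5G NR specifications.",
          "A recipe for sourdough bread.",
          "3GPP defines the 5G NR   specifications.",   # near-duplicate
          "Network slicing enables flexible resource allocation."]
print(filter_communication_docs(corpus))  # two unique communication docs
```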
2) Pre-training Datasets for Communications: The pre-training of LAMs requires vast and diverse communication-related data that spans multiple technical domains and data sources to comprehensively acquire domain-specific knowledge and enhance the model's generalization and accuracy. The following are representative datasets that can be utilized for pre-training LAMs in communications:

• TSpec-LLM [93] is an open-source dataset targeting 3GPP documents, covering over 30,000 documents from 1999 to 2023 with a total size of 13.5 GB. The dataset is formatted in markdown while preserving the original structural hierarchy, facilitating comprehension and processing by LLMs.

• OpenTelecom Dataset [92] is a pre-training corpus designed for LAMs in telecommunications. It encompasses a wide range of sources, including communication standards, research papers, books, and patents, with a particular emphasis on authoritative documents from 3GPP and IEEE. This dataset ensures that models acquire comprehensive and credible telecommunications knowledge, providing rich and high-quality textual resources for communication-related NLP tasks.

• CommData-PT [94] is a high-quality pre-training corpus specifically constructed for LAMs in communications. It includes comprehensive content from 3GPP standards, IEEE protocols, communication-related patents, academic papers, source code, and Wikipedia entries, covering knowledge across all layers of communication systems. The data are processed through LAM-assisted recognition, keyword-based filtering, cleaning, and standardization, resulting in a corpus with high domain specificity and structural consistency. This dataset provides strong support for both the pre-training and downstream task performance of communication-oriented LAMs.

3) Fine-tuning Datasets for Communications: In communication, fine-tuning enables LAMs to learn and understand domain-specific instructions and perform corresponding tasks. Instruction-tuning datasets play a crucial role in this process by providing paired instruction-response samples, allowing the model to learn how to execute tasks based on given instructions. This significantly enhances the model's adaptability and accuracy in specific
communication tasks. The following are several representative instruction-tuning datasets:

• TelecomInstruct Dataset [92] is an instruction-tuning dataset tailored for telecommunication tasks. It covers a wide range of task types, including question answering, document classification, code generation, and protocol interpretation. This dataset is designed to enhance the model's ability to comprehend and execute telecom-specific instructions, thereby improving its effectiveness and generalization in complex communication scenarios.

• CommData-FT [94] is a high-quality instruction-tuning dataset specifically designed for fine-tuning LAMs in communications. Built upon the CommData-PT corpus, it contains well-structured instruction-response pairs covering tasks such as protocol-related question answering, document classification, and text summarization. The dataset adheres to standardized formatting and is manually curated to ensure data quality, providing strong support for model fine-tuning and specialization in communication tasks.

4) Alignment Datasets for Communications: Alignment datasets provide high-quality feedback to LAMs to optimize their behaviors and decision-making processes, enabling the models to better align with human preferences and values in complex environments. For example, the TelecomAlign dataset [92] is specifically designed for alignment fine-tuning in telecommunications. It aims to train LLMs to generate responses that better meet communication-specific requirements and human preferences while minimizing redundancy, verbosity, and irrelevant content. By favoring concise and accurate answers, this dataset helps reduce system latency and aligns with the principles of semantic communication, thereby enhancing the model's practicality and human-AI collaboration capabilities in communication scenarios.

B. Internal Learning

Internal learning in communications typically involves three stages: pre-training, fine-tuning, and alignment. Pre-training enables the model to acquire general semantic and knowledge capabilities from large-scale data; fine-tuning adapts the model to the specific requirements of communication tasks; and alignment further refines the model's output behavior to ensure it aligns more closely with the practical standards and objectives of communication systems.

1) Pre-training: Pre-training LAMs in communications is a critical step in building foundational models equipped with both domain-specific knowledge and language understanding capabilities. It aims to address the limited adaptability of general-purpose LAMs to specialized communication tasks. Pre-training involves unsupervised learning on large-scale communication data, enabling the model to acquire fundamental concepts and structural knowledge relevant to the communication domain, thereby laying a solid foundation for downstream tasks. In communication scenarios, a continual pre-training strategy is typically adopted, where domain-specific data are introduced to further pre-train open-source LAMs (e.g., LLaMA, Gemma). The data sources include 3GPP standards, IEEE publications, communication patents, source code, and Wikipedia entries, forming high-quality corpora such as OpenTelecom [92] and CommData-PT [94]. The training objective is to predict the next token, allowing the model to learn the semantics and logic in communication contexts.
Upon completion, the model retains its general language capabilities while significantly enhancing its understanding of communication protocols, terminologies, and system structures, thereby establishing a strong foundation for subsequent instruction fine-tuning and alignment.

The specific pre-training approach involves continual pre-training on domain-specific communication datasets [92]. Unlike the initial pre-training phase of LAMs, this process offers a cost-effective method to adapt a general-purpose LAM into a communication-specialized model. During continual pre-training, the learning objective is based on causal language modeling, where the model predicts the next token given the preceding word sequence. Formally, let the input text be represented by a word sequence x = (x_1, ..., x_T) and θ denote the model parameters. The LAM is trained by minimizing the negative log-likelihood loss to enhance its understanding of communication knowledge. The loss function is defined as follows [92]:

L(x, θ) = −Σ_{t=1}^{T} log P(x_t | x_{<t}), (11)

where x_{<t} denotes the sequence of tokens preceding the token x_t, and T is the length of the word sequence.
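Eq. (11) amounts to summing the negative log-probabilities of each observed next token, as the following sketch with a hypothetical toy vocabulary shows; the instruction-tuning loss introduced in the next subsection has the same form, restricted to response tokens conditioned on the instruction:

```python
import numpy as np

def causal_lm_loss(token_probs, token_ids):
    """Negative log-likelihood of Eq. (11): -sum_t log P(x_t | x_{<t}).

    token_probs: (T, V) array where row t holds the model's predicted
        distribution P(. | x_{<t}) over a vocabulary of size V.
    token_ids: (T,) array of the observed next tokens x_t.
    """
    T = len(token_ids)
    log_probs = np.log(token_probs[np.arange(T), token_ids] + 1e-12)
    return -log_probs.sum()

# Toy example with a hypothetical vocabulary of 10 tokens and T = 5.
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
tokens = rng.integers(0, 10, size=5)
print(f"L(x, theta) = {causal_lm_loss(probs, tokens):.3f}")
```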
2) Fine-tuning: Fine-tuning is a critical post-pretraining stage for communication-oriented LAMs, aimed at enhancing their understanding and execution capabilities for specific communication tasks such as protocol parsing, question answering, and code generation. This process involves supervised training on high-quality, task-specific datasets to enable the model to generate professional and accurate outputs in response to given instructions. Fine-tuning typically leverages instruction datasets such as CommData-FT [94] or TelecomInstruct [92], and optimizes the model using a cross-entropy loss function. To improve training efficiency, Parameter-Efficient Fine-Tuning (PEFT) techniques such as LoRA are often employed. As a result, the fine-tuned communication model exhibits stronger domain specificity and task adaptability, achieving superior performance in tasks like telecom question answering and code generation compared to general-purpose LAMs [94].

Instruction tuning performs SFT of communication-oriented LAMs using instruction-tuning datasets. These datasets consist of multiple instruction–response pairs, where each instruction x^(i) is paired with a corresponding response y^(i). The dataset can be formally represented as I = {(x^(i), y^(i))}_{i=1}^{N}. Using these instruction–response pairs, the LAM is trained to minimize the conditional negative log-likelihood loss of the response given the instruction, thereby enhancing its performance in zero-shot or few-shot scenarios and reducing refusal behaviors when responding to user requests. The loss function is defined as follows [92]:

L(y^(i), θ) = −Σ_{t=1}^{|y^(i)|} log P(y^(i)_t | y^(i)_{<t}, x^(i)), (12)

where x^(i) denotes the instruction in the i-th instruction–response pair, y^(i) represents the corresponding correct response, y^(i)_t is the t-th token in the response sequence of the i-th sample, and y^(i)_{<t} denotes the subsequence of response tokens from the beginning up to the (t−1)-th token.

3) Alignment: In communications, alignment is a critical step in enhancing the practical utility of LAMs, aiming to ensure that the generated outputs better reflect the preferences and requirements of communication tasks. Although fine-tuned models possess a certain level of capability in handling such tasks, they may still produce redundant, inaccurate, or task-irrelevant responses, necessitating further optimization. Alignment addresses this by constructing preference-labeled datasets and applying methods such as Direct Preference Optimization (DPO), which encourages the model to generate concise, accurate, and highly relevant outputs. Compared to traditional RL approaches, DPO eliminates the need for complex reward models, resulting in more efficient and stable training. Ultimately, aligned models demonstrate performance that is more consistent with real-world expectations in tasks such as telecom-specific question answering and protocol interpretation, making them better suited for future intelligent communication systems. The specific loss function of DPO is defined as follows [92]:

L_DPO(π_θ; π_ref) = −E_{(x, y_w, y_l)}[ log σ( β log(π_θ(y_w|x) / π_ref(y_w|x)) − β log(π_θ(y_l|x) / π_ref(y_l|x)) ) ], (13)

where x denotes the input prompt or task instruction, y_w represents the preferred response according to human feedback, and y_l denotes the less preferred response. π_θ is the current model being optimized, while π_ref is the reference model, typically a fine-tuned model without preference alignment. π_θ(y_w|x) denotes the conditional probability of generating response y_w given input x under the model π_θ.
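A per-sample sketch of the DPO objective in Eq. (13); the summed log-probabilities below are hypothetical numbers standing in for real model outputs:

```python
import numpy as np

def dpo_loss(logp_w_theta, logp_l_theta, logp_w_ref, logp_l_ref, beta=0.1):
    """Per-sample DPO loss from Eq. (13).

    Each argument is the summed log-probability of the preferred (w) or
    dispreferred (l) response under the current policy (theta) or the
    frozen reference model (ref).
    """
    margin = beta * ((logp_w_theta - logp_w_ref) - (logp_l_theta - logp_l_ref))
    return -np.log(1.0 / (1.0 + np.exp(-margin)))   # -log sigmoid(margin)

# Toy numbers: the current model already favors the preferred answer
# slightly more than the reference does, so the loss falls below log(2).
print(dpo_loss(logp_w_theta=-12.0, logp_l_theta=-15.0,
               logp_w_ref=-12.5, logp_l_ref=-14.0))
```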
C. External Learning

In communications, the external learning paradigm for LAMs primarily follows two approaches: RAG based on vectorized data and KG based on structured graph data. RAG enhances the model's information retrieval and generation capabilities by incorporating external semantic vector databases, while KG strengthens the model's knowledge reasoning and contextual understanding through structured representations. Together, these approaches significantly expand the applicability and effectiveness of LAMs in complex communication scenarios [94].

1) Retrieval-Augmented Generation: In the external learning paradigm of LAMs for communications, vector-based RAG serves as a key knowledge enhancement approach, addressing limitations such as slow knowledge updates and inaccurate responses in pre-trained models. RAG integrates information retrieval with generative modeling by retrieving relevant content from an external knowledge base before response generation. The retrieved content is combined with the original query and jointly fed into the LAM to produce more accurate and context-aware outputs. The typical workflow involves segmenting communication documents, converting them into high-dimensional vectors, and storing them in a vector database with indexing. Upon receiving a user query, the model encodes the query into a vector and retrieves the most relevant document fragments. Finally, these retrieved segments are combined with the original query and passed to the LAM to produce an enhanced response.

In communication scenarios, vector-based RAG focuses on extracting semantic information from domain-specific communication documents to construct specialized communication vector databases. This approach enables the rapid integration of new knowledge without requiring updates to model parameters, making it particularly suitable for communication tasks characterized by frequent knowledge updates and high complexity. For example, the CommGPT system [94] employs vector databases such as Milvus [95], significantly improving the model's accuracy in understanding technical terminology and complex standards, and demonstrating strong scalability and practical utility.
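The retrieve-then-generate workflow described above can be sketched end to end; embed() below is a non-semantic, hash-seeded stand-in for a real text encoder, and the three documents form a hypothetical toy knowledge base:

```python
import numpy as np

def embed(text, dim=32):
    """Placeholder encoder: deterministic within a run, but NOT semantic;
    a real system would use a learned text-embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

docs = ["3GPP TS 38.300 describes the NR overall architecture.",
        "Network slicing partitions one physical network into virtual ones.",
        "VoLTE carries voice traffic over the LTE packet core."]
index = np.stack([embed(d) for d in docs])       # toy vector database

def retrieve(query, k=2):
    """Encode the query and return the k most similar document fragments."""
    q = embed(query)
    scores = index @ q                           # cosine similarity
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

def rag_prompt(query):
    """Combine retrieved fragments with the original query for the LAM."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(rag_prompt("What does network slicing do?"))
```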
2) Knowledge Graph: In the external learning paradigm of LAMs for communications, graph-based KGs serve as a vital approach for knowledge modeling and enhancement, enabling the model to better understand complex communication concepts and their interrelationships. A KG is a structured knowledge base that represents entities (e.g., protocols, technologies, parameters) as nodes and their semantic relationships as edges within a graph structure. KGs provide global, structured knowledge support for LAMs, thereby enhancing their reasoning, retrieval, and interpretability capabilities.

The construction of a KG typically involves three main steps: first, extracting entities along with their attributes and relationships from communication documents; second, constructing a graph structure to capture the semantic associations among entities; and finally, storing the graph in a graph database (e.g., Neo4j [96]), which allows the model to query and reason over the graph to obtain relevant entities and contextual information.
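As a minimal stand-in for this workflow, the sketch below stores extracted (subject, relation, object) triples in memory and answers one- and multi-hop queries; a production system would use a graph database such as Neo4j, and the triples are hypothetical examples:

```python
# Hypothetical triples extracted from communication documents.
triples = [
    ("VoLTE", "runs_over", "LTE"),
    ("LTE", "standardized_by", "3GPP"),
    ("Network slicing", "feature_of", "5G"),
    ("5G", "standardized_by", "3GPP"),
]

def neighbors(entity):
    """Return all (relation, object) pairs for an entity (explicit links)."""
    return [(r, o) for s, r, o in triples if s == entity]

def multi_hop(entity, relation_path):
    """Follow a chain of relations, e.g. infer which body governs VoLTE."""
    frontier = {entity}
    for rel in relation_path:
        frontier = {o for s, r, o in triples if s in frontier and r == rel}
    return frontier

print(neighbors("LTE"))                                      # direct relations
print(multi_hop("VoLTE", ["runs_over", "standardized_by"]))  # {'3GPP'}
```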
In communication scenarios, graph-based KGs are primarily used to represent and organize key entities and their associations within communication documents. By constructing a KG, the model can identify not only explicit relationships (e.g., a protocol belonging to a specific standard) but also infer implicit ones (e.g., the relevance of a particular technology to multiple protocols). For instance, in the CommGPT system [94], KGs are integrated with RAG to form a multi-scale knowledge enhancement mechanism, enabling the model to generate more accurate and contextually consistent responses to communication tasks involving multiple entities and multi-hop relationships. This significantly improves the model's logical reasoning and global understanding capabilities in complex communication scenarios.

Table IV presents the comparison of internal learning and external learning.

TABLE IV: Comparison of internal learning and external learning.

Aspect | Pretraining | Fine-tuning | Alignment | RAG | Knowledge Graph
Goal | Learning general language patterns | Optimizing performance for specific tasks | Align outputs with preference or task-specific constraints | Retrieving external documents to enhance LLM | Constructing structured knowledge to enhance LLM
Data Type | Large-scale unlabeled text | Labeled instruction data | Preference data, synthetic alignment data | External documents / vector database | Structured graph database
Learning Method | Unsupervised learning | Supervised learning | Reinforcement learning | Retrieval and in-context learning | Graph reasoning
Tuning Parameters | All parameters | Task-specific parameters | Alignment-specific parameters | Parameters of retrieval modules | Graph embedding and structure
Time Complexity | High | Medium | Medium | Low | Medium
Disadvantages | Lack of task-specific optimization | Labeled instruction data requirement | Costly reward modeling and subjective criteria | Retrieval suboptimality | Difficulty in generating high-quality graph structure

As shown in Fig. 3, a structured design pipeline is established to develop LAMs specifically optimized for communications through various learning methods.

Fig. 3: The structured design pipeline of LAMs for communications through various learning methods.

D. Summary and Lessons Learned

1) Summary: This chapter systematically summarizes the key components and technical pathways for building LAMs for communications. Starting from the construction of communication datasets, we outline the filtering strategies for communication-specific data and the methods for building pre-training, instruction fine-tuning, and preference alignment datasets, forming a comprehensive design scheme covering the entire data lifecycle. Regarding model training, we introduce internal learning and external learning separately, clarifying the objective functions and technical approaches at each stage. Internal learning includes key techniques such as pre-training, instruction fine-tuning, and preference alignment, while external learning involves vector-based RAG and graph-based KG. Through this content, we provide a complete technical reference and methodological summary for the systematic design of LAMs for communications.

2) Lessons Learned: Although preliminary achievements have been made in dataset construction, pre-training, fine-tuning, and alignment of LAMs, internal learning still faces challenges such as knowledge update delays and high training costs, while external learning encounters difficulties including complex knowledge organization and system integration. Future efforts should focus on constructing high-quality, multi-level communication datasets, improving model adaptability and update capability for communication knowledge, and optimizing external enhancement mechanisms like RAG and KG, thereby promoting efficient and scalable structure design and development in communication scenarios [19].

IV. HOW TO DESIGN AGENTIC AI SYSTEMS FOR COMMUNICATIONS

The LAM-based Agentic AI system [25] refers to an intelligent agent framework driven by LAMs and integrated with key modules such as a knowledge base, planners, tools, and memory modules. It is designed to autonomously comprehend, plan, and execute complex communication tasks. This system not only possesses natural language understanding and reasoning capabilities but also supports multi-agent collaboration to enable iterative optimization of communication tasks. Compared with traditional agent systems, the Agentic AI system, empowered by LAMs, exhibits greater generality, scalability, and adaptability, thereby unlocking the full potential of AI in 6G networks. In the following sections, we provide a detailed overview of the system's core components, agent interaction mechanisms, and multi-agent system architecture.

A. System Architecture of Agentic AI
A. System Architecture of Agentic AI

The Agentic AI system is composed of LAMs, knowledge bases, planners, tools, and memory modules, collectively enabling natural language understanding, knowledge reasoning, and task execution. The agent leverages the knowledge base to acquire domain-specific knowledge in communications, utilizes the planner to reason about and decompose tasks, invokes external tools to perform operational steps, and employs the memory module to store and retrieve historical information for reflection and continuous task optimization. The system architecture of the LAM-based Agentic AI is shown in Fig. 4.

Fig. 4: The architecture of the LAM-based Agentic AI system.
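Before detailing each module, the skeleton below illustrates how these five components might interact in one closed loop. All module internals are stubs, and every name (retrieve, plan, store, the toy tools) is our own, not an interface from a cited framework.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    episodes: list = field(default_factory=list)
    def store(self, task, subtasks, results):
        # Retained for later reflection and task optimization.
        self.episodes.append({"task": task, "subtasks": subtasks, "results": results})

class StubKnowledgeBase:
    def retrieve(self, task):
        return ["3GPP TS 38.211 excerpt (placeholder)"]

class StubPlanner:
    def plan(self, task, context):
        # A real planner would prompt a reasoning model; this chain is hard-coded.
        return [("estimate_channel", "cell-7"), ("allocate_power", "cell-7")]

class AgenticSystem:
    def __init__(self, knowledge_base, planner, tools, memory):
        self.kb, self.planner = knowledge_base, planner
        self.tools, self.memory = tools, memory

    def run(self, task: str):
        context = self.kb.retrieve(task)              # knowledge base
        subtasks = self.planner.plan(task, context)   # planner
        results = [self.tools[name](arg) for name, arg in subtasks]  # tools
        self.memory.store(task, subtasks, results)    # memory
        return results

tools = {"estimate_channel": lambda c: f"CSI report for {c}",
         "allocate_power": lambda c: f"power plan for {c}"}
system = AgenticSystem(StubKnowledgeBase(), StubPlanner(), tools, Memory())
print(system.run("Improve downlink throughput in cell-7"))
```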
1) LAMs: In an Agentic AI system, LAMs serve as the core reasoning engine and central coordination hub, responsible for task comprehension, contextual modeling, tool invocation, and reflective evaluation. LMMs such as GPT-4 or Gemini are typically employed to enable multimodal perception and interpretation of external commands and environments, thereby orchestrating the system’s overall operations. LAMs utilize prompt templates to structure task-processing workflows, invoke the planner to dynamically generate subtask instructions, and dispatch them to other modules while actively monitoring execution states and coordinating outputs to ensure semantic consistency and logical coherence. Furthermore, LAMs can interface with external tools such as search engines, code executors, and file systems, as well as internal tools tailored to communication tasks, enabling a closed-loop cycle from reasoning to execution. As a unified decision-making engine connecting users, agents, tools, and the knowledge base, LAMs significantly enhance the system’s adaptability and scalability in handling complex communication tasks.

2) Planner: The planner module is the central component responsible for understanding tasks, decomposing objectives, and organizing execution in an Agentic AI system. It typically employs LRMs with strong language comprehension and inference capabilities, such as OpenAI-o1 [97] and DeepSeek R1 [12], which leverage slow thinking and CoT reasoning to break complex tasks down into executable subtask sequences. The planner generates feasible task chains, explicitly defining the order of and dependencies among subtasks to guide downstream execution. The system adopts advanced reasoning strategies such as CoT [79], Tree-of-Thought (ToT) [98], Graph-of-Thought (GoT) [99], and the Plan-and-Solve framework [100], progressively refining tasks into actionable steps while iteratively improving plans based on execution feedback. This module often integrates key techniques including multi-agent collaboration, structured task modeling, and feedback-driven replanning, significantly enhancing the system’s efficiency and adaptability in managing complex communication tasks.
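As a concrete illustration of the decomposition step, the sketch below prompts a reasoning model for a numbered subtask list and parses it into an ordered chain. The prompt format and the lrm_complete stand-in are assumptions for illustration, not a published interface; a real deployment would re-prompt on execution feedback to replan.

```python
import re

PLAN_PROMPT = ("Decompose the task into ordered subtasks, one per line, "
               "formatted as '<n>. <subtask>'. Task: {task}")

def lrm_complete(prompt: str) -> str:
    # Stubbed reply so the sketch runs offline; a real system would call
    # a reasoning model (e.g., an o1- or R1-class LRM) here.
    return "1. Collect cell KPIs\n2. Diagnose interference sources\n3. Retune transmit power"

def plan(task: str) -> list:
    text = lrm_complete(PLAN_PROMPT.format(task=task))
    # Parse the numbered list into an ordered subtask chain; list order
    # encodes the execution dependencies among subtasks.
    return re.findall(r"^\s*\d+\.\s*(.+)$", text, flags=re.MULTILINE)

print(plan("Reduce uplink interference in sector B"))
```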
3) Knowledge Base: In an Agentic AI system, the knowledge base module serves as a foundational pillar supporting task comprehension and reasoning, providing agents with structured, retrievable, and highly relevant external knowledge. This module integrates communication-related documents (e.g., research papers, standards, protocols) and AI domain knowledge. It is constructed through semantic encoding and knowledge embedding, enabling efficient alignment with the knowledge retrieval requirements of LAMs. The knowledge base may consist of both vectorized and graph-structured data: vector data are retrieved via RAG querying [101], while graph data are accessed through KG querying [102]. When external knowledge is required for task execution, user queries are converted into retrieval-style prompts to search the knowledge base. Relevant segments are returned based on semantic similarity ranking, then filtered and reformulated by the agent to provide task-contextualized knowledge support. This module supports both structured knowledge acquisition and dynamic evolution with adaptive updates.
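The vector-retrieval path can be pictured with the following minimal sketch: embed a query, rank stored snippets by similarity, and return the top-k. The hash-based embedding is a deliberately crude placeholder for a trained encoder and a real vector database, and the corpus snippets are invented.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hash-based bag-of-words embedding; a real system would use a
    # trained sentence encoder backed by a vector database.
    v = np.zeros(64)
    for tok in text.lower().split():
        v[hash(tok) % 64] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

corpus = [
    "Polar codes are used for 5G NR control channels.",
    "LDPC codes are used for 5G NR data channels.",
    "OFDM numerology defines subcarrier spacing options.",
]
doc_vecs = np.stack([embed(d) for d in corpus])

def retrieve(query: str, k: int = 2) -> list:
    scores = doc_vecs @ embed(query)     # cosine similarity (unit vectors)
    top = np.argsort(scores)[::-1][:k]   # semantic similarity ranking
    return [corpus[i] for i in top]

print(retrieve("Which channel coding does 5G use for data?"))
```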
4) Tools: The tool module serves as a critical bridge between language understanding and task execution, enabling agents to perform “perception–reasoning–action” operations in an Agentic AI system. This module integrates various types of external tools, including general-purpose tools (e.g., search engines, file access, API invocation, data analyzers), communication-specific tools (e.g., channel models, beamforming algorithms, resource allocation algorithms), and AI-specific tools (e.g., clustering algorithms, feature extraction methods, deep learning algorithms), forming a comprehensive execution toolkit for the system. Agents comprehend the functionality of these tools through natural language descriptions and issue tool invocation commands, transmitting data to the appropriate tool for processing. Upon execution, the output is returned to guide subsequent steps in the task pipeline. Throughout this process, the tool module can be repeatedly invoked by multiple agents to support tasks such as code generation, data processing, and system modeling, acting as a core enabler in bridging reasoning and action. This module incorporates key technologies such as prompt engineering [103], the Agent Communication Protocol (ACP) [104], and tool selection mechanisms [105], thereby enhancing the intelligence and flexibility of tool utilization and improving system adaptability in multi-turn interactions and complex task scenarios.
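A description-driven tool registry of this kind can be sketched as follows. The two toy tools and the keyword-overlap selector are illustrative assumptions (the path-loss expression is a textbook 3GPP-style macro-cell formula), not components of a cited system; real selection mechanisms would use the LAM itself.

```python
import math

TOOLS = {
    "channel_model": {
        "describe": "predict path loss in dB for a link distance in km",
        "call": lambda d_km: 128.1 + 37.6 * math.log10(d_km),
    },
    "resource_allocator": {
        "describe": "split bandwidth in MHz equally among active users",
        "call": lambda users: {u: 100.0 / len(users) for u in users},
    },
}

def select_tool(request: str) -> str:
    # Naive selection: pick the tool whose natural-language description
    # shares the most words with the request.
    words = set(request.lower().split())
    return max(TOOLS, key=lambda n: len(words & set(TOOLS[n]["describe"].split())))

name = select_tool("predict the path loss for a 2 km link")
print(name, TOOLS[name]["call"](2.0))  # output feeds the next pipeline step
```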
5) Memory: In an Agentic AI system, the memory module is a core component that supports self-reflection, task optimization, and continual learning. It records and manages intermediate states and outcomes throughout task execution, thereby establishing both short-term and long-term memory mechanisms. Short-term memory captures semantically similar task experiences to facilitate comparative analysis and localized optimization, while long-term memory retains semantically distinct experiences to support system-level strategy updates and global optimization. After each task, the agent evaluates its execution plan and outcomes, classifies and stores them accordingly, and then leverages short-term memory to propose fine-grained workflow adjustments and long-term memory to perform broader workflow revisions. This module emulates the human cognitive process of “short-term recall and long-term accumulation,” enabling memory-driven self-optimization across multi-turn interactions and serving as a critical foundation for autonomous learning and task generalization. The memory module integrates key technologies such as short- and long-term memory modeling [106], vector databases [107], semantic embeddings [108], and self-reflection mechanisms [109], significantly enhancing the system’s capacity for information organization and knowledge evolution in complex communication tasks.
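The short-/long-term routing rule can be made concrete with the small sketch below, which files an experience as short-term when it resembles a stored one and as long-term otherwise. The Jaccard word overlap and the 0.5 threshold are illustrative stand-ins for the semantic embeddings a real system would use.

```python
class DualMemory:
    def __init__(self, threshold: float = 0.5):
        self.short, self.long, self.seen, self.threshold = [], [], [], threshold

    def _sim(self, a: str, b: str) -> float:
        # Jaccard word overlap as a cheap proxy for semantic similarity.
        a, b = set(a.lower().split()), set(b.lower().split())
        return len(a & b) / len(a | b)

    def store(self, experience: str):
        best = max((self._sim(experience, s) for s in self.seen), default=0.0)
        # Similar to a past experience -> short-term (local adjustment);
        # semantically distinct -> long-term (global strategy update).
        (self.short if best >= self.threshold else self.long).append(experience)
        self.seen.append(experience)

m = DualMemory()
m.store("beam selection failed under high mobility")   # first entry -> long-term
m.store("beam selection failed under low mobility")    # similar -> short-term
m.store("knowledge index stale for new 3GPP release")  # distinct -> long-term
print(m.short, m.long)
```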
B. Agent Interaction
As the core units of decision-making and execution, agents rely on the comprehension and generation capabilities of LAMs to engage in autonomous or collaborative interactions centered around task objectives and the external environment. Depending on the scope and nature of the interaction, agent interaction mechanisms can be categorized into two types: single-agent interaction and multi-agent collaborative interaction.

1) Single-Agent Interaction: Single-agent interaction encompasses task reasoning optimization and causal logic refinement, which effectively enhance the reasoning efficiency of LAMs and strengthen their understanding of causal relationships.

• Task Reasoning Optimization: Agents possess autonomous decision-making and self-learning capabilities, enabling them to monitor the state of the LAM in real time during the reasoning process, identify bottlenecks, and dynamically adjust reasoning paths. By leveraging contextual information from the knowledge base and memory modules, agents can adapt to task variations, thereby improving reasoning efficiency and accuracy. For instance, the AutoGPT system achieves automated reasoning by decomposing goals into subtasks and executing them iteratively [13]. Integrating agents with RL can further enhance their policy learning capabilities, leading to superior reasoning performance across a variety of tasks.

• Causal Logic Optimization: By incorporating causal logic, agents can significantly enhance their reasoning capabilities. Through the integration of KGs, agents can analyze causal relationships among various factors, enabling them not only to identify complex patterns within the data but also to understand the underlying causal mechanisms. This allows for more targeted and context-aware reasoning. Agents can determine which variables exert direct or indirect influence on outcomes and adjust their reasoning strategies accordingly, thereby improving prediction accuracy and decision quality while reducing errors arising from neglected causal dependencies.

2) Multi-Agent Interaction: In multi-agent systems, each agent functions as an independent decision-making unit that collaborates with others to solve complex problems. Common optimization strategies include unordered complementary collaboration, ordered complementary collaboration, and adversarial collaboration.

• Unordered Complementary Collaboration: Unordered complementary collaboration emphasizes flexible interaction among agents without predefined sequences or rules. Each agent contributes information and strategies based on its own experience, enhancing system diversity and enabling the analysis of complex and uncertain problems from multiple perspectives. This approach mitigates the limitations of single-agent viewpoints, improves overall reasoning capabilities, and increases the system’s adaptability in dynamic environments.

• Ordered Complementary Collaboration: Ordered complementary collaboration follows a well-defined sequence and structured process among multiple agents. Information is passed along a predetermined path, and tasks are executed in a coordinated manner, forming a continuous chain of knowledge. The output of one agent serves as the input for the next (see the sketch after this list), thereby improving collaborative efficiency, reducing information redundancy and interference, optimizing the reasoning process, and enhancing task execution efficiency and decision accuracy in complex scenarios.
• Adversarial Collaboration: Adversarial collaboration involves competitive interactions among agents, who continuously challenge and critique each other’s reasoning strategies. This dynamic fosters self-reflection and strategic refinement, as agents learn from the strengths and weaknesses of their counterparts. Such game-theoretic learning not only enhances individual reasoning abilities but also strengthens the system’s overall robustness and problem-solving depth in complex environments.
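A minimal sketch of the ordered pattern follows, with three stub agents in a fixed pipeline; the agent roles and the string-passing interface are our own illustration, not a cited design.

```python
# Ordered complementary collaboration: agents form a fixed pipeline in
# which each agent's output becomes the next agent's input.
def observer(task):   return task + " | KPIs: PRB load 92%, edge SINR low"
def analyst(report):  return report + " | diagnosis: inter-cell interference"
def actuator(diag):   return diag + " | action: enable ICIC, retune power"

PIPELINE = [observer, analyst, actuator]   # predetermined path

def run_ordered(task: str) -> str:
    for agent in PIPELINE:                 # output of one feeds the next
        task = agent(task)
    return task

print(run_ordered("cell-12 throughput degraded"))
```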
C. Multi-Agent System Architecture
The CommLLM framework [25] establishes a LAM-centric, multi-agent collaborative system architecture for 6G communications. The schematic diagram of CommLLM is shown in Fig. 5. This architecture integrates a knowledge base, planners, tools, and memory modules to support intelligent decision-making and task execution. Driven by natural language input, the system orchestrates the full process of knowledge retrieval, task planning, result evaluation, and self-optimization through distributed collaboration among agents (a minimal sketch of this closed loop follows the component list below). Each agent is assigned specific subtasks, working in coordination to form a closed-loop workflow of “input–reasoning–feedback–optimization.” This enables the system to handle complex communication tasks, adapt to dynamic environments, and continuously learn, representing a critical technological pathway for the intelligent evolution of 6G communication systems. The architecture consists of the following three components:

• Multi-Agent Data Retrieval: The Multi-agent Data Retrieval (MDR) module is responsible for acquiring task-relevant information from external knowledge bases and serves as the foundational entry point for semantic modeling and knowledge support in the system. It comprises multiple function-specific agents, including a secure agent, a condensate agent, and an inference agent, forming a multi-stage data retrieval pipeline from input filtering to information reconstruction. Specifically, when a user submits a natural language task request, the secure agent first screens the input to eliminate potentially malicious instructions or requests that violate system constraints, ensuring the robustness and compliance of the task chain. The system then leverages embedded vector retrieval mechanisms to identify the most relevant knowledge segments from the pre-constructed knowledge base based on the processed query semantics. These retrieved segments are then compressed and cleaned by the condensate agent to remove redundancy and retain core content. Finally, the inference agent utilizes the LAM’s language understanding and generation capabilities to perform semantic abstraction, logical integration, and knowledge reconstruction on the compressed text, forming structured, task-ready knowledge representations. This module not only ensures the contextual accuracy and relevance of knowledge used in downstream reasoning but also, through collaboration with the Multi-agent Collaborative Planning (MCP) module, significantly enhances the system’s domain adaptability, response efficiency, and interpretability in complex communication tasks. It constitutes the foundational gateway for establishing semantic closure and knowledge-enhanced reasoning in Agentic AI systems.
• Multi-Agent Collaborative Planning: The MCP module plays a central role in decomposing complex communication tasks and generating high-quality execution pathways. It orchestrates multiple planning agents configured with diverse roles and reasoning strategies, enabling parallel, multi-perspective task analysis. After receiving user requests and domain-relevant information from the knowledge base, each planning agent applies reasoning frameworks such as CoT or the Plan-and-Solve strategy to semantically model the task and divide it into subtasks. These subtasks are organized into chains with defined execution order and dependency relationships. The resulting task chains are structured to ensure logical consistency and semantic coherence, and may be executed sequentially or in parallel. The system then calls upon the LAM’s intrinsic tool capabilities or external integrated tool modules to address each subtask individually and generate preliminary execution results. The multi-agent planning strategy of MCP significantly enhances the system’s responsiveness, reasoning robustness, and generation quality in multi-objective, constraint-rich scenarios. It also lays the groundwork for the evaluation and refinement process carried out by the Multi-agent Evaluation and Reflection (MER) module. MCP effectively addresses the “path limitation” and “single-mode bias” inherent in traditional single-agent systems, serving as the core planning engine for complex communication task execution in Agentic AI.
• Multi-Agent Evaluation and Reflection: The MER module is the core mechanism for evaluating solution quality, generating self-feedback, and enabling iterative optimization. It is designed to compensate for potential deviations, inefficiencies, or illogical reasoning that may occur during a single inference pass. The module comprises multiple functional agents responsible for evaluation, reflection, and optimization. In the operational workflow, MER receives multiple candidate task plan chains generated by the MCP module, each representing a distinct reasoning path and execution strategy proposed by different planning agents. Evaluation agents assess and rank these candidates across multiple dimensions, including accuracy, efficiency, and interpretability, based on predefined task objectives, contextual constraints, and prior experience. The reflexion agents then access short-term memory to trace contextual trajectories and intermediate variables, identify weaknesses or logical gaps in the reasoning process, and use the LAM’s reasoning abilities to propose fine-grained improvement suggestions. Refinement agents further leverage high-quality historical solutions and paradigms stored in long-term memory to perform structural strategy reconstruction and path rewriting, feeding the optimized plans back to the MCP module for the next iteration. Importantly, MER is not merely a result-scoring mechanism; it is tightly integrated with the memory system and agent behavior configurations, forming a reflective and learnable feedback loop in the Agentic workflow. This greatly enhances the system’s stability, adaptability, and robustness in addressing complex and open-ended tasks.
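The following skeleton condenses the retrieval, planning, and reflection stages into a few stubs. The chain-length ranking, the fixed refinement hint, and the two-round budget are illustrative proxies for the agents' roles, not CommLLM's actual algorithms.

```python
def mdr(query: str) -> str:
    # Secure -> condensate -> inference agents, collapsed into one stub.
    return "retrieved: interference-coordination guidelines"

def mcp(query: str, knowledge: str, hint: str = "") -> list:
    # Several planning agents propose candidate task chains in parallel.
    chains = [["measure SINR", "adjust transmit power"],
              ["measure SINR", "coordinate beams", "adjust transmit power"]]
    return [c + ([hint] if hint else []) for c in chains]

def mer(chains):
    # Evaluation agents rank candidates (chain length as a crude proxy
    # for cost); reflection agents emit a hint that drives replanning.
    best = min(chains, key=len)
    return best, "verify KPI gain after adjustment"

query = "mitigate uplink interference in cell cluster A"
knowledge, hint = mdr(query), ""
for _ in range(2):  # input-reasoning-feedback-optimization loop
    best, hint = mer(mcp(query, knowledge, hint))
print(best)
```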
D. Summary and Lessons Learned

1) Summary: This chapter summarizes the core concepts and system architecture of LAM-based Agentic AI systems, systematically outlining their constituent modules, interaction mechanisms, and functional workflows. We focus on the design of a reasoning engine centered on LAMs, constructing an agent system framework for complex communication tasks around key modules such as knowledge base retrieval, task planning, tool invocation, and memory optimization. Through analysis of agent interaction methods and optimization mechanisms, we demonstrate the system’s capabilities in task modeling, reasoning execution, and self-evolution, providing a technical foundation and methodological reference for advancing research on and implementation of Agentic AI systems tailored for 6G communications.

2) Lessons Learned: Although LAM-based Agentic AI systems have shown good generality and scalability across various communication tasks, challenges remain, such as complex multi-agent collaboration, instability in task planning, low efficiency in knowledge base retrieval, and limited flexibility in tool integration and invocation.
Future research should aim to enhance collaboration strategies and consistency modeling among multiple agents, build more efficient knowledge management mechanisms, strengthen the intelligence and generalization capability of tool invocation, and promote the continuous development of autonomous adaptation and multi-turn optimization capabilities of Agentic AI systems in complex communication scenarios [110].

V. HOW TO OPTIMIZE COMMUNICATION SYSTEMS USING LAMS AND AGENTIC AI
A. The Application Scenarios of LAMs

1) LAMs for Semantic Communication: With the remarkable capabilities of LAMs in natural language understanding and cross-modal reasoning tasks, semantic communication is gradually evolving from the traditional bit transmission paradigm toward an intelligence-driven, semantic-centric communication paradigm.

Fig. 5: Schematic diagram of CommLLM [25].

In semantic modeling and optimization, introducing LLMs enables direct modeling and efficient decoding of semantic information. The general knowledge acquired through pre-training supports end-to-end semantic communication even without fine-tuning [111]. Moreover, LLMs enhance feature extraction and reconstruction in semantic communication by constructing general-purpose knowledge bases, enabling intelligent optimization across task identification, semantic processing, and physical transmission. By integrating the SC-GPT approach, LLMs further improve channel adaptability and transmission efficiency, thereby significantly enhancing the accuracy and robustness of semantic communication [112]. In end-to-end semantic communication systems, the integration of KG with LLMs in the KG-LLM framework significantly enhances transmission efficiency and semantic reconstruction quality by enabling structured semantic extraction and context-aware compression encoding, thereby greatly improving the overall performance of semantic communication [113]. In talking-head video semantic communication, LLMs construct private knowledge bases to enable semantic error correction and disambiguation and participate in joint semantic–channel encoding, thereby improving the accuracy and robustness of text transmission. Simultaneously, they facilitate high-quality reconstruction of speech and video, enabling more efficient and accurate semantic transmission and recovery under low-bandwidth conditions [114].

In image semantic communication systems, the incorporation of LLM-designed semantic encoders and context-aware semantic decoders enables the extraction of high-density semantic information from raw images and the reconstruction of contextually consistent representations at the receiver. This significantly enhances the efficiency of semantic communication [115]. In multi-user image semantic communication systems, the SAM leverages a lightweight knowledge base to identify key semantic regions within images and employs efficient semantic encoding for image compression and reconstruction. Through a shared semantics mechanism, similar information is transmitted uniformly across users, reducing redundancy and enhancing both the efficiency and robustness of semantic communication [70]. In cross-modal semantic communication systems, a Vision-Language Model-based Cross-modal Semantic Communication (VLM-CSC) framework is constructed based on BLIP and Stable Diffusion models. This system performs high-semantic-density extraction from images to text and enables controllable image reconstruction. Under dynamic channel conditions, it integrates memory-enhanced continual learning and noise-aware attention modulation modules, significantly improving the completeness of semantic representation and the robustness of semantic communication transmission [116].
In 3D semantic communication systems, the Generative AI Model-assisted 3D Semantic Communication (GAM-3DSC) framework, integrating SAM, NeRF, and diffusion models, performs task-oriented 3D semantic extraction, adaptive semantic compression, and channel estimation aided by a conditional Generative Adversarial Network (GAN) combined with diffusion models. This design enables efficient compression, transmission, and reconstruction of 3D semantic information under uncertain channel conditions, significantly enhancing multi-view semantic representation and the robustness of semantic communication [117].

In multimodal semantic communication systems, the proposed LAM-based Multimodal Semantic Communication (LAM-MSC) framework integrates the CoDi model for modality transformation and leverages a personalized knowledge base built on GPT-4 for semantic extraction and reconstruction.
Additionally, Conditional GANs (CGANs) are employed for channel estimation, significantly improving the transmission efficiency of the semantic communication system under complex channel conditions and multimodal data scenarios [65]. Moreover, in 6G semantic communication systems, the Privacy-preserving Semantic Communication scheme based on Multimodal LLM (MLLM-PSC), built upon the GPT-4V architecture, incorporates semantic extraction with user-profile-driven few-shot prompt learning and sensitive semantic encryption mechanisms. This framework enhances multimodal information compression and semantic consistency while achieving end-to-end privacy protection and low-overhead transmission of sensitive content [118]. The M4SC system integrates the reasoning and generalization capabilities of LLMs to construct a unified semantic space via Kernelized Alignment Networks (KAN) for multimodal alignment. It optimizes multi-task representation and transfer through natural language instruction templates and separates shared and private semantic transmissions in multi-user scenarios, significantly improving the compression ratio, bandwidth utilization, and adaptability of semantic communication systems [119]. In bandwidth-constrained environments, LLMs enhance the efficiency and robustness of underwater image semantic communication by identifying key information within images and performing semantic compression. By incorporating diffusion models and ControlNet, high-quality image reconstruction is achieved, while language-based LLMs are employed to recover textual information. This approach significantly reduces data volume while ensuring semantic consistency and reconstruction fidelity [120]. Meanwhile, LLMs are employed in semantic communication systems within edge IoT networks, where they enable user intent recognition, semantic extraction, and reconstruction at the edge. This enhances information transmission efficiency and intelligent interaction capabilities in resource-constrained environments [121].

Fig. 6: The application scenarios of LAMs.

In summary, LAM-empowered semantic communication systems are fundamentally reshaping the foundational units of information transmission, shifting from the transfer of symbols and bits to the accurate conveyance of meaning. This paradigm shift provides strong support for the development of the next generation of ubiquitous, intelligent, and efficient communication systems with enhanced reliability.
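In the spirit of the systems surveyed above, the toy loop below transmits a compact semantic description instead of raw data and reconstructs meaning at the receiver using a shared knowledge base. The keyword codec and erasure channel are illustrative stand-ins for learned semantic encoders and real channel models, not any cited system's design.

```python
import random

SHARED_KB = {"harbor": "a harbor scene", "sunset": "at sunset",
             "ship": "with a cargo ship"}

def semantic_encode(caption: str) -> list:
    # Keep only salient symbols the shared knowledge base can expand.
    return [w for w in caption.lower().split() if w in SHARED_KB]

def channel(symbols, p_drop: float = 0.3, seed: int = 7):
    # Toy erasure channel standing in for a noisy physical link.
    rng = random.Random(seed)
    return [s for s in symbols if rng.random() > p_drop]

def semantic_decode(symbols) -> str:
    # Receiver reconstructs meaning from surviving symbols via the KB.
    return " ".join(SHARED_KB[s] for s in symbols) or "unknown scene"

sent = semantic_encode("A ship leaves the harbor at sunset")
print(semantic_decode(channel(sent)))  # meaning survives despite lost symbols
```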
2) LAMs for IoT: With the widespread deployment of the IoT and the continuous expansion of its application scenarios, traditional IoT systems face significant bottlenecks in task complexity, semantic understanding, device collaboration, and real-time interaction. The introduction of LAMs offers new perspectives for developing more intelligent and adaptive IoT systems.

In task orchestration and control automation, the LLMind framework introduces an LLM-centric orchestration mechanism to enable collaborative control across devices and functional modules. By employing a multi-stage transformation process (from natural language, to finite state machines, and ultimately to executable code), it supports dynamic orchestration and feedback-driven execution across multiple devices, thereby equipping IoT systems with the capabilities for continual learning and adaptive evolution [122]. Moreover, the AutoIoT system integrates LLMs into the development of Artificial Intelligence & IoT (AIoT) applications by translating user intentions into locally executable code through natural language programming. This approach offers strong interpretability and flexible interaction while avoiding the privacy and latency issues commonly associated with remote model invocation, presenting a new paradigm for low-barrier, high-efficiency AIoT application development [123]. LLMs can also enable on-device intelligence enhancement through edge inference and PEFT mechanisms. Furthermore, by leveraging collaborative reasoning and distributed learning, a resource-adaptive optimization framework can be established, significantly improving the responsiveness, privacy preservation, and cross-device generalization capabilities of IoT systems in applications such as smart healthcare, intelligent cities, and home automation [124].
In privacy and security protection, LLMs combine local deployment with prompt engineering to enable understanding, task analysis, and code generation from complex data uploaded by IoT devices. Without the need for fine-tuning, they can perform reasoning and self-correction, thereby significantly enhancing the system’s data processing capabilities and intelligent responsiveness under privacy-sensitive, resource-constrained, and task-diverse scenarios. This demonstrates the critical role of LLMs in optimizing intelligent services within IoT systems [125]. LLMs are also employed in the development of real-time threat detection and prevention systems for IoT networks. By leveraging fine-tuned lightweight models in combination with IoT-specific datasets, these systems enable rapid responses to unknown attacks and efficient defense under constrained resource conditions, highlighting the promising potential of LLMs in intelligent security systems [126]. Moreover, LLMs demonstrate significant potential in enhancing the security, manageability, and intelligent data processing capabilities of IoT systems by enabling high-precision few-shot recognition in Distributed Denial of Service (DDoS) attack detection, automatically generating multi-scenario control scripts within macro-programming frameworks, and providing high-quality, interpretable responses for large-scale sensor data processing tasks [127].

In semantic reasoning and multimodal modeling, LLMs enable the abstraction and fusion of multi-source sensor data to automatically identify and generate high-level activity events from raw logs, without relying on extensive manual rules or domain-specific knowledge. This significantly enhances the efficiency of IoT data processing and intelligent analysis, making it particularly suitable for applications such as smart healthcare and long-term monitoring, and highlights the critical role of LLMs in optimizing event understanding and unified log generation [128]. To enhance the reasoning capabilities of LLMs in the real physical world, the IoT-LLM framework integrates IoT sensing data with CoT prompting and incorporates IoT knowledge augmentation and retrieval mechanisms. This enables unified modeling and reasoning across various tasks, such as human activity recognition and industrial anomaly detection, significantly improving the depth of LLMs’ understanding of physical laws and sensor semantics [129]. For more complex IoT sensing scenarios, the IoT-LM framework employs a multimodal, multi-task perception encoder and an instruction tuning mechanism to map multi-source heterogeneous sensor data into the input space of LLMs. This enables cross-modal knowledge fusion and shared task modeling, significantly enhancing the intelligence and generalization capabilities of IoT systems in multi-task recognition, interactive question answering, and real-time reasoning [130].

In summary, the deep integration of LAMs with IoT not only significantly enhances semantic understanding and reasoning capabilities but also drives comprehensive upgrades in the intelligence, scalability, and security of IoT systems.
This lays a solid foundation for building a new generation of IoT ecosystems characterized by cognitive capabilities, autonomous task execution, and natural human-machine interaction.

3) LAMs for Edge Intelligence: With the continuous advancement of LAMs, deploying them at the edge to support real-time, low-power, and privacy-preserving intelligent services is emerging as a key direction in the development of edge intelligence.
In edge training and inference optimization, to address the inefficiencies of federated learning in training LAMs at the edge, LLMs are utilized with forward-gradient training and PEFT. This enables low-memory, on-device inference and training on mobile devices, supports Neural Processing Unit (NPU) acceleration, and allows for multi-device parallelism. As a result, the efficiency of federated learning and the convergence speed of models are significantly improved, highlighting the critical role of LLMs in enhancing edge intelligence under resource-constrained conditions [131]. To address the resource constraints of edge devices, the Edge-LLM framework employs Layer-wise Unified Compression (LUC) and adaptive layer-tuning mechanisms to enable efficient fine-tuning and inference of LLMs on edge devices. This significantly reduces memory usage and computational complexity while maintaining performance, thereby improving deployment feasibility [132]. Furthermore, the EdgeShard system partitions LLM inference tasks across multiple edge devices and the cloud, enabling fragmented processing of compute-intensive workloads. It introduces a joint device selection and model partitioning optimization algorithm, which markedly improves inference throughput and reduces latency, offering a new paradigm for LLM inference in collaborative edge environments [133]; a toy split-point search in this spirit is sketched at the end of this subsection. At the deployment level, the edge-device collaborative LLM system assigns the generation process to be jointly executed by serial components on the device and parallel components at the edge, thereby optimizing inference latency and energy consumption. Additionally, an integer programming algorithm is employed to allocate computational and transmission resources, reflecting a systematic approach to device–edge collaborative optimization [134].

In edge architecture design, LLMs are incorporated into Mobile Edge Intelligence (MEI) through the MEI4LLM framework, which integrates techniques such as PEFT, distributed training, and inference. This framework enhances computational efficiency and resource utilization at the edge, effectively addressing challenges related to limited device capabilities and privacy protection. It demonstrates the critical role of LLMs in optimizing edge intelligence services [135]. Moreover, in decentralized deployment scenarios, LLMs enable distributed inference across energy-harvesting edge devices. By incorporating scheduling optimization and dynamic power control, they significantly improve energy efficiency and task throughput at the edge while reducing the risk of device failure. This highlights the critical role of LLMs in enhancing sustainable edge intelligence [136]. From the perspective of 6G network evolution, LLMs support low-latency, on-device inference and training on 6G edge devices through techniques such as PEFT, quantization, and model partitioning. These methods effectively reduce resource consumption while improving system responsiveness and intelligent service capabilities, further underscoring the importance of LLMs in advancing edge intelligence [137]. In addition, a comprehensive survey emphasizes that LLM-powered edge intelligence architectures should encompass efficient deployment, security protection, and trustworthy development mechanisms.
It also outlines key technical pathways including model compression, autonomous optimization, and cross-domain scenario adaptation, thereby providing a systematic panorama of LLM-driven edge intelligence development [138].

In summary, LAMs are rapidly advancing their deep integration with edge intelligence, progressively overcoming computational bottlenecks, energy constraints, and communication overhead. This provides strong support for building low-latency, privacy-preserving, and highly generalizable edge intelligence systems.
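The device-edge partitioning idea referenced above can be sketched as a one-dimensional search over split points that trades device compute, edge compute, and the transfer of the cut activation. All numbers below are invented for illustration, and the model is far simpler than EdgeShard's joint device-selection and partitioning optimization.

```python
layer_flops = [2.1] * 8 + [4.2] * 24   # per-layer compute in GFLOPs (toy profile)
input_gbit, act_gbit = 2.0, 0.016      # raw input vs. cut-point activation size
device_speed, edge_speed = 4.0, 8.0    # GFLOPs per ms
bandwidth = 0.4                        # Gbit per ms
n = len(layer_flops)

def latency(split: int) -> float:
    t_dev = sum(layer_flops[:split]) / device_speed
    t_edge = sum(layer_flops[split:]) / edge_speed
    if split == 0:
        t_tx = input_gbit / bandwidth  # ship the raw input to the edge
    elif split < n:
        t_tx = act_gbit / bandwidth    # ship only the compact activation
    else:
        t_tx = 0.0                     # run fully on-device
    return t_dev + t_edge + t_tx

best = min(range(n + 1), key=latency)
print(f"split after layer {best}: {latency(best):.2f} ms "
      f"(vs. {latency(0):.2f} ms fully offloaded)")
```

With these toy numbers the search prefers running a few early layers on the device to compress the large raw input before offloading, which is exactly the trade-off such partitioners exploit.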
4) LAMs for Network Design and Management: As 6G networks evolve toward greater intelligence and autonomy, traditional rule-based approaches to network design and management face limitations in flexibility and generalization. Leveraging their powerful capabilities in semantic understanding, intent recognition, task planning, and program generation, LLMs are increasingly being integrated into key components of network management architectures, facilitating the development of more efficient, intelligent, and versatile future networks.

In network intelligence design and management, LLMs play a pivotal role by enabling the ChatNet framework, which facilitates automatic translation from natural language to network-specific language. This supports tasks such as network planning, configuration, and security policy formulation, thereby improving design efficiency and automation levels and demonstrating the critical role of LLMs in enhancing intelligent network operations and management [23]. Building on this, the NetLM framework leverages LLMs to translate natural language intents into executable policies. By integrating multimodal representation learning and knowledge space construction, it provides intelligent support for resource scheduling, policy generation, and network configuration, enhancing the design efficiency and automation of 6G networks while improving autonomy and flexibility in multi-task network management [139]. Moreover, LLMs enable the unified access and modeling of multi-source heterogeneous data for intelligent operations, resource scheduling, and configuration strategies. By combining natural language understanding, instruction generation, and knowledge augmentation techniques, they support intent parsing, fault diagnosis, and policy generation, thus improving the automation and generalization capabilities of 6G network management [140]. In addition, LLMs support the automatic generation of configuration commands and network topology diagrams from natural language descriptions, enabling seamless transitions from textual input to executable network configurations. By incorporating prompt engineering techniques involving both textual and visual modalities, this approach significantly enhances response accuracy and design visibility, thereby improving the reliability and practicality of LLMs in network architecture design, generation, and configuration automation [141].

In adaptive optimization for network management, LLMs exhibit program synthesis capabilities tailored to graph-structured tasks. They can directly generate high-quality graph manipulation code from natural language descriptions, providing more efficient interfaces for network lifecycle management and communication graph analysis [142]. Compared to traditional approaches that rely on scenario modeling and parameter tuning, LLMs demonstrate CoT reasoning and In-Context Learning (ICL) in knowledge-free network management. By incorporating multi-model collaboration mechanisms, they support adaptive strategy generation for tasks such as resource scheduling and power control, even in the absence of prior scenario knowledge [143]. Through the NetLLM framework, LLMs achieve unified adaptation by employing multimodal encoders to process non-textual inputs (e.g., time-series data and graph structures) and task-specific output heads to respond directly to configuration tasks such as bitrate selection and scheduling strategies.
This significantly enhances the efficiency and generalization ability of network task management [144]. Additionally, LLMs parse user intent and generate function-to-service mappings to collaboratively execute cross-domain network slicing resource configurations and task deployments. This enables intelligent end-to-end management from design to execution, greatly advancing the level of automation in network administration [145].
Furthermore, under the Network Algorithm Design Automation (NADA) framework, LLMs are used to generate and optimize Adaptive Bitrate (ABR) algorithms by automatically designing state models and neural network structures. Coupled with prompt engineering and filtering mechanisms, this approach enables efficient algorithm customization and performance enhancement across diverse network environments such as 4G, 5G, and satellite networks. It significantly simplifies the development process of network algorithms and strengthens the intelligence and automation capabilities of network management [146].

In summary, LAMs are profoundly empowering network design and management tasks, driving the evolution of intelligent network systems toward higher autonomy, stronger generalization capabilities, and improved task adaptability across key dimensions such as network architecture, resource scheduling, configuration deployment, and algorithm generation.

5) LAMs for Security and Privacy: With the widespread deployment of LAMs in communication systems, network security and user privacy protection are facing unprecedented challenges. Issues such as outsourced training, user data exposure, and model misuse necessitate that intelligent communication systems not only deliver efficient services but also possess robust security and privacy-preserving capabilities. In recent years, LAMs have demonstrated significant advantages in areas such as input encryption, adversarial attack defense, differential privacy mechanism design, and the construction of trusted execution paths, gradually emerging as a foundational pillar for the next generation of secure communication systems.

In the analysis of backdoor attacks and privacy leakage risks, the application of LLMs within communication networks has made them a novel vehicle for carrying out such attacks. Representative methods such as input interference, prompt manipulation, instruction injection, and demonstration contamination are classified based on the attack medium, triggering mechanism, and intended target. By incorporating the unique characteristics of communication scenarios, this line of work further explores the concealment of these attacks and the difficulties in defending against them, offering valuable insights into enhancing the security of communication networks [147]. The deployment of LLMs in Zero-touch network and Service Management (ZSM) environments raises urgent concerns regarding privacy leakage. Focusing on membership inference attacks, one study systematically evaluates the leakage risks and attack effectiveness of mainstream pre-trained models. It further proposes a mechanism that integrates trust evaluation modules, few-shot fine-tuning, and adversarial training, offering a concrete implementation pathway for ensuring the trustworthiness of LLM-based services in communication networks [148]. Moreover, the application of LLMs in network systems exhibits both offensive and defensive characteristics. On the positive side, they contribute to data integrity verification, anomaly detection, and multi-level privacy protection. However, they also face challenges such as hallucinated outputs, training data leakage, and vulnerability to adversarial examples. These issues highlight critical research directions for the development of trustworthy intelligent communication agents [66].
In the context of privacy protection and secure deployment, the BlockChain for LLM (BC4LLM) framework enhances the attack resistance and trustworthiness of LLMs during distributed training by incorporating blockchain-based data ownership verification, identity authentication, and privacy-preserving computation. This framework enables end-to-end privacy protection and security enhancement for training data, learning processes, and generated content within communication networks [149].
LLMs can also achieve irreversible end-to-end data protection from input to inference by employing encrypted vocabulary reordering, geometric transformations in the embedding space, and client-side pre-encryption mechanisms. These techniques effectively prevent input leakage and the risk of model parameter reconstruction during communication, thereby enhancing security and privacy in cross-task inference scenarios [150]. Within client–server communication frameworks, LLMs utilize activation-guided privacy restoration mechanisms and dX-privacy-protected meta-vector construction to remove and reconstruct sensitive fragments in user inputs. This design prevents privacy leakage during data transmission while maintaining inference accuracy and computational efficiency [151]. In heterogeneous edge–cloud collaborative communication scenarios, LLMs can leverage PrivateLoRA by introducing low-rank residual transformation mechanisms and distributed parameter fine-tuning strategies. This approach enables a privacy-preserving information flow that transmits only non-reversible activation values and gradients, ensuring data locality while significantly reducing communication overhead. It facilitates the efficient deployment of privacy-enhanced generative services on mobile devices [152]. When processing sensitive data, LLMs integrate user trust modeling, information sensitivity detection, and adaptive output control mechanisms. By applying Role-Based Access Control (RBAC), Attribute-Based Access Control (ABAC), differential privacy training, and semantic-level filtering, they enable output regulation based on user trust levels and strengthen privacy protection. This supports the development of intelligent inference systems that balance usability and security in communication-intensive domains such as healthcare and finance [153].

In secure task collaboration and intelligent defense systems, LLMs demonstrate strong capabilities in task generalization, security knowledge modeling, and enhanced code generation. These abilities enable effective situational awareness and coordinated responses across a range of critical security tasks, including threat intelligence, vulnerability detection, program repair, and malicious behavior identification. Such integration offers unified model support and a reference framework for intelligent defense and privacy protection in network environments [154]. Moreover, LLMs enhance system resilience against backdoors, data poisoning, and adversarial attacks through adversarial training, anomaly detection, and secure model updating mechanisms. By incorporating privacy-preserving technologies such as differential privacy, federated learning, and secure multi-party computation, they support the construction of end-to-end security defense systems across the inference process and model lifecycle in communication scenarios, laying the foundation for highly trustworthy intelligent systems [155].

In summary, LAMs are continuously advancing communication networks from traditional security architectures toward intelligent perception, secure decision-making, and autonomous defense. This transformation is driven by a range of mechanisms, including input reconstruction protection, differential privacy, semantic filtering, secure model updating, and access control strategies.

6) LAMs for Resource Allocation: With the widespread deployment of wireless communication, edge computing, and multi-agent systems, resource allocation problems have become increasingly complex.
Challenges such as high-dimensional non-convexity, dynamic environmental variations, and multi-objective coordination have exposed the limitations of traditional optimization methods. Leveraging their powerful semantic modeling and reasoning capabilities, LAMs have demonstrated significant advantages in resource allocation scenarios, emerging as a key driver for intelligent resource scheduling.

In channel awareness and power control optimization, a few-shot learning-based LLM inference framework is constructed to generate power allocation strategies from channel gain prompts. Combined with a binary power control mechanism to enhance robustness, this approach enables joint optimization of spectral and energy efficiency objectives [156], as sketched below.
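The prompting-plus-fallback pattern can be illustrated as follows. The prompt format, the stubbed model reply, and the use of binary control as a parse-and-feasibility fallback are our assumptions about how such a pipeline could be wired, not the method of [156].

```python
P_MAX = 1.0

def llm_complete(prompt: str) -> str:
    # Stubbed few-shot LLM reply; a real system would query a model.
    return "0.9, 0.0, 0.55"

def binary_power_control(gains) -> list:
    # Binary policy: strong links transmit at full power, weak links stay silent.
    med = sorted(gains)[len(gains) // 2]
    return [P_MAX if g >= med else 0.0 for g in gains]

def allocate(gains) -> list:
    prompt = (f"Channel gains: {gains}. Reply with one transmit power "
              f"in [0, {P_MAX}] per link, comma-separated.")
    try:
        powers = [float(x) for x in llm_complete(prompt).split(",")]
        assert len(powers) == len(gains)
        assert all(0.0 <= p <= P_MAX for p in powers)
        return powers
    except (ValueError, AssertionError):
        return binary_power_control(gains)  # robust fallback

print(allocate([0.8, 0.1, 0.5]))
```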
In intelligent network resource scheduling, a MoE architecture integrated with the GPT family of LLMs replaces traditional gating networks. By interpreting user goals from natural language inputs and selecting the optimal combination of experts, this method effectively supports adaptive optimization for resource allocation tasks such as power control, service selection, and load balancing, significantly improving decision-making efficiency and system flexibility in multi-task environments [157]. Through the LLM-OptiRA framework, LLMs perform modeling, transformation, and solution of non-convex resource allocation problems. By incorporating error correction and feasibility domain validation mechanisms, the framework enables efficient and robust resource scheduling in wireless communication systems [158].

In edge-side collaborative resource scheduling, a RAG optimization framework is proposed by integrating the generative capabilities of LLMs with real-time information retrieval. Within MEC systems, this framework jointly schedules task offloading ratios, computing resources, and transmission power allocation, significantly enhancing the adaptability and interpretability of dynamic resource allocation decisions [159]. In integrated space–air–ground networks, LLMs are utilized as cacheable resources. By incorporating the concept of cognitive age and RL-based auction mechanisms, the system coordinates the allocation of model storage, communication bandwidth, and computational capacity, thereby improving inference service timeliness and overall resource utilization [160]. In edge–cloud collaborative architectures, LLMs are scheduled using a multi-armed bandit algorithm driven by constraint satisfaction objectives. This approach enables dynamic resource allocation for diverse LLM-based services, significantly enhancing inference throughput and energy efficiency while maintaining latency constraints [161]. In MEC scenarios, LLM training tasks are distributed between users and edge servers through a hierarchical coordination strategy. By incorporating PEFT and stability-aware optimization objectives, the system achieves multi-objective joint resource allocation that balances training energy consumption, latency, and model reliability [162].

In task-aware and adaptive computing resource allocation, LLMs introduce dynamic decision modules and KV-cache pruning strategies to adaptively skip Transformer layers based on task complexity and token importance. This enables on-demand allocation and substantial compression of both computational and memory resources [163]. In multi-agent systems, LLMs autonomously perform task assignment and resource coordination by integrating task planning mechanisms with agent capability awareness strategies. This approach improves resource utilization and inference efficiency in collaborative multi-model environments [164].

In summary, LAMs are driving the evolution of resource allocation mechanisms from static, rule-based approaches toward dynamic, semantics-driven strategies. These models effectively address a range of critical scenarios, including training scheduling, inference compression, model caching, and multi-agent coordination. As a result, they significantly enhance resource allocation efficiency and generalization capabilities under complex conditions, laying a solid foundation for the development of the next generation of intelligent and highly adaptive network systems.

Fig. 7: The application scenarios of Agentic AI.
B. The Application Scenarios of Agentic AI

1) Agentic AI for Wireless Communication: With the rapid development of wireless communication technologies, especially in the 6G era, the intelligence and automation of network architectures have become key factors in improving communication efficiency, optimizing network resources, and achieving effective management. By introducing agents into wireless communications, it is possible to realize more complex task collaboration, decision optimization, and network performance enhancement, thereby meeting the demands of 6G networks for efficient, intelligent, and personalized services.
In network optimization, utilizing LAMs as agents can significantly enhance the intelligence and optimization performance of networks. In 6G networks, using LLMs as agents to collaboratively complete tasks and optimize network performance enables the collective intelligence of multi-agent systems, thereby facilitating task decomposition and decision execution in edge environments [165]. Furthermore, applying LLMs to network operation optimization and health assessment provides more efficient fault diagnosis, resource scheduling, and load balancing, thus improving the intelligence level of 6G networks [166]. In wireless sensing systems, using LLMs as agents combined with Model Context Protocols (MCP) and expert systems enhances the perception and reasoning capabilities of LLMs in complex wireless communication environments, allowing more accurate resolution of dynamic issues in wireless networks [167].

Regarding task collaboration, employing LAMs as agents can effectively achieve task-oriented collaboration and automated workflows to drive task execution. In 6G networks, an AI agent architecture centered on LLMs utilizes key technologies such as multimodal collaboration, dynamic resource management, and edge computing to realize network automation, personalized services, and intelligent multi-device collaboration [168]. Additionally, to realize task-oriented physical-layer automation with LLMs as agents, a two-stage pre-training and fine-tuning scheme has been proposed to build specialized LLM agents adapted to different communication tasks, with a retrieval-based reasoning framework employed to effectively invoke existing communication functions [169]. In 6G networks, by leveraging LLMs as agents with perception, reasoning, and alignment modules, a split learning system has been designed to address device resource limitations, collaboratively complete complex tasks via edge computing, and ensure low-latency, high-efficiency task execution [170].

In summary, using LAMs as agents can significantly enhance the intelligence level and task collaboration capabilities of wireless communication systems. These technologies provide more efficient and flexible solutions for 6G communication systems, driving communication networks toward greater intelligence and higher efficiency.

2) Agentic AI for Semantic Communication: With the rapid development of agent technologies in communications, semantic communication is gradually transitioning from the traditional bit transmission model to an intelligent communication paradigm centered on “semantics.” The introduction of agents provides semantic communication with adaptive capabilities and efficient data processing methods, enabling the system not only to handle traditional bitstream data but also to extract, transmit, and reconstruct semantic information, thereby improving the efficiency and accuracy of communication systems.

In semantic extraction and transmission, agents can dynamically adjust encoding, transmission, and decoding based on the communication environment, user requirements, and information content, achieving optimal information transfer. In the Agent-driven Generative Semantic Communication (A-GSC) framework, agents use RL to adaptively adjust semantic sampling and extraction, enabling dynamic optimization of data transmission according to task demands [171].
In multi-user systems, leveraging LLMs as agents supports complex task decomposition, normalization of semantic representation, and semantic translation mapping, thereby improving communication efficiency in multi-user scenarios, optimizing computational resource management, and addressing the limitations of traditional semantic communication frameworks in 6G networks [172].
Moreover, in multi-user semantic communication, employing LLMs as agents to establish a Shared Knowledge Base (SKB) enhances communication efficiency, and the proposed Multi-User Generative Semantic Communication (M-GSC) framework extends the capabilities of LLMs to handle complex multi-user tasks, optimize network resource usage, and overcome challenges in semantic encoding and decoding [172].

In resource allocation, agents can collaboratively optimize resources and communication strategies, improving network resource utilization and task transmission efficiency. In multi-cell semantic communication systems, agents dynamically optimize resource allocation through DRL, enhancing overall network performance [173]. Additionally, in task-oriented semantic communication, a two-layer agent framework based on DRL jointly optimizes power, time slots, and semantic compression rates, achieving intelligent resource allocation under energy harvesting and cognitive radio environments, thereby improving user experience and spectrum utilization efficiency [174].

In summary, agent-driven semantic communication achieves efficient information transmission by dynamically adjusting the encoding, transmission, and decoding of information. Meanwhile, with the help of DRL techniques, agents can optimize resource allocation and communication strategies, improving network resource utilization and task transmission efficiency, thereby advancing semantic communication systems toward greater intelligence and efficiency.

3) Agentic AI for Network Management and Optimization: With the application of agents in network management and optimization, management models are gradually evolving from traditional rule-based approaches toward more intelligent and automated paradigms. Agents are capable of autonomous learning, adapting to dynamic environments, and collaboratively completing complex tasks, significantly enhancing network management efficiency and adaptability.

In network management, leveraging LAMs as agents through autonomous decision-making and collaboration markedly improves the automation of network resource management. In wireless networks, integrating LLMs as agents within multi-agent generative AI enables multiple agents to collaboratively plan and solve tasks to achieve network objectives. Through task planning, reasoning, and cooperative problem-solving, these agents can operate efficiently in edge networks, significantly elevating the intelligence level of network management [165]. In 6G networks, LLMs as core agents work in coordination with other network components to provide powerful language understanding and generation capabilities for network health assessment and fault diagnosis, supporting automated and intelligent network operations [166]. Meanwhile, in 6G-enabled Digital Twin (DT) networks, LLMs optimize data retrieval and Radio Resource Management (RRM) through intelligent reasoning and autonomous learning, realizing more automated and efficient network management [175].

In network optimization, using LAM agents with collaboration and self-learning capabilities enables real-time optimization of network resource allocation, thereby enhancing network performance. The WirelessAgent framework employs LLMs to create autonomous Agentic AI that integrates four core modules (perception, memory, planning, and action), simulating human cognitive processes to manage complex wireless tasks. This enables LLMs to optimize resources in dynamic network environments, improving network resilience and performance [176].
Additionally, agents can autonomously adjust network resource configurations through RL and other methods, enhancing optimization efficiency and reliability [177]. In Open Radio Access Network (O-RAN) systems, LLMs as agents dynamically optimize resource allocation to ensure network performance stability and adaptability. By analyzing network load and user demands in real time, LLMs automatically adjust resource allocation strategies to optimize resource utilization and improve reliability [178].

In summary, the application of agents in network management and optimization greatly enhances network adaptability and intelligence. Through autonomous learning, collaboration, and task decomposition, agents make network management more efficient and network optimization more precise, driving the intelligent development of next-generation network systems.
4) Agentic AI for Network Security: With the continuous advancement of agent technology in network security, agent-driven security defense systems are gradually replacing traditional protective measures. Through autonomous learning, collaboration, and adaptation, agents can efficiently respond to complex security threats and provide real-time network protection and defense strategies.

Utilizing LAMs as agents enables efficient identification and response to network threats through real-time analysis and collaborative work, thereby enhancing the system's security capabilities. In network defense (such as blue team operations), employing LLMs as agents allows for real-time threat analysis, automatic generation of defense strategies, and dynamic adjustment of security measures based on network status and potential threats, significantly improving overall system security and defense capabilities. Particularly in 6G network environments, LLMs can effectively identify complex security threats and strengthen network security by generating corresponding defense strategies [179]. In the scenario of the Internet of Autonomous Defense Vehicles (IoADV), LLMs as core agents, leveraging powerful language understanding and generation capabilities along with multimodal data processing and predictive analytics, significantly enhance agent perception, decision-making, and real-time navigation in complex environments. They also play a key role in communication optimization and decision-making, improving the intelligence level of defense vehicles and their ability to operate in complex combat scenarios [180].

Regarding defense, using LAMs as agents enables dynamic adjustment of security measures based on network conditions and potential threats, thereby strengthening overall system security. In Space-Air-Ground Integrated Networks (SAGIN) under a zero-trust architecture, employing LLMs as agents facilitates collaboration among multiple LLM agents to perform security assessment and defense strategy generation via an LLM-based Situation Awareness (LLM-SA) method. This approach effectively handles vast heterogeneous threat information and, through adaptive learning and cooperative analysis, enhances network security defense capabilities, ensuring the system can rapidly respond and optimize defenses against complex and evolving attacks [181].
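As a rough illustration of this kind of cooperative assessment, the sketch below has several stubbed "analyst" scorers fuse threat estimates into one defense action. The alert format, scoring heuristic, and thresholds are invented and do not reproduce the LLM-SA method of [181].

# Hypothetical alert triage loosely inspired by LLM-based Situation Awareness:
# several LLM "analyst" agents score heterogeneous alerts, and a coordinator
# fuses their assessments into a defense action.

ALERTS = [
    {"source": "satellite-gw", "msg": "burst of failed auth attempts"},
    {"source": "uav-relay", "msg": "telemetry checksum mismatches"},
]

def analyst_score(alert: dict, persona: str) -> float:
    """Stub for an LLM agent returning a threat score in [0, 1].
    A real system would condition the prompt on the analyst persona."""
    keywords = {"auth": 0.8, "checksum": 0.5}
    return max((v for k, v in keywords.items() if k in alert["msg"]), default=0.1)

def coordinate(alert: dict) -> str:
    scores = [analyst_score(alert, p) for p in ("network", "crypto", "ops")]
    risk = sum(scores) / len(scores)               # cooperative fusion of analysts
    if risk > 0.6:
        return f"isolate {alert['source']} and rotate credentials"
    return f"raise monitoring level on {alert['source']}"

for alert in ALERTS:
    print(alert["source"], "->", coordinate(alert))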
In summary, leveraging LAMs as agents provides novel solutions for modern network security. Agents are capable not only of real-time response in complex security environments but also of optimizing defense strategies through collaboration, thereby offering robust guarantees for the long-term stability and security of networks.

5) Agentic AI for UAV Communication: With the continuous advancement of agent technology in UAV communications, the architecture of aerial networks is gradually shifting from traditional centralized control to a novel structure centered on “distributed agent collaboration.” Agent systems leverage advanced mechanisms such as RL, state awareness, and behavior modeling, enabling UAVs to possess stronger task-driven capabilities, autonomous decision-making, and efficient collaboration. This provides intelligent support for resource scheduling and system control in complex and dynamic communication environments.
In task execution of multi-UAV systems, agents improve UAVs' reasoning and decision-making abilities through LAMs, allowing them to better adapt to complex environments and perform tasks. In Urban Air Mobility (UAM), the AirVista framework employs Multimodal LLMs (MLLMs) as agents and enhances UAVs' 3D spatial reasoning and task execution efficiency by integrating Artificial systems, Computational experiments, and Parallel execution (ACP) methods. This framework particularly assists UAVs in urban environments to better accomplish complex tasks such as infrastructure monitoring, urban logistics delivery, and security patrols [182]. In UAV mission reliability assessment, a novel decision model combining LLMs and RAG techniques has been proposed. This enables UAVs to adapt in real time within complex dynamic environments, improving decision accuracy and responsiveness in multi-UAV networks, thereby enhancing system reliability [183].

Regarding multi-agent collaboration and task planning optimization, agents utilize LAMs to optimize collaboration and task planning among multiple UAVs, improving task completion rates and system efficiency. In UAV-assisted edge computing environments, combining LLMs with a Multi-Agent DRL (MADRL) framework and introducing the QTRAN algorithm optimizes task offloading and trajectory planning. This method effectively addresses the correlation between local observations and global states in multi-UAV systems, thereby improving task completion rates and convergence speed [184]. In UAV task generation and planning, the UAV-CodeAgents framework employs LLMs and Vision-Language Models (VLMs), combined with the ReAct (Reason + Act) paradigm, enabling UAVs to plan tasks based on satellite imagery and natural language instructions, precisely locate targets, and dynamically adjust task goals, thereby improving task execution efficiency and adaptability [185].
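The ReAct pattern underlying such planners interleaves model-generated reasoning with tool calls and observations. The compressed sketch below uses scripted "completions" in place of a real LLM and VLM; the tool set and mission are hypothetical.

# A compressed ReAct (Reason + Act) loop of the kind UAV-CodeAgents builds on.
# Scripted steps stand in for successive LLM completions.

TOOLS = {
    "locate": lambda target: f"{target} found at grid (14, 7)",
    "navigate": lambda cell: f"UAV en route to {cell}",
}

SCRIPTED_STEPS = [
    ("Thought: I must find the survey target first.", "locate", "water tower"),
    ("Thought: Now fly to the located cell.", "navigate", "(14, 7)"),
]

def react_episode() -> str:
    transcript = []
    for thought, action, arg in SCRIPTED_STEPS:
        observation = TOOLS[action](arg)           # act, then observe
        transcript += [thought,
                       f"Action: {action}[{arg}]",
                       f"Observation: {observation}"]
    transcript.append("Final: target located and approach under way")
    return "\n".join(transcript)

print(react_episode())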
In summary, agent technology assisted by LAMs significantly enhances UAVs' capabilities in task execution, decision-making, multi-agent collaboration, and task planning. These advances not only drive the development of UAV technology but also provide intelligent support for the efficient operation of future communication systems in dynamic environments.

C. Summary and Lessons Learned

1) Summary: This chapter summarizes the typical application scenarios of LAMs and Agentic AI in communication systems. LAMs are primarily used in semantic communication, IoT, edge intelligence, network design and management, security and privacy, and resource allocation, while Agentic AI is widely applied in wireless communication, semantic communication, network management and optimization, network security, and UAV communication. Their integration accelerates the intelligent and autonomous evolution of communication systems.

2) Lessons Learned: Although LAMs and Agentic AI have demonstrated great potential in semantic communication, IoT, edge intelligence, network management, and security protection, current research still faces challenges such as complex model deployment, high resource consumption, insufficient multimodal understanding capabilities, and security risks. Future research should focus on lightweight model design, cross-modal semantic fusion, system collaborative optimization, and privacy protection mechanisms to promote the continuous evolution of communication systems toward efficient, secure, and adaptive intelligent architectures [19].

VI. RESEARCH CHALLENGES AND FUTURE DIRECTIONS

A. Research Challenges and Directions of LAMs

1) Untimely Updating and Learning of Communication Data: The lack of high-quality communication data represents a critical challenge limiting the development of LAMs in communications. Due to the rapid evolution of communication technologies and the vast, complex nature of domain-specific knowledge, LAMs must continuously learn and adapt. However, obtaining timely and reliable communication data remains highly difficult.
On one hand, relevant knowledge is scattered across research papers, standards, and patents, making data collection and processing costly. On the other hand, real-world communication data is highly dynamic and often involves privacy-sensitive or confidential information, restricting its availability for public training. Furthermore, data in 6G systems is frequently affected by noise, incompleteness, and errors, requiring models to exhibit strong robustness and fault tolerance. The difficulty in data acquisition and the lack of timely learning have become major bottlenecks impeding the performance improvement and practical deployment of LAMs in communications.

To address the challenge of data scarcity faced by LAMs in communications, continual learning offers an effective development pathway. By incorporating autonomous continual learning strategies [186], LAMs are able to independently acquire and update communication knowledge in dynamic environments. Multimodal continual learning [187] further enhances their ability to process diverse communication data across modalities such as text, images, and audio, thereby expanding the scope of knowledge acquisition. Robust continual learning [188] improves model stability and adaptability in complex communication scenarios. When combined with evaluation metrics such as forward transfer, backward transfer, and forgetting measures [189], the learning process can be effectively monitored and optimized. The integration of continual learning significantly enhances LAMs' capabilities in knowledge acquisition, adaptation, and reasoning for communication tasks, thereby mitigating the challenges posed by delayed updates in communication data.
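These monitoring metrics have well-known forms: given an accuracy matrix R, where R[i][j] is the accuracy on task j after training on task i, backward transfer and forgetting can be computed as below (forward transfer additionally requires baselines from a randomly initialized model, omitted here). The matrix values are fabricated for illustration.

# Standard continual-learning metrics over an accuracy matrix.
R = [
    [0.90, 0.00, 0.00],   # after training task 1
    [0.85, 0.88, 0.00],   # after training task 2
    [0.80, 0.84, 0.91],   # after training task 3
]
T = len(R)

# Backward transfer: how training later tasks changed earlier-task accuracy.
bwt = sum(R[T - 1][j] - R[j][j] for j in range(T - 1)) / (T - 1)

# Forgetting: worst drop from a task's best historical accuracy to its final one.
forgetting = max(
    max(R[i][j] for i in range(j, T - 1)) - R[T - 1][j] for j in range(T - 1)
)

print(f"BWT = {bwt:+.3f}, forgetting = {forgetting:.3f}")

On this toy matrix, BWT is -0.07 and forgetting is 0.10, flagging that later training erodes earlier communication knowledge and that a mitigation strategy is warranted.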
2) Insufficient Reasoning Capabilities: The limited logical reasoning capabilities of LAMs have emerged as a significant challenge for their application in 6G systems [190]. As 6G communication environments require models to handle complex tasks such as signal processing and resource allocation, the ability to perform accurate logical and causal reasoning becomes essential. However, current LAMs are primarily data-driven and lack a deep understanding of causal relationships, making them inadequate for handling multi-hop and counterfactual reasoning. This often results in the generation of inaccurate or illogical solutions in complex communication scenarios, thereby compromising system stability and user experience. For instance, an LAM may fail to recognize that network congestion is caused by a sudden surge in users or that signal degradation stems from channel fading. Moreover, the inherent ambiguity of natural language and the limitations of formal logic further constrain the reasoning capabilities of LAMs. Enhancing logical understanding and reasoning in communication-specific contexts is a critical direction for future research.

To address the insufficient reasoning capabilities of LAMs, future advancements may focus on three key directions: process-based reward models [191], long-chain reasoning mechanisms [98], and RL-driven reasoning training [192]. Process-based reward models provide feedback at each step of the reasoning process, guiding the model to prioritize the coherence and validity of its reasoning paths. Long-chain reasoning mechanisms help maintain logical consistency across multi-step tasks, thereby enhancing the depth of reasoning in complex problem settings. RL-driven reasoning training leverages environmental feedback to enable the model to iteratively refine its reasoning strategies through trial and error, learning more effective reasoning trajectories from experience and improving generalization to novel tasks. The synergistic integration of these three approaches is expected to significantly enhance the logical reasoning and causal inference capabilities of LAMs in communications.
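A toy rendering of process-based reward scoring: each adjacent pair of reasoning steps receives a step-level reward, and chains are ranked by the aggregate rather than by the final answer alone. The step scorer here is a crude lexical stub standing in for a trained process reward model, and the candidate chains are invented.

# Rank candidate reasoning chains by mean step-level reward.
CHAINS = [
    ["cell load rising", "load rise traced to user surge", "surge implies adding capacity"],
    ["cell load rising", "assume channel fading", "retune beamforming blindly"],
]

def step_reward(prev_step: str, step: str) -> float:
    """Stub process reward: favors steps that reuse evidence from the prior step."""
    overlap = set(prev_step.split()) & set(step.split())
    return 0.6 + 0.4 * min(len(overlap), 1)

def chain_score(chain) -> float:
    rewards = [step_reward(a, b) for a, b in zip(chain, chain[1:])]
    return sum(rewards) / len(rewards)             # mean step-level reward

best = max(CHAINS, key=chain_score)
print("preferred chain:", " -> ".join(best))

The first chain wins because each step carries evidence forward, which is exactly the coherence signal a process reward model is trained to supply.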
3) Inadequate Explanation: Current LAMs exhibit significant limitations in interpretability, which has become a critical barrier to their application and further development. As complex black-box systems, LAMs lack effective mechanisms to reveal the underlying logic behind their decisions, resulting in limited transparency and controllability in communication tasks. Whether addressing signal processing, resource allocation, or managing diverse communication protocols and network scenarios, LAMs often fail to clearly articulate the rationale behind their outputs, thereby constraining user understanding and system-level optimization. The absence of both local and global interpretability not only undermines model trustworthiness but may also pose risks to the security and stability of communication systems. Therefore, enhancing interpretability and developing systematic explanation frameworks are essential directions for enabling the reliable deployment of LAMs in communications.

To address the issue of limited interpretability in LAMs, future efforts may leverage techniques such as causal learning [193], neuro-symbolic integration [194], feature attribution [195], and model visualization [196] to enhance model transparency. Causal learning facilitates the identification of underlying causal relationships behind model decisions, improving logical clarity; neuro-symbolic integration combines the strengths of neural networks and symbolic reasoning, enabling more traceable inference processes; feature attribution highlights the key input features influencing model predictions, helping users understand the basis of the output; and model visualization offers intuitive insights into the model's internal mechanisms, increasing user comprehension and trust. Together, these approaches contribute to making LAMs more interpretable and controllable in communication tasks.
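Of these, feature attribution is the most mechanical to illustrate. The sketch below applies permutation-style attribution to a synthetic link-quality predictor: shuffle one input at a time and measure how much the prediction error grows. The model, features, and data are invented; the baseline error is zero by construction because the "black box" equals the data generator.

import random

random.seed(0)
FEATURES = ["snr", "load", "mobility"]
data = [[random.random() for _ in FEATURES] for _ in range(200)]
truth = [0.7 * x[0] - 0.2 * x[1] + 0.05 * x[2] for x in data]   # snr dominates

def model(x):                         # pretend this is an opaque predictor
    return 0.7 * x[0] - 0.2 * x[1] + 0.05 * x[2]

def mse(xs):
    return sum((model(x) - y) ** 2 for x, y in zip(xs, truth)) / len(xs)

baseline = mse(data)
for i, name in enumerate(FEATURES):
    shuffled_col = [row[i] for row in data]
    random.shuffle(shuffled_col)      # break only this feature's association
    perturbed = [row[:i] + [v] + row[i + 1:] for row, v in zip(data, shuffled_col)]
    print(f"{name}: importance = {mse(perturbed) - baseline:.4f}")

The printed importances recover the true coefficient ordering (snr first), which is the kind of local evidence an operator needs to trust an allocation decision.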
4) Difficulties in Deployment of LAMs: LAMs face significant challenges in real-world deployment, primarily due to constraints in hardware and communication resources. Current communication devices, particularly edge devices such as mobile terminals and IoT nodes, often have limited computational and storage capabilities, which makes it challenging to meet the high computational demands of LAMs. This results in performance degradation and increased deployment costs. Additionally, the integration of LAMs increases the volume of communication data, especially in semantic communication scenarios, where the synchronization of model parameters and knowledge bases imposes higher demands on network bandwidth and latency. The scarcity of spectrum resources further exacerbates the complexity of data compression and reconstruction. Therefore, improving model efficiency under resource-constrained conditions and advancing lightweight LAM design and compression techniques are critical for enabling the practical deployment of LAMs.

To address the resource constraints associated with deploying LAMs, future efforts should focus on developing efficient model compression and acceleration techniques to reduce computational complexity and communication overhead. Pruning methods [197] can eliminate redundant structures, thereby reducing model size and computational demands. Quantization techniques [198] compress high-precision parameters into lower-bit representations, minimizing memory usage and inference latency, particularly when combined with activation-aware mechanisms that help maintain high accuracy. Knowledge Distillation (KD) [199] transfers knowledge from a large model to a smaller one, enabling lightweight deployment while preserving core capabilities, making it especially suitable for edge environments. The integration of these techniques offers a practical pathway for the efficient deployment of LAMs in 6G communications.
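Two of these levers are simple enough to show in miniature: a symmetric uniform quantization round-trip, and a distillation loss that blends teacher-matching and label-matching terms. The bit width, toy weights, and loss weighting are illustrative assumptions.

# Post-training quantization and a KD objective, in miniature.

def quantize_dequantize(weights, bits=8):
    """Symmetric uniform quantization: float -> integer grid -> float."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) * scale for w in weights]

w = [0.42, -1.37, 0.08, 0.91]
print("int8 round-trip:", [f"{v:.3f}" for v in quantize_dequantize(w)])

def distillation_loss(student, teacher, target, alpha=0.7):
    """KD objective: alpha * match-the-teacher + (1 - alpha) * match-the-label."""
    soft = sum((s - t) ** 2 for s, t in zip(student, teacher)) / len(student)
    hard = sum((s - y) ** 2 for s, y in zip(student, target)) / len(student)
    return alpha * soft + (1 - alpha) * hard

print("KD loss:", round(distillation_loss([0.2, 0.8], [0.25, 0.75], [0.0, 1.0]), 4))

The round-trip error of the quantizer and the alpha trade-off in the KD loss are precisely the accuracy-versus-footprint knobs that edge deployment must tune.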
B. Research Challenges and Directions of Agentic AI

1) The Lack of Communication Knowledge: In the development of Agentic AI, the lack of communication knowledge presents a significant challenge to its application in 6G systems. Due to the complexity and rapid evolution of communication technologies, agent systems rely heavily on high-quality communication knowledge bases to support task understanding and reasoning-based decision-making. However, current knowledge bases offer insufficient coverage of core concepts, protocols, and standards in communications, leading to misinterpretations and inaccurate reasoning in tasks such as channel estimation and resource allocation. Moreover, communication knowledge is dispersed across research papers, standards, and patents, making its collection and integration both challenging and labor-intensive, requiring domain expertise to ensure quality. While expanding the knowledge base can enhance model capabilities, it also introduces higher storage and computational costs, potentially reducing system efficiency. Therefore, constructing an efficient and comprehensive communication knowledge infrastructure is a key direction for enabling the successful deployment of Agentic AI in communications.

To address the issue of communication knowledge scarcity in Agentic AI systems for 6G communications, the introduction of the Agentic RAG mechanism [200] offers a promising solution. This approach integrates agent collaboration with semantic retrieval techniques, enabling the system to dynamically extract relevant knowledge from communication literature, standards, and patents in real time based on the task context. The retrieved information is then synthesized by agents to generate context-aware outputs, effectively overcoming the limitations of static knowledge base coverage. Agentic RAG not only enhances the system's ability to understand and reason over complex communication protocols and dynamic environments but also supports continual knowledge updates and transfer learning, thereby improving the system's adaptability and intelligent decision-making capabilities in communication scenarios.
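A minimal sketch of the retrieve-then-synthesize pattern behind Agentic RAG follows, assuming a tiny in-memory "standards corpus", a lexical-overlap retriever, and a stubbed generator; a real system would use embedding search and an actual LLM call.

# Retrieve top passages by term overlap, then ground a (stubbed) answer in them.
CORPUS = [
    "38.211: physical channels and modulation for NR",
    "38.331: RRC protocol specification for radio resource control",
    "O-RAN WG3: near-real-time RIC architecture and E2 interface",
]

def retrieve(query: str, k: int = 2):
    q = set(query.lower().split())
    ranked = sorted(CORPUS, key=lambda d: -len(q & set(d.lower().split())))
    return ranked[:k]

def generate(query: str, passages) -> str:
    """Stub for the agent's LLM call that grounds its answer in retrieved text."""
    return f"Answer to '{query}' grounded in: " + " | ".join(passages)

q = "which spec covers radio resource control"
print(generate(q, retrieve(q)))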
2) Limited Scalability of Agentic AI: Agentic AI faces significant scalability challenges when deployed in large-scale, multi-task, or multi-agent collaborative communication environments. Most existing systems still rely on centralized control architectures, where a central model manages task scheduling and resource allocation. While effective in small-scale scenarios, such architectures are prone to resource bottlenecks, latency, and even deadlocks under high concurrency or multi-system collaboration, thereby limiting overall system performance. Additionally, current multi-agent coordination mechanisms lack distributed scheduling and autonomous capabilities. Communication and task allocation among agents often depend on static configurations, which are insufficiently flexible or adaptive to handle heterogeneous tasks and dynamic environments. For example, in complex task chains, a single agent failure can lead to system-wide cascading failures, and conventional systems lack the mechanisms for rapid recovery or adaptive restructuring. These limitations expose critical weaknesses in scalability and robustness under dynamic environments.

To address the scalability limitations of Agentic AI, future efforts should focus on developing distributed and hierarchical architectures [201], [202], enhancing multi-agent collaboration mechanisms [203], and optimizing runtime resource scheduling. Architecturally, the integration of distributed control with hierarchical structures allows agents at different levels to take on decision-making, coordination, and execution roles, thereby improving the system's parallel processing capabilities and response efficiency. Regarding coordination mechanisms, incorporating federated learning enables knowledge sharing among agents, while swarm intelligence techniques facilitate self-organizing collaboration based on local observations, enhancing system flexibility and adaptability. At the operational level, the use of self-optimizing algorithms and load-balancing strategies allows the system to dynamically adjust resource allocation and task routing, effectively mitigating bottlenecks and improving overall stability and efficiency. The synergy of these approaches provides a robust foundation for building efficient, stable, and scalable Agentic AI systems.
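At the operational level, the load-balancing idea can be illustrated with a toy coordinator that always dispatches the next task to the least-loaded executor agent; the agent names and task costs are invented.

import heapq

# Hierarchical dispatch sketch: a coordinator assigns tasks to executor agents
# via a min-heap keyed by each agent's current load.
executors = [(0.0, name) for name in ("edge-a", "edge-b", "edge-c")]
heapq.heapify(executors)

tasks = [("beam-report", 2.0), ("handover-opt", 5.0), ("kpi-scan", 1.0),
         ("slice-audit", 3.0)]

for task, cost in tasks:
    load, name = heapq.heappop(executors)      # coordinator picks the lightest agent
    print(f"{task} -> {name} (load {load:.1f} -> {load + cost:.1f})")
    heapq.heappush(executors, (load + cost, name))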
3) Complexity of Agent Control Mechanisms: In Agentic AI systems, the complexity of coordination among agents and control interactions between agents and other system components presents a major research challenge. As the number of agents increases and task environments evolve dynamically, the system faces significant pressure in task decomposition, information exchange, and behavioral coordination. These challenges often lead to issues such as component scheduling conflicts and communication inconsistencies, resulting in low collaboration efficiency, high task execution latency, and difficulty in meeting the real-time, reliability, and adaptability demands of complex scenarios. Moreover, existing systems often lack standardized collaboration protocols and structured control workflows, making it difficult to support efficient organization and dynamic adaptation among large-scale heterogeneous agents. The immaturity of current control mechanisms has become a core bottleneck limiting the broader application of Agentic AI in communication systems.

To address the complexity of agent control mechanisms in Agentic AI systems, three core protocols, namely MCP [204], the Agent-to-Agent Protocol (A2A) [205], and ACP [104], offer a layered and complementary technical pathway. MCP standardizes the interaction between models and external resources, enhancing system controllability and modularity. A2A enables dynamic discovery and task collaboration among agents through mechanisms such as agent cards and the task-artifact framework, thereby simplifying cross-system coordination. ACP, built upon a REST-native architecture, supports multimodal asynchronous communication and task tracking, strengthening collaborative capabilities among agents within local environments. Together, these protocols establish a secure, flexible, and scalable agent control infrastructure.
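To convey the flavor of such structured hand-offs, the sketch below renders simplified card/task/artifact objects as JSON messages. These field names are a hypothetical simplification for illustration, not the normative A2A schema defined in [205].

from dataclasses import dataclass, asdict
import json

@dataclass
class AgentCard:            # advertises an agent's capabilities (simplified)
    name: str
    skills: list

@dataclass
class Task:                 # a unit of work handed to another agent (simplified)
    task_id: str
    goal: str
    assigned_to: str

@dataclass
class Artifact:             # the produced result, traceable back to its task
    task_id: str
    content: str

card = AgentCard("ran-optimizer", ["resource-allocation", "fault-triage"])
task = Task("t-001", "rebalance PRBs in cell 12", card.name)
artifact = Artifact(task.task_id, "PRB plan: cell 12 +15%")

for msg in (card, task, artifact):
    print(json.dumps(asdict(msg)))             # wire format for cross-agent exchange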
4) Difficulty in Evaluating Agentic AI: Current Agentic AI systems face significant challenges in evaluation, primarily due to the absence of unified and systematic assessment frameworks, which makes it difficult to comprehensively reflect agent capabilities in dynamic task settings. Existing approaches often rely on static datasets and outcome-oriented single metrics, overlooking the agent's performance throughout multi-step reasoning, tool invocation, and strategy planning processes, thereby failing to identify latent issues. Moreover, the inherent diversity and uncertainty of agent behavior make traditional evaluation methods based on fixed reference answers inadequate for assessing flexibility and generalization capability. This limitation significantly hinders accurate performance validation and system optimization [206].

To address the evaluation challenges of Agentic AI, future research should focus on developing a unified and scalable evaluation framework capable of assessing agent performance at a fine-grained level across task planning, tool invocation, and reasoning processes. The evaluation paradigm should shift from outcome-based metrics to process-oriented assessment to enhance understanding of the agent's behavioral trajectory. Additionally, automated evaluation methods such as LAM-based evaluators should be incorporated to reduce human effort and enhance evaluation efficiency. It is also essential to develop general-purpose evaluation tools that can accommodate the diversity and uncertainty of agent behaviors, thereby enabling more comprehensive and reliable assessment of Agentic AI systems.
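A compressed sketch of such process-oriented, evaluator-in-the-loop scoring follows: each step of an agent trajectory is graded (here by a stubbed judge standing in for an LAM-based evaluator) and combined under a rubric. The phases, weights, and grading heuristic are assumptions.

# Process-weighted trajectory scoring instead of outcome-only evaluation.
TRAJECTORY = [
    {"phase": "plan", "text": "decompose: diagnose, then reconfigure"},
    {"phase": "tool", "text": "called get_cell_kpis(cell=12)"},
    {"phase": "reason", "text": "KPIs show PRB exhaustion, so scale capacity"},
    {"phase": "answer", "text": "increase PRB quota for cell 12 by 15%"},
]
RUBRIC = {"plan": 0.2, "tool": 0.3, "reason": 0.3, "answer": 0.2}

def judge(step: dict) -> float:
    """Stub for an LAM judge returning a per-step score in [0, 1]."""
    cues = ("decompose", "called", "KPIs", "quota")
    return 1.0 if any(c in step["text"] for c in cues) else 0.4

score = sum(RUBRIC[s["phase"]] * judge(s) for s in TRAJECTORY)
print(f"trajectory score: {score:.2f} (process-weighted, not outcome-only)")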