text string | source string |
|---|---|
a focus on their application in knowledge management systems for digital assistants. The primary objective is to evaluate user preferences and performance when interacting with these interfaces, particularly in scenarios requiring structured information input, such as date handling and content categorization. While the... | https://arxiv.org/abs/2505.22303v1 |
in a fixed order ([M, M, E, C, C, E, C, M, E], where E, M, and C denote Easy, Medium, and Complex), which was pre-randomized with respect to task difficulty to minimize learning effects. This controlled setup allowed for a reliable comparison between the two interfaces, isolating the effects of task complexity an... | https://arxiv.org/abs/2505.22303v1 |
1.73). As expected, perceived ease-of-use decreased for both interfaces as nominal task difficulty increased; for instance, mean SEQ scores dropped from 6.52 (GUI) and 6.00 (VCMS) for ’Easy’ tasks to 5.05 (GUI) and 4.14 (VCMS) for ’Complex’ tasks. Efficiency metrics also favoured the GUI, which yielded shorter average ... | https://arxiv.org/abs/2505.22303v1 |
a strong and statistically credible preference for 'Voice CMS' over 'GUI' (Intercept Log-Odds = -11.92, 95% CrI [-36.27, -3.26]). The credible interval being entirely below zero indicates lower odds of choosing GUI compared to Voice CMS for easy tasks. No significant difference was found between 'Neutral' and 'Voice... | https://arxiv.org/abs/2505.22303v1 |
= 0.72, z = -3.13, p = 0.0018). However, there was a strong positive association between seq_diff and choosing the GUI (Estimate = 1.15, SE = 0.41, z = 2.78, p = 0.0054). This indicates that as the GUI was perceived as relatively easier (i.e., seq_diff increased), the odds of preferring the GUI increased - the correspo... | https://arxiv.org/abs/2505.22303v1 |
between participants in their baseline tendency to prefer the GUI interface (Intercept SD = 0.78), but considerably less variability in their baseline tendency to prefer the Voice CMS interface (Intercept SD = 0.24) after accounting for the time difference. Examining the fixed effects, the model for GUI preference sho... | https://arxiv.org/abs/2505.22303v1 |
(SD = 1.00, Median = 1, Range [1, 7]). Notably, the analysis showed that partial summaries, part of a user-feedback mechanism triggered after each new piece of information received by the assistant, were consistently enabled during message exchanges in this dataset. The average total number of messages exchanged per task was 4.... | https://arxiv.org/abs/2505.22303v1 |
of knowledge management tasks. Although objective accuracy was similar, some differences emerged in perceived usability, task completion speed, and notably, in user preference patterns. These differences were not uniform but were influenced by the complexity of the tasks performed, prompting a closer look at how task d... | https://arxiv.org/abs/2505.22303v1 |
a voice-only interaction, a point strongly echoed in user comments ( "wanted to be sure I entered it correctly" ,"wouldn’t turn off summary, not because of AI itself" ). However, while essential for user confidence, the number of summaries generated was not significantly associated with objective task success, suggesti... | https://arxiv.org/abs/2505.22303v1 |
confidence through effective, potentially visual, feedback mechanisms is key to broader adoption. Hybrid voice-visual interfaces represent a compelling direction to harness the strengths of conversational input while providing the assurance that users desire in the visual form, and appear to be a particularly promising... | https://arxiv.org/abs/2505.22303v1 |
in multimodal systems. International Journal of Human–Computer Interaction , 40(20):6287–6302, 2024. Hannah Limerick, James W. Moore, and David Coyle. Empirical evidence for a diminished sense of agency in speech interfaces. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems , pages ... | https://arxiv.org/abs/2505.22303v1 |
arXiv:2505.22306v1 [cs.LG] 28 May 2025. Versatile Cardiovascular Signal Generation with a Unified Diffusion Transformer. Zehua Chen1†, Yuyang Miao1,2†, Liyuan Wang1*†, Luyun Fan3, Danilo P. Mandic2 and Jun Zhu1*. 1Department of Computer Science & Technology, Institute for AI, BNRist Center, THBI Lab, Tsinghua-Bosch Joint Ce... | https://arxiv.org/abs/2505.22306v1 |
wearable devices, are susceptible to noise and interruptions, complicating their interpretation by human experts and automated algorithms. Recent efforts have sought to address these challenges by generating cardiovascular signals from recorded ones (Sec. 4.1), focusing on individual tasks such as denoising raw recor... | https://arxiv.org/abs/2505.22306v1 |
even better performance than the ground-truth signals. The generated signals further ensure interpretability through displaying diagnostic characteristics of typical abnormalities, validated by clinician assessments. To our knowledge, UniCardio represents the first unified framework for cardiovascular signal genera... | https://arxiv.org/abs/2505.22306v1 |
one-dimensional (1D) convolutional neural networks (CNNs) with various kernel sizes to extract representations across different time scales. The extracted representations of all modalities are concatenated and fed into the customized transformer modules, which further receive learnable diffusion embeddings to encod... | https://arxiv.org/abs/2505.22306v1 |
[Figure residue, panels (d)–(i): RMSE bar charts under different ECG/BP conditioning combinations. Denoising: PPG (0.0256–0.0285), Translation: PPG (0.1333–0.1808), Imputation: PPG (0.0444–0.1147), Translation: BP (6.48–10.15).] | https://arxiv.org/abs/2505.22306v1 |
4.95M. PPG-to-ECG Translation: RDDM [15] (cond. PPG) 0.5710 ±0.0140, 0.5155 ±0.0153, 0.7706 ±0.0137, 138.77M params; CardioGAN [16] (PPG) 0.4313 ±0.0100, 0.3226 ±0.0104, 0.5208 ±0.0146, 5.97M; UniCardio (PPG) 0.2747 ±0.0067, 0.1937 ±0.0070, 0.4407 ±0.0154, 4.94M; UniCardio-F (PPG) 0.1960 ±0.0062, 0.1165 ±0.0059, 0.2698 ±0.0131, 4.94M; UniCardio-M (PPG, BP) 0.1... | https://arxiv.org/abs/2505.22306v1 |
signal processing (Fig. 1). To demonstrate its practical effectiveness, we apply UniCardio to the publicly available datasets of unseen domains and explore representative applications spanning two major areas: detecting abnormal health conditions and estimating vital signs. Depending on the specific characteristics of ... | https://arxiv.org/abs/2505.22306v1 |
[Figure residue: panels overlaying reference ECG and generated signal ('Ours') over 0–4 s (axes: Time (s), Normalized Signal) for abnormalities including ST Depression, Atrial Premature Contraction, and Atrial Fibrillation.] | https://arxiv.org/abs/2505.22306v1 |
seconds, making UniCardio well-suited for real-time monitoring. Together, these advantages establish UniCardio as a practical, robust, and interpretable tool for advancing cardiovascular healthcare. 3 Discussion UniCardio introduces a unified framework for multi-modal cardiovascular signal generation. Departing from ... | https://arxiv.org/abs/2505.22306v1 |
data, modalities, and tasks, ensuring its scalability and adaptability. We emphasize that UniCardio’s full potential can be further unleashed by collecting more pre-training data covering a variety of signal modalities with both normal and abnormal conditions. Due to the limited availability of public datasets, our d... | https://arxiv.org/abs/2505.22306v1 |
of conditional distributions, expanding the total number of tasks to k × (2^k − 1) with p(x|c_x), p(y|c_y), and p(z|c_z). 4.2 Generative Framework Diffusion Models. To capture the multi-modal conditional distributions inherent in cardiovascular signals, we adopt diffusion models [20, 21] that offer distinct advantages over conve... | https://arxiv.org/abs/2505.22306v1 |
as PPG, ECG, and BP are inherently complex and exhibit diverse temporal patterns that vary across different physiological states. When generating one of them from others, the correspondence between these modalities essentially spans multiple time scales. To handle the multi-frequency components, we design a multi-sca... | https://arxiv.org/abs/2505.22306v1 |
where AM acts as the target modality (j = k), M similarly ensures information flow of intra-modal interactions within i, intra-modal interactions within k, and inter-modal interactions from i to k, while blocking other irrelevant interactions. This unified masking mechanism enables consistent handling of generative tasks ... | https://arxiv.org/abs/2505.22306v1 |
device disconnections by filtering out noisy signals based on sample entropy. This metric quantifies the regularity and complexity of time series by measuring the likelihood that a given pattern remains consistent in subsequent points. We set thresholds of 0.2, 0.3, and 0.2 for PPG, ECG, and BP signals, respectively, t... | https://arxiv.org/abs/2505.22306v1 |
dataset. The detection of AF, ST change, and hypertrophy is performed by training respective classification models based on 1D VGG-16 architectures [62]. HR estimation is conducted by detecting heartbeat peaks with common algorithms [63]. BP estimation is achieved by training a regression model based on a CNN-LSTM ... | https://arxiv.org/abs/2505.22306v1 |
arterial blood pressure: what you may not know. Critical Care Nurse 22(2), 60–79 (2002) [5] Elgendi, M., Haugg, F., Fletcher, R.R., Allen, J., Shin, H., Alian, A., Menon, C.: Recommendations for evaluating photoplethysmography-based algorithms for blood pressure assessment. Communications Medicine 4(1), 1–7 (2024) | https://arxiv.org/abs/2505.22306v1 |
distributions in multi-modal diffusion at scale. In: Proceedings of the International Conference on Machine Learning, pp. 1692–1717 (2023). PMLR [23] Peebles, W., Xie, S.: Scalable diffusion models with transformers. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4195–4205 (2023) [... | https://arxiv.org/abs/2505.22306v1 |
Reiss, A., Duerichen, R., Marberger, C., Van Laerhoven, K.: Introducing WESAD, a multimodal dataset for wearable stress and affect detection. In: Proceedings of the ACM International Conference on Multimodal Interaction, pp. 400–408 (2018) [39] Kirchhof, P., Benussi, S., Kotecha, D., Ahlsson, A., Atar, D., Casadei, ... | https://arxiv.org/abs/2505.22306v1 |
Medical and Biological Engineering 40, 149–157 (2020) [51] Cao, Y., Li, S., Liu, Y., Yan, Z., Dai, Y., Yu, P.S., Sun, L.: A comprehensive survey of AI-generated content (AIGC): A history of generative AI from GAN to ChatGPT. arXiv preprint arXiv:2303.04226 (2023) [52] Mai, W., Zhang, J., Fang, P., Zhang, Z.: Brain-co... | https://arxiv.org/abs/2505.22306v1 |
Machine Learning (2024) [67] Chen, Z., Tan, X., Wang, K., Pan, S., Mandic, D.P., He, L., Zhao, S.: Infergrad: Improving diffusion models for vocoder by considering inference in training. In: IEEE International Conference on Acoustics, Speech and Signal Processing (2023) [68] Liu, H., Chen, Z., Yuan, Y., Mei, X., Liu... | https://arxiv.org/abs/2505.22306v1 |
filtering model for continuous and accurate blood pressure estimation. In: Proceedings of the International Joint Conference on Neural Networks (2020) [Table-of-contents residue, page numbers dropped: 1 Introduction; 2 Results; 2.1 Unified multi-modal generative modeling for cardiovascular signals; 2.2 Versatile high-quality cardiovascular signal generation ...] | https://arxiv.org/abs/2505.22306v1 |
forecasting [13, 75, 76] as well as modality translation such as PPG-to-ECG synthesis [15]. However, most previous studies remain task-specific, lacking a unified framework for multi-modal signal generation. Despite the growing importance of cardiovascular monitoring, a diffusion model capable of handling multiple sign... | https://arxiv.org/abs/2505.22306v1 |
and physiological semantics. To enable unified modeling under such heterogeneity, UniCardio employs a shared, uninformative Gaussian prior derived from an unconditional forward process. This prior remains agnostic to both the generation target and the conditioning configuration. As a result, UniCardio supports versatil... | https://arxiv.org/abs/2505.22306v1 |
configurations are unified within a single generative framework without requiring task-specific retraining or model adaptation. Appendix C Unified Training Process UniCardio aims to model conditional distributions among PPG, ECG, and BP signals using a single model architecture within a unified generative framework. I... | https://arxiv.org/abs/2505.22306v1 |
Progressive Training of UniCardio Building on the principles of conditional learning, UniCardio adopts a structured training strategy to facilitate efficient multi-modal, multi-condition generation of cardiovascular signals. Rather than training task-specific models, UniCardio progressively handles increasingly const... | https://arxiv.org/abs/2505.22306v1 |
trajectory, where Eq. (D17) naturally recovers Eq. (D16). This formulation allows DDIM to synthesize samples by progressively denoising the input through a deterministic mapping guided by the learned noise estimates at each timestep. By eliminating the need for stochastic perturbations during sampling, DDIM substan... | https://arxiv.org/abs/2505.22306v1 |
and the observed condition modalities, the model progressively refines an initial Gaussian noise through deterministic updates. The subset of time steps is selected by linearly spacing the desired number of steps across the diffusion range [0, T]. The sampling procedure follows the deterministic rule described in Eq... | https://arxiv.org/abs/2505.22306v1 |
specific encoder consists of six consecutive 1D CNNs with various kernel sizes {1, 3, 5, 7, 9, 11}. The joint feature vector h_s is processed through five consecutive customized transformer modules with residual and skip connections, resulting in the final feature vector h'_s. Each modality-specific decoder is implemented... | https://arxiv.org/abs/2505.22306v1 |
by UniCardio in a tuning-free manner. [Figure residue: repeated panels overlaying reference ECG and generated signal ('Ours') over 0–4 s; axes: Time (s), Normalized Signal.] | https://arxiv.org/abs/2505.22306v1 |
arXiv:2505.22310v1 [cs.LG] 28 May 2025. From Dormant to Deleted: Tamper-Resistant Unlearning Through Weight-Space Regularization. Shoaib Ahmed Siddiqui* (University of Cambridge), Adrian Weller (University of Cambridge, The Alan Turing Institute), David Krueger (Mila), Gintare Karolina Dziugaite (Google DeepMind, Mila), Michael C. Mozer (G... | https://arxiv.org/abs/2505.22310v1 |
that make it difficult to draw clear conclusions. First, these works study unlearning knowledge, capabilities, or topics, where the problem is inherently underspecified. For example, given a dataset containing knowledge necessary for making bioweapons, the goal may be to fully remove the capability of constructing bi... | https://arxiv.org/abs/2505.22310v1 |
by incorporating terms in their objective that encourage the unlearned model to move far away from the pretrained model in the weight-space. To summarize, we make the following contributions in this work: • We show that unlearning algorithms fail to delete the influence of the forget set, which stays dormant and can res... | https://arxiv.org/abs/2505.22310v1 |
by directly fine-tuning on it. We instead show that we can restore forget set accuracy even if fine-tuning only on a subset of it, or even only the retain set. Re-emergence of attempted-to-be-unlearned knowledge via fine-tuning. Recent work in language models showed that believed-to-be-unlearned knowledge can re-emerge... | https://arxiv.org/abs/2505.22310v1 |
to measure utility. An ideal unlearning algorithm is one that is tamper resistant: upon relearning, its accuracy on the forget set does not increase more than it would by learning the relearning set anew. In other words, the forget set accuracy of A′(M_U, D_R ∪ D_F^re) should not be higher than that of A′(M_RS, D_R ∪ D_F^re). At the... | https://arxiv.org/abs/2505.22310v1 |
for relearning examples in Appendix F. We again use a small learning rate of 1e-5 without any weight decay, and optimize the model for just 10 epochs (except Fig. 1 where we optimized the model for 300 epochs). Similar to the pretraining stage, we use a cosine learning rate decay with a decay factor of 0.1. 4.1 Baselin... | https://arxiv.org/abs/2505.22310v1 |
set. We used the alternating variant (similar to SCRUB) instead of joint optimization of the two losses, as it resulted in better test accuracy as well as lower forget set accuracy. Catastrophic Forgetting [36] uses repeated fine-tuning on the retain set with a weight decay, which naturally leads to a decay in the magn... | https://arxiv.org/abs/2505.22310v1 |
unlearning methods. We use CIFAR-10 and CIFAR-100 with ResNet-18, and attempt to unlearn instances of a single class (‘airplane’ class for CIFAR-10, and ‘apple’ class for CIFAR-100), or across classes ( class agnostic ). The role of typicality for learning and unlearning. Not all examples are equally easy to unlearn, a... | https://arxiv.org/abs/2505.22310v1 |
the different unlearning methods we evaluate show a qualitatively different trend. We make the striking observation that some methods (such as Circuit Breakers, SCRUB, and Random Relabeling) are very susceptible to relearning attacks. For these methods, forget-set accuracy drops down after unlearning, near the desired ... | https://arxiv.org/abs/2505.22310v1 |
linear path between the pretrained and the unlearned (or retrained-from-scratch) model by interpolating the model parameters and batch-norm statistics using different mixing weights (shown on the x-axis). We report accuracy on the y-axis. 0 on the x-axis represents the pretrained model, while 1 represents the unlearne... | https://arxiv.org/abs/2505.22310v1 |
through objectives that either aim to induce a large distance in the weight-space, or a loss barrier between the pretrained and unlearned models. Hence, any method that directly or indirectly attempts to separate out the pretrained model and the unlearned model is an instantiation of this framework. Weight Distortion... | https://arxiv.org/abs/2505.22310v1 |
models, i.e., ResNet-34 on CIFAR-10 as presented in Fig. 9 – Appendix D. Furthermore, these results are also consistent across different datasets, i.e., CIFAR-100, as highlighted in Fig. 8 – Appendix C. The role of the retain set for relearning. Given the surprising finding that we can recover forget set accuracy while... | https://arxiv.org/abs/2505.22310v1 |
Neural Information Processing Systems , 34:10876–10889, 2021. [2]Lucas Bourtoule, Varun Chandrasekaran, Christopher A Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas Papernot. Machine unlearning. In 2021 IEEE symposium on security and privacy (SP) , pages 141–159. IEEE, 2021. [3]Yinzhi ... | https://arxiv.org/abs/2505.22310v1 |
can simplify machine unlearning. Advances in Neural Information Processing Systems , 36:51584–51605, 2023. [20] Ziheng Jiang, Chiyuan Zhang, Kunal Talwar, and Michael C Mozer. Characterizing structural regularities of labeled data in overparameterized models. arXiv preprint arXiv:2002.03206 , 2020. [21] Diederik P King... | https://arxiv.org/abs/2505.22310v1 |
Yang, Matt Jones, Michael C Mozer, and Mengye Ren. Reawakening knowledge: Anticipatory recovery from catastrophic interference via structured training. arXiv preprint arXiv:2403.09613, 2024. [38] Kairan Zhao, Meghdad Kurmanji, George-Octavian Bărbulescu, Eleni Triantafillou, and Peter Triantafillou. What makes unlea... | https://arxiv.org/abs/2505.22310v1 |
the x-axis. The retrain-from-scratch baseline achieves a low forget set accuracy. Furthermore, we see a wider spread of different methods on this more complex dataset, with the trend looking similar to the class-agnostic results presented in Fig. 2 (as it provided a more complex forget set comprising the most atypical e... | https://arxiv.org/abs/2505.22310v1 |
that using the retain set D_R achieves higher accuracy on the forget set compared to other choices. However, these differences diminish as the number of relearning examples increases, since these examples directly represent the unlearned knowledge and may become the dominant factor in recovery. When the test set is used ... | https://arxiv.org/abs/2505.22310v1 |
[Figure legend residue: test set accuracy, forget set accuracy; Retrain from scratch, TAR, CBFT, Weight Dist Reg.] Figure 6: Comparison between test set accuracy and accuracy on the held-out part of the forget set D_F^ho on CIFAR-10 and ResNet-18, where the forget set is comprised of typical examples from the 'airplane' class. The figure indicates that all ... | https://arxiv.org/abs/2505.22310v1 |
Comparison between test set accuracy and accuracy on the held-out part of the forget set D_F^ho on CIFAR-10 and ResNet-34, where the forget set is comprised of atypical examples from the 'airplane' class. We still see the distinction between robust and non-robust methods. However, the relative ranking between different m... | https://arxiv.org/abs/2505.22310v1 |
arXiv:2505.22311v1 [cs.AI] 28 May 2025. From Large AI Models to Agentic AI: A Tutorial on Future Intelligent Communications. Feibo Jiang, Senior Member, IEEE, Cunhua Pan, Senior Member, IEEE, Li Dong, Kezhi Wang, Senior Member, IEEE, Octavia A. Dobre, Fellow, IEEE, and Merouane Debbah, Fellow, IEEE. Abstract—With th... | https://arxiv.org/abs/2505.22311v1 |
communications due to their advantages in cognitive decision-making and data generation. Meanwhile, Agentic AI, as a more advanced technology based on LAMs, can actively make decisions and self-optimize, offering novel solutions for intelligent resource management and optimization in 6G networks. Therefore, the tech... | https://arxiv.org/abs/2505.22311v1 |
GPT-1 [3], a unidirectional transformer model focused on generative tasks. GPT-1 innovatively introduced the pre-training and fine-tuning paradigm and demonstrated the potential of large-scale pre-trained language models in NLP tasks. Subsequently, in 2019, GPT-2 [4] further expanded the language model’s scale and capa... | https://arxiv.org/abs/2505.22311v1 |
collaboration—officially ushering in the era of Agentic AI [12]. Overall, the Agentic stage propelled LAMs from information understanding to task execution and behavioral control, laying a crucial foundation for embodied intelligence and higher-level general intelligence. C. Related Survey Work Table I presents a c... | https://arxiv.org/abs/2505.22311v1 |
Limit, No, Yes, Limit, Limit, Limit. Remarks: For C1 to C3 and C5, it is not mentioned. For C4, it briefly mentions applications without detailed LAM use cases. For C7 to C9, it touches on agent frameworks and future directions but lacks depth and interaction details. 2025 [17]: No, No, No, Limit, No, Yes, Limit, Limit, Yes. Remarks: For C1 to C3, C5,... | https://arxiv.org/abs/2505.22311v1 |
real-time perception and modeling of network environmental states, user behavior patterns, and resource utilization efficiency, LAMs facilitate intelligent scheduling and optimal allocation of communication resources. Integrated with Reinforcement Learning (RL) or long-chain reasoning frameworks, LAMs can dynamica... | https://arxiv.org/abs/2505.22311v1 |
perspectives, including model classification, training methodologies, Agentic AI system design, application scenarios, and research challenges. The main contributions of this work are summarized in the following five aspects. 1) Systematic Review of Core Components and Model Classification in LAMs: A comprehensive sy... | https://arxiv.org/abs/2505.22311v1 |
connect the encoder and decoder. Self-attention is a key technique in the Transformer architecture. It enables the model to consider all other words (or tokens) in the sequence when processing a particular word, computing weighted representations based on relevance. Additionally, Google introduced multi-head self-att... | https://arxiv.org/abs/2505.22311v1 |
them into a standard Transformer encoder in the same way as word sequences are processed. Various visual tasks are then performed through different output layers, as illustrated below: y = Encoder(Concat(z_cls, Flatten(Patch(I))) + E_pos), (2) where I denotes the input image, Patch(·) and Flatten(·) refer to the patching a... | https://arxiv.org/abs/2505.22311v1 |
through a "noise addition–denoising" process. Diffusion models operate through two key processes: the forward diffusion process, which gradually adds Gaussian noise to the data (e.g., images, videos) until it becomes pure noise; and the reverse denoising process, which starts from pure noise and progressively removes t... | https://arxiv.org/abs/2505.22311v1 |
the Transformer architecture in generative tasks. It has inspired the design of numerous subsequent large generative models, particularly world models such as OpenAI’s Sora [53], which also adopt the DiT architecture at their core to process spatiotemporal latent representation blocks. DiT has proven that the Transfo... | https://arxiv.org/abs/2505.22311v1 |
typically in the tens or even hundreds of billions. Their core capability lies in understanding and generating human-like natural language, enabling them to perform a wide range of language-related tasks with remarkable generalization and adaptability. By learning grammar, semantics, and commonsense knowledge from larg... | https://arxiv.org/abs/2505.22311v1 |
These models are capable of performing complex cross-modal reasoning, content generation, and seamless interaction. LMMs are widely regarded as a critical step toward achieving more general forms of AI. In terms of architectural design, LMMs are typically built upon powerful unimodal backbones and incorporate more so... | https://arxiv.org/abs/2505.22311v1 |
[82] have exhibited reasoning abilities on par with o1-like models in domain-specific tasks such as code generation, further enriching the diversity of the LRM ecosystem. In communications, LRMs are primarily applied to enhance system intelligence, adaptability, and security [84] by leveraging their powerful multi-st... | https://arxiv.org/abs/2505.22311v1 |
for readers to understand the current technology system of LAMs. TABLE III: Classification of LAMs and their applications in communications. Columns: LAM Category; Components; Specific Models; Application Scenarios. Row: Large Language Model; Transformer, MoE; GPT series, Gemma series, LLaMA series; Semantic Communication [65], Network M... | https://arxiv.org/abs/2505.22311v1 |
sociated with core communication theories. For example, “6G” represents the latest generation of mobile communication technologies, and “VoIP” refers to voice communication over IP networks, both indicating clear communication-specific contexts. • High Frequency: Keywords should be commonly used terms in professional... | https://arxiv.org/abs/2505.22311v1 |
communication tasks. The following are several representative instruction-tuning datasets: • TelecomInstruct Dataset [92] is an instruction-tuning dataset tailored for telecommunication tasks. It covers a wide range of task types, including question answering, document classification, code generation, and protocol inter... | https://arxiv.org/abs/2505.22311v1 |
training, the learning objective is based on causal language modeling, where the model predicts the next token given the preceding word sequence. Formally, let the input text be represented by a word sequence x = (x_1, ..., x_T) and θ denote the model parameters. The LAM is trained by minimizing the negative log-likelihood ... | https://arxiv.org/abs/2505.22311v1 |
the input prompt or task instruction, y_w represents the preferred response according to human feedback, and y_l denotes the less preferred response. π_θ is the current model being optimized, while π_ref is the reference model, typically a fine-tuned model without preference alignment. π_θ(y_w|x) denotes the conditional probab... | https://arxiv.org/abs/2505.22311v1 |
ations within communication documents. By constructing a KG, the model can identify not only explicit relationships (e.g., a protocol belonging to a specific standard) but also infer implicit ones (e.g., the relevance of a particular technology to multiple protocols). For instance, in the CommGPT system [94], KGs are i... | https://arxiv.org/abs/2505.22311v1 |
planner to reason about and decompose tasks, invokes external tools to perform operational steps, and employs the memory module to store and retrieve historical information for reflection and continuous task optimization. The system architecture of LAM-based Agentic AI is shown in Fig. 4. 1) LAMs: In an Agentic AI syst... | https://arxiv.org/abs/2505.22311v1 |
highly relevant external knowledge. This module integrates communication-related documents (e.g., research papers, standards, protocols) and AI domain knowledge. It is constructed through semantic encoding and knowledge embedding, enabling efficient alignment with the knowledge retrieval requirements of LAMs. The... | https://arxiv.org/abs/2505.22311v1 |
rely on the comprehension and generation capabilities of LAMs to engage in autonomous or collaborative interactions centered around task objectives and the external environment. Depending on the scope and nature of the interaction, agent interaction mechanisms can be categorized into two types: single-agent interaction... | https://arxiv.org/abs/2505.22311v1 |
Architecture The CommLLM framework [25] establishes a LAM-centric, multi-agent collaborative system architecture for 6G communications. The schematic diagram of CommLLM is shown in Fig. 5. This architecture integrates a knowledge base, planners, tools, and memory modules to support intelligent decision-making and ta... | https://arxiv.org/abs/2505.22311v1 |
system’s responsiveness, reasoning robustness, and generation quality in multi-objective, constraint-rich scenarios. It also lays the groundwork for the evaluation and refinement process carried out by the Multi-agent Evaluation and Reflection (MER) module. MCP effectively addresses the “path limitation” and “sin... | https://arxiv.org/abs/2505.22311v1 |
LAMS AND AGENTIC AI A. The Application Scenarios of LAMs 1) LAMs for Semantic Communication: With the remarkable capabilities of LAMs in natural language understanding and cross-modal reasoning tasks, semantic communication is gradually evolving from the traditional bit transmission paradigm toward an intelligence-d... | https://arxiv.org/abs/2505.22311v1 |
GPT-4 for semantic extraction and reconstruction. Additionally, Conditional GANs (CGANs) are employed for channel estimation, significantly improving the transmission efficiency of the semantic communication system under complex channel conditions and multimodal data scenarios [65]. Moreover, in 6G semantic communi... | https://arxiv.org/abs/2505.22311v1 |
reasoning and distributed learning, a resource-adaptive optimization framework can be established, significantly improving the responsiveness, privacy preservation, and cross-device generalization capabilities of IoT systems in applications such as smart healthcare, intelligent cities, and home automation [124]. In ... | https://arxiv.org/abs/2505.22311v1 |
the development of edge intelligence. In edge training and inference optimization, to address the inefficiencies of federated learning in training LAMs at the edge, LLMs are utilized with forward gradient training and PEFT. This enables low-memory, on-device inference and training on mobile devices, supports Neural Pro... | https://arxiv.org/abs/2505.22311v1 |
Design and Management: As 6G networks evolve toward greater intelligence and autonomy, traditional rule-based approaches to network design and management face limitations in flexibility and generalization. Leveraging their powerful capabilities in semantic understanding, intent recognition, task planning, and progr... | https://arxiv.org/abs/2505.22311v1 |
LLMs are used to generate and optimize Adaptive Bitrate (ABR) algorithms by automatically designing state models and neural network structures. Coupled with prompt engineering and filtering mechanisms, this approach enables efficient algorithm customization and performance enhancement across diverse network environment... | https://arxiv.org/abs/2505.22311v1 |
by employing encrypted vocabulary reordering, geometric transformations in the embedding space, and client-side pre-encryption mechanisms. These techniques effectively prevent input leakage and the risk of model parameter reconstruction during communication, thereby enhancing security and privacy in cross-task inf... | https://arxiv.org/abs/2505.22311v1 |
intelligent network resource scheduling, a MoE architecture integrated with the GPT family of LLMs replaces traditional gating networks. By interpreting user goals from natural language inputs and selecting the optimal combination of experts, this method effectively supports adaptive optimization for resource allocatio... | https://arxiv.org/abs/2505.22311v1 |
demands of 6G networks for efficient, intelligent, and personalized services. In network optimization, utilizing LAMs as agents can significantly enhance the intelligence and optimization performance of networks. In 6G networks, using LLMs as agents to collaboratively complete tasks and optimize network performance... | https://arxiv.org/abs/2505.22311v1 |
LLMs as agents to establish the Shared KB (SKB) enhances communication efficiency, and the proposed Multi-User Generative Semantic Communication (M-GSC) framework extends the capabilities of LLMs to handle complex multi-user tasks, optimize network resource usage, and overcome challenges in semantic encoding and decodi... | https://arxiv.org/abs/2505.22311v1 |
network management and optimization greatly enhances network adaptability and intelligence. Through autonomous learning, collaboration, and task decomposition, agents make network management more efficient and network optimization more precise, driving the intelligent development of next-generation network syste... | https://arxiv.org/abs/2505.22311v1 |
(MLLMs) as agents and enhances UAVs’ 3D spatial reasoning and task execution efficiency by integrating Artificial systems, Computational experiments, and Parallel execution (ACP) methods. This framework particularly assists UAVs in urban environments to better accomplish complex tasks such as infrastructure monitorin... | https://arxiv.org/abs/2505.22311v1 |
difficult. On one hand, relevant knowledge is scattered across research papers, standards, and patents, making data collection and processing costly. On the other hand, real-world communication data is highly dynamic and often involves privacy-sensitive or confidential information, restricting its availability for publ... | https://arxiv.org/abs/2505.22311v1 |
reasoning and causal inference capabilities of LAMs in communications. 3) Inadequate Explanation: Current LAMs exhibit significant limitations in interpretability, which has become a critical barrier to their application and further development. As complex black-box systems, LAMs lack effective mechanisms to reveal t... | https://arxiv.org/abs/2505.22311v1 |
B. Research Challenges and Directions of Agentic AI 1) The Lack of Communication Knowledge: In the development of Agentic AI, the lack of communication knowledge presents a significant challenge to its application in 6G systems. Due to the complexity and rapid evolution of communication technologies, agent systems ... | https://arxiv.org/abs/2505.22311v1 |
techniques facilitate self-organizing collaboration based on local observations, enhancing system flexibility and adaptability. At the operational level, the use of self-optimizing algorithms and load-balancing strategies allows the system to dynamically adjust resource allocation and task routing, effectively mitiga... | https://arxiv.org/abs/2505.22311v1 |
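The rows above quote several concrete techniques; the hedged sketches below illustrate a few of them, in the order the corresponding excerpts appear. The UniCardio excerpt (arXiv:2505.22306) counts k × (2^k − 1) conditional-generation tasks over k signal modalities. One plausible reading, assumed here rather than taken from the paper, is that each target modality is paired with every non-empty subset of conditioning modalities, which reproduces the count:

```python
from itertools import combinations

def enumerate_tasks(modalities):
    """Pair each target modality with every non-empty subset of conditioning
    modalities, giving k * (2**k - 1) (target, condition) tasks in total."""
    k = len(modalities)
    tasks = [(target, cond)
             for target in modalities
             for r in range(1, k + 1)
             for cond in combinations(modalities, r)]
    assert len(tasks) == k * (2 ** k - 1)
    return tasks

tasks = enumerate_tasks(["PPG", "ECG", "BP"])
print(len(tasks))  # 21 = 3 * (2**3 - 1) for the three modalities in the excerpt
```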
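The masking passage from the same paper allows intra-modal attention within a condition modality i, intra-modal attention within the target modality k, and inter-modal attention from i to k, while blocking other interactions. A boolean-mask sketch of that pattern follows; the equal block lengths, index layout, and mask convention (True = allowed) are assumptions, since the excerpt does not specify them.

```python
import torch

def modal_attention_mask(n_tokens, cond_idx, target_idx, n_modalities):
    """Build a (queries x keys) boolean mask over modality blocks; True = attend."""
    total = n_tokens * n_modalities
    mask = torch.zeros(total, total, dtype=torch.bool)

    def allow(q_mod, k_mod):  # let queries of modality q_mod attend to keys of k_mod
        mask[q_mod * n_tokens:(q_mod + 1) * n_tokens,
             k_mod * n_tokens:(k_mod + 1) * n_tokens] = True

    allow(cond_idx, cond_idx)      # intra-modal interactions within condition i
    allow(target_idx, target_idx)  # intra-modal interactions within target k
    allow(target_idx, cond_idx)    # inter-modal flow from i to k
    return mask

print(modal_attention_mask(n_tokens=2, cond_idx=0, target_idx=2, n_modalities=3).int())
```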
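The preprocessing excerpt filters out noisy segments whose sample entropy exceeds thresholds of 0.2 (PPG), 0.3 (ECG), and 0.2 (BP). A compact, unoptimized sample-entropy routine is sketched below with the common defaults m = 2 and r = 0.2·std; those defaults and the segment length are assumptions, as the excerpt only states the thresholds.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Naive O(n^2) sample entropy of a 1-D signal (Chebyshev distance)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)

    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)
        return (d <= r).sum() - len(t)  # exclude self-matches on the diagonal

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

THRESHOLDS = {"PPG": 0.2, "ECG": 0.3, "BP": 0.2}

def keep_segment(segment, modality):
    """Keep a segment only if it is regular enough for its modality's threshold."""
    return sample_entropy(segment) <= THRESHOLDS[modality]

print(keep_segment(np.sin(np.linspace(0, 8 * np.pi, 400)), "PPG"))  # clean sine passes
```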
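The architecture excerpts describe modality-specific encoders built from six consecutive 1D CNNs with kernel sizes {1, 3, 5, 7, 9, 11}. A minimal sketch is below; the channel width, activation, and padding are illustrative choices, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Six consecutive 1D conv layers with kernel sizes {1, 3, 5, 7, 9, 11}."""
    def __init__(self, in_ch=1, hidden=64):
        super().__init__()
        layers, ch = [], in_ch
        for k in (1, 3, 5, 7, 9, 11):
            layers += [nn.Conv1d(ch, hidden, kernel_size=k, padding=k // 2), nn.SiLU()]
            ch = hidden
        self.net = nn.Sequential(*layers)

    def forward(self, x):  # x: (batch, channels, time)
        return self.net(x)

h = ModalityEncoder()(torch.randn(2, 1, 500))
print(h.shape)  # torch.Size([2, 64, 500]); length preserved by symmetric padding
```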
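The sampling excerpt (deterministic refinement from Gaussian noise over linearly spaced timesteps) matches the standard DDIM rule with eta = 0. The sketch below assumes a generic noise-prediction network eps_model(x, t, cond) and a linear beta schedule; neither is taken from the paper.

```python
import torch

def ddim_sample(eps_model, cond, shape, T=1000, steps=50, device="cpu"):
    """Deterministic DDIM sampling over `steps` linearly spaced timesteps in [0, T)."""
    betas = torch.linspace(1e-4, 2e-2, T, device=device)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)
    timesteps = torch.linspace(T - 1, 0, steps, device=device).long()

    x = torch.randn(shape, device=device)  # start from the shared Gaussian prior
    for i, t in enumerate(timesteps):
        a_t = alpha_bar[t]
        a_prev = alpha_bar[timesteps[i + 1]] if i + 1 < steps else torch.tensor(1.0, device=device)
        eps = eps_model(x, t, cond)                                     # predicted noise
        x0_hat = (x - torch.sqrt(1 - a_t) * eps) / torch.sqrt(a_t)      # predicted clean signal
        x = torch.sqrt(a_prev) * x0_hat + torch.sqrt(1 - a_prev) * eps  # eta = 0 update
    return x

dummy = lambda x, t, c: torch.zeros_like(x)  # placeholder noise predictor
print(ddim_sample(dummy, cond=None, shape=(1, 1, 500)).shape)
```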
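The tamper-resistance criterion quoted from arXiv:2505.22310 can be written compactly. With A′ the relearning procedure, M_U the unlearned model, M_RS the retrain-from-scratch model, D_R the retain set, and D_F^re the relearning subset of the forget set, the excerpt's condition is:

```latex
\mathrm{Acc}_{D_F}\!\left(A'\!\left(M_U,\, D_R \cup D_F^{\mathrm{re}}\right)\right)
\;\le\;
\mathrm{Acc}_{D_F}\!\left(A'\!\left(M_{RS},\, D_R \cup D_F^{\mathrm{re}}\right)\right)
```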
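The interpolation analysis from the same paper walks a linear path between the pretrained and unlearned checkpoints, mixing parameters and batch-norm statistics. A minimal PyTorch sketch of that probe follows; the evaluate callback and checkpoints are placeholders, and integer buffers such as num_batches_tracked are carried over unchanged rather than interpolated.

```python
import copy
import torch

def interpolate_state(sd_a, sd_b, alpha):
    """Linearly mix two state dicts (parameters and batch-norm running stats)."""
    return {k: (1 - alpha) * sd_a[k] + alpha * sd_b[k] if sd_a[k].is_floating_point()
            else sd_b[k]  # e.g. num_batches_tracked stays an integer
            for k in sd_a}

def interpolation_curve(model, sd_pretrained, sd_unlearned, loader, evaluate, steps=11):
    """alpha = 0 is the pretrained model, alpha = 1 the unlearned model."""
    curve = []
    for alpha in torch.linspace(0, 1, steps).tolist():
        probe = copy.deepcopy(model)
        probe.load_state_dict(interpolate_state(sd_pretrained, sd_unlearned, alpha))
        curve.append((alpha, evaluate(probe, loader)))  # e.g. forget-set accuracy
    return curve

# toy usage with a dummy evaluate() and two slightly different checkpoints
net = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.BatchNorm1d(8), torch.nn.Linear(8, 2))
sd_a = copy.deepcopy(net.state_dict())
sd_b = {k: v + 0.1 if v.is_floating_point() else v for k, v in sd_a.items()}
print(interpolation_curve(net, sd_a, sd_b, None, lambda m, _: 0.0)[:2])
```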
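Equation (2) in the arXiv:2505.22311 excerpt, y = Encoder(Concat(z_cls, Flatten(Patch(I))) + E_pos), maps directly onto a small Vision Transformer front end. The sketch below is illustrative only: patch size, embedding width, and depth are assumed values, not those of any model discussed in the tutorial.

```python
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    """Minimal rendering of Eq. (2): y = Encoder(Concat(z_cls, Flatten(Patch(I))) + E_pos)."""
    def __init__(self, img=224, patch=16, dim=256, depth=4, heads=8):
        super().__init__()
        n_patches = (img // patch) ** 2
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # Patch(.)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))                  # z_cls
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches + 1, dim))      # E_pos
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)                     # Encoder(.)

    def forward(self, images):  # images: (B, 3, H, W)
        z = self.patch_embed(images).flatten(2).transpose(1, 2)  # Flatten(Patch(I))
        cls = self.cls_token.expand(z.size(0), -1, -1)
        z = torch.cat([cls, z], dim=1) + self.pos_embed          # Concat(...) + E_pos
        return self.encoder(z)                                   # y

print(TinyViT()(torch.randn(2, 3, 224, 224)).shape)  # torch.Size([2, 197, 256])
```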
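The pre-training excerpt from the same tutorial defines the causal language-modeling objective over a sequence x = (x_1, ..., x_T) with parameters θ. The excerpt cuts off before the formula, so the following is the standard negative log-likelihood form that the description implies:

```latex
\mathcal{L}_{\mathrm{CLM}}(\theta) \;=\; -\sum_{t=1}^{T} \log p_{\theta}\!\left(x_t \mid x_{1}, \ldots, x_{t-1}\right)
```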
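The preference-alignment excerpt introduces π_θ, π_ref, y_w, and y_l; these are the ingredients of the standard Direct Preference Optimization loss. The exact objective used in the tutorial is not quoted, so the textbook form is shown here as an assumption:

```latex
\mathcal{L}_{\mathrm{DPO}}(\theta) \;=\; -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_{\theta}(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} \;-\; \beta \log \frac{\pi_{\theta}(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
```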