Ability of Large Language Models to Express Personality Traits. Preprint, arXiv:2305.02547. Yiqiao Jin, Qinlin Zhao, Yiyang Wang, Hao Chen, Kaijie Zhu, Yijia Xiao, and Jindong Wang. 2024. AgentReview: Exploring peer review dynamics with LLM agents. In Proceedings of the 2024 Conference on Empirical Methods in Na...
https://arxiv.org/abs/2505.21116v1
Ideas With the Self and in Collaboration With Large Language Models. arXiv preprint arXiv:2403.12928. Yi-Cheng Lin, Tzu-Quan Lin, Chih-Kai Yang, Ke-Han Lu, Wei-Chih Chen, Chun-Yi Kuan, and Hung-Yi Lee. 2024a. Listen and speak fairly: a study on semantic gender bias in speech integrated large language models. In 2024...
U.S. Copyright Office. Bo Pan, Jiaying Lu, Ke Wang, Li Zheng, Zhen Wen, Yingchaojie Feng, Minfeng Zhu, and Wei Chen. 2024. AgentCoord: Visually exploring coordination strategy for LLM-based multi-agent collaboration. Preprint, arXiv:2404.11943. Joon Sung Park, Joseph O'Brien, Carrie Jun Cai, Meredith Ringel Morri...
algorithms. In 2017 International Conference on Computing, Communication and Automation (ICCCA). Peiyang Song, Kaiyu Yang, and Anima Anandkumar. 2025. Lean Copilot: Large language models as copilots for theorem proving in Lean. Preprint, arXiv:2404.12534. Aarohi Srivastava et al. 2023. Beyond the imitation game: Qu...
Yun Wan and Yoram M Kalman. 2025. Using Generative AI Personas Increases Collective Diversity in Human Ideation. Preprint, arXiv:2504.13868. Boshi Wang, Xiang Deng, and Huan Sun. 2022. Iteratively prompt pre-trained language models for chain of thought. In Proceedings of the 2022 Conference on Empirical Methods...
task graph-driven framework for asynchronous and parallel LLM-based multi-agent systems. Preprint, arXiv:2503.07675. Mingyue Yuan, Jieshan Chen, and Aaron Quigley. 2024. MaxPrototyper: A multi-agent generation system for interactive user interface prototyping. Preprint, arXiv:2405.07131. J.D. Zamfirescu-Pereira, E...
Liu, Cheng Gao. 2024. Systematic Idea Refinement for Machine Learning Research Agents. In Submitted to Tsinghua University Course: Advanced Machine Learning & Machine Learning. Under review.

A Proactivity Spectrum Supplementary

This section details how we classify the proactivity levels shown in Fig. 2. We classify ag...
strong by leaning on a solid human-driven backbone and manual evaluation. While the results are strong, this approach also imposes an excessive load on designers and creators (Wan et al., 2024; He et al., 2024; Lim and Perrault, 2024).

MAS Technique | Task Domain | Framework
Divergent Exploration | AUT and RAT | Long-Term Guidance (202...
Discussion (2024) | Model-Generated | Self-Defined
PersonaGym (2024) | Human-Defined | Self-Defined
Baby-AIGS-MLer (2024) | Human-Defined | Assistant
SPARKIT (2024) | Human-Defined | Self-Defined
Multi-Agent Debate (2024) | Human-Defined | Debater
Acceleron (2024) | Human-Defined | Mentor & Colleague
ChainBuddy (2025) | Human-Defined | Mentor & P...
arXiv:2505.21119v1 [cs.LG] 27 May 2025
Universal Value-Function Uncertainties
Moritz A. Zanger, Max Weltevrede, Yaniv Oren, Pascal R. Van der Vaart, Caroline Horsch, Wendelin Böhmer, Matthijs T. J. Spaan
Department of Intelligent Systems, Delft University of Technology, Delft, 2628 XE, The Netherlands
Correspondence: m.a...
https://arxiv.org/abs/2505.21119v1
random network distillation (RND) [Burda et al., 2019], pseudo counts [Bellemare et al., 2016] or intrinsic curiosity [Pathak et al., 2017] efficiently capture myopic epistemic uncertainty but require additional propagation mechanisms to obtain value uncertainties [O’Donoghue et al., 2018, Janz et al., 2019, Zhou et al...
and transitions to a new state $S_{t+1} \sim P(\cdot \mid S_t, A_t)$. We quantify the merit of taking action $A_t = a$ in state $S_t = s$ and subsequently following policy $\pi$ by the action-value function, or Q-function, $Q^\pi : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$, which accounts for the cumulative discounted future rewards and adheres to a recursive consistency condition described...
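The consistency condition itself is cut off in this excerpt; the standard Bellman form it presumably refers to is:

$$Q^\pi(s, a) = \mathbb{E}\left[\, R_t + \gamma\, Q^\pi(S_{t+1}, A_{t+1}) \;\middle|\; S_t = s,\, A_t = a \,\right].$$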
of infinite width $n$, the function initialization $f(\cdot, \theta_0)$, as shown by Lee et al. [2018], is equivalent to a Gaussian process prior with a specific kernel $\kappa : \mathbb{R}^{n_{in}} \times \mathbb{R}^{n_{in}} \to \mathbb{R}$ called the neural network Gaussian process (NNGP). The functional evolution of $f$ through gradient flow is then governed by a gradient inner product ker...
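The gradient inner product kernel (the NTK) can be made concrete at finite width; a minimal sketch, assuming a toy one-hidden-layer tanh network rather than the paper's architecture:

```python
import numpy as np

# Empirical NTK for a toy network f(x) = w2 . tanh(W1 x) / sqrt(n).
# Theta(x, x') = grad_theta f(x) . grad_theta f(x'); as n grows this
# approaches the deterministic infinite-width kernel discussed in the text.
def empirical_ntk(x1, x2, W1, w2):
    n = W1.shape[0]
    a1, a2 = np.tanh(W1 @ x1), np.tanh(W1 @ x2)
    # contribution of gradients w.r.t. output weights w2
    g_out = (a1 / np.sqrt(n)) @ (a2 / np.sqrt(n))
    # contribution of gradients w.r.t. input weights W1
    d1 = (w2 * (1 - a1**2))[:, None] * x1[None, :] / np.sqrt(n)
    d2 = (w2 * (1 - a2**2))[:, None] * x2[None, :] / np.sqrt(n)
    return g_out + np.sum(d1 * d2)

rng = np.random.default_rng(0)
n, n_in = 4096, 4
W1, w2 = rng.normal(size=(n, n_in)), rng.normal(size=n)
x, xp = rng.normal(size=n_in), rng.normal(size=n_in)
print(empirical_ntk(x, xp, W1, w2))
```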
setting outlined in the previous Section 2.1. Due to the use of bootstrapped TD losses, the closed-form NTK-regime solutions in Eq. 4 do not apply to deep value function ensembles. An alternative to the above approach is the propagation of myopic uncertainty estimates. Several prior methods [O'Donoghue et al., 2018, Zho...
$g$ and hence achieves zero loss according to Eq. (7). Therefore, if the dataset $X$ sufficiently covers the dynamics induced by $\pi(\cdot|s, z)$, the online network $u(s, a, z, \vartheta_0)$ is able to recover $g(s, a, z, \psi_0)$.
Figure 2: (left) Illustration of uncertainty estimation in tabular UVU with 4 independently initialized tables for...
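Putting the pieces of the method together in code may help; a minimal sketch, assuming simple MLPs and omitting the policy-conditioning input z and the multi-head batching of the actual architecture (Appendix B.1):

```python
import torch
import torch.nn as nn

def mlp(d_in, d_out=1, hidden=64):
    return nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(), nn.Linear(hidden, d_out))

sa_dim = 8                 # toy state-action input dimension (assumption)
target = mlp(sa_dim)       # fixed, randomly initialized target g(.; psi_0)
online = mlp(sa_dim)       # online learner u(.; vartheta)
for p in target.parameters():
    p.requires_grad_(False)

opt, gamma = torch.optim.Adam(online.parameters(), lr=1e-3), 0.99

def uvu_update(x, x_next):
    # Synthetic reward from the fixed target: r_g = g(x) - gamma * g(x')  (Eq. 32)
    r_g = target(x) - gamma * target(x_next)
    # Semi-gradient TD(0): bootstrap from the online net, no grad through target
    td_target = (r_g + gamma * online(x_next)).detach()
    loss = ((online(x) - td_target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

def uncertainty(x):
    # UVU's estimate: squared prediction error between online and target nets
    with torch.no_grad():
        return (online(x) - target(x)) ** 2
```

By construction, g satisfies the Bellman equation with reward r_g, so on well-covered data u converges to g and the error vanishes, while off-distribution the error stays large.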
unexplored action $b$ with probability $1 - z$. We analyze the predictive variance of an ensemble of 128 universal Q-functions, each conditioned on the policy $\pi(\cdot|s, z)$. In the bottom row, we plot the squared prediction error of a single UVU model, averaged over 128 independent heads. Both approaches show peaked uncertainty in ...
set of test points $X_T$ converges to a Gaussian with mean and covariance given by
$$\mathbb{E}_{\theta_0}\left[f(X_T, \theta_\infty)\right] = \Theta_{X_T X} \Delta_X^{-1} r,$$
$$\mathrm{Cov}_{\theta_0}\left[f(X_T, \theta_\infty)\right] = \kappa_{X_T X_T} - \left(\Theta_{X_T X} \Delta_X^{-1} \Lambda_{X_T} + \mathrm{h.c.}\right) + \Theta_{X_T X} \Delta_X^{-1} \left(\Lambda_X - \gamma \Lambda_{X'}\right) \Delta_X^{-1\top} \Theta_{X X_T},$$
where $\Theta_{xx'}$ is the NTK, $\kappa_{xx'}$ is the NNGP kernel, h.c. denotes the Hermitian conjugate, and $\Delta_{\tilde{X}} = \Theta_{X\tilde{X}} - \gamma \Theta_{X'\tilde{X}}$ and $\Lambda_{\tilde{X}} = \kappa_{X\tilde{X}} - \gamma \kappa_{X'\tilde{X}}$. Proof is...
trained to convergence with errors $\epsilon_i(x, \vartheta_\infty, \psi_0)$. Let $\frac{1}{2}\bar{\epsilon}(x, \vartheta_\infty, \psi_0)^2 = \frac{1}{2M}\sum_{i=1}^{M} \epsilon_i(x, \vartheta_\infty, \psi_0)^2$ be the sample mean squared prediction error over $M$ heads. Moreover, consider $M+1$ independent converged Q-functions $Q_i(x;\theta_\infty)$ and denote their sample variance $\bar{\sigma}^2_Q(x, \theta_\infty) = \frac{1}{M}\sum_{i=1}^{M+1}\left(Q_i(x;\theta_\infty) - \bar{Q}(x;\theta_\infty)\right)^2$, where $\bar{Q}$ is the sample...
we use variations of different difficulties by increasing maximum grid sizes. Dataset Collection. A dataset $D = \{(s_i, a_i, r_i, z_i, s'_i)\}_{i=1}^{N_D}$ is collected using a policy that performs expertly but systematically fails for certain task/grid combinations (e.g., it cannot successfully open doors on the "north" wall, irr...
it is indeed able to effectively quantify value uncertainty using a single-model multi-headed architecture. We furthermore ablate UVU's dependence on network width, given that our theoretical analysis is situated in the infinite-width limit. Fig. 4 (a) shows that UVU's performance scales similarly with network width to...
Discussion
In this work, we introduced universal value-function uncertainties (UVU), an efficient single-model method for uncertainty quantification in value functions. Our method measures uncertainties as the prediction error between a fixed, random target network and an online learner trained with a temporal difference...
ensembled double Q-learning: Learning fast without a model. arXiv preprint arXiv:2101.05982, 2021. M. Chevalier-Boisvert, B. Dai, M. Towers, R. Perez-Vicente, L. Willems, S. Lahlou, S. Pal, P. S. Castro, and J. Terry. Minigrid & Miniworld: Modular & customizable reinforcement learning environments for goal-oriented ta...
q-learning for offline reinforcement learning. Advances in Neural Information Processing Systems, 33:1179–1191, 2020. S. Lahlou, M. Jain, H. Nekoei, V. I. Butoi, P. Bertin, J. Rector-Brooks, M. Korablyov, and Y. Bengio. DEUP: Direct epistemic uncertainty prediction. arXiv preprint arXiv:2102.08501, 2021. B. Lakshmi...
W. Böhmer, and S. Whiteson. Optimistic exploration even with a pessimistic initialisation. Proceedings of ICLR 2020, 2020. T. Schaul, D. Horgan, K. Gregor, and D. Silver. Universal value function approximators. In International Conference on Machine Learning, pages 1312–1320. PMLR, 2015. S. Schmitt, J. Shawe-Taylor, ...
on Artificial Intelligence, 2020.

A Theoretical Results
This section provides proofs and further theoretical results for universal value-function uncertainties (UVU).
A.1 Learning Dynamics of UVU
We begin by deriving learning dynamics for general functions with temporal difference (TD) losses and gradient descent, ...
$t \in [0, T]$ to the constant neural tangent kernel $\Theta^{t_0}_{xx'} \to \Theta_{xx'}$, provided that the integral $\int_0^T \|d_t\|_2 \, dt$ stays bounded. Here, $d_t \in \mathbb{R}^{N_D}$ is the training direction of the parameter evolution such that $\frac{d}{dt}\theta_t = -\alpha \nabla_\theta f(X, \theta)\, d_t$. In the case of semi-gradient TD learning studied here, the parameter evolution (as outlined above in Eq. (17)) is...
the infinite-width limit, s.t. $f^{\mathrm{lin}}(x, \theta_\infty) = f(x, \theta_\infty)$ and $\Theta^{t_0}_{xx'} = \Theta_{xx'}$. The post-training function $f(x, \theta_\infty)$ is given by
$$f(x, \theta_\infty) = f(x, \theta_0) - \Theta_{xX}\left(\Theta^{t_0}_{XX} - \gamma \Theta^{t_0}_{X'X}\right)^{-1}\left(f(X, \theta_0) - (\gamma f(X', \theta_0) + r)\right), \tag{27}$$
and is thus a deterministic function of the initialization $\theta_0$. Theorem 1. Let $f(x, \theta_t)$ be a NN with $L$ hidden layers of width $n_1, \ldots$
initializations $\vartheta_0, \psi_0, \theta_0$. Proof. Since our algorithm uses semi-gradient TD losses to train $u(x, \vartheta_t)$, the linearized dynamics of Theorem (1) apply. However, we consider a fixed target network $g(x; \psi_0)$ to produce synthetic rewards according to
$$r_g = g(x, \psi_0) - \gamma g(x', \psi_0). \tag{32}$$
With the post-training function as described by Eq...
variables; thus, $z^1_i(x)$ is Gaussian distributed with
$$z^1_i(x) \sim \mathcal{GP}(0, \kappa^1_{ii}), \tag{50}$$
with kernel
$$\kappa^1_{ii}(x, x') = \frac{\sigma_w^2}{n_0} x^\top x' + \sigma_b^2, \quad \text{and} \quad \kappa^1_{ij} = 0, \; i \neq j. \tag{51}$$
Induction step $l > 1$. For layers $l > 1$ we have
$$z^l_i(x) = \sigma_b b^l_i + \frac{\sigma_w}{\sqrt{n_{l-1}}} \sum_{j=1}^{n_{l-1}} w^l_{ij} x^l_j(x), \qquad x^l_j(x) = \phi(z^{l-1}_j(x)). \tag{52}$$
By the induction assumption, $z^{l-1}_j(x)$ are generated b...
$$\ldots(\cdot, \theta_l) = \begin{cases} k^l_{ii}(x, x'), & i = j, \\ 0, & i \neq j. \end{cases} \tag{68}$$
For the l.h.s., we first apply the chain rule to obtain
$$\nabla_{\theta_{l-1}} z^l_i(x, \theta_l) = \frac{\sigma_w}{\sqrt{n_{l-1}}} \sum_j w^l_{ij}\, \dot{\phi}(z^{l-1}_j(x, \theta_{l-1}))\, \nabla_{\theta_{l-1}} z^{l-1}_j(x, \theta_{l-1}). \tag{69}$$
The gradient inner product of outputs $i$ and $j$ thus reduces to
$$\nabla_{\theta_{l-1}} z^l_i(x, \theta_l)^\top \nabla_{\theta_{l-1}} z^l_j(x', \theta_l) = \frac{\sigma_w^2}{n_{l-1}} \sum_k w^l_{ik} w^l_{jk}\, \dot{\phi}(z^{l-1}_k(x, \theta_{l-1}))\ldots$$
$M$ degrees of freedom:
$$\frac{1}{M} \sum_{i=1}^{M+1} \frac{1}{2}\left(Q_i(x;\theta_\infty) - \bar{Q}(x;\theta_\infty)\right)^2 \sim \frac{\sigma^2_Q}{M} \chi^2(M), \tag{76}$$
where $\bar{Q}(x;\theta_\infty) = \frac{1}{M+1}\sum_{i=1}^{M+1} Q_i(x;\theta_\infty)$ is the sample mean of $M+1$ universal Q-functions, completing the proof.
A.3 Limitations and Assumptions
In this section, we detail central theoretical underpinnings and idealizations upon which our theoretical an...
$o_{\text{agent-pos}} \in \mathbb{R}^2$ is the agent position in x,y-coordinates, $o_{\text{agent-dir}} \in \mathbb{R}$ is a scalar integer indicating the agent direction (taking values between 1 and 4), $o_{\text{door-config}} \in \mathbb{R}^{24}$ is the door configuration, comprising 4 one-hot encoded vectors indicating each door's color, and $o_{\text{door-pos}} \in \mathbb{R}^8$ is a vector containing the x,y-posi...
online agent used for data collection.
Architectural Details. We use a hypernetwork MLP architecture adapted to the DQN setting, as depicted in Fig. 5. Specifically, this means we pass states $s$ and task encodings $z$ through single-layer encoders, which are then joined by elementwise multiplication. The resulting ve...
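A minimal sketch of that fusion step, with dimensions, activation, and head structure as placeholder assumptions (Fig. 5 has the actual design):

```python
import torch.nn as nn

class StateTaskFusion(nn.Module):
    """State s and task encoding z pass through single-layer encoders and are
    joined by elementwise multiplication, as described in the text."""
    def __init__(self, s_dim, z_dim, hidden=256, n_actions=7):
        super().__init__()
        self.s_enc = nn.Linear(s_dim, hidden)  # single-layer state encoder
        self.z_enc = nn.Linear(z_dim, hidden)  # single-layer task encoder
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(hidden, n_actions))

    def forward(self, s, z):
        fused = self.s_enc(s) * self.z_enc(z)  # elementwise join
        return self.head(fused)                # Q-values per action
```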
modeling approaches, including uniform choices for elements like network architectures and optimization algorithms, where appropriate. Specifically, every experiment employed the same architecture as detailed in Appendix B.1. Key hyperparameters, encompassing both foundational and algorithm-specific settings, were tune...
arXiv:2505.21136v2 [cs.LG] 28 May 2025
SageAttention2++: A More Efficient Implementation of SageAttention2
Jintao Zhang¹, Xiaoming Xu¹, Jia Wei¹, Haofeng Huang¹, Pengle Zhang¹, Chendong Xiang¹, Jun Zhu¹, Jianfei Chen¹
Abstract
The efficiency of attention is critical because its time complexity grows quadratically with sequence len...
https://arxiv.org/abs/2505.21136v2
Matmul input | Accumulator | Speedup (RTX4090, RTX5090)
FP16 | FP32 | 1x
FP8 | FP32 | 2x
FP8 | FP16 | 4x

2.1. SageAttention2
SageAttention2 (Zhang et al., 2025a) is a quantization (Zhang et al., 2025g; Hu et al., 2025) method based on FlashAttention (Dao et al., 2022). FlashAttention tiles T...
Choice of $P_r$ and $V_r$. Table 2 shows attention accuracy for feasible $(P_r, V_r)$ pairs. The results demonstrate that narrowing the quantization ranges introduces negligible error. We select $P_r = 224$ and $V_r = 4.5$ for optimal performance.
4. Experiment
Main result. SageAttention2++ achieves up to 3.9× speedup over FlashAttention2 while ...
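To make the range narrowing concrete, here is a hedged PyTorch sketch of FP8 (e4m3) quantization with the selected ranges; the actual implementation lives in fused CUDA kernels, and the block shapes here are illustrative only:

```python
import torch

E4M3_MAX = 448.0  # full representable range of FP8 e4m3

def quantize_fp8(x: torch.Tensor, r: float):
    """Scale x so its max magnitude maps to r (narrowed range, r < 448),
    then cast to FP8 e4m3. Returns the quantized block and its scale."""
    scale = x.abs().amax().clamp(min=1e-12) / r
    return (x / scale).to(torch.float8_e4m3fn), scale

P = torch.rand(64, 64)             # softmax block, values in [0, 1]
V = torch.randn(64, 128)           # value block
P_q, s_p = quantize_fp8(P, 224.0)  # P_r = 224
V_q, s_v = quantize_fp8(V, 4.5)    # V_r = 4.5
# The PV matmul then runs with FP8 inputs and FP16 accumulation on supported
# GPUs; the result is dequantized by multiplying with s_p * s_v.
```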
[Figure 4: Speed comparison between SageAttention2++ and baselines (FlashAttn, Sage1, Sage2(8+8), Sage2++(8+8)) on RTX5090, head dim = 64, causal = True.]
and SageAttention2 (Zhang et al., 2025a). Please note that FlashAttention3 can only run...
the faster instruction of FP8 matmul accumulated in FP16 for the matrix multiplication of PV. Experiments show that SageAttention2++ achieves a 3.9× speedup (SageAttention2 has a 3× speedup) over FlashAttention, while maintaining the same attention accuracy as SageAttention2. This means SageAttention2++ can accele...
Y., Zhang, C., Wu, Q., Luo, X., Ahn, S., Han, Z., Abdi, A. H., Li, D., Lin, C.-Y., Yang, Y., and Qiu, L. MInference 1.0: Accelerating pre-filling for long-context LLMs via dynamic sparse attention. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. Kamradt, G. Llmtest needle in a...
Paperno, D., Kruszewski, G., Lazaridou, A., Pham, N.-Q., Bernardi, R., Pezzelle, S., Baroni, M., Boleda, G., and Fernández, R. The LAMBADA dataset: Word prediction requiring a broad discourse context. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)...
B., Han, S., and Lewis, M. Efficient streaming language models with attention sinks. In The Twelfth International Conference on Learning Representations, 2024b. Xu, J., Liu, X., Wu, Y., Tong, Y., Li, Q., Ding, M., Tang, J., and Dong, Y. ImageReward: Learning and evaluating human preferences for text-to-image gen...
metrics. For text-to-text models, we use perplexity (Ppl.) (Jelinek et al., 1977) for WikiText, accuracy (Acc.) for LAMBADA and NIAH. For text-to-video models, following Zhao et al. (2025), we evaluate the quality of generated videos on five metrics: CLIPSIM and CLIP-Temp (CLIP-T) (Liu et al., 2024) to measure the text...
arXiv:2505.21140v1 [cs.LG] 27 May 2025
HeteroBA: A Structure-Manipulating Backdoor Attack on Heterogeneous Graphs
Honglin Gao, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, honglin001@e.ntu.edu.sg
Xiang Li, School of Electrical and Electronic Engineering, Nanyang Technological Un...
https://arxiv.org/abs/2505.21140v1
including recommendation systems and financial risk modeling [16, 19], etc. Heterogeneous Graph Neural Networks (HGNNs) extend GNNs to incorporate diverse relational information, making them well-suited for tasks like node classification [30, 31] and link prediction [15, 26]. In financial applications, HGNNs have be...
exceptional invisibility, setting a new standard in graph-based adversarial techniques. Specifically, when the targeted nodes of the attack have been selected, new trigger nodes are introduced into the graph to carry out the attack. These trigger nodes are strategically connected to the targeted nodes and some highly i...
Attacks introduce malicious triggers by modifying node attributes while keeping the graph structure unchanged. NFTA (Node Feature Target Attack) [1] injects feature triggers without requiring knowledge of GNN parameters, disrupting the feature space and confusing model predictions. It also introduces an adaptive str...
tween node $v_i \in \mathcal{V}_{t_a}$ and node $v_j \in \mathcal{V}_{t_b}$. We then define the edge set $\mathcal{E}$ as the union of all such edges, recorded as triples $(v_i, v_j, r_{t_a,t_b})$:
$$\mathcal{E} = \bigcup_{r_{t_a,t_b} \in \mathcal{R}} \left\{ (v_i, v_j, r_{t_a,t_b}) \;\middle|\; v_i \in \mathcal{V}_{t_a},\, v_j \in \mathcal{V}_{t_b},\, A_{t_a,t_b}[v_i, v_j] = 1 \right\}$$
[Figure 1: Overall Backdoor Attack Process on a Heterogeneous Graph.]
Hence, each adja...
serted trigger node $v^{(new)}_{t_{tr}}$ to ensure both attack effectiveness and stealthiness is computationally expensive and challenging, HeteroBA decomposes the attack process into two key components, addressing the following two core challenges: (i) how to generate the features of the inserted trigger node $v^{(new)}$...
$X'_{t_{tr}}(k, j)$ is the $j$-th feature of the $k$-th node in $\mathcal{V}'_{t_{tr}}$, and $m = |\mathcal{V}'_{t_{tr}}|$. We then generate the binary features for the newly inserted trigger nodes $\mathcal{V}^{(new)}_{t_{tr}}$ by sampling each dimension $j$ via a Bernoulli distribution:
$$X^{(new)}(i, j) \sim \mathrm{Bernoulli}(\hat{p}_j), \quad \forall i = 1, \ldots, |\mathcal{V}^{(new)}_{t_{tr}}| \text{ and } j = 1, \ldots, d. \tag{5}$$
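Eq. (5) translates directly into code; a small sketch (names are ours) that estimates the per-dimension rates from existing trigger-type nodes and samples features for the injected ones:

```python
import numpy as np

def sample_trigger_features(X_existing: np.ndarray, n_new: int, seed: int = 0):
    """X_existing: (m, d) binary feature matrix of existing trigger-type nodes.
    Returns (n_new, d) binary features for the injected trigger nodes, per Eq. (5)."""
    rng = np.random.default_rng(seed)
    p_hat = X_existing.mean(axis=0)  # \hat{p}_j: empirical rate per dimension j
    return (rng.random((n_new, X_existing.shape[1])) < p_hat).astype(np.int8)
```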
$\mathcal{V}^{(2)}_{aux}$. Specifically, we extract the embedding representations of all second-hop auxiliary-type neighbors $\mathcal{V}^{(2)}_{aux}$ and employ a clustering-based selection strategy to determine which nodes should connect to the newly inserted trigger nodes. Given the embedding matrix $Z \in \mathbb{R}^{|\mathcal{V}^{(2)}_{aux}| \times d}$, where each row $z_i$ corresponds t...
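The exact selection criterion is cut off in this excerpt; one plausible reading (ours, not necessarily the paper's) is to cluster the embeddings and pick the node nearest each centroid:

```python
import numpy as np
from sklearn.cluster import KMeans

def select_candidate_neighbors(Z: np.ndarray, n_clusters: int = 8):
    """Z: (n, d) embeddings of second-hop auxiliary-type neighbors.
    Returns the index of one representative node per cluster."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(Z)
    # distance of every node to every centroid, shape (n, n_clusters)
    dists = np.linalg.norm(Z[:, None, :] - km.cluster_centers_[None], axis=-1)
    return dists.argmin(axis=0)  # nearest node to each centroid
```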
poisoned testing set (Poison Testset) also accounts for 5%, allowing us to evaluate the attack's effectiveness during inference. The remaining 10% is allocated to the validation set, which is used for hyperparameter tuning and early stopping. The training parameters are provided in Appendix B.
5.1.3 Compared Methods.
the structural similarity is computed as:
$$\mathrm{Sim}_{struct} = \frac{1}{1 + \Delta d}. \tag{16}$$
The final stealthiness score is a weighted sum of both components:
$$\mathrm{Stealthiness}(G, \tilde{G}) = w_1 \cdot \mathrm{Sim}_{feat} + w_2 \cdot \mathrm{Sim}_{struct}, \tag{17}$$
where $w_1$ and $w_2$ are weighting factors (default 0.5). A higher score indicates that the injected nodes blend more naturally into ...
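Eqs. (16)-(17) are simple enough to transcribe directly; sim_feat and delta_d are assumed to be computed elsewhere (the feature similarity and the degree-distribution gap):

```python
def stealthiness(sim_feat: float, delta_d: float,
                 w1: float = 0.5, w2: float = 0.5) -> float:
    sim_struct = 1.0 / (1.0 + delta_d)      # Eq. (16)
    return w1 * sim_feat + w2 * sim_struct  # Eq. (17)
```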
ASR on SimpleHGN (class 0), outperforming CGBA (0.3881) and UGBA (0.8443). These results highlight HeteroBA's superior attack effectiveness. Despite high ASR, HeteroBA introduces minimal classification accuracy degradation (CAD), often close to zero or negative, indicating little impact on clean data. Although UGBA a...
Attention-based edge-generation strategies are crucial for improving attack effectiveness. Random edge selection reduces ASR and provides no significant benefit in maintaining clean data accuracy.
[Figure 3: Comparison of attack success rates for (a) HeteroBA-A and (b) HeteroBA-C under different po...]
Ye, Haixing Zhao, and Ying Wang. 2023. Feature-Based Graph Backdoor Attack in the Node Classification Task. International Journal of Intelligent Systems 2023, 1 (2023), 5418398. [2] Pengzhou Cheng, Zongru Wu, Wei Du, Haodong Zhao, Wei Lu, and Gongshen Liu. 2023. Backdoor attacks and countermeasures in natural language p...
International Conference on Software Quality, Reliability and Security (QRS). IEEE, 809–820. [18] George R Terrell and David W Scott. 1992. Variable kernel density estimation. The Annals of Statistics (1992), 1236–1265. [19] Jianfei Wang, Cuiqing Jiang, Lina Zhou, and Zhao Wang. 2024. Representing and discovering hete...
networks against adversarial attacks. In Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. AAAI Press, Menlo Park, CA, 4363–4370. [34] Zaixi Zhang, Jinyuan Jia, Binghui Wang, and Neil Zhenqiang Gong. 2021. Backdoor attacks to graph neural networks. In Proceedings of the 26th ACM Symposium on Ac...
a specific label and identifies relevant nodes in their neighborhood. A single-hop neighbor search typically has a complexity of at most $O(m)$, depending on the adjacency structure. If multi-hop neighborhoods are considered, this step can be viewed as a breadth-first search (BFS) on the relevant subgraph, with an up...
GGBond: Growing Graph-Based AI-Agent Society for Socially-Aware Recommender Simulation
Hailin Zhong¹, Hanlin Wang¹, Yujun Ye¹, Meiyi Zhang¹, Shengxin Zhu²,¹
¹Faculty of Science and Technology, Beijing Normal-Hong Kong Baptist University, Zhuhai, China
²Research Centers for Mathematics, Advanced Institute of Natural Sc...
https://arxiv.org/abs/2505.21154v1
ronments. To bridge these gaps, we argue for the construction of a high-fidelity simulation environment that jointly models cognitive user behavior, heterogeneous social structure, and causal feedback loops. In such an environment, each user should demonstrate rich internal decision-making behavior, affected by memory...
fidelity, we design a virtual society framework composed of a population of SimUser Agents and a multi-layer heterogeneous social graph called the GGBond Social Network. The system architecture is presented in Figure 1. This system is built not only to reproduce realistic cognitive and behavioral patterns of...
social robustness, and adaptation under long-term user drift. This dual evaluation paradigm not only benchmarks algorithmic performance in realistic environments but also serves as indirect validation of our simulation framework's behavioral and structural credibility. Through the coordinated design of internal cogni...
$$\ldots \frac{1}{|I_u|} \sum_{i \in I_u} \frac{1}{\mathrm{pop}(i)}, \qquad \mathrm{pop}(i) = |\{u' \mid i \in I_{u'}\}|, \tag{4}$$
where $\mathrm{pop}(i)$ is the popularity of item $i$, defined as the number of users who rated it. This trained model is subsequently applied to structural features derived from the Stanford social graph, enabling cross-domain personality prediction for anonymized nodes.
B. Structure-base...
The complete profile vector for node $w$ thus becomes:
$$p_w = [t_u, b_v]. \tag{12}$$
This unified representation integrates structural social attributes and latent psychological traits, providing a comprehensive embedding for each node. Such enriched profiles facilitate downstream tasks, including personalized recommendation and ...
entirely driven by internal mechanisms, without reliance on external calls. The agent operates through a closed cognitive–motivational–behavioral loop: upon receiving an external stimulus (such as a system-generated recommendation or a peer-shared item), the agent invokes a series of internal cognition modules to encode...
(e.g., genre, keywords, language), the agent’s current emotional state (valence), and the target social circle (e.g., friends, interest groups, technical communities). These inputs are formatted into controllable prompt templates that are passed to DeepSeek-R1 for generation. For example, when recommending a science fi...
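As an illustration of such a template (field names and wording are ours, not the system's):

```python
def build_recommendation_prompt(item: dict, valence: float, circle: str) -> str:
    """Format item features, the agent's emotional state, and the target social
    circle into a prompt for the generation model (DeepSeek-R1 in this setup)."""
    mood = "enthusiastic" if valence > 0.5 else "measured"
    return (
        f"You are recommending '{item['title']}' ({item['genre']}, {item['language']}) "
        f"to your {circle}. Keywords: {', '.join(item['keywords'])}. "
        f"Write a short recommendation in a {mood} tone."
    )
```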
Module 3 and are subject to recursive updates through behavioral feedback handled in Module 4. This module thus forms a closed adaptive loop, ensuring each agent gradually accumulates experiences, evolves preferences, and regulates emotion, collectively shaping temporally coherent, person-like behavioral profiles. C. ...
penalty [17].
• $Trust_s$: recommender's historical approval rate.
• $I$: intimacy with recommender, as computed above.
• $P$: user's neuroticism level, reflecting risk aversion [26].
The sigmoid $\sigma(\cdot)$ normalizes the output to $[0,1]$; the weight $w_t$ comes from an exponential forgetting curve. Additionally, each agent has a static base ris...
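The full scoring formula is not visible in this excerpt; a minimal sketch of the kind of sigmoid combination the list above suggests, with weights and signs as pure assumptions:

```python
import math

def accept_probability(trust_s: float, intimacy: float, neuroticism: float,
                       w_t: float, a: float = 1.0, b: float = 1.0,
                       c: float = 1.0) -> float:
    """Combine the listed factors; the sigmoid keeps the output in [0, 1].
    w_t is the exponential-forgetting weight; a, b, c are assumed weights."""
    x = w_t * (a * trust_s + b * intimacy - c * neuroticism)
    return 1.0 / (1.0 + math.exp(-x))
```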
positive $M$ (indicating satisfaction) reinforces the social connection, while a negative $M$ weakens it. This process emulates long-term relationship dynamics as explained in Social Exchange Theory [?], and supports emergent behavior adaptation over multiple decision cycles. Together, the IC2 engine and reciprocity regulato...
visuals and plot twists—highly recommend!"
• "Too slow and predictable, not my thing, but others might enjoy it."
Finally, the logger triggers feedback propagation: the satisfaction score $M$ is sent to Module 1 to update emotion and memory, to Module 2 for adjusting trust and intimacy, and to Module 3 for tuning recipro...
quantified as the amount of distribution "mass" that must be moved times the distance it has to be moved:
$$\mathrm{EMD}(P, Q) = \inf_{\gamma \in \Gamma(P,Q)} \int_{\mathcal{X} \times \mathcal{X}} \|x - y\| \, d\gamma(x, y), \tag{35}$$
where $\Gamma(P, Q)$ denotes the set of all joint distributions (or transport plans) with marginals $P$ and $Q$ respectively. c) Kullback-Leibler Divergence (KL): Given two disc...
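For one-dimensional samples, Eq. (35) has a well-known closed form (the optimal plan matches sorted samples); a small sketch as a concrete instance:

```python
import numpy as np

def emd_1d(p_samples: np.ndarray, q_samples: np.ndarray) -> float:
    """1-D EMD between equal-size empirical distributions: the optimal
    transport matches sorted order, so the cost is mean |p_(i) - q_(i)|."""
    p, q = np.sort(p_samples), np.sort(q_samples)
    assert p.shape == q.shape, "equal sample counts keep the sketch simple"
    return float(np.mean(np.abs(p - q)))

rng = np.random.default_rng(0)
print(emd_1d(rng.normal(0, 1, 1000), rng.normal(0.5, 1, 1000)))  # approx. 0.5
```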
[48]. Likewise, SimUSER shows that LLM-based agents equipped with memory modules can generate user–RS interactions that faithfully match observed click statistics [?]. Finally, studies on graph-invariant learning and low-homophily settings reveal that even when explicit similarity signals are weak, leveraging high-ord...
AT SPECIFIC ROUNDS (RECALL@20 AND NDCG@20)

Model / Metric | 0 Rounds | 10 Rounds | 20 Rounds | 30 Rounds
MF - Recall@20 | 0.1502 | 0.1574 | 0.1612 | 0.1623
MF - NDCG@20 | 0.3560 | 0.3612 | 0.3652 | 0.3669
MultVAE - Recall@20 | 0.1592 | 0.1636 | 0.1668 | 0.1674
MultVAE - NDCG@20 | 0.3482 | 0.3543 | 0.3592 | 0.3606
LightGCN - Recall@20 | 0.1721 | 0.1783 | 0.1804 | ...
social feedback. This effect is most pronounced in LightGCN, suggesting that graph-based models are more sensitive to socially enriched profile updates and better capture latent preference drift over time. Second, Satisfaction ($S_{sat}$) shows a clear upward trend, increasing from a flat baseline of 3.01 (i.e., neutral ra...
and engineering [9]. Xi et al. further explore the rise and potential of LLM-based agents, proposing a general framework comprising brain, perception, and action components, and examining their applications in single-agent scenarios, multi-agent scenarios, and human-agent cooperation [6]. These studies collectively un...
lar and interpretable decision-making processes. In parallel, we construct a dynamic, multi-layer social graph (GGBond Graph) that captures heterogeneous, multi-circle social relations and their evolution over time. The entire system operates under a discrete-time simulation scheduler, coupling agent-level behavior ...
He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, and Meng Wang. LightGCN: Simplifying and powering graph convolution network for recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 639–648, 2020. [12] Yuxuan Hu, Gemju Sherpa, Lan Z...
J Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. Generative agents: Interactive simulacra of human behavior. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, pages 1–18. ACM, 2023. [29] Fernando Pérez-Cruz. Kullback-Leibler divergence estimation of continuous distr...
, 2024. [45] Feiyu Xu et al. Can large language model agents simulate human trust behavior? In Advances in Neural Information Processing Systems (NeurIPS), 2024. [46] Dayu Yang, Fumian Chen, and Hui Fang. Behavior alignment: A new perspective of evaluating LLM-based conversational recommender systems. In Proceedings...
arXiv:2505.21156v1 [cs.SD] 27 May 2025
Model as Loss: A Self-Consistent Training Paradigm
Saisamarth Rajesh Phaye¹, Milos Cernak¹, Andrew Harper¹
¹Audio Machine Learning, Logitech
(sphaye, mcernak, aharper)@logitech.com
Abstract
Conventional methods for speech enhancement rely on handcrafted loss functions (e.g., time o...
https://arxiv.org/abs/2505.21156v1
a model with conventional loss functions and then using the trained encoder's embeddings as a loss function for the next stage. This approach aligns the loss function with the downstream task, leveraging the encoder's ability to extract task-specific features while ensuring contextual and hierarchical understandi...
loss $L_{new}$ will also be minimal. Our search is to find the ideal loss function $L_{ideal}$ such that, once trained to convergence, for any given mathematical function $\mathcal{F}$, we get the minimum loss:
$$L_{\mathcal{F}} = \|\mathcal{F}(y_{clean}) - \mathcal{F}(y_{enhanced})\|_1 \tag{2}$$
Babaev et al. [3] propose a Signal-to-Noise (SNR) Rule, which suggests that as more noise is...
are three possible $L_{mal}$ variations:
1. $L_{mal\text{-}frozen\text{-}fe}$: Freeze the trained encoder (FE) of the $N$-th epoch and use it as the MAL-encoder to train only the decoder for subsequent epochs.
2. $L_{mal\text{-}frozen}$: Use the trained encoder of the $N$-th epoch as the MAL-encoder for all subsequent epochs and train the full encoder-decoder model.
3. $L_{mal\text{-}}$...
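A minimal sketch of variation 1 under stated assumptions (placeholder architecture; the paper's encoder/decoder and distance may differ):

```python
import copy
import torch.nn as nn

def make_mal_encoder(trained_encoder: nn.Module) -> nn.Module:
    """Snapshot the encoder at epoch N and freeze it as the MAL-encoder."""
    mal = copy.deepcopy(trained_encoder)
    for p in mal.parameters():
        p.requires_grad_(False)
    return mal

def mal_loss(mal_encoder: nn.Module, y_enhanced, y_clean):
    """L1 distance in the MAL-encoder's embedding space: gradients flow through
    the frozen encoder into the decoder output y_enhanced."""
    target_feats = mal_encoder(y_clean).detach()  # features of clean speech
    pred_feats = mal_encoder(y_enhanced)
    return (pred_feats - target_feats).abs().mean()
```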
as a feature extractor, while the decoder becomes the primary model for synthesizing the enhanced output. The MAL-encoder, acting as a loss function, ensures that the decoder produces an output closely aligned with the features of clean audio. The combination of supervised learning (matching the clean signal) and ...
• Out-of-domain test set (3900 samples): We aggregated fully unseen test sets from the 2024 and 2025 Urgent Challenges, combining the two nonblind and two blind sets [16]. This test setup enabled robust performance comparisons across diverse acoustic conditions.
3.2. Evaluation metrics
Models are evaluated using SIGMOS, NI...
declines as speech quality degrades. Models trained with $L_{mal\text{-}frozen\text{-}fe}$ or $L_{mal\text{-}dynamic}$ better preserve speech, aligning with the self-consistency criterion, which $L_{mal\text{-}frozen}$ lacks. In the first iteration, all MAL models achieve a high MOS, reach a higher peak, and then converge to a higher final MOS. This aligns with Figu...
Signal Enhancement (IWAENC 2022), 2022. [Online]. Available: https://github.com/Rikorose/DeepFilterNet [6] F. G. Germain, Q. Chen, and V. Koltun, "Speech denoising with deep feature losses," Proc. Interspeech 2019, 2723–2727, 2018. [7] S. Braun and I. Tashev, "A consolidated view of loss functions for supervised dee...
Sabharwal, S. Ramesh, J. Wang, D. M. Divakaran, and M. C. Chan, "Enhancing LoRa reception with generative models: Channel-aware denoising of LoRa PHY signals," in Proceedings of the 22nd ACM Conference on Embedded Networked Sensor Systems, 2024, pp. 507–520. [23] H. Zhang and D. Wang, "Neural cascade architecture for ...
arXiv:2505.21160v1 [cs.LG] 27 May 2025
STEB: In Search of the Best Evaluation Approach for Synthetic Time Series
Michael Stenger, University of Wuerzburg, 97074 Wuerzburg, Germany, michael.stenger@uni-wuerzburg.de
Robert Leppich, University of Wuerzburg, 97074 Wuerzburg, Germany, robert.leppich@uni-wuerzburg.de
André Bauer, Ill...
https://arxiv.org/abs/2505.21160v1
data to learn. Note that $m$ assesses the generated data, not the generators. Hence, we do not consider generator-dependent measures such as duality gap [50]. Similarly, we limit this study to quantitative measures with a clear score $s$ and exclude, for instance, visualizations such as the popular t-SNE plot [60]. Contributions. With ...
fine-grained in its analysis, as it differentiates four aspects of synthesis quality, and it is more comprehensive in terms of experimental parameters. There are three related synthetic data benchmarks. Synthcity is a framework for benchmarking tabular, image, and TS data generators [45]. It incorporates multiple gene...
$\kappa_0 < \kappa_1 < \kappa_2 < \ldots$. Using this condition, we compute a reliability indicator for $m$ under different test cases, varying $T$, $D_r$, and the random seed. The average value across all tests serves as an approximation of the measure's reliability. Example. Let
$$m_{iMAE} : D \times D' \mapsto \left(10^{-3} + \frac{1}{nld} \sum_{i,j,k} |D_{i,j,k} - D'_{i,j,k}|\right)^{-1} \tag{1}$$
be the inv...
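Eq. (1) transcribes directly; a sketch for datasets shaped (n series, l timesteps, d dimensions):

```python
import numpy as np

def imae(D: np.ndarray, D_prime: np.ndarray) -> float:
    """Inverse MAE of Eq. (1); the 1e-3 offset guards against division by
    zero when the two datasets are identical."""
    mae = np.abs(D - D_prime).mean()  # equals the 1/(n*l*d) triple sum
    return 1.0 / (1e-3 + mae)
```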
to leaking real TS into the synthetic set. It starts with $D_{rs}$ and gradually adds up to ten $D_r$ instances to $D_T$ with increasing $\kappa$. Salt & pepper adds noise to the data by replacing random values in $D_r$ by 0 and 1, each with probability $\frac{\kappa}{2}$. Segment leaking builds the output $D_T$ by using $D_{rs}$ as a basis and replacing $30\kappa$ random segments...
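A sketch of the salt & pepper transformation as described (data assumed scaled to [0, 1]):

```python
import numpy as np

def salt_and_pepper(D: np.ndarray, kappa: float, seed: int = 0) -> np.ndarray:
    """Replace each value by 0 or 1, each with probability kappa / 2."""
    rng = np.random.default_rng(seed)
    u = rng.random(D.shape)
    out = D.copy()
    out[u < kappa / 2] = 0.0                    # "pepper"
    out[(u >= kappa / 2) & (u < kappa)] = 1.0   # "salt"
    return out
```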
the included tests are gradually processed, starting with the transformation to create $D_T$. If required by the measure, the test datasets are scaled to $[0,1]$ or embedded. Finally, the data flow arrives at the Measure component, where a score for $D_{train}$, $D_T$, and $D_{held\text{-}out}$ is computed. The transformation, scaling/embedding...
end, the scores produced for the tested measures and the recorded running times (if available) are analyzed by this component based on three criteria: (i) the reliability of a measure to truthfully and accurately reflect the quality of a given synthetic dataset in its score; (ii) the consistency of the measure’s scores...
statistical difference between groups of $r_{rel}$ indicators, where each group has either the same random seed or the same underlying dataset. The idea is that the behavior of $m$ should depend on the relationship of synthetic and real data and not be impacted significantly by randomness or the real dataset alone. We use $r_{rel}$ as a proxy....
the 68,666 tests are successful and count when evaluating the performance of the implemented measures. The success rate heavily depends on the resource and time demands of the measure, the size of the dataset, and the transformation applied. Most measures have a success rate of around 98%. The details are listed in Tab...
[Table excerpt; column headers are not recoverable from this fragment:]
(…) .209±.328 .673±.345
Detection_linear | 0.101 | 0.884 | 0.416 | 0.521 | 0.431 | 0.499 | .739±.246 | .418±.300 | .333±.212 | .703±.236
Detection_MLP | >10 | 0.290 | 0.337 | 0.517 | 0.471 | 0.243 | .530±.424 | .204±.326 | .220±.331 | .600±.401
Detection_XGB | 0.100 | 0.968 | 0.093 | 0.981 | 0.113 | 0.975 | .326±.274 | .286±.229 | .348±.204 | .379±.252
Discr. score | .594±.438 | .188±...
one minute. The more complex, often deep-learning-based measures run significantly longer, up to 8 minutes. Naturally, Concat is the fastest embedding, followed by Catch22, typically taking under 2 minutes, and TS2Vec, taking up to an hour. Still, the tables only reflect part of the picture. Tests are often stop...