Ability of Large Language Models to Express Personality Traits. Preprint, arXiv:2305.02547.
Yiqiao Jin, Qinlin Zhao, Yiyang Wang, Hao Chen, Kaijie Zhu, Yijia Xiao, and Jindong Wang. 2024. AgentReview: Exploring peer review dynamics with LLM agents. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 1208–1226, Miami, Florida, USA. Association for Computational Linguistics.
Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611. Association for Computational Linguistics.
Huang Kai, Zhang Zhonghua, Qin Zheng, and Liu Bing. 2010. Conflict resolution in multi-agent systems based on negotiation and arbitrage. In 2010 2nd IEEE International Conference on Information Management and Engineering, pages 304–307.
Junseok Kim, Nakyeong Yang, and Kyomin Jung. 2024. Persona is a double-edged sword: Mitigating the negative impact of role-playing prompts in zero-shot reasoning tasks. Preprint, arXiv:2408.08631.
Chun-Yi Kuan, Chih-Kai Yang, Wei-Ping Huang, Ke-Han Lu, and Hung-yi Lee. 2024. Speech-Copilot: Leveraging large language models for speech processing via task decomposition, modularization, and program generation. Preprint, arXiv:2407.09886.
Lorenz Kuhn, Yarin Gal, and Sebastian Farquhar. 2023. Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation. Preprint, arXiv:2302.09664.
Harsh Kumar, Jonathan Vincentius, Ewan Jordan, and Ashton Anderson. 2025. Human Creativity in the Age of LLMs: Randomized Experiments on Divergent and Convergent Thinking. In International Conference on Human Factors in Computing Systems.
Jeff Larson, Surya Mattu, Lauren Kirchner, and Julia Angwin. 2016.
Propublica COMPAS recidivism risk score data and analysis. Data and analysis for the "Machine Bias" investigation.
Mohammad Lataifeh, Xavier A Carrasco, Ashraf M Elnagar, Naveed Ahmed, and Imran Junejo. 2024. Human–machine co-creation: a complementary cognitive approach to creative character design process using GANs. The Journal of Supercomputing, 80(11):16574–16610.
Tomas Lawton, Francisco J Ibarrola, Dan Ventura, and Kazjon Grace. 2023. Drawing with Reframer: Emergence and Control in Co-Creative AI. In Proceedings of the 28th International Conference on Intelligent User Interfaces, IUI '23.
John D. Lee and Katrina A. See. 2004. Trust in automation: Designing for appropriate reliance. Human Factors, 46(1):50–80.
Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119, San Diego, California. Association for Computational Linguistics.
Xinyi Li, Sai Wang, Siqi Zeng, Yu Wu, and Yi Yang. 2024. A survey on LLM-based multi-agent systems: workflow, infrastructure, and challenges. Vicinagearth, 1(1):9.
Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Shuming Shi, and Zhaopeng Tu. 2024. Encouraging divergent thinking in large language models through multi-agent debate. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing.
Gionnieve Lim and Simon T. Perrault. 2024. Rapid AIdeation: Generating
https://arxiv.org/abs/2505.21116v1
Ideas With the Self and in Collaboration With Large Language Models. arXiv preprint arXiv:2403.12928.
Yi-Cheng Lin, Tzu-Quan Lin, Chih-Kai Yang, Ke-Han Lu, Wei-Chih Chen, Chun-Yi Kuan, and Hung-Yi Lee. 2024a. Listen and speak fairly: a study on semantic gender bias in speech integrated large language models. In 2024 IEEE Spoken Language Technology Workshop (SLT), pages 439–446.
Zheng Lin, Zhenxing Niu, Zhibin Wang, and Yinghui Xu. 2024b. Interpreting and mitigating hallucination in MLLMs through multi-agent debate. Preprint, arXiv:2407.20505.
Zhiyu Lin, Rohan Agarwal, and Mark Riedl. 2022. Creative Wand: a system to study effects of communications in co-creative settings. In Proceedings of the Eighteenth AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, AIIDE '22. AAAI Press.
Andy Liu, Mona Diab, and Daniel Fried. 2024a. Evaluating large language model biases in persona-steered generation. In Findings of the Association for Computational Linguistics: ACL 2024, pages 9832–9850, Bangkok, Thailand. Association for Computational Linguistics.
Yiren Liu, Si Chen, Haocong Cheng, et al. 2024b. How AI Processing Delays Foster Creativity: Exploring Research Question Co-Creation with an LLM-based Agent. In Proc. CHI '24.
Yiren Liu, Pranav Sharma, Mehul Jitendra Oswal, Haijun Xia, and Yun Huang. 2024c. PersonaFlow: Boosting Research Ideation with LLM-Simulated Expert Personas. Preprint, arXiv:2409.12538.
Tao Long, Katy Ilonka Gero, and Lydia B Chilton. 2024. Not just novelty: A longitudinal study on utility and customization of an AI workflow. In Proceedings of the 2024 ACM Designing Interactive Systems Conference, DIS '24, pages 782–803, New York, NY, USA. Association for Computing Machinery.
Li-Chun Lu, Shou-Jen Chen, Tsung-Min Pai, Chan-Hung Yu, Hung-yi Lee, and Shao-Hua Sun. 2024. LLM Discussion: Enhancing the creativity of large language models via discussion framework and role-play.
Preprint, arXiv:2405.06373.
Alaeddine Mellouli et al. 2024. StoReys: A neurosymbolic approach to human-AI co-creation of novel action-oriented narratives in known story worlds. In Proceedings of the 15th International Conference on Computational Creativity, ICCC 2024, Jönköping, Sweden, June 17–21, 2024.
Chunjiang Mu, Hao Guo, Yang Chen, Chen Shen, Die Hu, Shuyue Hu, and Zhen Wang. 2024. Multi-agent, human–agent and beyond: A survey on cooperation in social dilemmas. Neurocomputing, 610:128514.
Anirban Mukherjee and Hannah Hanwen Chang. 2025. Stochastic, dynamic, fluid autonomy in agentic AI: Implications for authorship, inventorship, and liability. Preprint, arXiv:2504.04058.
Gabriel Mukobi, Hannah Erlebach, Niklas Lauffer, Lewis Hammond, Alan Chan, and Jesse Clifton. 2023. Welfare diplomacy: Benchmarking language model cooperation. In Socially Responsible Language Modelling Research.
Mohammad Naiseh, Nan Jiang, Jianbing Ma, and Raian Ali. 2020. Personalising explainable recommendations: Literature and conceptualisation. In Trends and Innovations in Information Systems and Technologies, pages 518–533, Cham. Springer International Publishing.
Harshit Nigam, Manasi Patwardhan, Lovekesh Vig, and Gautam Shroff. 2024. An interactive co-pilot for accelerated research ideation. In Proceedings of the Third Workshop on Bridging Human–Computer Interaction and Natural Language Processing, pages 60–73, Mexico City, Mexico. Association for Computational Linguistics.
U.S. Copyright Office. 2025. Copyright and Artificial Intelligence, Part 2: Copyrightability Report.
U.S. Copyright Office.
Bo Pan, Jiaying Lu, Ke Wang, Li Zheng, Zhen Wen, Yingchaojie Feng, Minfeng Zhu, and Wei Chen. 2024. AgentCoord: Visually exploring coordination strategy for LLM-based multi-agent collaboration. Preprint, arXiv:2404.11943.
Joon Sung Park, Joseph O'Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. 2023. Generative Agents: Interactive Simulacra of Human Behavior. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, UIST '23, New York, NY, USA. Association for Computing Machinery.
Chen Qian, Wei Liu, Hongzhang Liu, Nuo Chen, Yufan Dang, Jiahao Li, Cheng Yang, Weize Chen, Yusheng Su, Xin Cong, Juyuan Xu, Dahai Li, Zhiyuan Liu, and Maosong Sun. 2024. ChatDev: Communicative agents for software development. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15174–15186, Bangkok, Thailand. Association for Computational Linguistics.
Hua Xuan Qin, Shan Jin, Ze Gao, Mingming Fan, and Pan Hui. 2024. CharacterMeet: Supporting Creative Writers' Entire Story Character Construction Processes Through Conversation with LLM-Powered Chatbot Avatars. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, CHI '24. Association for Computing Machinery.
Yansheng Qiu, Haoquan Zhang, Zhaopan Xu, Ming Li, Diping Song, Zheng Wang, and Kaipeng Zhang. 2025. AI Idea Bench 2025: AI Research Idea Generation Benchmark. Preprint, arXiv:2504.14191.
Marissa Radensky. 2024. Mixed-Initiative Methods for Co-Creation in Scientific Research. In Proceedings of the 16th Conference on Creativity & Cognition, C&C '24, pages 1–7. Association for Computing Machinery.
Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. Preprint, arXiv:1908.10084.
David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R. Bowman. 2024. GPQA: A graduate-level Google-proof Q&A benchmark. In First Conference on Language Modeling.
Kai Ruan, Xuan Wang, Jixiang Hong, Peng Wang, Yang Liu, and Hao Sun. 2024. LiveIdeaBench: Evaluating LLMs' scientific creativity and idea generation with minimal context. arXiv preprint arXiv:2412.17596.
Vinay Samuel, Henry Peng Zou, Yue Zhou, Shreyas Chaudhari, Ashwin Kalyan, Tanmay Rajpurohit, Ameet Deshpande, Karthik Narasimhan, and Vishvak Murahari. 2024. PersonaGym: Evaluating persona agents and LLMs. Preprint, arXiv:2407.18416.
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, and Maja Matarić. 2025. Personality Traits in Large Language Models. Preprint, arXiv:2307.00184.
Orit Shaer, Angelora Cooper, Osnat Mokryn, Andrew L Kun, and Hagit Ben Shoshan. 2024. AI-augmented brainwriting: Investigating the use of LLMs in group ideation. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, CHI '24, New York, NY, USA. Association for Computing Machinery.
C. E. Shannon. 1948. A mathematical theory of communication. The Bell System Technical Journal, 27(3):379–423.
Li Shi, Houjiang Liu, Yian Wong, Utkarsh Mujumdar, Dan Zhang, Jacek Gwizdka, and Matthew Lease. 2024. Argumentative Experience: Reducing Confirmation Bias on Controversial Issues through LLM-Generated Multi-Persona Debates. arXiv preprint arXiv:2412.04629.
Aakriti Singh, Shipra Saraswat, and Neetu Faujdar. 2017. Analyzing Titanic disaster using machine learning algorithms. In 2017 International Conference on Computing, Communication and Automation (ICCCA).
Peiyang Song, Kaiyu Yang, and Anima Anandkumar. 2025. Lean Copilot: Large language models as copilots for theorem proving in Lean. Preprint, arXiv:2404.12534.
Aarohi Srivastava et al. 2023. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. Transactions on Machine Learning Research.
Haoyang Su, Renqi Chen, Shixiang Tang, Zhenfei Yin, Xinzhe Zheng, Jinzhe Li, Biqing Qi, Qi Wu, Hui Li, Wanli Ouyang, Philip Torr, Bowen Zhou, and Nanqing Dong. 2025. Many Heads Are Better Than One: Improved Scientific Idea Generation by an LLM-Based Multi-Agent System. Preprint, arXiv:2410.09403.
Jingyun Sun, Chengxiao Dai, Zhongze Luo, Yangbo Chang, and Yang Li. 2024. LawLuo: A Multi-Agent Collaborative Framework for Multi-Round Chinese Legal Consultation. Preprint, arXiv:2407.16252.
Yuqian Sun and Stefano Gualeni. 2025. Between puppet and actor: Reframing authorship in this age of AI agents. Preprint, arXiv:2501.15346.
Kamil Szczepanik and Jarosław Chudziak. 2025. TRIZ Agents: A Multi-Agent LLM Approach for TRIZ-Based Innovation. In 17th International Conference on Agents and Artificial Intelligence. SciTePress.
Jie Tang, Jing Zhang, Limin Yao, Juanzi Li, Li Zhang, and Zhong Su. 2008. ArnetMiner: Extraction and Mining of Academic Social Networks. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 990–998. ACM.
Yufei Tian, Abhilasha Ravichander, Lianhui Qin, Ronan Le Bras, Raja Marjieh, Nanyun Peng, Yejin Choi, Thomas L. Griffiths, and Faeze Brahman. 2024. MacGyver: Are Large Language Models Creative Problem Solvers? In Proceedings of the 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL).
E. Paul Torrance. 1966. Torrance Tests of Creative Thinking. Personnel Press, Princeton, NJ.
Khanh-Tung Tran, Dung Dao, Minh-Duong Nguyen, Quoc-Viet Pham, Barry O'Sullivan, and Hoang D. Nguyen. 2025. Multi-Agent Collaboration Mechanisms: A Survey of LLMs. Preprint, arXiv:2501.06322.
Ishwara Vasista, Imran Mirza, Cole Huang, Rohan Rajasekhara Patil, Aslihan Akalin, Kevin Zhu, and Sean O'Brien. 2025. MALIBU Benchmark: Multi-Agent LLM Implicit Bias Uncovered. In ICLR 2025 Workshop on Building Trust in Language Models and Applications.
Tony Veale and F Amílcar Cardoso. 2019. Computational Creativity: The Philosophy and Engineering of Autonomously Creative Systems. Springer.
P Venkadesh, SV Divya, and K Subash Kumar. 2024. Unlocking AI Creativity: A Multi-Agent Approach with CrewAI. Journal of Trends in Computer Science and Smart Technology.
Kavana Venkatesh, Connor Dunlop, and Pinar Yanardag. 2025. CREA: A Collaborative Multi-Agent Framework for Creative Content Generation with Diffusion Models. Preprint, arXiv:2504.05306.
Saranya Venkatraman, Nafis Irtiza Tripto, and Dongwon Lee. 2024. CollabStory: Multi-LLM collaborative story generation and authorship analysis. In Proc. NAACL '25.
M. A. Wallach and N. Kogan. 1965. Modes of Thinking in Young Children: A Study of the Creativity-Intelligence Distinction. Holt, Rinehart & Winston, New York.
Qian Wan, Siying Hu, Yu Zhang, Piaohong Wang, Bo Wen, and Zhicong Lu. 2024. "It Felt Like Having a Second Mind": Investigating Human-AI Co-creativity in Prewriting with Large Language Models. Proc. ACM Hum.-Comput. Interact., 8.
Yun Wan and Yoram M Kalman. 2025. Using Generative AI Personas Increases Collective Diversity in Human Ideation. Preprint, arXiv:2504.13868.
Boshi Wang, Xiang Deng, and Huan Sun. 2022. Iteratively prompt pre-trained language models for chain of thought. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2714–2730. Association for Computational Linguistics.
Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, et al. 2024a. A survey on large language model based autonomous agents. Frontiers of Computer Science, 18(6):186345.
Yi Wang, Qian Zhou, and David Ledo. 2024b. StoryVerse: Towards co-authoring dynamic plot with LLM-based character simulation via narrative planning. In Proceedings of the 19th International Conference on the Foundations of Digital Games, FDG '24, New York, NY, USA. Association for Computing Machinery.
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, and Heng Ji. 2024c. Unleashing the emergent cognitive synergy in large language models: A task-solving agent through multi-persona self-collaboration. Preprint, arXiv:2307.05300.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc V Le, and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems, volume 35, pages 24824–24837. Curran Associates, Inc.
Geraint A Wiggins. 2006. A preliminary framework for description, analysis and comparison of creative systems. Knowledge-Based Systems, 19(7):449–458.
Haibin Wu, Xuanjun Chen, Yi-Cheng Lin, Kai-Wei Chang, Ho-Lam Chung, Alexander H. Liu, and Hung-yi Lee. 2024. Towards audio language modeling – an overview. Preprint, arXiv:2402.13236.
Zhikun Wu, Thomas Weber, and Florian Müller. 2025.
One Does Not Simply Meme Alone: Evaluating Co-Creativity Between LLMs and Humans in the Generation of Humor. In Proceedings of the 30th International Conference on Intelligent User Interfaces, IUI '25, pages 1082–1092. ACM.
Chengxing Xie and Difan Zou. 2024. A human-like reasoning framework for multi-phases planning task with large language models. In ICML 2024 Workshop on LLMs and Cognition.
Frank F. Xu, Yufan Song, Boxuan Li, Yuxuan Tang, Kritanjali Jain, Mengxue Bao, Zora Z. Wang, Xuhui Zhou, Zhitong Guo, Murong Cao, Mingyang Yang, Hao Yang Lu, Amaad Martin, Zhe Su, Leander Maben, Raj Mehta, Wayne Chi, Lawrence Jang, Yiqing Xie, Shuyan Zhou, and Graham Neubig. 2024. TheAgentCompany: Benchmarking LLM Agents on Consequential Real World Tasks. Preprint, arXiv:2412.14161.
Bingyu Yan, Xiaoming Zhang, Litian Zhang, Lian Zhang, Ziyi Zhou, Dezhuang Miao, and Chaozhuo Li. 2025. Beyond self-talk: A communication-centric survey of LLM-based multi-agent systems. Preprint, arXiv:2502.14321.
Kaixun Yang, Yixin Cheng, Linxuan Zhao, Mladen Raković, Zachari Swiecki, Dragan Gašević, and Guanliang Chen. 2024. Ink and algorithm: Exploring temporal dynamics in human-AI collaborative writing. Preprint, arXiv:2406.14885.
Lyumanshan Ye, Jiandong Jiang, Yuhan Liu, Yihan Ran, Pengfei Liu, and Danni Chang. 2024. Colin: A Multimodal Human-AI Co-Creation Storytelling System To Support Children's Multi-Level Narrative Skills. Preprint, arXiv:2405.06495. Version 4, last revised 17 Mar 2025.
Junwei Yu, Yepeng Ding, and Hiroyuki Sato. 2025. DynTaskMAS: A dynamic task graph-driven framework for asynchronous and parallel LLM-based multi-agent systems. Preprint, arXiv:2503.07675.
Mingyue Yuan, Jieshan Chen, and Aaron Quigley. 2024. MaxPrototyper: A multi-agent generation system for interactive user interface prototyping. Preprint, arXiv:2405.07131.
J.D. Zamfirescu-Pereira, Eunice Jun, Michael Terry, Qian Yang, and Bjoern Hartmann. 2025. Beyond code generation: LLM-supported exploration of the program design space. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, CHI '25, pages 1–17. ACM.
Lidong Zhai, Zhijie Qiu, Xizhong Guo, and Jiaqi Li. 2025. The Athenian Academy: A Seven-Layer Architecture Model for Multi-Agent Systems. arXiv preprint arXiv:2504.12735.
An Zhang, Yuxin Chen, Leheng Sheng, Xiang Wang, and Tat-Seng Chua. 2024a. On generative agents in recommendation. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '24, pages 1807–1817, New York, NY, USA. Association for Computing Machinery.
Chao Zhang, Xuechen Liu, Katherine Ziska, Soobin Jeon, Chi-Lin Yu, and Ying Xu. 2024b. Mathemyths: Leveraging large language models to teach mathematical language through child-AI co-creative storytelling. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, CHI '24, New York, NY, USA. Association for Computing Machinery.
Chao Zhang, Cheng Yao, Jiayi Wu, Weijia Lin, Lijuan Liu, Ge Yan, and Fangtian Ying. 2022. StoryDrawer: A Child–AI Collaborative Drawing System to Support Children's Creative Visual Storytelling. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, CHI '22. Association for Computing Machinery.
Guibin Zhang, Luyang Niu, Junfeng Fang, Kun Wang, Lei Bai, and Xiang Wang. 2025. Multi-agent architecture search via agentic supernet. Preprint, arXiv:2502.04180.
Jingyue Zhang and Ian Arawjo. 2025.
ChainBuddy: An AI-assisted Agent System for Generating LLM Pipelines. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, CHI '25, pages 1–21. ACM.
Jintian Zhang, Xin Xu, Ningyu Zhang, Ruibo Liu, Bryan Hooi, and Shumin Deng. 2024c. Exploring collaboration mechanisms for LLM agents: A social psychology view. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14544–14607, Bangkok, Thailand. Association for Computational Linguistics.
Zixiao Zhao, Jing Sun, Zhe Hou, Zhiyuan Wei, Cheng-Hao Cai, Miao Qiao, and Jin Song Dong. 2025. MaCTG: Multi-agent collaborative thought graph for automatic programming. arXiv preprint arXiv:2410.19245.
Lianmin Zheng et al. 2023a. Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. In Advances in Neural Information Processing Systems.
Qingxiao Zheng, Zhongwei Xu, Abhinav Choudhry, Yuting Chen, Yongming Li, and Yun Huang. 2023b. Synergizing Human-AI Agency: A Guide of 23 Heuristics for Service Co-Creation with LLM-Based Agents. Preprint, arXiv:2310.15065.
Kunlun Zhu, Hongyi Du, Zhaochen Hong, Xiaocheng Yang, Shuyi Guo, Zhe Wang, Zhenhailong Wang, Cheng Qian, Xiangru Tang, Heng Ji, and Jiaxuan You. 2025. MultiAgentBench: Evaluating the collaboration and competition of LLM agents. Preprint, arXiv:2503.01935.
Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texygen: A benchmarking platform for text generation models. Preprint, arXiv:1802.01886.
Hanxi Zhu, Zijun
Liu, and Cheng Gao. 2024. Systematic Idea Refinement for Machine Learning Research Agents. Submitted to Tsinghua University Course: Advanced Machine Learning & Machine Learning. Under review.

A Proactivity Spectrum Supplementary

This section details how we classify the proactivity levels shown in Fig. 2. We classify agent proactivity along two dimensions—Process and Decision Making—using a rainbow scale from red (highest) to purple (lowest). We also note that the Planning phase remains under-explored, likely due to LLM agents' low confidence in autonomous planning.

The highest level of agent proactivity, marked in red, embodies a fully agent-driven pipeline. At this level, agents autonomously perform all tasks, including discussion, idea sharing, and peer review, without any human guidance or feedback. In MaCTG (Zhao et al., 2025), agents are assigned individual tasks and kick off the project on their own—they come up with ideas, write code, assemble components, validate results, and refine the output. From start to finish, the entire creative process runs without any human input.

At the second tier of proactivity, represented in yellow and green, human intervention increases slightly and helps shape the agents' output, resulting in more stable and predictable behavior (Zhang et al., 2024b; Mellouli et al., 2024). Co-Scientist (Gottweis et al., 2025) enables human users to inject additional ideas into a workspace shared among agents, stimulating the agents' divergent thinking during the creative process. CollabStory (Venkatraman et al., 2024) attempts to build a large-scale story creation database with minimal human effort: it provides LLM agents with brief human instructions and with storylines from previous agents. These iterative human inputs exert a latent influence on the agents' idea generation, even without direct outlines.

Human-agent synergy, in contrast, yields medium proactivity, characterized by peer-level collaboration.
Both parties jointly engage in the Process phase to enhance the diversity and feasibility of creative outputs. However, to prevent potential ethical hazards and unexpected outcomes, these frameworks tend to entrust the final evaluation to human users, and thus inevitably exhibit low agent proactivity in Decision Making (Hou et al., 2024; Lin et al., 2022).

In line with our definition of proactivity in Section 2, systems that follow data-driven processes or act beyond direct human instructions demonstrate a higher level of proactivity. In our classification, such cases are highlighted in blue, and this increase in proactivity is particularly prominent in the Process phase. For example, ContextCam (Fan et al., 2024) not only receives iterative user requests during refinement but also incorporates environmental data collected from its sensors, such as weather conditions, camera input, and audio input. Colin (Ye et al., 2024) exhibits agent proactivity in a different way: the system initiates questions to trace users' understanding and ideas, rather than relying on the reactive, prompt-based communication of typical human-agent synergy frameworks.

The purple-marked work shows relatively low proactivity in both phases. Humans mainly use the LLM agent to generate ideas from alternative perspectives, helping to fill in where human thinking might be limited. The system keeps its creative output
strong by leaning on a solid human-driven backbone and manual evaluation. While the results are good, it also imposes an excessive load on de- signers and creators (Wan et al., 2024; He et al., 2024; Lim and Perrault, 2024). 20 MAS Technique Task Domain Framework Divergent ExplorationAUT and RAT Long-Term Guidance (2025) Character Design PersonaGym (2024) Creative Writing Creativity Support (LLMs) (2024) Humor Co-Creation Meme Alone (2025) Idea Generation Co-GPT Ideation (2024) Idea Generation Group-AI Brainwriting (2024) Idea Generation Virtual Canvas (2024) Image Generation ContextCam (2024) Interior Color Design Ideation C2Ideas (2024) Research Ideation Ideation Co-Pilot (2024) Research Ideation PersonaFlow (2024c) Scientific Research Co-creation VirSci (2025) Sketches Generation StoryDrawer (2022) Story Generation ICCRI (2025) Story Generation SPARKIT (2024) Iterative RefinementCharacter Design CharacterMeet (2024) Debating (Fairness) Multi-Agent Debate (2024) Hallucination Mitigation Hallucination Mitigation (2024b) Legal Consultation LawLuo (2024) LLM Pipeline Generation ChainBuddy (2025) Product Design DesignGPT (2023) Scientific Research Co-creation Baby-AIGS-MLer (2024) Screenwriting HoLLMwood (2024) Sketches Generation CICADA (2024) Social Simulation Generative Agents (2023) Collaborative SynthesisAgent Benchmarking TheAgentCompany (2024) Agent Collab. 
Visualisation AgentCoord (2024) Cognitive Synergy Solo Performance Prompting (2024c) Creative Writing Human-AI Co-creativity (2024) Creativity Simulation Creative Agents (2024) Formal Proof ProofNet (2023) Program Design Beyond Code Generation (2025) Recommendation Agent4Rec (2024a) Research Ideation CoQuest (2024b) Research Peer Review MARG (2024) Scientific Peer Review AgentReview (2024) Scientific Research Co-creation CrewAI (2024) Scientific Research Co-creation Co-Scientist (2025) Silhouette Generation Human-Machine Co-Creation (2024) Software Engineering ChatDev (2024) Software Engineering MaCTG (2025) Story Generation CollabStory (2024) Story Generation Colin (2024) Story Generation Mathemyths (2024b) Story Generation StoReys (2024) Story Generation StoryVerse (2024b) UI Prototyping MAxPrototyper (2024) Table 1: Overview of representative MAS frameworks categorized by their core creative techniques—Divergent Exploration, Iterative Refinement, and Collaborative Synthesis. 21 Task Task Description Available Datasets Problem-Solving in Physically Grounded ScenariosTest multi-agent’s ability to think resourcefully and act creatively in novel physical situations.MacGyver (Tian et al., 2024) Creative Writing Evaluate the writing skills and collaborative abil- ities of multi-agents.Human-AI co-writing Stories (Chakrabarty et al., 2024), Collab- Story (Venkatraman et al., 2024) Music Genre Evaluate the multi-agents in robot dance cre- ation.(De Filippo et al., 2024) Character Design Evaluating the creativity of multi-agent systems in visualizing and generating new characters.(Lataifeh et al., 2024) QA problem Open-ended question task. 
TriviaQA (Joshi et al., 2017), QA in Game simulation (Park et al., 2023), GPQA Diamond Set (Rein et al., 2024) Codenames Task Evaluate the models’ ability to identify words associated with a given word.(Srivastava et al., 2023) Mathematical Formal Proof Generation and VerificationTest the model’s ability of autoformalization and formal proving of undergraduate-level mathe- matics.(Azerbayev et al., 2023) Idea Generation Quantitatively evaluate and compare the ideas generated by LLMs.AI Idea Bench 2025(Qiu et al., 2025), AMiner Computer Science Dataset (Tang et al., 2008), LiveIdeaBench (Ruan et al., 2024) Fairness-Aware Debating Evaluate the ethical and practical implications of automated decision-making systems in the justice system.COMPAS dataset (Larson et al., 2016) Table 2: Creative tasks along with their associated datasets. Tasks that lack datasets and rely primarily on user studies are not included. Framework Granularity Method Persona Example Solo Performance Prompting (2024c) CoarseModel-Generated Self-Defined LLM
|
https://arxiv.org/abs/2505.21116v1
|
Discussion (2024) Model-Generated Self-Defined PersonaGym (2024) Human-Defined Self-Defined Baby-AIGS-MLer (2024) Human-Defined Assistant SPARKIT (2024) Human-Defined Self-Defined Multi-Agent Debate (2024) Human-Defined Debater Acceleron (2024) Human-Defined Mentor & Colleague ChainBuddy (2025) Human-Defined Mentor & Planner TheAgentCompany (2024) Medium-CoarseModel-Generated Company Employee HoLLMwood (2024) Human-Defined Artist TRIZ Agents (2025) Human-Defined Problem Solver Co-Scientist (2025) Human-Defined Researcher MaCTG (2025) Human-Defined Programmer DesignGPT (2023) Human-Defined Self-Defined CoQuest (2024b) Human-Defined Researcher LawLuo (2024) Human-Defined Lawyer MARG (2024) Human-Defined Expert PersonaFlow (2024c) FineData-Derived Researcher VirSci (2025) Data-Derived Researcher Agent4Rec (2024a) Data-Derived Media User PersonaLLM (2024) Model-Generated Self-Defined The Power of Personality (2025) Model-Generated Self-Defined Creative Agents (2024) Model-Generated Artist CoAGent (2023b) Model-Generated Self-Defined Generative Agents (2023) Human-defined Sandbox Character AgentReview (2024) Human-defined Reviewer Table 3: Summary of agent profile granularity and generation methods in MAS, with each paradigm’s role definition and paper citation. ‘Self-defined’ personas grant agents the freedom to adopt diverse characters, promoting flexible collaboration and creative innovation. 22 Paper Task Subjective Objective Kumar et al. (2025) AUT and RAT TTCT, Boden’s Criterion, and othersSemantic similarity Duan et al. (2025) AUT and others TTCT - Lu et al. (2024) AUT and others TTCT - Ge et al. (2025) Conceptual Design TTCT - Lim and Perrault (2024) Idea Generation TTCT - Shaer et al. (2024) Idea Generation Innovation, Insightfullness, and othersSemantic Similarity Sun et al. (2024) Legal Consultation Personalization and Profes- sionalism- Azerbayev et al. (2023) Mathematical Proving - BLEU Score Ding et al. 
(2023) Product Design Novelty, Completeness, and Feasibility- Wan and Kalman (2025) Plot Generation - Semantic Similarity Chakrabarty et al. (2024) Poem Writing Fluency and Creativity - D’Arcy et al. (2024) Paper Review Generation Specificity and Overall Rat- ing- Liu et al. (2024b) Research Ideation Boden’s criterion - Liu et al. (2024c) Research Ideation Creativity, Usefulness, and Helpfulness- Chen et al. (2024) Screenwriting Interestingness, Relevance and othersEntropy-n, Self-BLEU and others Gottweis et al. (2025) Scientific Research Co-creation Novelty and Impact - Radensky (2024) Scientific Research Co-creation Novelty, Specificity, and othersSemantic Similarity Su et al. (2025) Scientific Research Co-creation Novelty, Clarity, Feasibility Semantic Euclidean Dis- tance Zhang et al. (2024b) Story Generation Readability, Perceived Cre- ativity, and others- Mellouli et al. (2024) Story Generation Interactivity, Coherence, and othersSelf-BLEU Lin et al. (2022) Story Generation Goal completion and Satisfi- cation- Venkatraman et al. (2024) Story Generation Creativity Entropy and others Ali et al. (2025) Story Generation TTCT - Hou et al. (2024) Interior Color Design Ideation Inspiring, Reasonableness, and others- Venkatesh et al. (2025) Image Editing & Generation Expressiveness, Aesthetic appeal, and othersCLIP scores and others Wu et al. (2025) Meme Generation Funniness, Creativity, and Shareability- Zhang et al. (2022) Sketches Generation TTCT - Ibarrola et al. (2024) Sketches Generation TTCT FID, TIE ,and Semantic loss Lawton et al. (2023) Sketches Generation Novelty and Surprise within MICSI- Lataifeh et al. (2024) Silhouette Generation Designer’s review FID Yuan et al. (2024) UI Prototype Generation - FID and Generation Diver- sity Table 4: Output evaluation methods employed across various tasks. The upper section details evaluations for text generation tasks, while the lower section focuses on image generation tasks. 23
https://arxiv.org/abs/2505.21116v1
arXiv:2505.21119v1 [cs.LG] 27 May 2025

Universal Value-Function Uncertainties

Moritz A. Zanger, Max Weltevrede, Yaniv Oren, Pascal R. Van der Vaart, Caroline Horsch, Wendelin Böhmer, Matthijs T. J. Spaan
Department of Intelligent Systems, Delft University of Technology, Delft, 2628 XE, The Netherlands
Correspondence: m.a.zanger@tudelft.nl

Abstract

Estimating epistemic uncertainty in value functions is a crucial challenge for many aspects of reinforcement learning (RL), including efficient exploration, safe decision-making, and offline RL. While deep ensembles provide a robust method for quantifying value uncertainty, they come with significant computational overhead. Single-model methods, while computationally favorable, often rely on heuristics and typically require additional mechanisms to propagate myopic uncertainty estimates. In this work we introduce universal value-function uncertainties (UVU), which, similar in spirit to random network distillation (RND), quantify uncertainty as squared prediction errors between an online learner and a fixed, randomly initialized target network. Unlike RND, UVU errors reflect policy-conditional value uncertainty, incorporating the future uncertainties any given policy may encounter. This is due to the training procedure employed in UVU: the online network is trained using temporal difference learning with a synthetic reward derived from the fixed, randomly initialized target network. We provide an extensive theoretical analysis of our approach using neural tangent kernel (NTK) theory and show that in the limit of infinite network width, UVU errors are exactly equivalent to the variance of an ensemble of independent universal value functions. Empirically, we show that UVU achieves equal performance to large ensembles on challenging multi-task offline RL settings, while offering simplicity and substantial computational savings.
Preprint. Under review.

1 Introduction

Deep reinforcement learning (RL) has emerged as an essential paradigm for addressing difficult sequential decision-making problems [Mnih et al., 2015, Silver et al., 2016, Vinyals et al., 2019], but a more widespread deployment of agents to real-world applications remains challenging. Open problems such as efficient exploration, scalable offline learning, and safety pose persistent obstacles to this transition. Central to these capabilities is the quantification of epistemic uncertainty, an agent's uncertainty due to limited data. In the context of RL, uncertainty estimation relating to the value function is of particular importance, as it reflects uncertainty about the long-term consequences of actions. However, computationally tractable estimation of value-function uncertainty remains a challenge.

Bayesian RL approaches, in both their model-based [Ghavamzadeh et al., 2015] and model-free [Dearden et al., 1998] flavors, typically come with sound theoretical underpinnings but face significant computational hurdles due to the general intractability of posterior inference. Theoretical guarantees for the latter are moreover often complicated by the use of training procedures like temporal difference (TD) learning with bootstrapping. Conversely, deep ensembles [Lakshminarayanan et al., 2017] have emerged as a reliable standard for practical value uncertainty estimation in deep RL [Osband et al., 2016, Chen et al., 2017]. Empirically, independently trained value functions from random initialization provide effective uncertainty estimates that correlate well with true estimation errors. Although in general more tractable than full posterior inference, this approach remains computationally challenging for larger models, where a manyfold increase in computation and memory severely limits scalability. Various single-model approaches like
https://arxiv.org/abs/2505.21119v1
random network distillation (RND) [Burda et al., 2019], pseudo counts [Bellemare et al., 2016], or intrinsic curiosity [Pathak et al., 2017] efficiently capture myopic epistemic uncertainty but require additional propagation mechanisms to obtain value uncertainties [O'Donoghue et al., 2018, Janz et al., 2019, Zhou et al., 2020] and often elude a thorough theoretical understanding. We conclude that there persists a lack of computationally efficient single-model approaches with the ability to directly estimate policy-dependent value uncertainties with a strong theoretical foundation.

To this end, we introduce universal value-function uncertainties (UVU), a novel method designed to estimate epistemic uncertainty of value functions for any given policy using a single-model architecture. Similar in spirit to the well-known RND algorithm, UVU quantifies uncertainty through a prediction error between an online learner u and a fixed, randomly initialized target network g. Crucially, and in contrast to the regression objective of RND, UVU optimizes its online network u using temporal difference (TD) learning with a synthetic reward r_g generated entirely from the target network g. By construction, the reward r_g implies a value learning problem to which the target function g itself is a solution, forcing the online learner u to recover g through minimization of TD losses. UVU then quantifies uncertainty as the squared prediction error between the online learner and the fixed target function. Unlike previous methods, our design requires no training of multiple models (e.g., ensembles) nor separate value and uncertainty models (e.g., RND, ICM). Furthermore, we design UVU as a universal policy-conditioned model (comparable to universal value function approximators [Schaul et al., 2015]), that is, it takes as input a state, action, and policy encoding and predicts the epistemic uncertainty associated with the value function for the encoded policy.
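As a point of reference for the contrast drawn above, RND's prediction-error mechanism can be sketched in a few lines. The following is a toy NumPy illustration, not the paper's setup: network sizes, data, and the training schedule are our own choices. A predictor trained toward a frozen random target yields small errors on seen inputs and larger errors far from the data.

```python
import numpy as np

rng = np.random.default_rng(0)

def init(d_in, d_h):
    # One-hidden-layer network parameters (toy stand-ins for target g / predictor u).
    return {"W1": rng.normal(size=(d_in, d_h)) / np.sqrt(d_in),
            "W2": rng.normal(size=(d_h, 1)) / np.sqrt(d_h)}

def forward(p, X):
    return np.tanh(X @ p["W1"]) @ p["W2"]

d_in, d_h = 2, 64
g = init(d_in, d_h)   # fixed, randomly initialized target (never trained)
u = init(d_in, d_h)   # online predictor

X = rng.normal(size=(128, d_in))   # "seen" data points
y = forward(g, X)                  # frozen target outputs

lr = 0.05
for _ in range(3000):              # full-batch gradient descent on the square loss
    H = np.tanh(X @ u["W1"])
    err = H @ u["W2"] - y                                   # (N, 1) residual
    gW2 = H.T @ err / len(X)
    gW1 = X.T @ ((err @ u["W2"].T) * (1 - H**2)) / len(X)   # backprop through tanh
    u["W1"] -= lr * gW1
    u["W2"] -= lr * gW2

def rnd_error(x):
    # Squared prediction error between predictor and frozen target (the novelty signal).
    d = forward(u, x) - forward(g, x)
    return 0.5 * d.item() ** 2

e_in = np.mean([rnd_error(x[None]) for x in X])   # in-distribution error
e_out = rnd_error(np.array([[6.0, -6.0]]))        # point far from the data
```

After training, `e_in` is driven toward zero on the dataset, while `e_out` remains large because nothing constrains the predictor away from the data; this gap is exactly the myopic signal that UVU replaces with a policy-conditional one.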
A key contribution of our work is a thorough theoretical analysis of UVU using the framework of neural tangent kernels (NTK) [Jacot et al., 2018]. Specifically, we characterize the learning dynamics of wide neural networks with TD losses and gradient descent to obtain closed-form solutions for the convergence and generalization behavior of neural network value functions. In the limit of infinite network width, we then show that prediction errors generated by UVU are equivalent to the variance of an ensemble of universal value functions, both in expectation and with finite-sample estimators. We validate UVU empirically on an offline multi-task benchmark from the Minigrid suite where agents are required to reject tasks they cannot perform to achieve maximal scores. We show that UVU's uncertainty estimates perform comparably to large deep ensembles, while drastically reducing the computational footprint.

2 Preliminaries

We frame our work within the standard Markov Decision Process (MDP) [Bellman, 1957] formalism, defined by the tuple (S, A, R, γ, P, µ). Here, S is the state space, A is the action space, R : S × A → P(R) is the distribution of immediate rewards, γ ∈ [0, 1) is the discount factor, P : S × A → P(S) is the transition probability kernel, and µ ∈ P(S) is the initial state distribution. An RL agent interacts with this environment by selecting actions according to a policy π : S → P(A). At each timestep t, the agent is in state S_t, takes action A_t ∼ π(·|S_t), receives a reward R_t ∼ R(·|S_t, A_t),
and transitions to a new state S_{t+1} ∼ P(·|S_t, A_t). We quantify the merit of taking action A_t = a in state S_t = s and subsequently following policy π by the action-value function, or Q-function, Q^π : S × A → R, which accounts for the cumulative discounted future rewards and adheres to a recursive consistency condition described by the Bellman equation

Q^π(s, a) = E_{R,π,P}[ R_0 + γ Q^π(S_1, A_1) | S_0 = s, A_0 = a ].    (1)

The agent's objective then is to maximize expected returns J(π) = E_{S_0∼µ, A_0∼π(·|S_0)}[Q^π(S_0, A_0)]. Often, we may be interested in agents capable of operating a variety of policies to achieve different goals. Universal value function approximators (UVFAs) [Schaul et al., 2015] address this by additionally conditioning value functions on an encoding z ∈ Z. This encoding specifies a current policy context, indicating for example a task or goal. We denote such universal Q-functions as Q(s, a, z). In the context of this work, we consider z to be a parameterization or indexing of a specific policy π(·|s, z); in other words, Q : S × A × Z → R with Q(s, a, z) ≡ Q^{π(·|s,z)}(s, a).

In both the single- and multi-task settings, obtaining effective policies may require efficient exploration and an agent's ability to reason about epistemic uncertainty. This source of uncertainty, in contrast to aleatoric uncertainty, stems from a lack of knowledge and may in general be reduced by the acquisition of data. In the context of RL, we make an additional distinction between myopic uncertainty and value uncertainty.

2.1 Myopic Uncertainty and Neural Tangent Kernels

Myopic uncertainty estimation methods, such as RND or ensembles predicting immediate rewards or next states, quantify epistemic uncertainty without explicitly accounting for future uncertainties along trajectories. We first briefly recall the RND algorithm [Burda et al., 2019], before introducing the neural tangent kernel (NTK) [Jacot et al., 2018] framework.
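Before turning to these tools, the Bellman equation (1) can be made concrete: for a finite MDP it characterizes Q^π as the fixed point of a linear operator, so Q^π follows from a single linear solve. A minimal NumPy sketch; the two-state MDP and all numbers below are invented for illustration:

```python
import numpy as np

# Toy MDP: 2 states x 2 actions with a fixed stochastic policy pi.
n_s, n_a, gamma = 2, 2, 0.9
P = np.zeros((n_s, n_a, n_s))            # P[s, a, s'] transition kernel
P[0, 0] = [0.8, 0.2]; P[0, 1] = [0.1, 0.9]
P[1, 0] = [0.5, 0.5]; P[1, 1] = [0.0, 1.0]
r = np.array([[1.0, 0.0], [0.0, 2.0]])   # expected immediate reward r(s, a)
pi = np.array([[0.5, 0.5], [0.2, 0.8]])  # pi[s, a] = pi(a | s)

# Bellman: Q(s,a) = r(s,a) + gamma * sum_{s'} P(s'|s,a) sum_{a'} pi(a'|s') Q(s',a').
# Flatten (s, a) pairs and solve the linear system (I - gamma * P_pi) q = r.
P_pi = np.einsum("sat,tb->satb", P, pi).reshape(n_s * n_a, n_s * n_a)
q = np.linalg.solve(np.eye(n_s * n_a) - gamma * P_pi, r.reshape(-1))
Q = q.reshape(n_s, n_a)

# Fixed-point check: applying the Bellman operator leaves Q unchanged.
V = (pi * Q).sum(axis=1)                 # V(s) = sum_a pi(a|s) Q(s, a)
assert np.allclose(Q, r + gamma * P @ V)
```

The same fixed point is what TD learning approaches iteratively; the uncertainty methods discussed next ask how confident a learned approximation of this Q is.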
Random network distillation comprises two neural networks: a fixed, randomly initialized target network g(x; ψ_0), and a predictor network u(x; ϑ_t). The online predictor u(x; ϑ_t) is trained via gradient descent to minimize a square loss between its own predictions and the target network's outputs on a set of data points X = {x_i ∈ R^{d_in}}_{i=1}^{N_D}. The RND prediction error at a test point x then serves as an uncertainty or novelty signal. The loss and error function of RND are given as

L_rnd(ϑ_t) = 1/2 (u(X; ϑ_t) − g(X; ψ_0))²,  and  ϵ²_rnd(x; ϑ_t, ψ_0) = 1/2 (u(x; ϑ_t) − g(x; ψ_0))².    (2)

This mechanism relies on the idea that the predictor network recovers the outputs of the target network only for datapoints contained in the dataset, x_i ∈ X, while a measurable error ϵ²_rnd persists for out-of-distribution test samples x_T ∉ X, yielding a measure of epistemic uncertainty.

Next, we introduce the framework of neural tangent kernels, an analytical framework we employ for the study of neural network and deep ensemble behavior. Consider a neural network f(x, θ_t) : R^{n_in} → R with hidden layer widths n_1, . . . , n_L = n and inputs x ∈ R^{n_in}, a dataset X, and labels Y = {y_i ∈ R}_{i=1}^{N_D}. Inputs x_i may, for example, be state-action tuples, and labels y_i may be rewards. The network parameters θ_0 ∈ R^{n_p} are initialized randomly, θ_0 ∼ N(0, 1), and updated with gradient descent with infinitesimal step sizes, also called gradient flow. In the limit
of infinite width n, the function initialization f(·, θ_0), as shown by Lee et al. [2018], is equivalent to a Gaussian process prior with a specific kernel κ : R^{n_in} × R^{n_in} → R called the neural network Gaussian process (NNGP). The functional evolution of f through gradient flow is then governed by a gradient inner-product kernel Θ : R^{n_in} × R^{n_in} → R, yielding

Θ(x, x′) = ∇_θ f(x, θ_0)^⊤ ∇_θ f(x′, θ_0),  and  κ(x, x′) = E[f(x, θ_0) f(x′, θ_0)].    (3)

Remarkably, seminal work by Jacot et al. [2018] showed that in the limit of infinite width and appropriate parametrization¹, the kernel Θ becomes deterministic and remains constant throughout training. This limiting kernel, referred to as the neural tangent kernel (NTK), leads to analytically tractable training dynamics for various loss functions, including the squared loss L(θ_t) = 1/2 ∥f(X; θ_t) − Y∥²_2. Owing to this, one can show [Jacot et al., 2018, Lee et al., 2020] that for t → ∞, post-convergence function evaluations f(X_T, θ_∞) on a set of test points X_T, too, are Gaussian with mean E[f(X_T, θ_∞)] = Θ_{X_T X} Θ_{XX}^{-1} Y and covariance

Cov[f(X_T, θ_∞)] = κ_{X_T X_T} − (Θ_{X_T X} Θ_{XX}^{-1} κ_{X X_T} + h.c.) + Θ_{X_T X} Θ_{XX}^{-1} κ_{XX} Θ_{XX}^{-1} Θ_{X X_T},    (4)

where h.c. denotes the Hermitian conjugate of the preceding term and we use the shorthands Θ_{X_1 X_2} = Θ(X_1, X_2) and κ_{X_1 X_2} = κ(X_1, X_2). This expression provides a closed-form solution for the epistemic uncertainty captured by an infinite ensemble of NNs in the NTK regime trained with square losses. For example, the predictive variances of such ensembles are easily obtained as the diagonal entries of Eq. (4). While requiring an idealized setting, NTK theory offers a solid theoretical grounding for quantifying the behavior of deep ensembles and, by extension, myopic uncertainty estimates from related approaches. However, this analysis does not extend to value functions trained with TD losses and bootstrapping, as is common in practical reinforcement learning settings.
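The closed form in Eq. (4) is straightforward to evaluate numerically. The sketch below uses RBF kernels as stand-ins for Θ and κ — real NTK/NNGP kernels are derived from the architecture, so these stand-ins are our simplification — and checks two qualitative properties of the NTK-regime posterior: variance vanishes at training inputs and reverts to the prior far from the data.

```python
import numpy as np

def rbf(A, B, ls):
    # Stand-in kernel; actual NTK/NNGP kernels are architecture-derived.
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / ls**2)

X = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # training inputs
XT = np.array([0.0, 10.0])                   # test inputs: one seen, one far away

theta_xx = rbf(X, X, 1.0);  theta_tx = rbf(XT, X, 1.0)   # Theta (NTK stand-in)
kappa_xx = rbf(X, X, 0.8);  kappa_tt = rbf(XT, XT, 0.8)  # kappa (NNGP stand-in)
kappa_tx = rbf(XT, X, 0.8)

# Eq. (4): kappa_TT - (Theta_TX Theta_XX^-1 kappa_XT + h.c.)
#          + Theta_TX Theta_XX^-1 kappa_XX Theta_XX^-1 Theta_XT
A = np.linalg.solve(theta_xx, theta_tx.T).T   # Theta_{X_T X} Theta_{XX}^{-1}
cross = A @ kappa_tx.T
cov = kappa_tt - (cross + cross.T) + A @ kappa_xx @ A.T
var = np.diag(cov)
```

At the seen input the three correction terms cancel the prior exactly (`var[0] ≈ 0`), while at the distant input all cross-kernels are negligible and the prior variance κ(x, x) = 1 is recovered.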
2.2 Value Uncertainty

In contrast to myopic uncertainties, value uncertainty quantifies a model's lack of knowledge about the value Q^π(s, a). As such, it inherently depends on the future trajectories induced by the policy π. Due to this need to account for accumulated uncertainties over potentially long horizons, value uncertainty estimation is typically more difficult than its myopic counterpart.

¹ So-called NTK parametrization scales forward/backward passes appropriately; see Jacot et al. [2018].

A widely used technique [Osband et al., 2016, Chen et al., 2017, An et al., 2021] to this end is the use of deep ensembles of value functions Q(s, a, θ_t) : S × A → R from random initializations θ_0. Q-functions are trained on transition data X_TD = {s_i, a_i}_{i=1}^{N_D}, X′_TD = {s′_i, a′_i}_{i=1}^{N_D}, and r = {r_i}_{i=1}^{N_D}, where the s′_i are samples from the transition kernel P and the a′_i are samples from a policy π. Q-functions are then optimized through gradient descent on a temporal difference (TD) loss given by

L(θ_t) = 1/2 ∥[γ Q^π(X′_TD, θ_t)]_sg + r − Q^π(X_TD, θ_t)∥²_2,    (5)

where [·]_sg indicates a stop-gradient operation. Due to the stopping of gradient flow through Q(X′, θ_t), we refer to this operation as a semi-gradient update. Uncertainty estimates can then be obtained as the variance σ²_q(s, a) = V_{θ_0}[Q(s, a, θ_t)] across an ensemble of Q-functions from random initializations. While empirically successful, TD-trained deep ensembles are not as well understood as the supervised learning
setting outlined in Section 2.1. Due to the use of bootstrapped TD losses, the closed-form NTK-regime solutions in Eq. (4) do not apply to deep value function ensembles.

An alternative to the above approach is the propagation of myopic uncertainty estimates. Several prior methods [O'Donoghue et al., 2018, Zhou et al., 2020, Luis et al., 2023] formalize this setting under a model-based perspective, where transition models ˜P(·|s, a) are sampled from a Bayesian posterior conditioned on transition data up to time t. For acyclic MDPs, this setting permits a consistency condition similar to the Bellman equation that upper-bounds value uncertainties recursively. While this approach devises a method for obtaining value uncertainties from propagated myopic uncertainties, several open problems remain, such as the tightness of model-free bounds of this kind [Janz et al., 2019, Van der Vaart et al., 2025] as well as how to prevent underestimation of these upper bounds due to the use of function approximation [Rashid et al., 2020, Zanger et al., 2024].

3 Universal Value-Function Uncertainties

Our method, universal value-function uncertainties (UVU), measures epistemic value uncertainty as the prediction error between an online learner and a fixed target network, similar in spirit to random network distillation [Burda et al., 2019]. However, while RND quantifies myopic uncertainty through immediate prediction errors, UVU modifies the training process of the online learner such that the resulting prediction errors reflect value-function uncertainties, that is, uncertainty about long-term returns under a given policy. Our method centers on the interplay of two distinct neural networks: an online learner u(s, a, z, ϑ_t) : S × A × Z → R, parameterized by weights ϑ_t, and a fixed, randomly initialized target network g(s, a, z, ψ_0) : S × A × Z → R, parameterized by weights ψ_0.
Given a transition (s, a, s′) and policy encoding z, we draw the subsequent action a′ from the policy π(·|s′, z). Then, we use the fixed target network g to generate synthetic rewards as

r^z_g(s, a, s′, a′) = g(s, a, z, ψ_0) − γ g(s′, a′, z, ψ_0).    (6)

While the weights ψ_0 of the target network remain fixed at initialization, the online network u is trained to minimize a TD loss using the synthetic reward r^z_g. Given a dataset X = {s_i, a_i, z_i}_{i=1}^{N_D}, we have

L(ϑ_t) = 1/(2 N_D) Σ_{i=1}^{N_D} ( γ [u(s′_i, a′_i, z_i, ϑ_t)]_sg + r^z_g(s_i, a_i, s′_i, a′_i) − u(s_i, a_i, z_i, ϑ_t) )²,    (7)

where [·]_sg indicates a stop-gradient operation. For any tuple (s, a, z) (contained in X or not), we measure predictive uncertainty as the squared prediction error between the learner and the target function,

ϵ(s, a, z, ϑ_t, ψ_0)² = ( u(s, a, z, ϑ_t) − g(s, a, z, ψ_0) )².    (8)

The intuition behind this design is that, by construction, the value function associated with the policy π(·|s, z) and the synthetic rewards r^z_g(s, a, s′, a′) exactly equals the fixed target network g(s, a, z, ψ_0). As a sanity check, note that the target function g(s, a, z, ψ_0) itself satisfies the Bellman equation for the policy π(·|s, z) and the synthetic reward definition in Eq. (6), constituting a random value function to r^z_g
and hence achieves zero loss according to Eq. (7). Therefore, if the dataset X sufficiently covers the dynamics induced by π(·|s, z), the online network u(s, a, z, ϑ_t) is able to recover g(s, a, z, ψ_0) exactly, nullifying prediction errors. However, when data coverage is incomplete for the evaluated policy, minimization of the TD loss (7) is not sufficient for the online network u(s, a, z, ϑ_t) to recover the target network predictions g(s, a, z, ψ_0). This discrepancy is captured by the prediction errors, which quantify epistemic uncertainty regarding future gaps in the available data.

Figure 2: (left:) Illustration of uncertainty estimation in tabular UVU with 4 independently initialized tables for u and g. Access to full trajectory data allows u to recover g. (right:) By executing action "b", trajectories are effectively truncated, preventing u from recovering g. All plots use γ = 0.7.

3.1 Building Intuition by an Example

Figure 1: Chain MDP of length N with unexplored actions b.

To build intuition for how UVU operates and captures value uncertainty, we first consider a tabular setting with a simple chain MDP as illustrated in Figure 1. Suppose we collect data from a deterministic policy π_d using action a exclusively. Given this dataset, suppose we would like to estimate the uncertainty associated with the value Q^{π(·|s,z)}(s, a) of a policy π(·|s, z) that differs from the data-collection policy in that it chooses action "b" in s_3. In our tabular setting, we then initialize random tables u_sa and g_sa. For every transition (s_t, a_t, s_{t+1}) contained in our single-trajectory dataset, we draw a_{t+1} ∼ π(·|s_{t+1}, z), compute the reward r_{g,t} as r_{g,t} = g_{s_t a_t} − γ g_{s_{t+1} a_{t+1}}, and update table entries with the rule u_{s_t a_t} ← r_{g,t} + γ u_{s_{t+1} a_{t+1}}. Fig. 2 visualizes this process for several independently initialized tables (rows in Fig. 2) for the data-collecting policy π_d (left), and for the altered policy π(·|s, z) (right), which chooses action "b" in s_3.
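The tabular procedure just described can be reproduced in a few lines. The NumPy sketch below is a toy reconstruction — chain length, discount, and seed are our own choices — that applies the update u_{s_t a_t} ← r_{g,t} + γ u_{s_{t+1} a_{t+1}} in one backward sweep over the single trajectory (sufficient here, since each entry's bootstrap target is updated first) and checks both cases from Fig. 2.

```python
import numpy as np

rng = np.random.default_rng(3)
N, gamma = 6, 0.7
# Random tables u (learner) and g (fixed target) over states x actions {a=0, b=1};
# row N is an absorbing terminal state with value 0 in both tables.
g = np.vstack([rng.normal(size=(N, 2)), np.zeros((1, 2))])
u = np.vstack([rng.normal(size=(N, 2)), np.zeros((1, 2))])

def run(u, policy):
    u = u.copy()
    # Dataset: one trajectory s0 -a-> s1 -a-> ... -a-> terminal (action a only).
    for s in reversed(range(N)):
        a, s_next = 0, s + 1
        a_next = policy(s_next)                    # a' drawn from the evaluated policy
        r_g = g[s, a] - gamma * g[s_next, a_next]  # synthetic reward from the target
        u[s, a] = r_g + gamma * u[s_next, a_next]  # tabular UVU update
    return u

# Data-collecting policy (always a): u recovers g exactly on visited pairs.
u_d = run(u, lambda s: 0)

# Policy that switches to the unexplored action b in s3: the trajectory is
# effectively truncated, since u[s3, b] never receives an update and stays at
# its random initialization, so u cannot recover g upstream of s3.
u_b = run(u, lambda s: 1 if s == 3 else 0)
err = (u_b[:, 0] - g[:, 0]) ** 2
```

Here `err[0]`, `err[1]`, `err[2]` are strictly positive (uncertainty upstream of the data gap), while states at or beyond s_3 under action a remain exactly recoverable.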
We outline how this procedure yields uncertainty estimates. We first note that one may regard g as a randomly generated value function, for which we derive the corresponding reward function r_g. As g_sa is, by construction, the value function corresponding to r_g, one may expect the update rule applied to u_sa to cause u_sa to recover g_sa. Crucially, however, this is only possible if sufficient data is available for the evaluated policy. When a policy diverges from the available data, as occurs under π(·|s, z) in s_3, the collected trajectory is effectively truncated. Consequently, u_{s_1 a} and u_{s_2 a} receive updates from u_{s_3 b}, which remains at its initialization, rather than inferring the reward-generating function g_sa. In the absence of long-term data, the empirical Bellman equations reflected in our updates do not uniquely determine the underlying value function g_sa. Indeed, both u_sa and g_sa incur zero TD error on the r.h.s. of Fig. 2, yet differ significantly from each other. It is this ambiguity that the UVU errors (g_sa − u_sa)² quantify. To ensure u recovers g, longer rollouts under the policy π(·|s, z) are required to sufficiently constrain the solution space dictated by the Bellman equations (as seen in Fig. 2, left). Figure 3 illustrates uncertainty estimates for the shown chain MDP using neural networks and for a whole family of policies π(·|s, z) which select the
unexplored action b with probability 1 − z. We analyze the predictive variance of an ensemble of 128 universal Q-functions, each conditioned on the policy π(·|s, z). In the bottom row, we plot the squared prediction error of a single UVU model, averaged over 128 independent heads. Both approaches show peaked uncertainty in early sections, as policies are more likely to eventually choose the unknown action "b", and low uncertainty closer to the terminal state and for z close to 1. A comparison with RND is provided in Appendix B.3.

4 What Uncertainties Do Universal Value-Function Uncertainties Learn?

While the previous section provided intuition for UVU, we now derive an analytical characterization of the uncertainties captured by the prediction errors ϵ between a converged online learner u and the fixed target g. We turn to NTK theory to characterize the generalization properties of the involved neural networks in the limit of infinite width, allowing us to draw an exact equality between the squared prediction errors of UVU and the variance of universal value function ensembles.

Figure 3: From left to right, (1. and 2.): Variance of an ensemble of 128 universal Q-functions trained on a chain MDP dataset. (3. and 4.): Value uncertainty as measured by UVU prediction errors with a single 128-headed model. All plots evaluate the "a" action of the chain MDP.

In the following analysis, we use the notational shorthands x = (s, a, z) and x′ = (s′, a′, z), and denote a neural network f(x, θ_t) with hidden layer widths n_1, . . . , n_L = n, transitions from X = {(s_i, a_i, z_i)}_{i=1}^{N_D} to X′ = {(s′_i, a′_i, z_i)}_{i=1}^{N_D}, where a′_i ∼ π(·|s′_i, z_i), and rewards r = {r_i}_{i=1}^{N_D}.
The evolution of the parameters θ_t under gradient descent with infinitesimal step sizes, also called gradient flow, is driven by the minimization of TD losses with

d/dt θ_t = −α ∇_θ L(θ_t),  and  L(θ_t) = 1/2 ∥[γ f(X′, θ_t)]_sg + r − f(X, θ_t)∥²_2.    (9)

We study the dynamics induced by this parameter evolution in the infinite-width limit n → ∞. In this regime, the learning dynamics of f become linear as the NTK becomes deterministic and stationary, permitting explicit closed-form expressions for the evolution of the function f(x, θ_t). In particular, we show that the post-convergence function lim_{t→∞} f(x, θ_t) is given by

f(x, θ_∞) = f(x, θ_0) − Θ_{xX} (Θ_{XX} − γ Θ_{X′X})^{-1} ( f(X, θ_0) − (γ f(X′, θ_0) + r) ),    (10)

where Θ_{xx′} is the NTK of f. The proof is given in Appendix A.1. This identity is useful to our analysis as it delineates any converged function f(x, θ_∞) trained with the TD loss (9) through its initialization f(x, θ_0). Theorem 1 leverages this deterministic dependency to express the distribution of post-convergence functions over random initializations θ_0.

Theorem 1. Let f(x, θ_t) be a NN with L hidden layers of width n_1, . . . , n_L = n trained with gradient flow to reduce the TD loss L(θ_t) = 1/2 ∥γ [f(X′, θ_t)]_sg + r − f(X, θ_t)∥²_2. In the limit of infinite width n → ∞ and time t → ∞, the distribution of predictions f(X_T, θ_∞) on a
set of test points X_T converges to a Gaussian with mean and covariance given by

E_{θ_0}[f(X_T, θ_∞)] = Θ_{X_T X} ∆_X^{-1} r,
Cov_{θ_0}[f(X_T, θ_∞)] = κ_{X_T X_T} − (Θ_{X_T X} ∆_X^{-1} Λ_{X_T} + h.c.) + Θ_{X_T X} ∆_X^{-1} (Λ_X − γ Λ_{X′}) ∆_X^{-1⊤} Θ_{X X_T},

where Θ_{xx′} is the NTK, κ_{xx′} is the NNGP kernel, h.c. denotes the Hermitian conjugate, and

∆_{˜X} = Θ_{X ˜X} − γ Θ_{X′ ˜X},  and  Λ_{˜X} = κ_{X ˜X} − γ κ_{X′ ˜X}.

The proof is provided in Appendix A.1. Theorem 1 is significant as it allows us to formalize explicitly the expected behavior and uncertainties of neural networks trained with semi-gradient TD losses, including universal value function ensembles and the prediction errors of UVU. In particular, the variance of an ensemble of universal Q-functions Q(X_T, θ_∞) over random initializations θ_0 is readily given by the diagonal entries of the covariance matrix Cov[Q(X_T, θ_∞)]. Applied to the UVU setting, Theorem 1 gives an expression for the converged online network u(x, ϑ_∞) = Θ_{xX} ∆_X^{-1} r^z_g trained with the synthetic rewards r^z_g = g(X, ψ_0) − γ g(X′, ψ_0). From this, it is straightforward to obtain the distribution of post-convergence prediction errors 1/2 ϵ(x, ϑ_∞, ψ_0)². In Corollary 1, we use this insight to conclude that the expected squared prediction errors of UVU precisely match the variance of value functions Q(x, θ_∞) over random initializations θ_0.

Corollary 1. Under the conditions of Theorem 1, let u(x, ϑ_∞) be a converged online predictor trained with synthetic rewards generated by the fixed target network g(x, ψ_0) with r^z_g = g(X, ψ_0) − γ g(X′, ψ_0). Furthermore, denote by V_{θ_0}[Q(x, θ_∞)] the variance of converged universal Q-functions. Assume u, g, and Q are architecturally equal and parameters are drawn i.i.d. θ_0, ϑ_0, ψ_0 ∼ N(0, 1). The expected squared prediction error coincides with the Q-function variance,

E_{ϑ_0, ψ_0}[ 1/2 ϵ(x, ϑ_∞, ψ_0)² ] = V_{θ_0}[ Q(x, θ_∞) ],    (11)

where the l.h.s. expectation and r.h.s. variance are taken over random initializations ϑ_0, ψ_0, θ_0.
Figure 4: (a) Ablation on GoToDoor-10 with different network widths. Shaded region indicates standard deviations over 5 seeds. (b) Runtime of various ensemble sizes vs. UVU. Ensembles are implemented with vmap in JAX [Bradbury et al., 2018]. (c) Illustration of the GoToDoor environment. The agent (red triangle) must navigate to the door indicated by the task specification z.

The proof is given in Appendix A.1.3. This result provides the central theoretical justification for UVU: in the limit of infinite width, our measure of uncertainty, the expected squared prediction error between the online and target network, is mathematically equivalent to the variance one would obtain by training an ensemble of universal Q-functions. In practice, we are moreover interested in the behavior of finite estimators, that is, ensemble variances estimated with a finite number of models. We furthermore implement UVU with a number of independent heads u_i and g_i sharing hidden layers. Corollary 2 shows that the distribution of the sample mean squared prediction error from M heads is identical to the distribution of the sample variance of an ensemble of M + 1 independently trained universal Q-functions.

Corollary 2. Under the conditions of Theorem 1, consider online and target networks with M independent heads u_i, g_i, i = 1, . . . , M, each
trained to convergence with errors ϵ_i(x, ϑ_∞, ψ_0). Let 1/2 ¯ϵ(x, ϑ_∞, ψ_0)² = 1/(2M) Σ_{i=1}^{M} ϵ_i(x, ϑ_∞, ψ_0)² be the sample mean squared prediction error over M heads. Moreover, consider M + 1 independent converged Q-functions Q_i(x; θ_∞) and denote their sample variance ¯σ²_Q(x, θ_∞) = 1/M Σ_{i=1}^{M+1} (Q_i(x; θ_∞) − ¯Q(x; θ_∞))², where ¯Q is the sample mean. The two estimators are identically distributed according to a scaled chi-squared distribution,

1/2 ¯ϵ(x, ϑ_∞, ψ_0)² =_D ¯σ²_Q(x, θ_∞),  ¯σ²_Q(x, θ_∞) ∼ (σ²_Q / M) χ²(M),    (12)

with M degrees of freedom, where σ²_Q(x, θ_∞) = V_{θ_0}[Q(x, θ_∞)] is the analytical variance of converged Q-functions given by Theorem 1.

The proof is provided in Appendix A.2.3. The distributional equivalence of these finite-sample estimators provides theoretical motivation for using a multi-headed architecture with shared hidden layers within a single UVU model, and for its use as an estimator of ensemble variances of universal Q-functions. While the assumptions of infinite width and gradient flow are theoretical idealizations, several empirical results suggest that insights from the NTK regime can translate well to practical finite-width deep learning models [Lee et al., 2020, Liu et al., 2020, Tsilivis and Kempe, 2022], motivating further empirical investigation in Section 5.

5 Empirical Analysis

Our empirical analysis is designed to assess whether UVU can effectively quantify value function uncertainty in practical settings, comparing its performance against established baselines, particularly deep ensembles. Specifically, we aim to address the following questions:

1. Does the theoretical motivation for UVU hold in practice, and do its uncertainty estimates enable effective decision-making comparable to deep ensembles?
2. How are uncertainty estimates generated by UVU affected by deviations from our theoretical analysis, namely finite network width?
To address these questions, we focus on an offline multitask RL setting with incomplete data, where reliable uncertainty estimation is crucial to attain high performance.

Table 1: Results of offline multitask RL with task rejection on different variations of the GoToDoor environment. Results are average evaluation returns of the best-performing policy over 10^5 gradient steps; intervals are 90% Student's t confidence intervals.

Size | DQN | BDQNP(3) | BDQNP(15) | BDQNP(35) | DQN-RND | DQN-RND-P | UVU (Ours)
5 | 5.50±.15 | 8.69±.24 | 10.50±.04 | 10.58±.03 | 3.94±.50 | 10.41±.12 | 10.54±.03
6 | 4.93±.12 | 7.66±.09 | 9.39±.04 | 9.57±.04 | 1.99±.40 | 9.28±.12 | 9.54±.03
7 | 4.58±.09 | 6.61±.16 | 8.49±.05 | 8.75±.06 | 2.66±.43 | 8.12±.23 | 8.73±.04
8 | 4.06±.12 | 5.91±.10 | 7.68±.05 | 7.92±.05 | 2.53±.54 | 7.40±.14 | 8.03±.04
9 | 3.66±.09 | 5.04±.08 | 6.69±.07 | 7.03±.13 | 2.39±.38 | 6.39±.19 | 7.29±.10
10 | 3.39±.11 | 4.64±.14 | 6.09±.13 | 6.53±.16 | 2.25±.48 | 5.64±.17 | 6.72±.12

5.1 Experimental Setup

In our experimental analysis, we use an offline variant of the GoToDoor environment from the Minigrid benchmark suite [Chevalier-Boisvert et al., 2023]. An example view is shown in Figure 4 (c). In this task, the agent navigates a grid world containing four doors of different colors, placed at random locations, and receives a task specification z indicating a target door color. Upon opening the correct door, the agent receives a reward and is placed at a random different location. Episodes are of fixed length and feature a randomly generated grid layout and random door positions and colors. In our experiments,
we use variations of different difficulties by increasing maximum grid sizes.

Dataset Collection. A dataset D = {(s_i, a_i, r_i, z_i, s′_i)}_{i=1}^{N_D} is collected using a policy that performs expertly but systematically fails for certain task/grid combinations (e.g., it cannot successfully open doors on the "north" wall, irrespective of color or grid layout). Policies seeking to improve upon the behavior policy thus ought to deviate from the dataset, inducing value uncertainty.

Task Rejection Protocol. All baselines implement a DQN-based agent trained in an offline fashion on D. As the agents aim to learn an optimal policy for all grids and tasks contained in D, the resulting greedy policy tends to deviate from the available data when the collecting policy is suboptimal. We employ a task-rejection protocol to quantify an agent's ability to recognize this divergence and the associated value uncertainty. As most task/grid combinations are contained in D, though with varying levels of policy expertise, myopic uncertainty is not sufficient for fulfilling this task. Specifically, upon encountering the initial state s_0, the agent is given the opportunity to reject a fixed selection of tasks (here, door colors). It is subsequently given one of the remaining, non-rejected tasks, and performance is measured by the average return achieved on the attempted task. Successful agents must thus either possess uncertainty estimates reliable enough to consistently reject tasks associated with a data/policy mismatch, or rely on out-of-distribution generalization. Similar protocols, known as accuracy-rejection curves, have been widely used in the supervised learning literature [Nadeem et al., 2009].

5.2 Results

We conduct experiments according to the above protocol and perform a quantitative evaluation of UVU and several baseline algorithms.
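The rejection step in this protocol reduces to ranking tasks by their uncertainty estimate at the initial state. A schematic sketch (function and variable names are ours, not from the paper):

```python
import numpy as np

def reject_tasks(uncertainty, k):
    """Reject the k tasks with the highest value-uncertainty estimate at s0.

    `uncertainty[z]` stands for the agent's uncertainty for task encoding z at
    the initial state and its greedy action; shapes and names are illustrative.
    Returns the set of task indices the agent is willing to attempt.
    """
    order = np.argsort(uncertainty)          # ascending by uncertainty
    keep = order[: len(uncertainty) - k]
    return set(keep.tolist())

# Hypothetical example: four door-color tasks, two with a data/policy mismatch.
unc = np.array([0.1, 0.9, 0.2, 0.8])         # tasks 1 and 3 look uncertain
accepted = reject_tasks(unc, 2)              # -> {0, 2}
```

Performance is then the average return over tasks drawn from the accepted set, so agents whose uncertainty estimates correctly flag the mismatched tasks keep only the tasks they can actually solve.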
All agents are trained offline and use the basic DQN architecture [Mnih et al., 2015] adapted for universal value functions, taking the task encoding $z$ as an additional input alongside the state (details are provided in Appendix B). Specifically, we compare UVU against several baselines: a DQN baseline with random task rejection (DQN); Bootstrapped DQN with randomized priors (BDQNP) [Osband et al., 2019]; a DQN adaptation of random network distillation (DQN-RND) [Burda et al., 2019]; and a version adapted with the uncertainty prior mechanism proposed by Zanger et al. [2024] (DQN-RND-P). Except for the DQN baseline, all algorithms reject the tasks with the highest uncertainty estimate, given the initial state $s_0$ and action $a_0$, which is chosen greedily by the agent. Table 1 shows the average return achieved by each method on the GoToDoor experiment across different maximum grid sizes, with average runtimes displayed in Fig. 4 (b). This result addresses our first research question regarding the practical effectiveness of UVU compared to ensembles and other baseline methods. As shown, the standard DQN baseline performs significantly worse than uncertainty-based algorithms, indicating that learned Q-functions do not generalize sufficiently to counterbalance inadequate uncertainty estimation. Both small and large ensembles significantly improve performance by leveraging uncertainty to reject tasks and policies associated with missing data. RND-based agents perform well when intrinsic reward priors are used. Our approach scores highly and outperforms many of the tested baselines with statistical significance, indicating that
it is indeed able to effectively quantify value uncertainty using a single-model, multi-headed architecture. We furthermore ablate UVU's dependency on network width, given that our theoretical analysis is situated in the infinite-width limit. Fig. 4 (a) shows that UVU's performance scales with network width similarly to the DQN and BDQNP baselines, indicating that finite-sized networks, provided appropriate representational capacity, are sufficient for effective uncertainty estimates.

6 Related Work

A body of literature considers the quantification of value function uncertainty in the context of exploration. Early works [Dearden et al., 1998, Engel et al., 2005] consider Bayesian adaptations of model-free RL algorithms. More recent works provide theoretical analyses of the Bayesian model-free setting and correct applications thereof [Fellows et al., 2021, Schmitt et al., 2023, Van der Vaart et al., 2025], which is a subject of debate due to the use of TD losses. Several works furthermore derive provably efficient model-free algorithms using frequentist upper bounds on values in tabular [Strehl et al., 2006, Jin et al., 2018] and linear settings [Jin et al., 2020]. Similarly, Yang et al. [2020] derive provably optimistic bounds on value functions in the NTK regime but, in contrast to our work, use local bonuses to obtain them. The exact relationship between bounds derived from local bonuses and the functional variance in ensemble or Bayesian settings remains open. The widespread use and empirical success of ensembles for uncertainty quantification in deep learning [Dietterich, 2000, Lakshminarayanan et al., 2017] has motivated several directions of research towards a better theoretical understanding of their behavior. Following seminal works by Jacot et al. [2018] and Lee et al. [2020], who characterize NN learning dynamics in the NTK regime, a number of works have connected deep ensembles to Bayesian interpretations [He et al., 2020, D'Angelo and Fortuin, 2021].
Moreover, a number of papers have studied the learning dynamics of model-free RL: in the overparametrized linear setting [Xiao et al., 2021]; in neural settings for single [Cai et al., 2019] and multiple layers [Wai et al., 2020]; and to analyze generalization behavior [Lyle et al., 2022] with linear and second-order approximations. It should be noted that the aforementioned works do not focus on probabilistic descriptions of posterior distributions in the NTK regime. In contrast, our work provides probabilistic closed-form solutions for this setting with semi-gradient TD learning. In practice, the use of deep ensembles is common in RL, with applications ranging from efficient exploration [Osband et al., 2016, Chen et al., 2017, Osband et al., 2019, Nikolov et al., 2019, Zanger et al., 2024] to off-policy or offline RL [An et al., 2021, Chen et al., 2021, Lee et al., 2021] and conservative or safe RL [Lütjens et al., 2019, Lee et al., 2022, Hoel et al., 2023]. Single-model methods that aim to reduce the computational burden of ensemble methods typically operate as myopic uncertainty estimators [Burda et al., 2019, Pathak et al., 2017, Lahlou et al., 2021, Zanger et al., 2025] and require additional mechanisms [O'Donoghue et al., 2018, Janz et al., 2019, Zhou et al., 2020, Luis et al., 2023].

7 Limitations and
Discussion

In this work, we introduced universal value-function uncertainties (UVU), an efficient single-model method for uncertainty quantification in value functions. Our method measures uncertainty as the prediction error between a fixed, random target network and an online learner trained with a temporal difference (TD) loss. This induces prediction errors that reflect long-term, policy-dependent uncertainty rather than myopic novelty. One of our core contributions is a thorough theoretical analysis of this approach via neural tangent kernel theory, which, in the limit of infinite network width, establishes an equivalence between UVU errors and the variance of ensembles of universal value functions. Empirically, UVU achieves performance comparable, and sometimes superior, to sizeable deep ensembles and other baselines in challenging offline task-rejection settings, while offering substantial computational savings. We believe our work opens up several avenues for future research: Although our NTK analysis provides strong theoretical backing, it relies on idealized assumptions, notably the limit of infinite network width (a thorough exposition of our approximations is provided in Appendix A.3). Our experiments suggest UVU's performance is robust in practical finite-width regimes (Figure 4), yet bridging this gap between theory and practice remains an area for future work. On a related note, analysis in the NTK regime typically precludes feature learning. Combinations of UVU with representation learning approaches such as self-predictive auxiliary losses [Schwarzer et al., 2020, Guo et al., 2022, Fujimoto et al., 2023] are, in our view, a very promising avenue for highly challenging exploration problems. Furthermore, while our approach estimates uncertainty for given policies, it does not devise a method for obtaining diverse policies and encodings thereof.
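To make the mechanism concrete, the following self-contained sketch mimics UVU's training signal with linear random-feature models standing in for the online and target networks (all sizes and names here are our own illustrative choices, not the paper's implementation). The online model $u$ is trained with semi-gradient TD on synthetic rewards $r_g = g(s) - \gamma g(s')$ produced by the frozen target $g$; on transitions covered by the data, the gap $u - g$ is driven to zero:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, gamma, lr = 8, 256, 0.9, 0.1

theta_g = rng.normal(size=d)      # fixed, randomly initialized target g
theta_u = rng.normal(size=d)      # online learner u, trained with TD

S, S2 = rng.normal(size=(n, d)), rng.normal(size=(n, d))  # transitions (s, s')

for _ in range(2000):
    r_g = S @ theta_g - gamma * (S2 @ theta_g)         # synthetic reward from g
    td = S @ theta_u - (gamma * (S2 @ theta_u) + r_g)  # bootstrap term stop-grad
    theta_u -= lr * S.T @ td / n                       # semi-gradient TD step

in_dist_gap = np.mean((S @ theta_u - S @ theta_g) ** 2)
print(in_dist_gap)  # near zero: no uncertainty on well-covered transitions
```

With an expressive nonlinear network, $u$ only matches $g$ where the data pins it down, and the squared gap off-data is the long-term, policy-dependent uncertainty signal; the linear stand-in here only demonstrates that the TD fixed point drives the in-distribution error to zero.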
We thus believe algorithms from the unsupervised RL literature [Touati and Ollivier, 2021, Zheng et al., 2023] naturally integrate with our approach. In conclusion, we believe UVU provides a strong foundation for future developments in uncertainty-aware agents that are both capable and computationally feasible.

References

G. An, S. Moon, J.-H. Kim, and H. O. Song. Uncertainty-based offline reinforcement learning with diversified Q-ensemble. Advances in Neural Information Processing Systems, 34:7436–7447, 2021.
J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
M. Bellemare, S. Srinivasan, G. Ostrovski, T. Schaul, D. Saxton, and R. Munos. Unifying count-based exploration and intrinsic motivation. Advances in Neural Information Processing Systems, 29, 2016.
R. Bellman. A Markovian decision process. Journal of Mathematics and Mechanics, 6, 1957.
J. Bradbury, R. Frostig, P. Hawkins, M. J. Johnson, C. Leary, D. Maclaurin, G. Necula, A. Paszke, J. VanderPlas, S. Wanderman-Milne, and Q. Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/jax-ml/jax.
Y. Burda, H. Edwards, A. J. Storkey, and O. Klimov. Exploration by random network distillation. In International Conference on Learning Representations, ICLR, 2019.
Q. Cai, Z. Yang, J. D. Lee, and Z. Wang. Neural temporal-difference learning converges to global optima. Advances in Neural Information Processing Systems, 32, 2019.
R. Y. Chen, S. Sidor, P. Abbeel, and J. Schulman. UCB exploration via Q-ensembles. arXiv preprint arXiv:1706.01502, 2017.
X. Chen, C. Wang, Z. Zhou, and K. Ross. Randomized
ensembled double Q-learning: Learning fast without a model. arXiv preprint arXiv:2101.05982, 2021.
M. Chevalier-Boisvert, B. Dai, M. Towers, R. Perez-Vicente, L. Willems, S. Lahlou, S. Pal, P. S. Castro, and J. Terry. Minigrid & Miniworld: Modular & customizable reinforcement learning environments for goal-oriented tasks. In Advances in Neural Information Processing Systems 36, New Orleans, LA, USA, December 2023.
F. D'Angelo and V. Fortuin. Repulsive deep ensembles are Bayesian. Advances in Neural Information Processing Systems, 34:3451–3465, 2021.
R. Dearden, N. Friedman, S. Russell, et al. Bayesian Q-learning. AAAI/IAAI, 1998:761–768, 1998.
T. G. Dietterich. Ensemble methods in machine learning. In Multiple Classifier Systems: First International Workshop, MCS. Springer, 2000.
Y. Engel, S. Mannor, and R. Meir. Reinforcement learning with Gaussian processes. In Proceedings of the 22nd International Conference on Machine Learning, pages 201–208, 2005.
M. Fellows, K. Hartikainen, and S. Whiteson. Bayesian Bellman operators. Advances in Neural Information Processing Systems, 34, 2021.
S. Fujimoto, W.-D. Chang, E. Smith, S. S. Gu, D. Precup, and D. Meger. For SALE: State-action representation learning for deep reinforcement learning. Advances in Neural Information Processing Systems, 36:61573–61624, 2023.
M. Gallici, M. Fellows, B. Ellis, B. Pou, I. Masmitja, J. N. Foerster, and M. Martin. Simplifying deep temporal difference learning. arXiv preprint arXiv:2407.04811, 2024.
S. Gerschgorin. Über die Abgrenzung der Eigenwerte einer Matrix. Izvestija Akademii Nauk SSSR, Serija Matematika, 7(3):749–754, 1931.
M. Ghavamzadeh, S. Mannor, J. Pineau, and A. Tamar. Bayesian reinforcement learning: A survey. Foundations and Trends in Machine Learning, 8, 2015.
Z. Guo, S. Thakoor, M. Pîslar, B. Avila Pires, F. Altché, C. Tallec, A. Saade, D. Calandriello, J.-B. Grill, Y. Tang, et al. BYOL-Explore: Exploration by bootstrapped prediction.
Advances in Neural Information Processing Systems, 35:31855–31870, 2022.
H. van Hasselt. Double Q-learning. Advances in Neural Information Processing Systems, 23, 2010.
B. He, B. Lakshminarayanan, and Y. W. Teh. Bayesian deep ensembles via the neural tangent kernel. Advances in Neural Information Processing Systems, 33, 2020.
K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, 2015.
C.-J. Hoel, K. Wolff, and L. Laine. Ensemble quantile networks: Uncertainty-aware reinforcement learning with applications in autonomous driving. IEEE Transactions on Intelligent Transportation Systems, 2023.
A. Jacot, F. Gabriel, and C. Hongler. Neural tangent kernel: Convergence and generalization in neural networks. Advances in Neural Information Processing Systems, 31, 2018.
D. Janz, J. Hron, P. Mazur, K. Hofmann, J. M. Hernández-Lobato, and S. Tschiatschek. Successor uncertainties: Exploration and uncertainty in temporal difference learning. Advances in Neural Information Processing Systems, 32, 2019.
C. Jin, Z. Allen-Zhu, S. Bubeck, and M. I. Jordan. Is Q-learning provably efficient? Advances in Neural Information Processing Systems, 31, 2018.
C. Jin, Z. Yang, Z. Wang, and M. I. Jordan. Provably efficient reinforcement learning with linear function approximation. In Conference on Learning Theory, pages 2137–2143. PMLR, 2020.
A. Kumar, A. Zhou, G. Tucker, and S. Levine. Conservative
Q-learning for offline reinforcement learning. Advances in Neural Information Processing Systems, 33:1179–1191, 2020.
S. Lahlou, M. Jain, H. Nekoei, V. I. Butoi, P. Bertin, J. Rector-Brooks, M. Korablyov, and Y. Bengio. DEUP: Direct epistemic uncertainty prediction. arXiv preprint arXiv:2102.08501, 2021.
B. Lakshminarayanan, A. Pritzel, and C. Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. Advances in Neural Information Processing Systems, 30, 2017.
J. Lee, J. Sohl-Dickstein, J. Pennington, R. Novak, S. Schoenholz, and Y. Bahri. Deep neural networks as Gaussian processes. In International Conference on Learning Representations, 2018.
J. Lee, L. Xiao, S. S. Schoenholz, Y. Bahri, R. Novak, J. Sohl-Dickstein, and J. Pennington. Wide neural networks of any depth evolve as linear models under gradient descent. Journal of Statistical Mechanics: Theory and Experiment, 2020, Dec. 2020.
K. Lee, M. Laskin, A. Srinivas, and P. Abbeel. SUNRISE: A simple unified framework for ensemble learning in deep reinforcement learning. In International Conference on Machine Learning, pages 6131–6141. PMLR, 2021.
S. Lee, Y. Seo, K. Lee, P. Abbeel, and J. Shin. Offline-to-online reinforcement learning via balanced replay and pessimistic Q-ensemble. In Conference on Robot Learning, pages 1702–1712. PMLR, 2022.
C. Liu, L. Zhu, and M. Belkin. On the linearity of large non-linear models: when and why the tangent kernel is constant. Advances in Neural Information Processing Systems, 33:15954–15964, 2020.
C. E. Luis, A. G. Bottero, J. Vinogradska, F. Berkenkamp, and J. Peters. Model-based uncertainty in value functions. In International Conference on Artificial Intelligence and Statistics, pages 8029–8052. PMLR, 2023.
B. Lütjens, M. Everett, and J. P. How. Safe reinforcement learning with model uncertainty estimates. In 2019 International Conference on Robotics and Automation (ICRA), pages 8662–8668. IEEE, 2019.
C. Lyle, M. Rowland, W.
Dabney, M. Kwiatkowska, and Y. Gal. Learning dynamics and generalization in reinforcement learning. arXiv preprint arXiv:2206.02126, 2022.
V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518, 2015.
M. S. A. Nadeem, J.-D. Zucker, and B. Hanczar. Accuracy-rejection curves (ARCs) for comparing classification methods with a reject option. In Machine Learning in Systems Biology, pages 65–81. PMLR, 2009.
N. Nikolov, J. Kirschner, F. Berkenkamp, and A. Krause. Information-directed exploration for deep reinforcement learning. In International Conference on Learning Representations, ICLR, 2019.
I. Osband, C. Blundell, A. Pritzel, and B. Van Roy. Deep exploration via bootstrapped DQN. Advances in Neural Information Processing Systems, 29, 2016.
I. Osband, B. Van Roy, D. J. Russo, Z. Wen, et al. Deep exploration via randomized value functions. Journal of Machine Learning Research, 20, 2019.
B. O'Donoghue, I. Osband, R. Munos, and V. Mnih. The uncertainty Bellman equation and exploration. In International Conference on Machine Learning. PMLR, 2018.
D. Pathak, P. Agrawal, A. A. Efros, and T. Darrell. Curiosity-driven exploration by self-supervised prediction. In International Conference on Machine Learning. PMLR, 2017.
T. Rashid, B. Peng,
W. Böhmer, and S. Whiteson. Optimistic exploration even with a pessimistic initialisation. Proceedings of ICLR 2020, 2020.
T. Schaul, D. Horgan, K. Gregor, and D. Silver. Universal value function approximators. In International Conference on Machine Learning, pages 1312–1320. PMLR, 2015.
S. Schmitt, J. Shawe-Taylor, and H. van Hasselt. Exploration via epistemic value estimation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 2023.
M. Schwarzer, A. Anand, R. Goel, R. D. Hjelm, A. Courville, and P. Bachman. Data-efficient reinforcement learning with self-predictive representations. arXiv preprint arXiv:2007.05929, 2020.
D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
A. L. Strehl, L. Li, E. Wiewiora, J. Langford, and M. L. Littman. PAC model-free reinforcement learning. In Proceedings of the 23rd International Conference on Machine Learning, pages 881–888, 2006.
A. Touati and Y. Ollivier. Learning one representation to optimize all rewards. Advances in Neural Information Processing Systems, 34:13–23, 2021.
N. Tsilivis and J. Kempe. What can the neural tangent kernel tell us about adversarial robustness? Advances in Neural Information Processing Systems, 35:18116–18130, 2022.
P. R. Van der Vaart, M. T. Spaan, and N. Yorke-Smith. Epistemic Bellman operators. In Proceedings of the AAAI Conference on Artificial Intelligence, 2025.
O. Vinyals, I. Babuschkin, W. M. Czarnecki, M. Mathieu, A. Dudzik, J. Chung, D. H. Choi, R. Powell, T. Ewalds, P. Georgiev, et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature, 575(7782):350–354, 2019.
H.-T. Wai, Z. Yang, Z. Wang, and M. Hong. Provably efficient neural GTD for off-policy learning.
Advances in Neural Information Processing Systems, 33:10431–10442, 2020.
C. Xiao, B. Dai, J. Mei, O. A. Ramirez, R. Gummadi, C. Harris, and D. Schuurmans. Understanding and leveraging overparameterization in recursive value estimation. In International Conference on Learning Representations, 2021.
G. Yang. Scaling limits of wide neural networks with weight sharing: Gaussian process behavior, gradient independence, and neural tangent kernel derivation. arXiv preprint arXiv:1902.04760, 2019.
Z. Yang, C. Jin, Z. Wang, M. Wang, and M. Jordan. Provably efficient reinforcement learning with kernel and neural function approximations. Advances in Neural Information Processing Systems, 33:13903–13916, 2020.
Y. Yue, R. Lu, B. Kang, S. Song, and G. Huang. Understanding, predicting and better resolving Q-value divergence in offline RL. Advances in Neural Information Processing Systems, 36:60247–60277, 2023.
M. A. Zanger, W. Böhmer, and M. T. Spaan. Diverse projection ensembles for distributional reinforcement learning. In International Conference on Learning Representations, 2024.
M. A. Zanger, P. R. Van der Vaart, W. Böhmer, and M. T. Spaan. Contextual similarity distillation: Ensemble uncertainties with a single model. arXiv preprint arXiv:2503.11339, 2025.
C. Zheng, R. Salakhutdinov, and B. Eysenbach. Contrastive difference predictive coding. arXiv preprint arXiv:2310.20141, 2023.
Q. Zhou, H. Li, and J. Wang. Deep model-based reinforcement learning via estimated uncertainty and conservative policy optimization. In Proceedings of the AAAI Conference
on Artificial Intelligence, 2020.

A Theoretical Results

This section provides proofs and further theoretical results for universal value-function uncertainties (UVU).

A.1 Learning Dynamics of UVU

We begin by deriving learning dynamics for general functions with temporal difference (TD) losses and gradient descent, before analyzing the post-training distribution of deep ensembles and the prediction errors of UVU.

A.1.1 Linearized Learning Dynamics with Temporal Difference Losses

We analyze the learning dynamics of a function trained using semi-gradient temporal difference (TD) losses on a fixed dataset of transitions $X, X'$. Let $f(x, \theta_t)$ denote a NN of interest with depth $L$ and widths $n_1, \dots, n_{L-1} = n$.

Proposition 1. In the limit of infinite width $n \to \infty$ and infinite time $t \to \infty$, the function $f(x, \theta_t)$ converges to

    f(x, \theta_\infty) = f(x, \theta_0) - \Theta_{xX} (\Theta_{XX} - \gamma \Theta_{X'X})^{-1} \big( f(X, \theta_0) - (\gamma f(X', \theta_0) + r) \big),   (13)

where $\Theta_{xx'}$ is the neural tangent kernel of $f$.

Proof. We begin by linearizing the function $f$ around its initialization parameters $\theta_0$:

    f_{lin}(x, \theta_t) = f(x, \theta_0) + \nabla_\theta f(x, \theta_0)^\top (\theta_t - \theta_0).   (14)

We assume gradient descent updates with infinitesimal step size and a learning rate $\alpha$ on the loss

    \mathcal{L}(\theta_t) = \tfrac{1}{2} \| \gamma [f_{lin}(X', \theta_t)]_{sg} + r - f_{lin}(X, \theta_t) \|_2^2,   (15)

yielding the parameter evolution

    \tfrac{d}{dt} \theta_t = -\alpha \nabla_\theta \mathcal{L}(\theta_t).   (16)

Setting $w_t = \theta_t - \theta_0$, we find the learning dynamics

    \tfrac{d}{dt} w_t = -\alpha \nabla_\theta f(X, \theta_0) \big( f_{lin}(X, \theta_t) - (\gamma f_{lin}(X', \theta_t) + r) \big).   (17)

Thus, the evolution of the linearized function is given by

    \tfrac{d}{dt} f_{lin}(x, \theta_t) = -\alpha \nabla_\theta f(x, \theta_0)^\top \nabla_\theta f(X, \theta_0) \big( f_{lin}(X, \theta_t) - (\gamma f_{lin}(X', \theta_t) + r) \big).   (18)

Letting $\delta_{TD}(\theta_t) = f_{lin}(X, \theta_t) - (\gamma f_{lin}(X', \theta_t) + r)$, we obtain the differential equation

    \tfrac{d}{dt} \delta_{TD}(\theta_t) = -\alpha \big( \Theta^{t_0}_{XX} - \gamma \Theta^{t_0}_{X'X} \big) \delta_{TD}(\theta_t),   (19)

where $\Theta^{t_0}_{xx'} = \nabla_\theta f(x, \theta_0)^\top \nabla_\theta f(x', \theta_0)$ is the (empirical) tangent kernel of $f_{lin}(x, \theta_t)$. Since the linearization $f_{lin}(x, \theta_t)$ has constant gradients $\nabla_\theta f(x, \theta_0)$, the above differential equation is linear and solvable so long as the matrix $\Theta^{t_0}_{XX} - \gamma \Theta^{t_0}_{X'X}$ is positive definite.
With an exponential ansatz, we obtain the solution

    \delta_{TD}(\theta_t) = e^{-\alpha t (\Theta^{t_0}_{XX} - \gamma \Theta^{t_0}_{X'X})} \, \delta_{TD}(\theta_0),   (20)

where $e^{X}$ denotes the matrix exponential. Reintegrating yields the explicit evolution of predictions

    f_{lin}(x, \theta_t) = f(x, \theta_0) + \int_0^t \tfrac{d}{dt'} f_{lin}(x, \theta_{t'}) \, dt'   (21)
                         = f(x, \theta_0) - \Theta^{t_0}_{xX} \big( \Theta^{t_0}_{XX} - \gamma \Theta^{t_0}_{X'X} \big)^{-1} \big( I - e^{-\alpha t (\Theta^{t_0}_{XX} - \gamma \Theta^{t_0}_{X'X})} \big) \delta_{TD}(\theta_0).   (22)

Jacot et al. [2018] show that in the limit of infinite layer widths of the neural network, the NTK $\Theta^{t_0}_{xx'}$ becomes deterministic and constant, $\Theta^{t_0}_{xx'} \to \Theta_{xx'}$. As a consequence, the linear approximation $f_{lin}(x, \theta_t)$ becomes exact w.r.t. the original function, $\lim_{\text{width} \to \infty} f_{lin}(x, \theta_t) = f(x, \theta_t)$ [Lee et al., 2020].

Remark on the constancy of the NTK in TD learning. We note here that our proof assumed the results by Jacot et al. [2018] to hold for the case of semi-gradient TD updates, namely that the NTK becomes deterministic and constant, $\Theta^{t_0}_{xx'} \to \Theta_{xx'}$, in the limit of infinite width under the dynamics shown here. First, the determinacy of the NTK at initialization follows from the law of large numbers and applies in our case equally as in the least-squares case. The constancy of the NTK throughout training is established by Theorem 2 in Jacot et al. [2018], which we restate informally below.

Theorem 2 (Jacot et al. [2018]). In the limit of infinite layer widths $n \to \infty$ and $n_1, \dots, n_L = n$, the kernel $\Theta^{t_0}_{xx'}$ converges uniformly on the interval
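Proposition 1 is easy to sanity-check numerically in a regime where the linearization is exact, namely a linear model $f(x, \theta) = \phi(x)^\top \theta$, whose tangent kernel $\Theta_{xx'} = \phi(x)^\top \phi(x')$ is constant by construction. The sketch below is our own toy setup (sizes and the small discount are chosen so that $\Theta_{XX} - \gamma \Theta_{X'X}$ is well-conditioned); it runs plain semi-gradient TD and compares the result against the closed form of Eq. (13):

```python
import numpy as np

rng = np.random.default_rng(0)
n_pts, n_feat, gamma = 6, 40, 0.2   # overparametrized: fewer transitions than features

Phi  = rng.normal(size=(n_pts, n_feat)) / np.sqrt(n_feat)  # features of X
Phi2 = rng.normal(size=(n_pts, n_feat)) / np.sqrt(n_feat)  # features of X'
r = rng.normal(size=n_pts)
theta0 = rng.normal(size=n_feat)
phi_x = rng.normal(size=n_feat) / np.sqrt(n_feat)          # query point x

# Closed form, Eq. (13): f(x, θ∞) = f(x, θ0) − Θ_xX Δ⁻¹ δ_TD(θ0).
Delta = Phi @ Phi.T - gamma * Phi2 @ Phi.T                 # Θ_XX − γ Θ_X'X
delta0 = Phi @ theta0 - (gamma * Phi2 @ theta0 + r)        # δ_TD(θ0)
f_closed = phi_x @ theta0 - (phi_x @ Phi.T) @ np.linalg.solve(Delta, delta0)

# Plain semi-gradient TD: the bootstrap term is not differentiated through.
theta, lr = theta0.copy(), 0.05
for _ in range(20_000):
    td = Phi @ theta - (gamma * Phi2 @ theta + r)
    theta -= lr * Phi.T @ td
f_gd = phi_x @ theta

print(abs(f_gd - f_closed))
```

Because the update only moves $\theta$ within the row space of $\Phi$, the TD fixed point is unique on that affine subspace and coincides with the gradient-flow limit, so the two numbers agree up to numerical precision.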
$t \in [0, T]$ to the constant neural tangent kernel, $\Theta^{t_0}_{xx'} \to \Theta_{xx'}$, provided that the integral $\int_0^T \| d_t \|_2 \, dt$ stays bounded. Here, $d_t \in \mathbb{R}^{N_D}$ is the training direction of the parameter evolution, such that $\frac{d}{dt}\theta_t = -\alpha \nabla_\theta f(X, \theta) d_t$.

In the case of semi-gradient TD learning studied here, the parameter evolution (as outlined above in Eq. (17)) is described by the gradient $\nabla_\theta f(X, \theta_0)$ and the training direction $d_t$ according to

    \tfrac{d}{dt} \theta_t = -\alpha \nabla_\theta f(X, \theta_0) \underbrace{\big( f_{lin}(X, \theta_t) - (\gamma f_{lin}(X', \theta_t) + r) \big)}_{d_t},   (23)

where the training direction is given by $d_t = f_{lin}(X, \theta_t) - (\gamma f_{lin}(X', \theta_t) + r) = \delta_{TD}(\theta_t)$. Provided that the matrix $\Theta^{t_0}_{XX} - \gamma \Theta^{t_0}_{X'X}$ is positive definite, the norm of the training direction $\| d_t \|_2$ decays exponentially by Eq. (20). This implies

    \| d_t \|_2 < \| d_0 \|_2 \, e^{-t \lambda_{min}},   (24)

where $\lambda_{min}$ is the smallest eigenvalue of $\Theta^{t_0}_{XX} - \gamma \Theta^{t_0}_{X'X}$. Assuming $\Theta^{t_0}_{XX} - \gamma \Theta^{t_0}_{X'X}$ is positive definite, $\lambda_{min}$ is positive, and as a consequence we have

    \int_0^\infty \| d_t \|_2 \, dt < \int_0^\infty \| d_0 \|_2 \, e^{-t \lambda_{min}} \, dt < \infty,   (25)

bounding the required integral of Theorem 2 for any $T$ and establishing $\Theta^{t_0}_{xx'} \to \Theta_{xx'}$ uniformly on the interval $[0, \infty)$ (see Theorem 2 in Jacot et al. [2018] for a detailed proof of the last statement).

We note, however, that the condition for $\Theta^{t_0}_{XX} - \gamma \Theta^{t_0}_{X'X}$ to be positive definite is, for any $\gamma > 0$, stronger than in the classical results for supervised learning with least-squares regression. While $\Theta_{XX}$ can be guaranteed to be positive definite, for example by restricting $X$ to lie on a unit sphere, requiring the $x_i \in X$ to be unique, and assuming non-polynomial nonlinearities in the neural network (so as to prevent rank decay in the network expressivity), the condition is harder to satisfy in the TD learning setting. Here, the eigenspectrum of $\Theta^{t_0}_{XX} - \gamma \Theta^{t_0}_{X'X}$ tends to depend on the transitions $X \to X'$ themselves and is thus dependent both on the discount $\gamma$ and on the interplay between the gradient structure of the NTK and the MDP dynamics.
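The dependence of this condition on the discount can be probed numerically. In the toy sketch below (entirely our own construction), a random-feature kernel stands in for the NTK and successor features are correlated with their predecessors; we track the smallest real part of the eigenvalues of $\Theta_{XX} - \gamma \Theta_{X'X}$ as $\gamma$ grows:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pts, n_feat = 3, 10

Phi  = rng.normal(size=(n_pts, n_feat))              # features of states X
Phi2 = Phi + 0.3 * rng.normal(size=(n_pts, n_feat))  # successors X', correlated with X

K_XX  = Phi @ Phi.T      # stand-in for Θ_XX
K_X2X = Phi2 @ Phi.T     # stand-in for Θ_X'X

mins = []
for gamma in (0.0, 0.5, 0.9, 0.99):
    M = K_XX - gamma * K_X2X          # the matrix that must be positive definite
    mins.append(np.linalg.eigvals(M).real.min())
    print(gamma, mins[-1])
```

At $\gamma = 0$ the matrix reduces to the positive definite kernel $\Theta_{XX}$; as $\gamma$ approaches 1 the smallest real eigenvalue shrinks, and for unfavorable transition structure it can cross zero, which is exactly the regime where offline TD may diverge.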
We note here that this is not primarily a limitation of applying NTK theory to TD learning, but is reflected in practical experience: TD learning can, especially in offline settings, indeed be unstable and diverge. Instability of this form is thus inherent to the learning algorithm rather than an artifact of our theoretical treatment. Informally, one approach towards guaranteeing positive definiteness of $\Theta^{t_0}_{XX} - \gamma \Theta^{t_0}_{X'X}$ is to enforce diagonal dominance, appealing to the Gershgorin circle theorem [Gerschgorin, 1931]. For a matrix $A = [a_{ij}]$, every real eigenvalue $\lambda$ must lie in

    a_{ii} - R_i \le \lambda \le a_{ii} + R_i,   (26)

where $R_i = \sum_{j \ne i} |a_{ij}|$ is the sum of the off-diagonal elements of row $i$. In other words, a lower bound on the smallest real eigenvalue can be increased by increasing the diagonal entries $a_{ii}$ while decreasing the off-diagonal elements $a_{ij}$. In the TD learning setting, this translates to gradient conditioning, e.g., ensuring $\| \nabla_\theta f(x, \theta) \|_2 = \| \nabla_\theta f(x', \theta) \|_2 = C$ for any pair $x, x'$, which guarantees cross-similarities to be smaller than self-similarities. Indeed, several recent works pursue similar strategies to stabilize offline TD learning [Yue et al., 2023, Gallici et al., 2024] and rely on architectural elements like layer normalization [Ba et al., 2016] to shape gradient norms.

A.1.2 Post-Training Function Distribution with Temporal Difference Dynamics

We now aim to establish the distribution of post-training functions $f(x, \theta_\infty)$ when the initial parameters $\theta_0$ are drawn randomly i.i.d. For the remainder of this section, we will assume
the infinite-width limit, s.t. $f_{lin}(x, \theta_\infty) = f(x, \theta_\infty)$ and $\Theta^{t_0}_{xx'} = \Theta_{xx'}$. The post-training function $f(x, \theta_\infty)$ is given by

    f(x, \theta_\infty) = f(x, \theta_0) - \Theta_{xX} \big( \Theta_{XX} - \gamma \Theta_{X'X} \big)^{-1} \big( f(X, \theta_0) - (\gamma f(X', \theta_0) + r) \big),   (27)

and is thus a deterministic function of the initialization $\theta_0$.

Theorem 1. Let $f(x, \theta_t)$ be a NN with $L$ hidden layers of width $n_1, \dots, n_L = n$, trained with gradient flow to reduce the TD loss $\mathcal{L}(\theta_t) = \tfrac{1}{2} \| \gamma [f(X', \theta_t)]_{sg} + r - f(X, \theta_t) \|_2^2$. In the limit of infinite width $n \to \infty$ and time $t \to \infty$, the distribution of predictions $f(X_T, \theta_\infty)$ on a set of test points $X_T$ converges to a Gaussian with mean and covariance given by

    E_{\theta_0}[ f(X_T, \theta_\infty) ] = \Theta_{X_T X} \Delta_X^{-1} r,
    Cov_{\theta_0}[ f(X_T, \theta_\infty) ] = \kappa_{X_T X_T} - \big( \Theta_{X_T X} \Delta_X^{-1} \Lambda_{X_T} + h.c. \big) + \Theta_{X_T X} \Delta_X^{-1} \big( \Lambda_X - \gamma \Lambda_{X'} \big) \Delta_X^{-\top} \Theta_{X X_T},

where $\Theta_{xx'}$ is the NTK, $\kappa_{xx'}$ is the NNGP kernel, $h.c.$ denotes the Hermitian conjugate, and $\Delta_{\tilde{X}} = \Theta_{X\tilde{X}} - \gamma \Theta_{X'\tilde{X}}$ and $\Lambda_{\tilde{X}} = \kappa_{X\tilde{X}} - \gamma \kappa_{X'\tilde{X}}$.

Proof. We begin by introducing a column vector $f_\infty = (f(X_T, \theta_\infty), f(X, \theta_\infty), f(X', \theta_\infty))^\top$ of post-training function evaluations on the set of test points $X_T$ and the training data $X$ and $X'$. Moreover, we introduce the shorthand

    \Delta_X = \Theta_{XX} - \gamma \Theta_{X'X},   (28)

and similarly $\Delta_{X'} = \Theta_{XX'} - \gamma \Theta_{X'X'}$. The vector can then be compactly described in block matrix notation by

    f_\infty = \underbrace{\begin{pmatrix} I & -\Theta_{X_T X}\Delta_X^{-1} & \gamma\Theta_{X_T X}\Delta_X^{-1} \\ 0 & I - \Theta_{XX}\Delta_X^{-1} & \gamma\Theta_{XX}\Delta_X^{-1} \\ 0 & -\Theta_{X'X}\Delta_X^{-1} & I + \gamma\Theta_{X'X}\Delta_X^{-1} \end{pmatrix}}_{A} \underbrace{\begin{pmatrix} f(X_T, \theta_0) \\ f(X, \theta_0) \\ f(X', \theta_0) \end{pmatrix}}_{f_0} + \underbrace{\begin{pmatrix} \Theta_{X_T X}\Delta_X^{-1} r \\ \Theta_{XX}\Delta_X^{-1} r \\ \Theta_{X'X}\Delta_X^{-1} r \end{pmatrix}}_{b}.   (29)

Lee et al. [2018] show that neural networks with random Gaussian initialization $\theta_0$ (including the NTK parametrization) are described by the neural network Gaussian process (NNGP), $f(X_T, \theta_0) \sim \mathcal{N}(0, \kappa_{X_T X_T})$ with $\kappa_{X_T X_T} = E[f(X_T, \theta_0) f(X_T, \theta_0)^\top]$. By extension, the initializations $f_0$ are jointly Gaussian with zero mean and covariance matrix

    Cov[f_0] = K = \begin{pmatrix} \kappa_{X_T X_T} & \kappa_{X_T X} & \kappa_{X_T X'} \\ \kappa_{X X_T} & \kappa_{XX} & \kappa_{XX'} \\ \kappa_{X' X_T} & \kappa_{X'X} & \kappa_{X'X'} \end{pmatrix}.   (30)

As the post-training function evaluations $f_\infty$ given in Eq. (29) are affine transformations of the multivariate Gaussian random variables $f_0 \sim \mathcal{N}(0, K)$, they are themselves multivariate Gaussian with distribution $f_\infty \sim \mathcal{N}(b, A K A^\top)$.
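For a linear model $f(x, \theta) = \phi(x)^\top \theta$ with $\theta \sim \mathcal{N}(0, I)$, the NNGP kernel and the NTK coincide ($\kappa = \Theta = \phi\phi^\top$), so the Gaussian limit above can be checked by Monte Carlo: sample initializations $\theta_0$, push each through the closed form of Eq. (13), and compare the empirical mean and covariance with the formulas of Theorem 1. A self-contained sketch with our own toy sizes:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, n_test, gamma = 5, 30, 4, 0.3

Phi  = rng.normal(size=(n, p)) / np.sqrt(p)       # training states X
Phi2 = rng.normal(size=(n, p)) / np.sqrt(p)       # successor states X'
PhiT = rng.normal(size=(n_test, p)) / np.sqrt(p)  # test points X_T
r = rng.normal(size=n)

kap = lambda A, B: A @ B.T                        # linear model: kappa = Theta
Delta = kap(Phi, Phi) - gamma * kap(Phi2, Phi)    # Δ_X = Θ_XX − γ Θ_X'X
M = kap(PhiT, Phi) @ np.linalg.inv(Delta)         # Θ_{X_T X} Δ_X⁻¹

# Predicted posterior mean and covariance (Theorem 1).
mean_pred = M @ r
Lam_T  = kap(Phi, PhiT) - gamma * kap(Phi2, PhiT)   # Λ_{X_T}
Lam_X  = kap(Phi, Phi)  - gamma * kap(Phi2, Phi)    # Λ_X
Lam_X2 = kap(Phi, Phi2) - gamma * kap(Phi2, Phi2)   # Λ_{X'}
cov_pred = (kap(PhiT, PhiT) - (M @ Lam_T + (M @ Lam_T).T)
            + M @ (Lam_X - gamma * Lam_X2) @ M.T)

# Monte Carlo over random initializations θ0, each mapped through Eq. (13).
n_samp = 20_000
Th0 = rng.normal(size=(n_samp, p))
D0 = Th0 @ Phi.T - (gamma * (Th0 @ Phi2.T) + r)   # δ_TD(θ0) per sample
samples = Th0 @ PhiT.T - D0 @ M.T                 # f(X_T, θ∞) per sample

mean_err = np.abs(samples.mean(axis=0) - mean_pred).max()
cov_err = np.abs(np.cov(samples.T) - cov_pred).max()
print(mean_err, cov_err)  # both shrink like 1/sqrt(n_samp)
```

The residual discrepancies here are pure Monte-Carlo noise; in the linear case the affine map of Eq. (29) is exact, so the agreement is limited only by the number of samples.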
We are interested in the distribution of $f(X_T, \theta_\infty)$ and thus focus in the following on the top-left block $(A K A^\top)_{11}$. For notational brevity, we introduce the shorthand

    \Lambda_{\tilde{X}} = \kappa_{X\tilde{X}} - \gamma \kappa_{X'\tilde{X}}.   (31)

After some rearranging, one obtains the following expression for the covariance:

    Cov_{\theta_0}[ f(X_T, \theta_\infty) ] = \kappa_{X_T X_T} - \big( \Theta_{X_T X} \Delta_X^{-1} \Lambda_{X_T} + h.c. \big) + \Theta_{X_T X} \Delta_X^{-1} \big( \Lambda_X - \gamma \Lambda_{X'} \big) \Delta_X^{-\top} \Theta_{X X_T}.

A.1.3 Distribution of UVU Predictive Errors

We now aim to find an analytical description of the predictive errors generated by our approach. For this, let $u(x, \vartheta_t)$ denote the predictive (online) network and $g(x, \psi_0)$ the fixed target network. We furthermore denote by $\epsilon(x, \vartheta_t, \psi_0) = u(x, \vartheta_t) - g(x, \psi_0)$ the prediction error between the online and target networks.

Corollary 1. Under the conditions of Theorem 1, let $u(x, \vartheta_\infty)$ be a converged online predictor trained with synthetic rewards generated by the fixed target network $g(x, \psi_0)$, with $r^z_g = g(X, \psi_0) - \gamma g(X', \psi_0)$. Furthermore, denote by $V_{\theta_0}[Q(x, \theta_\infty)]$ the variance of converged universal Q-functions. Assume $u$, $g$, and $Q$ are architecturally equal and parameters are drawn i.i.d., $\theta_0, \vartheta_0, \psi_0 \sim \mathcal{N}(0, 1)$. The expected squared prediction error coincides with the Q-function variance:

    E_{\vartheta_0, \psi_0}\big[ \tfrac{1}{2} \epsilon(x, \vartheta_\infty, \psi_0)^2 \big] = V_{\theta_0}\big[ Q(x, \theta_\infty) \big],   (11)

where the l.h.s. expectation and r.h.s. variance are taken over the random
initializations $\vartheta_0, \psi_0, \theta_0$.

Proof. Since our algorithm uses semi-gradient TD losses to train $u(x, \vartheta_t)$, the linearized dynamics of Theorem 1 apply. However, we consider a fixed target network $g(x, \psi_0)$ that produces synthetic rewards according to

    r_g = g(x, \psi_0) - \gamma g(x', \psi_0).   (32)

With the post-training function as described by Eq. (27), the post-training prediction error at a query point $x$ for this reward is given by

    u(x, \vartheta_\infty) - g(x, \psi_0) = u(x, \vartheta_0) - g(x, \psi_0) - \Theta_{xX} \Delta_X^{-1} \big( u(X, \vartheta_0) - (\gamma u(X', \vartheta_0) + g(X, \psi_0) - \gamma g(X', \psi_0)) \big).   (33)

We again use the shorthand $\epsilon_t = (\epsilon(X_T, \vartheta_t, \psi_0), \epsilon(X, \vartheta_t, \psi_0), \epsilon(X', \vartheta_t, \psi_0))^\top$ and, reusing the block matrix $A$ from Eq. (29), we can write

    \epsilon_\infty = A \epsilon_0.   (34)

By assumption, $u(x, \vartheta_0)$ and $g(x, \psi_0)$ are architecturally equivalent and initialized i.i.d., and $\epsilon_0$ is simply the difference of two independent Gaussian vectors, with covariance $Cov[\epsilon_0] = 2K$. We conclude that the prediction errors $\epsilon_\infty$ are Gaussian with distribution $\epsilon_\infty \sim \mathcal{N}(0, 2 A K A^\top)$. Taking the diagonal of the top-left block $(A K A^\top)_{11}$, we obtain

    E_{\vartheta_0, \psi_0}\big[ \tfrac{1}{2} \epsilon(x, \vartheta_\infty, \psi_0)^2 \big] = V_{\theta_0}\big[ Q(x, \theta_\infty) \big],   (35)

where

    V_{\theta_0}\big[ Q(x, \theta_\infty) \big] = \kappa_{xx} - \big( \Theta_{xX} \Delta_X^{-1} \Lambda_x + h.c. \big) + \Theta_{xX} \Delta_X^{-1} \big( \Lambda_X - \gamma \Lambda_{X'} \big) \Delta_X^{-\top} \Theta_{Xx}.   (36)

A.2 Multiheaded UVU

We now show results concerning the equivalence of multiheaded UVU prediction errors and finite ensembles of Q-functions. We first outline proofs for two results by Lee et al. [2018] and Jacot et al. [2018], which our analysis relies on.

A.2.1 Neural Network Gaussian Process Propagation and Independence

Consider a deep neural network $f$ with $L$ layers. Let $z^l_i(x)$ denote the $i$-th output of layer $l = 1, \dots, L$, defined recursively as

    z^l_i(x) = \sigma_b b^l_i + \frac{\sigma_w}{\sqrt{n_{l-1}}} \sum_{j=1}^{n_{l-1}} w^l_{ij} x^l_j(x),  \quad  x^l_j(x) = \phi(z^{l-1}_j(x)),   (37)

where $n_l$ is the width of layer $l$, with $n_0 = n_{in}$ and $x^0 = x$. Further, $\sigma_w$ and $\sigma_b$ are constant variance multipliers, the weights $w^l$ and biases $b^l$ are initialized i.i.d. with $\mathcal{N}(0, 1)$, and $\phi$ is a Lipschitz-continuous nonlinearity. The $i$-th function output $f_i(x)$ of the NN is then given by $f_i(x) = z^L_i(x)$.

Proposition 2 (Lee et al. [2018]). At initialization and in the limit $n_1, \dots
, n_{L-1} \to \infty$, the $i$-th output at layer $l$, $z^l_i(x)$, converges to a Gaussian process with zero mean and covariance function $\kappa^l_{ii}$ given by

    \kappa^1_{ii}(x, x') = \frac{\sigma_w^2}{n_0} x^\top x' + \sigma_b^2,  \quad  \kappa^1_{ij} = 0, \; i \ne j,   (38)

    \kappa^l_{ii}(x, x') = \sigma_b^2 + \sigma_w^2 \, E_{z^{l-1}_i \sim GP(0, \kappa^{l-1}_{ii})}\big[ \phi(z^{l-1}_i(x)) \phi(z^{l-1}_i(x')) \big],   (39)

and

    \kappa^l_{ij}(x, x') = E[z^l_i(x) z^l_j(x')] = \begin{cases} \kappa^l(x, x') & \text{if } i = j, \\ 0 & \text{if } i \ne j. \end{cases}   (41)

Proof. The proof proceeds by induction. The induction assumption is that if the outputs at layer $l-1$ satisfy a GP structure

    z^{l-1}_i \sim GP(0, \kappa^{l-1}_{ii}),   (42)

with the covariance function defined as

    \kappa^{l-1}_{ii}(x, x') = E[z^{l-1}_i(x) z^{l-1}_i(x')] = \kappa^{l-1}_{jj}(x, x'), \quad \forall i, j,   (43)
    \kappa^{l-1}_{ij}(x, x') = E[z^{l-1}_i(x) z^{l-1}_j(x')] = 0, \quad \text{for } i \ne j,   (44)

then the outputs at layer $l$ follow

    z^l_i(x) \sim GP(0, \kappa^l_{ii}),   (45)

where the kernel at layer $l$ is given by

    \kappa^l_{ii}(x, x') = E[z^l_i(x) z^l_i(x')] = \kappa^l_{jj}(x, x'), \quad \forall i, j,   (46)
    \kappa^l_{ij}(x, x') = E[z^l_i(x) z^l_j(x')] = 0, \quad \text{if } i \ne j,   (47)

with the recursive definition

    \kappa^l_{ii}(x, x') = \sigma_b^2 + \sigma_w^2 \, E_{z^{l-1}_i \sim GP(0, \kappa^{l-1}_{ii})}\big[ \phi(z^{l-1}_i(x)) \phi(z^{l-1}_i(x')) \big].   (48)

Base case ($l = 1$). At layer $l = 1$ we have

    z^1_i(x) = \frac{\sigma_w}{\sqrt{n_0}} \sum_{j=1}^{n_0} w^1_{ij} x_j + \sigma_b b^1_i.   (49)

This is an affine transform of Gaussian random
https://arxiv.org/abs/2505.21119v1
variables; thus, z^1_i(x) is Gaussian distributed with
$$z^1_i(x) \sim \mathcal{GP}(0, \kappa^1_{ii}), \qquad (50)$$
with kernel
$$\kappa^1_{ii}(x, x') = \frac{\sigma_w^2}{n_0} x^\top x' + \sigma_b^2, \quad\text{and}\quad \kappa^1_{ij} = 0,\ i \neq j. \qquad (51)$$

Induction step (l > 1). For layers l > 1 we have
$$z^l_i(x) = \sigma_b b^l_i + \frac{\sigma_w}{\sqrt{n_{l-1}}}\sum_{j=1}^{n_{l-1}} w^l_{ij}\, x^l_j(x), \qquad x^l_j(x) = \phi(z^{l-1}_j(x)). \qquad (52)$$
By the induction assumption, the z^{l-1}_j(x) are generated by independent Gaussian processes. Hence, x^l_i(x) and x^l_j(x) are independent for i ≠ j. Consequently, z^l_i(x) is a sum of independent random variables. By the Central Limit Theorem (as n_1, ..., n_{L-1} → ∞), the tuple {z^l_i(x), z^l_i(x')} becomes jointly Gaussian, with covariance given by
$$\mathbb{E}[z^l_i(x)\, z^l_i(x')] = \sigma_b^2 + \sigma_w^2\, \mathbb{E}_{z^{l-1}_i \sim \mathcal{GP}(0, \kappa^{l-1}_{ii})}\bigl[\phi(z^{l-1}_i(x))\, \phi(z^{l-1}_i(x'))\bigr]. \qquad (53)$$
Moreover, as z^l_i and z^l_j for i ≠ j are defined through independent rows of the parameters w^l, b^l and independent pre-activations x^l(x), we have
$$\kappa^l_{ij} = \mathbb{E}[z^l_i(x)\, z^l_j(x')] = 0, \quad i \neq j, \qquad (54)$$
completing the proof.

A.2.2 Neural Tangent Kernel Propagation and Independence

We change notation slightly from the previous section to make the parametrization of f_i(x, θ_L) and z^l_i(x; θ_l) explicit, with
$$z^l_i(x, \theta_l) = \sigma_b b^l_i + \frac{\sigma_w}{\sqrt{n_{l-1}}}\sum_{j=1}^{n_{l-1}} w^l_{ij}\, x^l_j(x), \qquad x^l_j(x) = \phi(z^{l-1}_j(x; \theta_{l-1})), \qquad (55)$$
where θ_l denotes the parameters {w^1, b^1, ..., w^l, b^l} up to layer l and f_i(x, θ_L) = z^L_i(x; θ_L). Let furthermore φ be a Lipschitz-continuous nonlinearity with derivative $\dot\phi(x) = \frac{d}{dx}\phi(x)$.

Proposition 3 (Jacot et al. [2018]). In the limit n_1, ..., n_{L-1} → ∞, the neural tangent kernel Θ^l_ii(x, x') of the i-th output z^l_i(x, θ_l) at layer l, defined as the gradient inner product
$$\Theta^l_{ii}(x, x') = \nabla_{\theta_l} z^l_i(x, \theta_l)^\top \nabla_{\theta_l} z^l_i(x', \theta_l), \qquad (56)$$
is given recursively by
$$\Theta^1_{ii}(x, x') = \kappa^1_{ii}(x, x') = \frac{\sigma_w^2}{n_0} x^\top x' + \sigma_b^2, \quad\text{and}\quad \Theta^1_{ij}(x, x') = 0,\ i \neq j, \qquad (57)$$
$$\Theta^l_{ii}(x, x') = \Theta^{l-1}_{ii}(x, x')\, \dot\kappa^l_{ii}(x, x') + \kappa^l_{ii}(x, x'), \qquad (58)$$
where
$$\dot\kappa^l_{ii}(x, x') = \sigma_w^2\, \mathbb{E}_{z^{l-1}_i \sim \mathcal{GP}(0, \kappa^{l-1}_{ii})}\bigl[\dot\phi(z^{l-1}_i(x))\, \dot\phi(z^{l-1}_i(x'))\bigr] \qquad (60)$$
and
$$\Theta^l_{ij}(x, x') = \nabla_{\theta_l} z^l_i(x, \theta_l)^\top \nabla_{\theta_l} z^l_j(x', \theta_l) = \begin{cases} \Theta^l(x, x') & \text{if } i = j, \\ 0 & \text{if } i \neq j. \end{cases} \qquad (61)$$

Proof. We again proceed by induction.
The induction assumption is that gradients at layer l−1 satisfy
$$\Theta^{l-1}_{ij}(x, x') = \nabla_{\theta_{l-1}} z^{l-1}_i(x, \theta_{l-1})^\top \nabla_{\theta_{l-1}} z^{l-1}_j(x', \theta_{l-1}) = \begin{cases} \Theta^{l-1}(x, x') & \text{if } i = j, \\ 0 & \text{if } i \neq j, \end{cases} \qquad (62)$$
so that at layer l we have
$$\Theta^l_{ii}(x, x') = \Theta^{l-1}_{ii}(x, x')\, \dot\kappa^l_{ii}(x, x') + \kappa^l_{ii}(x, x') \qquad (63)$$
and
$$\Theta^l_{ij}(x, x') = \nabla_{\theta_l} z^l_i(x, \theta_l)^\top \nabla_{\theta_l} z^l_j(x', \theta_l) = 0, \quad \text{if } i \neq j. \qquad (64)$$

Base case (l = 1). At layer l = 1, we have
$$z^1_i(x) = \sigma_b b^1_i + \frac{\sigma_w}{\sqrt{n_0}}\sum_j w^1_{ij}\, x_j, \qquad (65)$$
and the gradient inner product is given by
$$\nabla_{\theta_1} z^1_i(x, \theta_1)^\top \nabla_{\theta_1} z^1_i(x', \theta_1) = \frac{\sigma_w^2}{n_0} x^\top x' + \sigma_b^2 = \kappa^1_{ii}(x, x'). \qquad (66)$$

Inductive step (l > 1). For layers l > 1, we split the parameters θ_l = θ_{l-1} ∪ {w^l, b^l} and split the inner product as
$$\Theta^l_{ii}(x, x') = \underbrace{\nabla_{\theta_{l-1}} z^l_i(x, \theta_l)^\top \nabla_{\theta_{l-1}} z^l_i(x', \theta_l)}_{\text{l.h.s.}} + \underbrace{\nabla_{\{w^l, b^l\}} z^l_i(x, \theta_l)^\top \nabla_{\{w^l, b^l\}} z^l_i(x', \theta_l)}_{\text{r.h.s.}} \qquad (67)$$
Note that the r.h.s. involves gradients w.r.t. last-layer parameters, i.e. the post-activation outputs of the previous layer, and by the same arguments as in the NNGP derivation of Proposition 2, this is a sum of independent post-activations such that in the limit n_{l-1} → ∞,
∇_{w^l, b^l} z^l_i(x, θ_l)^⊤ ∇_{w^l, b^l} z^l_j(x',
θ_l) = κ^l_ii(x, x') if i = j, and 0 if i ≠ j. (68)

For the l.h.s., we first apply the chain rule to obtain
$$\nabla_{\theta_{l-1}} z^l_i(x, \theta_l) = \frac{\sigma_w}{\sqrt{n_{l-1}}}\sum_j w^l_{ij}\, \dot\phi(z^{l-1}_j(x, \theta_{l-1}))\, \nabla_{\theta_{l-1}} z^{l-1}_j(x, \theta_{l-1}). \qquad (69)$$
The gradient inner product of outputs i and j thus reduces to
$$\nabla_{\theta_{l-1}} z^l_i(x, \theta_l)^\top \nabla_{\theta_{l-1}} z^l_j(x', \theta_l) = \frac{\sigma_w^2}{n_{l-1}}\sum_k w^l_{ik} w^l_{jk}\, \dot\phi(z^{l-1}_k(x, \theta_{l-1}))\, \dot\phi(z^{l-1}_k(x', \theta_{l-1}))\, \Theta^{l-1}_{kk}(x, x'). \qquad (70)$$
By the induction assumption Θ^{l-1}_kk(x, x') = Θ^{l-1}(x, x'), and again by the independence of the rows w^l_i and w^l_j for i ≠ j, the above expression converges in the limit n_{l-1} → ∞ to an expectation with
$$\Theta^l_{ij}(x, x') = \begin{cases} \Theta^{l-1}(x, x')\, \dot\kappa^l_{ii}(x, x') + \kappa^l_{ii}(x, x') & i = j, \\ 0 & i \neq j. \end{cases} \qquad (71)$$
This completes the induction.

A.2.3 Multiheaded UVU: Finite Sample Analysis

We now define a multiheaded predictor with M output heads u_i(x, ϑ_t), i = 1, ..., M, a fixed multiheaded target network g_i(x_t; ψ_0) of equivalent architecture as u, and the corresponding prediction errors ε_i(x, ϑ_t, ψ_0) accordingly. Let u_i(x, ϑ_t) be trained such that each head independently runs the same algorithm as outlined in Section 3.

Corollary 2. Under the conditions of Theorem 1, consider online and target networks with M independent heads u_i, g_i, i = 1, ..., M, each trained to convergence with errors ε_i(x, ϑ_∞, ψ_0). Let
$$\tfrac{1}{2}\bar\epsilon(x, \vartheta_\infty, \psi_0)^2 = \frac{1}{2M}\sum_{i=1}^{M} \epsilon_i(x, \vartheta_\infty, \psi_0)^2$$
be the sample mean squared prediction error over M heads. Moreover, consider M+1 independent converged Q-functions Q_i(x; θ_∞) and denote their sample variance
$$\bar\sigma^2_Q(x, \theta_\infty) = \frac{1}{M}\sum_{i=1}^{M+1}\bigl(Q_i(x; \theta_\infty) - \bar Q(x; \theta_\infty)\bigr)^2,$$
where $\bar Q$ is the sample mean. The two estimators are identically distributed according to a scaled Chi-squared distribution
$$\tfrac{1}{2}\bar\epsilon(x, \vartheta_\infty, \psi_0)^2 \overset{D}{=} \bar\sigma^2_Q(x, \theta_\infty), \qquad \bar\sigma^2_Q(x, \theta_\infty) \sim \frac{\sigma^2_Q}{M}\chi^2(M), \qquad (12)$$
with M degrees of freedom, where $\sigma^2_Q(x, \theta_\infty) = \mathbb{V}_{\theta_0}[Q(x, \theta_\infty)]$ is the analytical variance of converged Q-functions given by Theorem 1.

Proof. By Corollary 1, the prediction error of a single-headed online and target network ε(x, ϑ_t, ψ_0) = u(x, ϑ_t) − g(x, ψ_0) converges in the limit n_1, . . .
, n_{L-1} → ∞ and t → ∞ to a Gaussian with zero mean and variance ε(x, ϑ_∞, ψ_0) ∼ N(0, 2σ²_Q), where
$$\sigma^2_Q = \mathbb{V}_{\theta_0}[Q(x, \theta_\infty)] = \kappa_{xx} - (\Theta_{xX}\Delta_X^{-1}\Lambda_x + \mathrm{h.c.}) + \Theta_{xX}\Delta_X^{-1}(\Lambda_X - \gamma\Lambda_{X'})\Delta_X^{-\top}\Theta_{Xx}. \qquad (72)$$
By Propositions 2 and 3, the NNGP and NTK associated with each online head u_i(x, ϑ_∞) in the infinite-width and infinite-time limit are given by
$$\kappa_{ij}(x, x') = \mathbb{E}[u_i(x, \vartheta_\infty)\, u_j(x', \vartheta_\infty)] = \begin{cases} \kappa(x, x') & \text{if } i = j, \\ 0 & \text{if } i \neq j, \end{cases} \qquad (73)$$
$$\Theta_{ij}(x, x') = \nabla_\vartheta u_i(x, \vartheta_\infty)^\top \nabla_\vartheta u_j(x', \vartheta_\infty) = \begin{cases} \Theta(x, x') & \text{if } i = j, \\ 0 & \text{if } i \neq j. \end{cases} \qquad (74)$$
Due to the independence of the NNGP and NTK for different heads u_i, the prediction errors ε_i(x_t; ϑ_∞, ψ_0) are i.i.d. draws from a zero-mean Gaussian with variance as given in Eq. 72. Note that this holds despite the final feature layer being shared between the output functions. The empirical mean squared prediction errors are thus Chi-squared distributed with M degrees of freedom:
$$\frac{1}{M}\sum_{i=1}^{M} \tfrac{1}{2}\epsilon_i(x_t; \vartheta_\infty, \psi_0)^2 \sim \frac{\sigma^2_Q}{M}\chi^2(M). \qquad (75)$$
Now, let {Q_i(x; θ_t)}_{i=1}^{M+1} be a deep ensemble of M+1 Q-functions from independent initializations. By Corollary 1, these Q-functions, too, are i.i.d. draws from a Gaussian, now with mean Θ_{xX}Δ_X^{-1} r and variance as given in Eq. 72. The sample variance of this ensemble therefore also follows a Chi-squared distribution with
M degrees of freedom:
$$\bar\sigma^2_Q(x, \theta_\infty) = \frac{1}{M}\sum_{i=1}^{M+1}\bigl(Q_i(x; \theta_\infty) - \bar Q(x; \theta_\infty)\bigr)^2 \sim \frac{\sigma^2_Q}{M}\chi^2(M), \qquad (76)$$
where $\bar Q(x; \theta_\infty) = \frac{1}{M+1}\sum_{i=1}^{M+1} Q_i(x; \theta_\infty)$ is the sample mean of the M+1 universal Q-functions, completing the proof.

A.3 Limitations and Assumptions

In this section, we detail the central theoretical underpinnings and idealizations upon which our theoretical analysis is built. A central element of our analysis is the representation of neural network learning dynamics via the Neural Tangent Kernel (NTK), an object defined in the theoretical limit of infinite network width. The established NTK framework, where the kernel is deterministic despite random initialization and constant throughout training, typically applies to fully connected networks with NTK parameterization, optimized using a squared error loss [Jacot et al., 2018]. Our framework instead accommodates a semi-gradient TD loss, and thereby introduces an additional prerequisite for ensuring the convergence of these dynamics: the positive definiteness of the matrix expression Θ_XX − γΘ_X'X. This particular constraint is a characteristic inherent to the TD learning paradigm itself rather than a direct consequence of the infinite-width abstraction. Indeed, the design of neural network architectures that inherently satisfy such stability conditions for TD learning continues to be an active area of contemporary research [Yue et al., 2023, Gallici et al., 2024]. The modeling choice of semi-gradient TD losses moreover does not incorporate the use of target networks, where bootstrapped values not only stop gradients but are generated by a separate network altogether that slowly moves towards the online learner. Our analysis moreover considers the setting of offline policy evaluation; that is, we do not assume that additional data is acquired during learning, and policies evaluated for value learning remain constant.
The assumption of a fixed, static dataset diverges from the conditions of online reinforcement learning with control, where the distribution of training data (X, X') typically evolves as the agent interacts with its environment, both through the collection of novel transitions and through adjustments to the policy, for example by use of a Bellman optimality operator. Lastly, our theoretical model assumes, primarily for simplicity, that learning occurs under gradient flow with infinitesimally small step sizes and with updates derived from full-batch gradients. Both finite gradient step sizes and stochastic minibatching have been treated in the literature, albeit not in the TD learning setting [Jacot et al., 2018, Lee et al., 2020, Liu et al., 2020, Yang, 2019]. We believe our analysis could be extended to these settings without major modifications.

B Experimental Details

We provide details on our experimental setup, implementations, and additional results. This includes architectural design choices, algorithmic design choices, hyperparameter settings, hyperparameter search procedures, and environment details.

B.1 Implementation Details

All algorithms are self-implemented and tuned in JAX [Bradbury et al., 2018]. A detailed exposition of our design choices and parameters follows below.

Environment Setup. We use a variation of the GoToDoor environment of the minigrid suite [Chevalier-Boisvert et al., 2023]. As our focus is not on partially observable settings, we use fully observable 35-dimensional state descriptions with S = R^35. Observation vectors comprise the factors
$$o = \bigl(o_{\text{agent-pos}}^\top,\ o_{\text{agent-dir}}^\top,\ o_{\text{door-config}}^\top,\ o_{\text{door-pos}}^\top\bigr)^\top, \qquad (77)$$
where
o_agent-pos ∈ R² is the agent position in x, y-coordinates, o_agent-dir ∈ R is a scalar integer indicating the agent direction (taking values between 1 and 4), o_door-config ∈ R²⁴ is the door configuration, comprising 4 one-hot encoded vectors indicating each door's color, and o_door-pos ∈ R⁸ is a vector containing the x, y-positions of the four doors. The action space is discrete and four-dimensional with the following effects:
$$a_{\text{effect}} = \begin{cases} \text{turn left} & \text{if } a = 0, \\ \text{turn right} & \text{if } a = 1, \\ \text{go forward} & \text{if } a = 2, \\ \text{open door} & \text{if } a = 3. \end{cases} \qquad (78)$$
Tasks are one-hot encodings of the target door color, that is, z ∈ R⁶, and in the online setting they are generated such that they are achievable. The reward function is an indicator of the correct door being opened, in which case a reward of 1 is given to the agent and the agent position is reset to a random location in the grid. Episodes terminate only upon reaching the maximum number of timesteps (50 in our experiments). In the task rejection setting described in our evaluation protocol, an agent in a start state s_0 is presented a list of tasks, which may or may not be attainable, and is allowed to reject a fixed number of tasks from this list. In our experiments, the agent is allowed to reject 4 out of 6 total tasks at the beginning of each episode.

Data Collection. Our offline datasets are recorded replay buffers from a DQN agent deployed to the GoToDoor environment with an ε-greedy exploration strategy and a particular policy: when the door indicated by the task encoding z provided by the environment lies at the south or west wall, the regular policy of the online DQN agent is executed. If the target door lies at the north or east wall, however, actions are generated by a fixed random Q-network. This mixture policy emulates a policy that exhibits expert performance on certain combinations of tasks and states, but suboptimal behavior for other combinations.
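The observation layout of Eq. (77) can be sketched in a few lines. The helper and field names below are ours for illustration, not taken from the paper's released code:

```python
import numpy as np

def encode_observation(agent_pos, agent_dir, door_colors, door_pos):
    """Assemble the 35-dim GoToDoor state vector of Eq. (77).

    agent_pos:   (x, y) agent coordinates                      -> R^2
    agent_dir:   integer heading in {1, ..., 4}                -> R^1
    door_colors: 4 color indices in {0, ..., 5}, one per door  -> R^24 (one-hot)
    door_pos:    (x, y) coordinates of the four doors          -> R^8
    """
    o_agent_pos = np.asarray(agent_pos, dtype=np.float32)
    o_agent_dir = np.array([agent_dir], dtype=np.float32)
    o_door_config = np.zeros((4, 6), dtype=np.float32)  # 4 doors x 6 colors
    o_door_config[np.arange(4), door_colors] = 1.0
    o_door_pos = np.asarray(door_pos, dtype=np.float32).reshape(-1)
    return np.concatenate(
        [o_agent_pos, o_agent_dir, o_door_config.reshape(-1), o_door_pos]
    )

obs = encode_observation((3, 5), 2, [0, 2, 4, 1], [(0, 4), (4, 0), (9, 4), (4, 9)])
assert obs.shape == (35,)  # 2 + 1 + 24 + 8
```

The task encoding z ∈ R⁶ would then be a sixth one-hot color vector passed to the agent separately, matching the per-door color blocks above.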
The replay buffer does, however, contain most combinations of states and tasks, albeit some with trajectories from suboptimal policies. Hyperparameter details of the online agent are provided in Section B.2.

Figure 5: Illustration of the used architecture. ⊙ indicates elementwise multiplication.

Algorithmic Details. All tested algorithms and experiments are based on DQN agents [Mnih et al., 2015], which we adapted to the task-conditioned universal value function setting [Schaul et al., 2015]. While our theoretical analysis considers full-batch gradient descent, in practice we sample minibatches from offline datasets with X_mb = {(s_i, a_i, z_i)}_{i=1}^{N_mb}, X'_mb = {(s'_i, a'_i, z_i)}_{i=1}^{N_mb}, where next-state actions are generated as a'_i = argmax_{a∈A} Q(s'_i, a, z_i, θ_t) and rewards are r = {r_i}_{i=1}^{N_mb}. Moreover, we deviate from our theoretical analysis and use target networks in place of the stop-gradient operation. Here, a separate set of parameters θ̃_t is used to generate bootstrap targets in the TD loss, which in practice is given by
$$\mathcal{L}(\theta_t) = \tfrac{1}{2}\bigl\|\gamma Q(X'_{mb}, \tilde\theta_t) + r - Q(X_{mb}, \theta_t)\bigr\|_2^2. \qquad (79)$$
The parameters θ̃_t are updated towards the online parameters θ_t at fixed intervals through Polyak updating, as is common. We use this basic algorithmic pipeline for all tested algorithms, including the
online agent used for data collection.

Architectural Details. We use a hypernetwork MLP architecture adapted to the DQN setting, as depicted in Fig. 5. Specifically, we pass states s and task encodings z through single-layer encoders, which are then joined by elementwise multiplication. The resulting vector is normalized by its l2 norm, x' = x/‖x‖₂. This joint vector is passed through a 3-layer MLP of width 512, again normalized by its l2 norm, and finally passed through a fully connected layer to obtain a vector of dimension R^|A|. Although our experiments are conducted in the offline RL setting, preliminary experiments showed no benefits from using ensemble-based pessimism [An et al., 2021] or conservative Q-updates [Kumar et al., 2020]. Instead, our normalization pipeline appears to sufficiently address overestimation issues, as suggested by several recent works [Yue et al., 2023, Gallici et al., 2024].

Independent Bootstrapping. For the ensemble-based BDQNP baseline and our UVU model, we perform independent bootstrapping in the TD loss computation. By this we mean that both the bootstrapped values and actions are generated by individual Q-functions. In the case of BDQNP, this means we compute Loss 79 for each model Q_k, indexed by k ∈ [1, ..., K], with X_mb,k = X_mb, and bootstraps are generated as
$$X'_{mb,k} = \{(s'_i, a'_{ik}, z_i)\}_{i=1}^{N_{mb}}, \quad\text{with}\quad a'_{ik} = \operatorname*{argmax}_{a\in\mathcal{A}} Q_k(s'_i, a, z_i, \theta_t). \qquad (80)$$
Note that this procedure is established [Osband et al., 2016] and serves the purpose of maintaining independence between the models in the ensemble. In order to conduct the same procedure in our UVU method, where we have access to only one Q-function, we generate K distinct Q-estimates by computing
$$Q^{UVU}_k(s, a, z, \theta_t) := Q(s, a, z, \theta_t) + \epsilon_k(s, a, z, \vartheta_t, \psi_0), \qquad (81)$$
that is, by adding the UVU error of the k-th output head. Bootstraps are then generated according to Eq. 80.

Intrinsic reward priors. Intrinsic reward priors are a trick suggested by Zanger et al.
[2024] to address a shortcoming of propagation methods used for intrinsic reward methods like RND [Burda et al., 2019, O'Donoghue et al., 2018]. The issue is that while learning a Q-function with intrinsic rewards can, with the right choice of intrinsic reward, provide optimistic estimates of the value function, it does so only for state-action regions covered in the data. A potential underestimation of the optimistic bound, however, counteracts its intention, a phenomenon also described by Rashid et al. [2020]. Intrinsic reward priors are a heuristic method to address this issue by adding local, myopic uncertainty estimates automatically to the forward pass of the intrinsic Q-function, leading to a "prior" mechanism that ensures
$$\hat Q_{intr}(s, a, z, \theta_t) = Q_{intr}(s, a, z, \theta_t) + \tfrac{1}{2}\epsilon_{rnd}(s, a, z, \theta_{rnd})^2,$$
where ε_rnd(s, a, z, θ_rnd) denotes a local RND error as an example. The altered function Q̂_intr(s, a, z, θ_t) is trained as usual with Loss 79 and intrinsic rewards ½ε_rnd(s, a, z, θ_rnd)².

B.2 Hyperparameter Settings

To ensure a consistent basis for comparison across our findings, all experimental work was carried out using a shared codebase. We adopted standardized
modeling approaches, including uniform choices for elements like network architectures and optimization algorithms, where appropriate. Specifically, every experiment employed the same architecture as detailed in Appendix B.1. Key hyperparameters, encompassing both foundational and algorithm-specific settings, were tuned through a grid search on the 10×10 variation of the GoToDoor environment. The search grid and final hyperparameters are provided in Tables 2 and 3, respectively. DQN in Table 3 refers to the online data collection agent.

B.3 Additional Experimental Results

We report additional results of the illustrative experiment shown in Section 3. Fig. 6 shows different uncertainty estimates in the described chain environment. The first row depicts myopic uncertainty estimates or, equivalently, RND errors. The second and third rows show propagated local uncertainties without and with the intrinsic reward prior mechanism, respectively.

[Figure 6 panels: RND (s,a,z) – 1 model; Q-RND u(s,a,z) – 2 models; Q-RND with intrinsic priors u(s,a,z) – 2 models.]

Figure 6: Top row: RND errors. 2nd row: Value uncertainty as measured by an intrinsic Q-function. 3rd row: Value uncertainty as measured by an intrinsic Q-function with intrinsic reward priors.

This result clearly shows the shortcoming of the standard training pipeline for intrinsic rewards: in a standard training pipeline, the novelty bonus of RND is given only for transitions (s_i, a_i, z_i, s'_i) already present in the dataset and is never evaluated for OOD actions. To generate reliable uncertainty estimates, RND requires, in addition to the RND network and the additional intrinsic Q-function, an algorithmic mechanism such as the intrinsic reward priors or even more sophisticated methods as described by Rashid et al. [2020].
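As a rough numerical illustration of the intrinsic-reward-prior mechanism, the local RND error is simply added to the intrinsic Q-function's forward pass. The network shapes and helper names below are our own sketch, not the released implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 35 + 1 + 6  # state + action index + task encoding, as in Appendix B.1

def mlp_init(sizes):
    # Simple Gaussian-initialized MLP parameters.
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

target_params = mlp_init([DIM, 64, 1])     # fixed random RND target network
predictor_params = mlp_init([DIM, 64, 1])  # RND predictor (left untrained here)

def rnd_error(x):
    # Local, myopic novelty signal: predictor-target mismatch.
    return float(mlp(predictor_params, x) - mlp(target_params, x))

def q_intr_hat(q_intr, x):
    # "Prior" mechanism: the squared local error is added in the forward
    # pass, so OOD inputs receive an optimistic bonus even before any
    # intrinsic reward has been propagated through TD learning.
    return q_intr(x) + 0.5 * rnd_error(x) ** 2

q_intr = lambda x: 0.0  # stand-in for the learned intrinsic Q-function
x = rng.standard_normal(DIM)
assert q_intr_hat(q_intr, x) >= q_intr(x)  # the prior bonus is nonnegative
```

In the actual method, q_intr_hat would additionally be trained with Loss 79 on intrinsic rewards ½ε_rnd², so propagated and local uncertainty combine.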
Table 2: Hyperparameter search space

Hyperparameter | Values
Q learning rate (all) | [1·10^-6, 3·10^-6, 1·10^-5, 3·10^-5, 1·10^-4, 3·10^-4, 1·10^-3]
Prior function scale (BDQNP) | [0.1, 0.3, 1.0, 3.0, 10.0]
RND learning rate (RND, RND-P) | [1·10^-6, 3·10^-6, 1·10^-5, 3·10^-5, 1·10^-4, 3·10^-4, 1·10^-3]
UVU learning rate (UVU) | [1·10^-6, 3·10^-6, 1·10^-5, 3·10^-5, 1·10^-4, 3·10^-4, 1·10^-3]

Table 3: Hyperparameter settings for GoToDoor experiments.

Hyperparameter | DQN | BDQNP | DQN-RND | DQN-RND+P | UVU
Adam Q learning rate | 3·10^-4 | 3·10^-4 | 3·10^-4 | 3·10^-4 | 3·10^-4
Prior function scale | n/a | 1.0 | n/a | n/a | n/a
N heads | 1 | 1 | 1/512 | 1/512 | 1/512
Ensemble size | n/a | 3/15 | n/a | n/a | n/a

Shared settings (all algorithms):
MLP hidden layers | 3
MLP layer width | 512
Discount γ | 0.9
Batch size | 512
Adam epsilon | 0.005 / batch size
Initialization | He uniform [He et al., 2015]
Double DQN | Yes [Hasselt, 2010]
Update frequency | 1
Target lambda | 1.0
Target frequency | 256

Table 4: GoToDoor environment settings

Parameter | Value
State space dim | 35
Action space dim | 3
Task space dim | 6
N task rejections | 4
Max. episode length | 50
arXiv:2505.21136v2 [cs.LG] 28 May 2025

SageAttention2++: A More Efficient Implementation of SageAttention2

Jintao Zhang¹ Xiaoming Xu¹ Jia Wei¹ Haofeng Huang¹ Pengle Zhang¹ Chendong Xiang¹ Jun Zhu¹ Jianfei Chen¹

Abstract

The efficiency of attention is critical because its time complexity grows quadratically with sequence length. SageAttention2 addresses this by using quantization to speed up matrix multiplications (Matmul) in attention. To further accelerate SageAttention2, we propose to utilize the faster instruction of FP8 Matmul accumulated in FP16. The instruction is 2× faster than the FP8 Matmul used in SageAttention2. Our experiments show that SageAttention2++ achieves a 3.9× speedup over FlashAttention while maintaining the same attention accuracy as SageAttention2. This means SageAttention2++ effectively accelerates various models, including those for language, image, and video generation, with negligible end-to-end metrics loss. The code will be available at https://github.com/thu-ml/SageAttention.

1. Introduction

The quadratic time complexity of attention necessitates efficient implementations for real-world applications with long sequences (Jiang et al., 2024). Current approaches to reduce attention's computational demands fall into three main categories: (1) linear attention methods (Wang et al., 2020; Choromanski et al., 2021; Yu et al., 2022; Katharopoulos et al., 2020; Qin et al., 2024; Yang et al., 2024) that achieve O(N) complexity; (2) sparse attention techniques (Liu et al., 2021; Chu et al., 2021; Li et al., 2022; Xiao et al., 2024b;a; Chen et al., 2024; Jiang et al., 2024; Venkataramanan et al., 2024; Gao et al., 2024; Fu et al., 2024; Zhang et al., 2025e; Xi et al., 2025; Zhang et al., 2025f) that process only relevant context portions. While effective, these methods often exhibit limited generality across models and tasks.
(3) An alternative direction focuses on hardware-optimized attention implementations that maintain full sequence computation while achieving superior speed and accuracy. Notable examples include FlashAttention (Dao et al., 2022), FlashAttention2 (Dao, 2024; Shah et al., 2024), xFormers (Lefaudeux et al., 2022), and the SageAttention family (Zhang et al., 2025c;d;b), which demonstrate strong performance across diverse applications.

¹Department of Computer Science, Tsinghua University. Preprint.

Motivation, problem, and our approach. For the second matrix multiplication (Matmul) PV in attention, SageAttention2 accelerates it by quantizing to FP8 and using the mma.f32.f8.f8.f32 instruction. However, mma.f32.f8.f8.f32 employs an FP32 accumulator and is only 2× faster than FP16. We find that the mma.f16.f8.f8.f16 instruction (using an FP16 accumulator for FP8 Matmul) achieves a 4× speedup over FP16 (NVIDIA, 2022). Therefore, we aim to accelerate SageAttention2 by using the faster instruction. However, directly using the faster instruction will lead to the values of PV exceeding the representable range of FP16. To address this problem, we propose to narrow the quantization ranges of P and V to satisfy the accumulator range in FP16.

Performance. For efficiency, SageAttention2++ delivers a 3.9× speedup over FlashAttention. In terms of accuracy, SageAttention2++ matches SageAttention2's performance. We conduct comprehensive evaluations on state-of-the-art models for text, image, and video generation. The results demonstrate that SageAttention2++ provides plug-and-play acceleration with negligible end-to-end metrics loss across diverse models.

2. Preliminary

Table 1. Speedup compared to matrix multiplication in FP16 with an FP32 accumulator.

GPU | MM Input | MM Accumulator |
https://arxiv.org/abs/2505.21136v2
Speedup
RTX4090, RTX5090 | FP16 | FP32 | 1x
RTX4090, RTX5090 | FP8 | FP32 | 2x
RTX4090, RTX5090 | FP8 | FP16 | 4x

Table 2. Average attention accuracy across all attention layers of CogvideoX.

Method | Pr | Vr | CosSim ↑ | L1 ↓
SageAttn2 | 448 | 448 | 99.97% | 0.01862
SageAttn2++ | 448 | 2.25 | 99.97% | 0.01863
SageAttn2++ | 224 | 4.5 | 99.97% | 0.01862
SageAttn2++ | 112 | 9 | 99.97% | 0.01863

2.1. SageAttention2

SageAttention2 (Zhang et al., 2025a) is a quantization (Zhang et al., 2025g; Hu et al., 2025) method based on FlashAttention (Dao et al., 2022). FlashAttention tiles Q, K, P, V into blocks ({Q_i}, {K_i}, {P̃_i}, {V_i}) and uses online softmax (Milakov & Gimelshein, 2018) to compute attention progressively. For simplicity, we omit the subscripts in the following content and use Q, K, P̃, V to represent the tiled blocks. SageAttention2 quantizes Q, K to INT4/INT8 with per-block granularity, quantizes P̃ to FP8 in E4M3 with per-block granularity, and quantizes V to FP8 in E4M3 with per-channel granularity. This means each of Q, K, P̃ has a separate scale factor: δ_Q = max(|Q|)/127, δ_K = max(|K|)/127, δ_P = max(|P̃|)/448, and each channel of V has a separate scalar scale: δ_V = colmax(|V|)/448. By doing so, SageAttention2 accelerates matrix multiplications in attention through low-bit Tensor Core operations. For example, P̂ = ⌈P̃/δ_P⌋, V̂ = ⌈V/δ_V⌋; then PV = P̂V̂ · δ_P · δ_V.

2.2. Data Type of Accumulator for Matmul

On some GPUs, the speed of Matmul instructions depends on the accumulator data type. For instance, mma.f32.f8.f8.f32 uses an FP32 accumulator for FP8 Matmul and is only 2× faster than FP16. The instruction using an FP16 accumulator for FP8 Matmul (mma.f16.f8.f8.f16) is 4× faster than FP16. Table 1 summarizes the speedup of Matmul instructions with different accumulators.

3. SageAttention2++

In this section, we introduce SageAttention2++.
The workflow of SageAttention2++ is based on SageAttention2, also using the smoothing of Q and K, INT4/INT8 quantization for the QK⊤ Matmul, and FP8 quantization for the PV Matmul. The main difference is that for PV, SageAttention2++ uses the faster instruction (mma.f16.f8.f8.f16), which employs an FP16 accumulator for the FP8 Matmul. To ensure the results of the FP8 Matmul remain within FP16's representable range, we adjust the scale factors of the FP8 quantization.

3.1. Narrowing the FP8 Quantization Range

The specific MMA (NVIDIA, 2025) instruction used for the Matmul between P and V is mma.m16n8k32. If we quantize P and V to FP8 in E4M3 (−448 to 448) as in SageAttention2, the results may exceed FP16's representable range (−65504 to 65504). This occurs because 32 product values pv are accumulated in FP16, where p and v come from the quantized P̂ and V̂ (derived from P̃ and V). To ensure the accumulated results stay within FP16's range:
$$|32 \times pv| \le 65504. \qquad (1)$$
For instance, choosing |p| ≤ 224 and |v| ≤ 9 satisfies this condition. We therefore narrow the quantization ranges of P and V by adjusting their scale factors:
$$\delta_P = \max(|\tilde P|)/P_r, \qquad \delta_V = \max(|V|)/V_r, \qquad (2)$$
where we constrain P_r × V_r ≤ 2047 (since 65504/32 = 2047).

3.2. Delayed FP32 Buffering

The transformation of accumulated values from mma.m16n8k32 (in FP16) to FP32 incurs overhead because it requires additional data-type conversion PTX instructions (NVIDIA, 2025). To reduce this overhead, we accumulate two consecutive mma.m16n8k32 results in FP16 before performing the FP32 conversion, effectively halving the transformation overhead. Maintaining the FP16 representable range then requires:
$$P_r \times V_r \le 2047/2. \qquad (3)$$
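To make the range argument concrete, here is a small numerical sketch of Eqs. (1)-(2). Integer rounding stands in for actual E4M3 quantization, and the helper is our own illustration, not kernel code:

```python
import numpy as np

FP16_MAX = 65504.0  # largest finite FP16 value
K_ACC = 32          # products accumulated per mma.m16n8k32 k-slice

def narrowed_scales(P_tilde, V, Pr=224.0, Vr=9.0):
    """Scale factors of Eq. (2): quantized values land in [-Pr, Pr] and
    [-Vr, Vr], so any 32-product accumulation stays within FP16 (Eq. (1))."""
    assert Pr * Vr <= FP16_MAX / K_ACC  # Pr * Vr <= 2047
    delta_P = np.abs(P_tilde).max() / Pr
    delta_V = np.abs(V).max() / Vr
    return delta_P, delta_V

rng = np.random.default_rng(0)
P_tilde = rng.random((16, 32))      # softmax block, entries in [0, 1)
V = rng.standard_normal((32, 8))
dP, dV = narrowed_scales(P_tilde, V)
P_hat, V_hat = np.round(P_tilde / dP), np.round(V / dV)

# Worst-case magnitude of one 32-term FP16 accumulation stays in range:
assert K_ACC * np.abs(P_hat).max() * np.abs(V_hat).max() <= FP16_MAX
# Dequantizing recovers the original Matmul up to quantization error:
err = np.linalg.norm(P_hat @ V_hat * dP * dV - P_tilde @ V)
assert err < 0.2 * np.linalg.norm(P_tilde @ V)
```

Note that the default pair here (Pr = 224, Vr = 9) satisfies only the Eq. (1) bound; with delayed FP32 buffering (Eq. (3)), pairs like (224, 4.5) from Table 2 would be used instead.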
Choice of Pr and Vr. Table 2 shows attention accuracy for feasible (Pr, Vr) pairs. The results demonstrate that narrowing the quantization ranges introduces negligible error. We select Pr = 224 and Vr = 4.5 for optimal performance.

4. Experiment

Main result. SageAttention2++ achieves up to a 3.9× speedup over FlashAttention2 while consistently outperforming both SageAttention and SageAttention2 in computational efficiency. Importantly, these performance gains are achieved with negligible impact on end-to-end metrics across diverse model architectures.

4.1. Setup

Models and attentions. We evaluate SageAttention2++ across diverse representative models spanning language, image, and video generation: Llama3.1 (8B) (Dubey et al., 2024) for text2text; CogvideoX (2B), HunyuanVideo (Kong et al., 2024), and Wan (Wan et al., 2025) for text2video; and Flux (schnell) (Black Forest Labs, 2023) and Stable-Diffusion3.5 (turbo) (Stability AI, 2023) for text2image. We compare our method with FlashAttention2 (Dao, 2024), SageAttention (Zhang et al., 2025c),
[Figure panels: kernel speed (TOPS) vs. sequence length (1K-32K) for FlashAttn, Sage1, Sage2(8+8), Sage2++(8+8), Sage2(4+8), and Sage2++(4+8), with and without causal masking.]

Figure 1. Speed comparison between SageAttention2++ and baselines (RTX4090, headdim=128).

Figure 2. Speed comparison between SageAttention2++ and baselines (RTX4090, headdim=64).

Figure 3. Speed comparison between SageAttention2++ and baselines (RTX5090, headdim=128).
and SageAttention2 (Zhang et al., 2025a). Please note that FlashAttention3 can only run on Hopper GPUs, so FlashAttention2 is already the fastest version for RTX5090 and RTX4090. Following SageAttention2's approach, we implement two SageAttention2++ variants: SageAttn2++(8+8) (INT8 for Q, K) and SageAttn2++(4+8) (INT4 for Q, K), both using FP8 in E4M3 for P̃, V.

Figure 4. Speed comparison between SageAttention2++ and baselines (RTX5090, headdim=64).

Datasets and metrics. Detailed dataset and metric information appears in Appendix A.2.

Implementation. We implement SageAttention2++ using CUDA.

4.2. Speed of Kernels

Kernel Speed. We benchmark the speed of SageAttention2++ against baselines using configurations with headdim=64 and headdim=128, both with

Table 3. End-to-end metrics across text, image, and video generation models.

Model | Attention | WikiText (Ppl.) ↓ | Lambda (Acc.) ↑ | NIAH (Acc.)
↑
Llama3.1 | Full-Precision | 6.013 | 0.815 | 0.906
Llama3.1 | SageAttn2(8+8) | 6.019 | 0.811 | 0.903
Llama3.1 | SageAttn2++(8+8) | 6.020 | 0.813 | 0.901

Model | Attention | CLIPSIM ↑ | CLIP-T ↑ | VQA-a ↑ | VQA-t ↑ | FScore ↑
CogvideoX (2B) | Full-Precision | 0.179 | 0.997 | 74.499 | 74.642 | 4.974
CogvideoX (2B) | SageAttn2(4+8) | 0.179 | 0.997 | 76.309 | 66.396 | 4.386
CogvideoX (2B) | SageAttn2(8+8) | 0.178 | 0.997 | 74.322 | 74.447 | 4.899
CogvideoX (2B) | SageAttn2++(4+8) | 0.179 | 0.997 | 74.387 | 66.568 | 4.333
CogvideoX (2B) | SageAttn2++(8+8) | 0.179 | 0.997 | 76.309 | 73.165 | 4.386
HunyuanVideo | Full-Precision | 0.175 | 0.999 | 77.437 | 52.731 | 1.169
HunyuanVideo | SageAttn2(4+8) | 0.176 | 0.999 | 73.282 | 55.141 | 0.968
HunyuanVideo | SageAttn2(8+8) | 0.175 | 0.999 | 78.145 | 54.878 | 1.176
HunyuanVideo | SageAttn2++(4+8) | 0.176 | 0.999 | 73.282 | 52.258 | 0.968
HunyuanVideo | SageAttn2++(8+8) | 0.175 | 0.999 | 78.569 | 51.080 | 1.192
Wan | Full-Precision | 0.172 | 0.999 | 53.255 | 59.989 | 1.843
Wan | SageAttn2(4+8) | 0.176 | 0.998 | 29.728 | 38.533 | 0.994
Wan | SageAttn2(8+8) | 0.172 | 0.999 | 49.794 | 55.712 | 1.870
Wan | SageAttn2++(4+8) | 0.176 | 0.998 | 29.728 | 38.023 | 0.994
Wan | SageAttn2++(8+8) | 0.172 | 0.999 | 50.876 | 57.140 | 1.902

Model | Attention | FID ↓ | sFID ↓ | CLIP ↑ | IR ↑
Flux | Full-Precision | 165.117 | 147.831 | 31.401 | 0.912
Flux | SageAttn2(4+8) | 164.170 | 147.185 | 31.358 | 0.910
Flux | SageAttn2(8+8) | 163.185 | 146.101 | 31.453 | 0.905
Flux | SageAttn2++(4+8) | 164.170 | 147.185 | 31.358 | 0.910
Flux | SageAttn2++(8+8) | 163.555 | 146.036 | 31.445 | 0.902
Stable-Diffusion3.5 | Full-Precision | 166.369 | 146.514 | 31.876 | 0.929
Stable-Diffusion3.5 | SageAttn2(4+8) | 164.610 | 147.350 | 31.912 | 0.914
Stable-Diffusion3.5 | SageAttn2(8+8) | 164.971 | 148.498 | 31.964 | 0.931
Stable-Diffusion3.5 | SageAttn2++(4+8) | 164.610 | 147.350 | 31.912 | 0.914
Stable-Diffusion3.5 | SageAttn2++(8+8) | 165.842 | 146.465 | 31.968 | 0.929

Figure 5. A visible example of using SageAttention2++ (panels: SageAttention2++ on Wan; SageAttention2++ on Flux; each vs. full precision).

and without a Causal Mask (Vaswani, 2017). Specifically, Fig. 1 shows the speed across varying sequence lengths on RTX4090, indicating that SageAttn2++(4+8) and SageAttn2++(8+8) are approximately 3.9x and 3.0x faster than FlashAttention2, respectively. Figs. 2, 3, and 4 show more kernel speed comparisons on RTX4090 and RTX5090 GPUs.
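The TOPS numbers reported in Figs. 1-4 follow the usual attention-benchmark bookkeeping of counting only the two large Matmuls; the exact formula below is our assumption about that convention, not taken from the paper's code:

```python
def attention_tops(seq_len, head_dim, n_heads, batch, seconds, causal=False):
    """Throughput in TOPS: FLOPs of QK^T plus PV divided by wall time.

    Each Matmul costs 2 * N^2 * d FLOPs per head; causal masking skips
    roughly half of the score matrix, halving the counted work.
    """
    flops = 4.0 * seq_len ** 2 * head_dim * n_heads * batch
    if causal:
        flops /= 2.0
    return flops / seconds / 1e12

# Example: one head, N = 1024, d = 128, 1 ms kernel time
tops = attention_tops(1024, 128, 1, 1, 1e-3)  # ~0.537 TOPS
```

Under this accounting, a kernel that sustains ~640 TOPS at 32K tokens (as Sage2++(4+8) does in Fig. 1) is doing the same mathematical work as a ~164-TOPS FlashAttention run in roughly a quarter of the time.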
[Figure 6. Visible examples of using SageAttention2++ on video generation (Sage2++(8+8), Sage2++(4+8), and full precision on CogvideoX and HunyuanVideo).]

4.3. End-to-end Performance

Metrics loss. We evaluate end-to-end model performance using SageAttention2++ against baseline methods. Detailed evaluation results are presented in Table 3. The results indicate that SageAttn2++(8+8) and SageAttn2++(4+8) match the end-to-end metrics of SageAttention2. Specifically, SageAttn2++(8+8) incurs almost no metrics loss across various models, while SageAttn2++(4+8) brings a small metrics loss.

Visible image and video examples. Figs. 5, 6, and 7 show some visible comparison examples.

5. Conclusion

We introduce SageAttention2++ to further accelerate SageAttention2. We propose to utilize
https://arxiv.org/abs/2505.21136v2
the faster instruction of FP8 matmul accumulated in FP16 for the matrix multiplication of PV. Experiments show that SageAttention2++ achieves a 3.9× speedup over FlashAttention (SageAttention2 has a 3× speedup), while maintaining the same attention accuracy as SageAttention2. This means SageAttention2++ can accelerate various models, including those for language, image, and video generation, with negligible end-to-end metrics loss.

References

Black Forest Labs. Flux. https://github.com/black-forest-labs/flux, 2023.

Chen, Y., Qian, S., Tang, H., Lai, X., Liu, Z., Han, S., and Jia, J. LongLoRA: Efficient fine-tuning of long-context large language models. In The International Conference on Learning Representations, 2024.

Choromanski, K. M., Likhosherstov, V., Dohan, D., Song, X., Gane, A., Sarlos, T., Hawkins, P., Davis, J. Q., Mohiuddin, A., Kaiser, L., Belanger, D. B., Colwell, L. J., and Weller, A. Rethinking attention with Performers. In International Conference on Learning Representations, 2021.

Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., and Shen, C. Twins: Revisiting the design of spatial attention in vision transformers. In Advances in Neural Information Processing Systems, 2021.

Dao, T. FlashAttention-2: Faster attention with better parallelism and work partitioning. In The Twelfth International Conference on Learning Representations, 2024.

Dao, T., Fu, D. Y., Ermon, S., Rudra, A., and Re, C. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. In Advances in Neural Information Processing Systems, 2022.

Dubey, A., Jauhri, A., Pandey, A., Kadian, A., Al-Dahle, A., Letman, A., Mathur, A., Schelten, A., Yang, A., Fan, A., et al. The Llama 3 herd of models.
arXiv preprint arXiv:2407.21783, 2024.

Fu, T., Huang, H., Ning, X., Zhang, G., Chen, B., Wu, T., Wang, H., Huang, Z., Li, S., Yan, S., Dai, G., Yang, H., and Wang, Y. MoA: Mixture of sparse attention for automatic large language model compression. arXiv preprint arXiv:2406.14909, 2024.

Gao, Y., Zeng, Z., Du, D., Cao, S., So, H. K.-H., Cao, T., Yang, F., and Yang, M. SeerAttention: Learning intrinsic sparse attention in your LLMs. arXiv preprint arXiv:2410.13276, 2024.

Hessel, J., Holtzman, A., Forbes, M., Le Bras, R., and Choi, Y. CLIPScore: A reference-free evaluation metric for image captioning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 7514–7528, 2021.

Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems, 30, 2017.

Hu, Y., Huang, W., Liang, Z., Chen, C., Zhang, J., Zhu, J., and Chen, J. Identifying sensitive weights via post-quantization integral. 2025.

Jelinek, F., Mercer, R. L., Bahl, L. R., and Baker, J. K. Perplexity—a measure of the difficulty of speech recognition tasks. The Journal of the Acoustical Society of America, 62(S1):S63–S63, 1977.

Jiang, H., Li,
Y., Zhang, C., Wu, Q., Luo, X., Ahn, S., Han, Z., Abdi, A. H., Li, D., Lin, C.-Y., Yang, Y., and Qiu, L. MInference 1.0: Accelerating pre-filling for long-context LLMs via dynamic sparse attention. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.

Kamradt, G. LLMTest Needle In A Haystack: Pressure testing LLMs. https://github.com/gkamradt/LLMTest_NeedleInAHaystack, 2023.

Katharopoulos, A., Vyas, A., Pappas, N., and Fleuret, F. Transformers are RNNs: Fast autoregressive transformers with linear attention. In International Conference on Machine Learning, pp. 5156–5165. PMLR, 2020.

Kong, W., Tian, Q., Zhang, Z., Min, R., Dai, Z., Zhou, J., Xiong, J., Li, X., Wu, B., Zhang, J., Wu, K., Lin, Q., Wang, A., Wang, A., Li, C., Huang, D., Yang, F., Tan, H., Wang, H., Song, J., Bai, J., Wu, J., Xue, J., Wang, J., Yuan, J., Wang, K., Liu, M., Li, P., Li, S., Wang, W., Yu, W., Deng, X., Li, Y., Long, Y., Chen, Y., Cui, Y., Peng, Y., Yu, Z., He, Z., Xu, Z., Zhou, Z., Xu, Z., Tao, Y., Lu, Q., Liu, S., Zhou, D., Wang, H., Yang, Y., Wang, D., Liu, Y., Jiang, J., and Zhong, C. HunyuanVideo: A systematic framework for large video generative models. arXiv preprint arXiv:2412.03603, 2024.

Lefaudeux, B., Massa, F., Liskovich, D., Xiong, W., Caggiano, V., Naren, S., Xu, M., Hu, J., Tintore, M., Zhang, S., Labatut, P., Haziza, D., Wehrstedt, L., Reizenstein, J., and Sizov, G. xFormers: A modular and hackable transformer modelling library. https://github.com/facebookresearch/xformers, 2022.

Li, K., Wang, Y., Peng, G., Song, G., Liu, Y., Li, H., and Qiao, Y. UniFormer: Unified transformer for efficient spatial-temporal representation learning. In International Conference on Learning Representations, 2022.

Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., and Zitnick, C. L. Microsoft COCO: Common objects in context.
In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6–12, 2014, Proceedings, Part V, pp. 740–755. Springer, 2014.

Liu, Y., Cun, X., Liu, X., Wang, X., Zhang, Y., Chen, H., Liu, Y., Zeng, T., Chan, R., and Shan, Y. EvalCrafter: Benchmarking and evaluating large video generation models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 22139–22149, 2024.

Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. Swin Transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 10012–10022, 2021.

Merity, S., Xiong, C., Bradbury, J., and Socher, R. Pointer sentinel mixture models. In International Conference on Learning Representations, 2022.

Milakov, M. and Gimelshein, N. Online normalizer calculation for softmax. arXiv preprint arXiv:1805.02867, 2018.

NVIDIA. NVIDIA Ada GPU architecture, 2022. URL https://images.nvidia.com/aem-dam/Solutions/geforce/ada/nvidia-ada-gpu-architecture.pdf. Technical whitepaper.

NVIDIA. Parallel Thread Execution ISA Version 8.7. https://docs.nvidia.com/cuda/pdf/ptx_isa_8.4.pdf, 2025. Accessed: 2025-05-16.
Paperno, D., Kruszewski, G., Lazaridou, A., Pham, N.-Q., Bernardi, R., Pezzelle, S., Baroni, M., Boleda, G., and Fernández, R. The LAMBADA dataset: Word prediction requiring a broad discourse context. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1525–1534, 2016.

Qin, Z., Sun, W., Li, D., Shen, X., Sun, W., and Zhong, Y. Lightning Attention-2: A free lunch for handling unlimited sequence lengths in large language models. arXiv preprint arXiv:2401.04658, 2024.

Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., and Chen, X. Improved techniques for training GANs. Advances in Neural Information Processing Systems, 29, 2016.

Shah, J., Bikshandi, G., Zhang, Y., Thakkar, V., Ramani, P., and Dao, T. FlashAttention-3: Fast and accurate attention with asynchrony and low-precision. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.

Stability AI. Introducing Stable Diffusion 3.5. https://stability.ai/news/introducing-stable-diffusion-3-5, 2023.

Vaswani, A. Attention is all you need. Advances in Neural Information Processing Systems, 2017.

Venkataramanan, S., Ghodrati, A., Asano, Y. M., Porikli, F., and Habibian, A. Skip-Attention: Improving vision transformers by paying less attention. In The Twelfth International Conference on Learning Representations, 2024.
Wan, T., Wang, A., Ai, B., Wen, B., Mao, C., Xie, C.-W., Chen, D., Yu, F., Zhao, H., Yang, J., Zeng, J., Wang, J., Zhang, J., Zhou, J., Wang, J., Chen, J., Zhu, K., Zhao, K., Yan, K., Huang, L., Feng, M., Zhang, N., Li, P., Wu, P., Chu, R., Feng, R., Zhang, S., Sun, S., Fang, T., Wang, T., Gui, T., Weng, T., Shen, T., Lin, W., Wang, W., Wang, W., Zhou, W., Wang, W., Shen, W., Yu, W., Shi, X., Huang, X., Xu, X., Kou, Y., Lv, Y., Li, Y., Liu, Y., Wang, Y., Zhang, Y., Huang, Y., Li, Y., Wu, Y., Liu, Y., Pan, Y., Zheng, Y., Hong, Y., Shi, Y., Feng, Y., Jiang, Z., Han, Z., Wu, Z.-F., and Liu, Z. Wan: Open and advanced large-scale video generative models. arXiv preprint arXiv:2503.20314, 2025.

Wang, S., Li, B. Z., Khabsa, M., Fang, H., and Ma, H. Linformer: Self-attention with linear complexity. arXiv preprint arXiv:2006.04768, 2020.

Wu, H., Zhang, E., Liao, L., Chen, C., Hou, J., Wang, A., Sun, W., Yan, Q., and Lin, W. Exploring video quality assessment on user generated contents from aesthetic and technical perspectives. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 20144–20154, 2023.

Xi, H., Yang, S., Zhao, Y., Xu, C., Li, M., Li, X., Lin, Y., Cai, H., Zhang, J., Li, D., et al. Sparse VideoGen: Accelerating video diffusion transformers with spatial-temporal sparsity. arXiv preprint arXiv:2502.01776, 2025.

Xiao, C., Zhang, P., Han, X., Xiao, G., Lin, Y., Zhang, Z., Liu, Z., and Sun, M. InfLLM: Training-free long-context extrapolation for LLMs with an efficient context memory. In First Workshop on Long-Context Foundation Models @ ICML 2024, 2024a.

Xiao, G., Tian, Y., Chen,
B., Han, S., and Lewis, M. Efficient streaming language models with attention sinks. In The Twelfth International Conference on Learning Representations, 2024b.

Xu, J., Liu, X., Wu, Y., Tong, Y., Li, Q., Ding, M., Tang, J., and Dong, Y. ImageReward: Learning and evaluating human preferences for text-to-image generation. In Thirty-seventh Conference on Neural Information Processing Systems, 2023.

Yang, S., Kautz, J., and Hatamizadeh, A. Gated delta networks: Improving Mamba2 with the delta rule. arXiv preprint arXiv:2412.06464, 2024.

Yu, W., Luo, M., Zhou, P., Si, C., Zhou, Y., Wang, X., Feng, J., and Yan, S. MetaFormer is actually what you need for vision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10819–10829, 2022.

Zhang, J., Huang, H., Zhang, P., Wei, J., Zhu, J., and Chen, J. SageAttention2: Efficient attention with thorough outlier smoothing and per-thread INT4 quantization. In International Conference on Machine Learning (ICML), 2025a.

Zhang, J., Huang, H., Zhang, P., Wei, J., Zhu, J., and Chen, J. SageAttention2: Efficient attention with smoothing Q and per-thread quantization. 2025b.

Zhang, J., Wei, J., Zhang, P., Chen, J., and Zhu, J. SageAttention: Accurate 8-bit attention for plug-and-play inference acceleration. In The International Conference on Learning Representations, 2025c.

Zhang, J., Wei, J., Zhang, P., Xu, X., Huang, H., Wang, H., Jiang, K., Zhu, J., and Chen, J. SageAttention3: Microscaling FP4 attention for inference and an exploration of 8-bit training. arXiv preprint arXiv:2505.11594, 2025d.

Zhang, J., Xiang, C., Huang, H., Wei, J., Xi, H., Zhu, J., and Chen, J. SpargeAttn: Accurate sparse attention accelerating any model inference. In International Conference on Machine Learning (ICML), 2025e.

Zhang, J., Xiang, C., Huang, H., Wei, J., Xi, H., Zhu, J., and Chen, J.
SpargeAttn: Training-free sparse attention accelerating any model inference. 2025f.

Zhang, P., Wei, J., Zhang, J., Zhu, J., and Chen, J. Accurate INT8 training through dynamic block-level fallback. 2025g.

Zhao, T., Fang, T., Huang, H., Liu, E., Wan, R., Soedarmadji, W., Li, S., Lin, Z., Dai, G., Yan, S., Yang, H., et al. ViDiT-Q: Efficient and accurate quantization of diffusion transformers for image and video generation. In International Conference on Learning Representations, 2025.

Zheng, Z., Peng, X., Yang, T., Shen, C., Li, S., Liu, H., Zhou, Y., Li, T., and You, Y. Open-Sora: Democratizing efficient video production for all. arXiv preprint arXiv:2412.20404, 2024.

A. Appendix

A.1. Visible Comparison Examples

[Figure 7. Visible examples of using SageAttention2++ on image generation (Sage2++(8+8), Sage2++(4+8), and full precision on Flux and Stable-Diffusion-3.5).]

A.2. Datasets and Metrics in Experiments

Datasets. Text-to-text models are evaluated on: WikiText (Merity et al., 2022) to assess the model's prediction confidence, LAMBADA (Paperno et al., 2016) for contextual understanding, and the Needle-in-a-Haystack (NIAH) task (Kamradt, 2023). Text-to-video models are evaluated using the Open-Sora (Zheng et al., 2024) prompt sets. Text-to-image models are assessed on COCO annotations (Lin et al., 2014). End-to-end
metrics. For text-to-text models, we use perplexity (Ppl.) (Jelinek et al., 1977) for WikiText and accuracy (Acc.) for LAMBADA and NIAH. For text-to-video models, following Zhao et al. (2025), we evaluate the quality of generated videos on five metrics: CLIPSIM and CLIP-Temp (CLIP-T) (Liu et al., 2024) to measure the text–video alignment; VQA-a and VQA-t to assess the video aesthetic and technical quality, respectively; and Flow-score (FScore) (Wu et al., 2023) for temporal consistency. For text-to-image models, generated images are compared with the reference images in three aspects: FID (Heusel et al., 2017) and sFID (Salimans et al., 2016) for fidelity evaluation, CLIPScore (CLIP) (Hessel et al., 2021) for text–image alignment, and ImageReward (IR) (Xu et al., 2023) for human preference.

Accuracy metrics. We use three metrics to assess the accuracy of the quantized attention output $O'$ compared to the attention output in full precision $O$. First, we flatten $O'$ and $O$ into vectors of shape $1 \times n$. Then: Cosine similarity: $\mathrm{CosSim} = \sum O O' \big/ \sqrt{\sum O^2}\sqrt{\sum O'^2}$; Relative L1 distance: $L1 = \sum |O - O'| \big/ \sum |O|$; Root mean square error: $\mathrm{RMSE} = \sqrt{(1/n)\sum (O - O')^2}$.
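The three accuracy metrics can be written down directly; the following is a small numpy restatement of the formulas above, with a synthetic perturbed output standing in for a quantized attention output:

```python
import numpy as np

def attention_accuracy_metrics(O, O_prime):
    """Cosine similarity, relative L1 distance, and RMSE between a
    full-precision attention output O and a quantized output O'."""
    O = np.asarray(O, dtype=np.float64).ravel()        # flatten to 1 x n
    Op = np.asarray(O_prime, dtype=np.float64).ravel()
    cos_sim = np.sum(O * Op) / (np.sqrt(np.sum(O**2)) * np.sqrt(np.sum(Op**2)))
    l1 = np.sum(np.abs(O - Op)) / np.sum(np.abs(O))
    rmse = np.sqrt(np.mean((O - Op) ** 2))
    return cos_sim, l1, rmse

rng = np.random.default_rng(0)
O = rng.standard_normal((4, 128))                      # toy attention output
O_noisy = O + 1e-3 * rng.standard_normal((4, 128))     # stand-in for quantization error
cos_sim, l1, rmse = attention_accuracy_metrics(O, O_noisy)
print(cos_sim, l1, rmse)
```

A perfectly lossless kernel would give CosSim = 1 and L1 = RMSE = 0; the small perturbation above yields values very close to those ideals.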
arXiv:2505.21140v1 [cs.LG] 27 May 2025

HeteroBA: A Structure-Manipulating Backdoor Attack on Heterogeneous Graphs

Honglin Gao (honglin001@e.ntu.edu.sg), Xiang Li (xiang002@e.ntu.edu.sg), Lan Zhao (zhao0468@e.ntu.edu.sg), and Gaoxi Xiao (egxxiao@ntu.edu.sg)
School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore

Abstract

Heterogeneous graph neural networks (HGNNs) have recently drawn increasing attention for modeling complex multi-relational data in domains such as recommendation, finance, and social networks. While existing research has largely focused on enhancing HGNNs' predictive performance, their robustness and security, especially under backdoor attacks, remain underexplored. In this paper, we propose a novel Heterogeneous Backdoor Attack (HeteroBA) framework for node classification tasks on heterogeneous graphs. HeteroBA inserts carefully crafted trigger nodes with realistic features and targeted structural connections, leveraging attention-based and clustering-based strategies to select influential auxiliary nodes for effective trigger propagation, thereby causing the model to misclassify specific nodes into a target label while maintaining accuracy on clean data. Experimental results on three datasets and various HGNN architectures demonstrate that HeteroBA achieves high attack success rates with minimal impact on clean accuracy. Our method sheds light on potential vulnerabilities in HGNNs and calls for more robust defenses against backdoor threats in multi-relational graph scenarios.
Keywords: Heterogeneous Graph, Backdoor Attack, Heterogeneous Graph Neural Networks

1 Introduction

Graph data is prevalent in various applications, including social networks [10, 12], signal processing [36], biological networks [14], and knowledge graphs [28]. Unlike homogeneous graphs, heterogeneous graphs (HGs) contain multiple node and edge types, making them particularly effective for modeling complex real-world relationships. For instance, an academic graph comprises researchers, papers, and institutions connected through authorship, citation, and collaboration links.
Such structural flexibility allows heterogeneous graphs to serve as the backbone for various domains,
https://arxiv.org/abs/2505.21140v1
including recommendation systems and financial risk modeling [16, 19]. Heterogeneous Graph Neural Networks (HGNNs) extend GNNs to incorporate diverse relational information, making them well-suited for tasks like node classification [30, 31] and link prediction [15, 26]. In financial applications, HGNNs have been utilized for fraud detection and risk assessment [21, 24], while in recommender systems, they enhance personalized recommendations by capturing cross-domain interactions [4, 27]. Despite these advantages, research has primarily focused on improving HGNN performance, leaving their security vulnerabilities relatively underexplored. Recent studies highlight their susceptibility to adversarial threats [6, 32], among which backdoor attacks pose a particularly severe risk due to their stealthiness and potential impact on critical decision-making [22].

Backdoor attacks aim to manipulate a model's behavior under specific input conditions by intentionally altering its training data. While traditional backdoor attacks have been extensively studied in domains such as computer vision [9, 11] and natural language processing [2, 17], research on backdoor attacks in HGNNs remains scarce. Unlike homogeneous graphs, heterogeneous graphs capture richer structural and semantic information through diverse node types and relationships. HGNNs leverage these heterogeneous connections to learn comprehensive feature representations for downstream tasks. However, the complexity and diversity of heterogeneous graphs also introduce new vulnerabilities, providing attackers with opportunities to embed backdoors by carefully modifying the graph structure or feature representations in a targeted manner. Once a backdoor is successfully implanted, an attacker can exploit specific trigger patterns to mislead the model into producing incorrect outputs, potentially leading to severe security risks.
For example, in financial systems, an attacker can introduce a fake bank account under a customer's identity into the network. By creating hidden connections between the fraudulent account and transaction history, the attacker can manipulate the system's judgment, potentially evading detection.

Existing backdoor attack methods primarily focus on homogeneous graphs. Dai et al. argue that trigger insertion disrupts the message-passing process between nodes, thereby compromising the model's integrity [3]. To address this, they propose the Unnoticeable Graph Backdoor Attack (UGBA), which leverages a bi-level optimization framework to execute backdoor attacks under a limited attack budget while minimizing detectability. In contrast, Xing et al. introduce the Clean-Label Graph Backdoor Attack (CGBA) [25], which injects triggers into node feature representations without altering node labels or graph structure. By selecting triggers from existing node features with high similarity to their neighbors, CGBA enhances attack stealthiness and avoids structural modifications, making it more resistant to defense mechanisms. However, these backdoor attacks on node classification tasks overlook the diversity of edge relationships and node types. To address this limitation and exploit potential vulnerabilities in HGNNs, we propose a novel Heterogeneous Backdoor Attack (HeteroBA) method specifically designed for node classification in heterogeneous graphs.

Unlike existing node classification backdoor attack methods, HeteroBA represents a novel and targeted approach specifically engineered for heterogeneous graphs. By strategically injecting minimally invasive triggers, HeteroBA achieves superior attack performance while maintaining
|
https://arxiv.org/abs/2505.21140v1
|
exceptional invisibility, setting a new standard in graph-based adversarial techniques. Specifically, once the targeted nodes of the attack have been selected, new trigger nodes are introduced into the graph to carry out the attack. These trigger nodes are strategically connected to the targeted nodes and to some highly influential nodes, i.e., nodes that can significantly impact information propagation within the graph, forming subtle but effective perturbations. The features of the trigger nodes are constructed based on the statistical properties of nodes of the same type, ensuring consistency with the existing graph structure and enhancing stealthiness. Finally, after modifying the labels of the targeted nodes, the poisoned graph, embedded with these adversarial triggers and connections, ensures that when the backdoored model encounters similar trigger patterns during inference, it misclassifies the targeted nodes as belonging to a specific wrongful class intended by the attacker. Note that the proposed attack can lead to the misclassification of multiple targeted nodes into the same wrongful class (hereafter termed the designated target class).

The main contributions of this paper are as follows:

• We propose HeteroBA, the first dedicated backdoor attack on heterogeneous graphs for the node classification task, achieving high attack success rates across various models and datasets.
• HeteroBA effectively manipulates the graph structure to enhance the attack while maintaining efficiency in execution.
• Extensive experiments on multiple benchmark datasets validate the effectiveness of HeteroBA, outperforming baselines in multiple cases and demonstrating strong attack capability. We also propose a method to calculate the stealthiness score for node-injection-based backdoor attacks.

The remainder of this paper is structured as follows: Section 2 provides a brief review of related work. Section 3 introduces the necessary preliminaries and definitions.
Our proposed methodology is presented in detail in Section 4. To evaluate its effectiveness, we conduct extensive experiments and analyses on multiple benchmark datasets and models in Section 5. Finally, Section 6 concludes the paper. Our code has been open-sourced and is publicly available¹.

2 Related Work

2.1 Heterogeneous Graph Neural Networks

HGNNs have evolved significantly in recent years [5, 7, 37, 38], with various architectures designed to effectively capture heterogeneous relationships. Among the most representative models are the following. HAN [20] introduces meta-path-based attention to selectively aggregate information along predefined relational paths, providing interpretability in node representations. HGT [8] extends this approach by leveraging a transformer-based architecture to dynamically model heterogeneous interactions. Meanwhile, SimpleHGN [13] optimizes message passing by simplifying the heterogeneity modeling process, making it computationally efficient while maintaining strong performance.

While these methods significantly improve learning on heterogeneous graphs, their robustness under malicious manipulation has received limited attention. The unique characteristics of HGNNs, such as diverse node and edge types and advanced attention mechanisms, present both opportunities and challenges for potential attackers, making them an important area for further exploration.

2.2 Backdoor Attack on Homogeneous Graphs

Backdoor attacks on graph neural networks (GNNs) embed hidden triggers during training, allowing adversaries to control outputs under specific conditions. Existing attacks on homogeneous graphs are classified by their trigger injection strategies. Feature-based Backdoor
Attacks introduce malicious triggers by modifying node attributes while keeping the graph structure unchanged. NFTA (Node Feature Target Attack) [1] injects feature triggers without requiring knowledge of GNN parameters, disrupting the feature space and confusing model predictions. It also introduces an adaptive strategy to balance feature smoothness. Xing et al. [25] selected trigger nodes with high similarity to their neighbors, ensuring stealthiness without modifying labels or structure. However, both methods rely solely on feature manipulation, making them less effective when structural changes significantly impact message passing.

¹ https://anonymous.4open.science/r/HeteroBA-EEAF

Structure-based Backdoor Attacks manipulate the graph topology by adding or removing edges to implant triggers. Zhang et al. [34] introduced a subgraph-based trigger to mislead graph classification models while maintaining high attack success rates. Xi et al. [23] extended this concept by generating adaptive subgraph triggers that dynamically tailor the attack for different inputs. Dai et al. [3] employed a bi-level optimization strategy to modify graph structures under an attack budget, maximizing stealthiness while ensuring effectiveness. Although these methods demonstrate the feasibility of backdoor attacks on homogeneous graphs, their reliance on uniform graph structures and simple node relationships limits their applicability to the more complex heterogeneous graph setting.

2.3 Attacks on Heterogeneous Graph Neural Networks

Heterogeneous graphs have shown vulnerabilities under adversarial attacks, and several studies have explored this area. Zhang et al. [33] proposed RoHe, a robust HGNN framework that defends against adversarial attacks by pruning malicious neighbors using an attention purifier. Zhao et al.
[35] introduced HGAttack, the first grey-box evasion attack specifically targeting HGNNs, which leverages a semantic-aware mechanism and a novel surrogate model to generate perturbations. These works highlight the susceptibility of HGNNs to adversarial manipulations and the progress made in addressing these threats.

However, while adversarial attacks on heterogeneous graphs and backdoor attacks on homogeneous graphs have been explored, no prior work has investigated backdoor vulnerabilities in HGNNs. Our research addresses this gap by proposing a novel backdoor attack method specifically designed for heterogeneous graphs, leveraging their unique structural properties to embed triggers while maintaining high attack success rates and stealthiness.

3 Preliminaries and problem formulation

In this section, we introduce the preliminaries of backdoor attacks on heterogeneous graphs and define the problem. Table 1 summarizes the notation used throughout this section for clarity.

[Figure 1: Overall Backdoor Attack Process on a Heterogeneous Graph.]

3.1 Preliminaries

Definition 3.1 (Heterogeneous graph). A heterogeneous graph is defined as $G = \{\mathcal{V}, \mathcal{E}, X\}$, where $\mathcal{V} = \{v_1, v_2, \ldots, v_n\}$ is the node set and $X \in \mathbb{R}^{|\mathcal{V}| \times d}$ is a node feature matrix, with $d$ being the dimension of each node feature. The set $\mathcal{T} = \{t_1, t_2, \ldots, t_T\}$ represents $T$ different node types, where each node $v \in \mathcal{V}$ belongs to one specific type $t \in \mathcal{T}$. Nodes of each type $t$ form the subset $\mathcal{V}_t$, whose size is denoted $\|\mathcal{V}_t\|$. The set of edge types is denoted $\mathcal{R} = \{r_{t_a,t_b} \mid t_a, t_b \in \mathcal{T}, t_a \neq t_b\}$, where each edge type $r_{t_a,t_b}$ represents connections between nodes of type $t_a$ and nodes of type $t_b$. For each pair of node types $(t_a, t_b)$, we maintain an adjacency matrix $A^{t_a,t_b} \in \{0,1\}^{|\mathcal{V}_{t_a}| \times |\mathcal{V}_{t_b}|}$, where $A^{t_a,t_b}_{v_i,v_j} = 1$ indicates an edge between node $v_i \in \mathcal{V}_{t_a}$ and node $v_j \in \mathcal{V}_{t_b}$. We then define the edge set $\mathcal{E}$ as the union of all such edges, recorded as triples $(v_i, v_j, r_{t_a,t_b})$:

$$\mathcal{E} = \bigcup_{r_{t_a,t_b} \in \mathcal{R}} \left\{ (v_i, v_j, r_{t_a,t_b}) \;\middle|\; v_i \in \mathcal{V}_{t_a},\ v_j \in \mathcal{V}_{t_b},\ A^{t_a,t_b}_{v_i,v_j} = 1 \right\}$$

Hence, each adjacency matrix $A^{t_a,t_b}$ describes the connectivity between nodes of types $t_a$ and $t_b$, and each nonzero entry in $A^{t_a,t_b}$ corresponds to an edge in $\mathcal{E}$. A heterogeneous graph satisfies the condition $T + |\mathcal{R}| > 2$.

Definition 3.2 (Primary type, trigger type and auxiliary type). We define three key types. The primary type $t_p \in \mathcal{T}$ is the type of the nodes to be classified. The trigger type $t_{tr} \in \mathcal{T}$ denotes the type of the nodes added as backdoor triggers. The auxiliary types comprise the node types $t_a \in \mathcal{T}$ that can be reached from a primary-type node $v_{t_p} \in \mathcal{V}_{t_p}$ via a trigger-type node $v_{t_{tr}} \in \mathcal{V}_{t_{tr}}$ in exactly two hops. Formally,

$$\mathcal{T}_{aux} = \left\{ t_a \in \mathcal{T} \;\middle|\; \exists\, v_{t_p} \in \mathcal{V}_{t_p},\ v_{t_{tr}} \in \mathcal{V}_{t_{tr}},\ v_{t_a} \in \mathcal{V}_{t_a} : (v_{t_p}, v_{t_{tr}}, r_{t_p,t_{tr}}) \in \mathcal{E},\ (v_{t_{tr}}, v_{t_a}, r_{t_{tr},t_a}) \in \mathcal{E} \right\}$$

Definition 3.3 (Designated target class and non-target classes). Let $\mathcal{Y}$ denote the set of class labels in the classification task. The designated target class is $y_t \in \mathcal{Y}$, the label to which the attacker aims to misclassify certain nodes. The non-target classes are $\mathcal{Y}_{\neg t} = \mathcal{Y} \setminus \{y_t\}$. In a heterogeneous graph, classification is performed on nodes of the primary type $t_p$, whose node set is denoted $\mathcal{V}_{t_p}$. Based on their ground-truth labels, we define $\mathcal{V}_{y_t} = \{v \in \mathcal{V}_{t_p} \mid y_v = y_t\}$ and $\mathcal{V}_{\neg y_t} = \{v \in \mathcal{V}_{t_p} \mid y_v \neq y_t\}$. By definition, $\mathcal{V}_{y_t} \cup \mathcal{V}_{\neg y_t} = \mathcal{V}_{t_p}$ and $\mathcal{V}_{y_t} \cap \mathcal{V}_{\neg y_t} = \emptyset$.

3.2 Problem definition

Given a heterogeneous graph $G = (\mathcal{V}, \mathcal{E}, X)$ and a node classification model $f_\theta : \mathcal{V}_{t_p} \to \mathcal{Y}$ trained on the primary node type $\mathcal{V}_{t_p} \subseteq \mathcal{V}$, a backdoor attack alters $G$ to construct a poisoned graph $\tilde{G} = (\tilde{\mathcal{V}}, \tilde{\mathcal{E}}, \tilde{X})$, ensuring that, after training on $\tilde{G}$, the model misclassifies specific target nodes while preserving overall classification accuracy.
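Definitions 3.1 and 3.2 above can be made concrete with a small toy graph. In the sketch below, node names, types, and edges are invented for illustration (not from the paper); typed node sets plus one edge set per (type, type) pair encode the heterogeneous structure, and the helper derives the auxiliary types reachable from a primary node through a trigger-type node in exactly two hops (the primary type itself is excluded, a simplification of this sketch):

```python
# Toy heterogeneous graph following Definition 3.1: typed node sets plus
# one directed edge set per (type_a, type_b) pair.
node_types = {
    "paper":  ["p0", "p1"],   # primary type t_p (nodes to classify)
    "author": ["a0", "a1"],   # trigger type t_tr in this example
    "venue":  ["c0"],         # candidate auxiliary type
}
edges = {
    ("paper", "author"): {("p0", "a0"), ("p1", "a1")},
    ("author", "venue"): {("a0", "c0")},
}

def auxiliary_types(edges, t_p, t_tr):
    """Types reachable from a primary-type node via a trigger-type node
    in exactly two hops (Definition 3.2); excludes the primary type itself."""
    # Trigger-type nodes adjacent to at least one primary-type node.
    linked_triggers = {v_tr for (a, b), es in edges.items()
                       if (a, b) == (t_p, t_tr) for (_, v_tr) in es}
    aux = set()
    for (a, b), es in edges.items():
        if a == t_tr and b != t_p:
            # Second hop: trigger-type node -> node of candidate type b.
            if any(v_tr in linked_triggers for (v_tr, _) in es):
                aux.add(b)
    return aux

print(auxiliary_types(edges, "paper", "author"))  # {'venue'}
```

Here "venue" is auxiliary because p0 reaches c0 through the trigger-type node a0 in two hops, matching the formal set in Definition 3.2.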
To construct $\tilde{G}$, the attacker introduces a set of new trigger nodes $\mathcal{V}^{(\text{new})}_{t_{tr}}$ with feature matrix $X^{(\text{new})}$ and new edges $\mathcal{E}^{(\text{new})}$ connecting them to existing nodes, resulting in $\tilde{\mathcal{V}} = \mathcal{V} \cup \mathcal{V}^{(\text{new})}_{t_{tr}}$, $\tilde{\mathcal{E}} = \mathcal{E} \cup \mathcal{E}^{(\text{new})}$, and $\tilde{X} = [X; X^{(\text{new})}]$. Specifically, the attacker selects a subset of primary-type nodes $\mathcal{V}^{(p)} \subseteq \mathcal{V}_{t_p}$ as targeted nodes and aims to force their misclassification into a designated target class $y_t$ during inference, i.e., $f_\theta(\tilde{G}, v) = y_t,\ \forall v \in \mathcal{V}^{(p)}$. Meanwhile, for the remaining primary-type nodes $\mathcal{V}_{t_p} \setminus \mathcal{V}^{(p)}$, the model should retain its correct predictions, i.e., $f_\theta(\tilde{G}, v) = y_v,\ \forall v \in \mathcal{V}_{t_p} \setminus \mathcal{V}^{(p)}$.

Table 1: Notation and Definitions

| Symbol | Meaning |
|---|---|
| $G = (\mathcal{V}, \mathcal{E}, X)$ | Heterogeneous graph |
| $\mathcal{V}, \mathcal{E}, X$ | Nodes, edges, feature matrix |
| $\mathcal{T}, \mathcal{R}$ | Node/edge type sets |
| $t_p, t_{tr}$ | Primary/trigger node types |
| $\mathcal{T}_{\text{aux}}$ | Auxiliary node types |
| $A^{t_a,t_b} \in \{0,1\}^{|\mathcal{V}_{t_a}| \times |\mathcal{V}_{t_b}|}$ | Adjacency matrix between types $t_a, t_b$ |
| $\mathcal{Y}$ | Class label set |
| $y_t$ | Target class |
| $\mathcal{V}_{t_p}, \mathcal{V}^{(p)}$ | Primary-type nodes, poisoned subset |
| $\mathcal{V}_{y_t}, \mathcal{V}_{\neg y_t}$ | Primary-type nodes w/ or w/o label $y_t$ |
| $\mathcal{V}^{(\text{new})}_{t_{tr}}$ | Newly added trigger nodes |
| $X^{(\text{new})}, \mathcal{E}^{(\text{new})}$ | New trigger-node features, edges |
| $\tilde{G} = (\tilde{\mathcal{V}}, \tilde{\mathcal{E}}, \tilde{X})$ | Poisoned graph |
| $f_\theta$ | Classification model |
| $\mathcal{F}(G)$ | Allowed modifications |
| $\mathbb{1}(\cdot)$ | Indicator function |

Formally, the attack is formulated as an optimization problem:
$$\tilde{G}^* = \arg\max_{\tilde{G} \in \mathcal{F}(G)} \sum_{v \in \mathcal{V}^{(p)}} \mathbb{1}\!\left( f_\theta(\tilde{G}, v) = y_t \right) + \sum_{v \in \mathcal{V}_{t_p} \setminus \mathcal{V}^{(p)}} \mathbb{1}\!\left( f_\theta(\tilde{G}, v) = y_v \right) \qquad (1)$$

Here, $\mathbb{1}(\cdot)$ is an indicator function that returns 1 if the condition is satisfied and 0 otherwise, and $\mathcal{F}(G)$ denotes the space of permissible modifications to $G$, which may include adding or modifying nodes, edges, or node features.

4 Methodology

In this section, we introduce the details of HeteroBA, which aims to solve Eq. (1) to conduct backdoor attacks on heterogeneous graphs. Since directly optimizing the features and connections of the inserted trigger node $v^{(\text{new})}_{t_{tr}}$ to ensure both attack effectiveness and stealthiness is computationally expensive and challenging, HeteroBA decomposes the attack process into two key components, addressing two core challenges: (i) how to generate the features of the inserted trigger node $v^{(\text{new})}_{t_{tr}}$ so as to enhance stealthiness and make it less detectable; and (ii) how to construct the connections of the inserted trigger node $v^{(\text{new})}_{t_{tr}}$ to maximize attack effectiveness while maintaining structural consistency with the original graph. To tackle these challenges, HeteroBA consists of two main modules. The Feature Generator produces trigger-node features by learning the distribution of existing $\mathcal{V}_{t_{tr}}$ nodes; this ensures that the injected trigger nodes blend seamlessly into the overall feature space, improving stealthiness. The Edge Generator determines how the inserted trigger node $v^{(\text{new})}_{t_{tr}}$ connects to existing nodes. Specifically, $v^{(\text{new})}_{t_{tr}}$ is linked to primary-type nodes $v_{t_p} \in \mathcal{V}_{t_p}$ and to auxiliary-type nodes $v_{t_a}$ with $t_a \in \mathcal{T}_{\text{aux}}$. To establish these connections, HeteroBA introduces two strategies: a clustering-based strategy and an attention-based strategy. By integrating these two modules, HeteroBA generates a new poisoned graph $\tilde{G} = (\tilde{\mathcal{V}}, \tilde{\mathcal{E}}, \tilde{X})$ that ensures both high attack effectiveness and strong stealthiness. Fig. 1 provides a conceptual illustration of how HeteroBA generates trigger nodes and establishes connections. The pseudocode of this process is provided in Appendix D, and the corresponding time complexity analysis is detailed in Appendix A.

4.1 Feature generator

In HeteroBA, the Feature Generator produces feature embeddings for the inserted trigger nodes $v^{(\text{new})}_{t_{tr}}$, ensuring that they remain indistinguishable from existing nodes in the feature space. To achieve this, we first extract the set of non-target-class nodes $\mathcal{V}_{\neg y_t}$ and identify their neighbors connected via edges of type $r_{t_p,t_{tr}}$.
We thus obtain the subset of trigger-type nodes $\mathcal{V}_{t_{tr}}$ that are linked to $\mathcal{V}_{\neg y_t}$, denoted $\mathcal{V}'_{t_{tr}}$, which serves as the basis for feature generation under two strategies: one for continuous features and one for binary features.

4.1.1 Continuous Feature Generation. For each feature dimension $j \in \{1, 2, \ldots, d\}$ in $X'_{t_{tr}}$, we employ Kernel Density Estimation (KDE) [18] to approximate the underlying probability distribution:
$$\hat{f}(x) = \frac{1}{mh} \sum_{k=1}^{m} K\!\left( \frac{x - x_k}{h} \right), \qquad (2)$$
where $K(\cdot)$ is the kernel function (commonly Gaussian), $h$ is the bandwidth, and $\{x_k\}_{k=1}^{m}$ are the feature values of dimension $j$ for all nodes in $\mathcal{V}'_{t_{tr}}$, with $m = |\mathcal{V}'_{t_{tr}}|$ denoting the number of nodes in this subset. Let $\hat{f}_j$ denote the fitted KDE for the $j$-th feature dimension. We then sample the feature values for the newly inserted trigger nodes $\mathcal{V}^{(\text{new})}_{t_{tr}}$ from this estimated distribution:
$$X^{(\text{new})}(i, j) \sim \hat{f}_j, \qquad \forall i = 1, \ldots, |\mathcal{V}^{(\text{new})}_{t_{tr}}| \ \text{and}\ j = 1, \ldots, d. \qquad (3)$$
This ensures that each dimension of the newly generated features follows the same statistical profile as the existing trigger-type nodes in $\mathcal{V}'_{t_{tr}}$. Consequently, the resulting feature matrix $X^{(\text{new})}$ for the inserted nodes aligns with the original distribution, thereby enhancing the stealth of the injected triggers.

4.1.2 Binary Feature Generation. When node features are binary (e.g., indicating a categorical attribute or the presence/absence of a property), directly applying KDE is not feasible. Instead, we compute the empirical probability of each feature being 1 from the extracted set $\mathcal{V}'_{t_{tr}}$. Specifically, for each binary feature dimension $j \in \{1, 2, \ldots, d\}$, let
$$\hat{p}_j = \frac{1}{m} \sum_{k=1}^{m} X'_{t_{tr}}(k, j), \qquad (4)$$
where $X'_{t_{tr}}(k, j)$ is the $j$-th feature of the $k$-th node in $\mathcal{V}'_{t_{tr}}$ and $m = |\mathcal{V}'_{t_{tr}}|$. We then generate the binary features for the newly inserted trigger nodes $\mathcal{V}^{(\text{new})}_{t_{tr}}$ by sampling each dimension $j$ from a Bernoulli distribution:
$$X^{(\text{new})}(i, j) \sim \text{Bernoulli}(\hat{p}_j), \qquad \forall i = 1, \ldots, |\mathcal{V}^{(\text{new})}_{t_{tr}}| \ \text{and}\ j = 1, \ldots, d. \qquad (5)$$
This process ensures that the newly generated binary features maintain the same empirical probabilities as the existing trigger-type nodes, preserving consistency with the original data distribution. By combining the continuous and binary feature generation strategies, the Feature Generator can produce feature embeddings for trigger nodes that blend seamlessly into the heterogeneous graph, minimizing the risk of detection.

4.2 Edge Generator

To determine which existing nodes should connect to the newly inserted trigger nodes, we first use edge type $r_{t_p,t_{tr}}$ to gather the first-hop trigger-type neighbors $\mathcal{V}^{(1)}_{t_{tr}} \subseteq \mathcal{V}_{t_{tr}}$ of $\mathcal{V}_{y_t}$, then collect the second-hop auxiliary-type neighbors $\mathcal{V}^{(2)}_{\text{aux}} \subseteq \mathcal{V}_{\text{aux}}$ of $\mathcal{V}^{(1)}_{t_{tr}}$ via edges $r_{t_{tr},t_b}$ for $t_b \in \mathcal{T}_{\text{aux}}$. These second-hop neighbors serve as the candidate nodes to be connected to the trigger nodes. For each auxiliary type $t_b \in \mathcal{T}_{\text{aux}}$, the number of auxiliary-type nodes connected to each trigger node is denoted $d_{t_b}$, corresponding to the average degree of trigger-type nodes on edges of type $r_{t_{tr},t_b}$. We next feed the heterogeneous graph $G$ into a surrogate model $\mathcal{M}$ to retrieve attention weights $\alpha(u, v)$ and node embeddings $\mathbf{z}_v$. Building on these outputs, the attention-based and clustering-based strategies use attention values and embeddings, respectively, to select the top $d_{t_b}$ most influential nodes in each auxiliary type $t_b \in \mathcal{T}_{\text{aux}}$, which are then connected to the newly inserted trigger nodes.
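The two modules can be sketched compactly. The code below is a minimal, illustrative reading (not HeteroBA's released implementation) assuming NumPy/SciPy: feature generation follows Eqs. (2)-(5), and the two edge-scoring rules follow the attention products and average cosine similarities that the two strategies develop. Function names, the dict representation of attention weights, and the constant-column fallback are our own assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def generate_trigger_features(X_trig, n_new, binary=False, seed=0):
    """Feature Generator (Sec. 4.1): sample features for n_new injected trigger
    nodes from the per-dimension distribution of existing trigger-type nodes
    X_trig (shape m x d). Continuous: one KDE per dimension (Eqs. 2-3);
    binary: empirical Bernoulli probabilities (Eqs. 4-5)."""
    rng = np.random.default_rng(seed)
    _, d = X_trig.shape
    if binary:
        p_hat = X_trig.mean(axis=0)                            # Eq. (4)
        return (rng.random((n_new, d)) < p_hat).astype(float)  # Eq. (5)
    X_new = np.empty((n_new, d))
    for j in range(d):
        col = X_trig[:, j]
        if np.std(col) < 1e-12:          # degenerate dimension: keep the constant
            X_new[:, j] = col[0]
        else:
            kde = gaussian_kde(col)      # Eq. (2), Gaussian kernel by default
            X_new[:, j] = kde.resample(n_new, seed=int(rng.integers(2**31)))[0]
    return X_new

def attention_influence(alpha, trig_neigh_of_target, aux_neigh_of_trig):
    """Attention-based scoring (Sec. 4.2.1): for each target node v_yt and each
    first-hop trigger neighbour v1, accumulate alpha[(v2, v1)] * alpha[(v1, v_yt)]
    onto every second-hop auxiliary node v2."""
    scores = {}
    for v_yt, trig_set in trig_neigh_of_target.items():
        for v1 in trig_set:
            for v2 in aux_neigh_of_trig.get(v1, ()):
                scores[v2] = scores.get(v2, 0.0) + \
                    alpha.get((v2, v1), 0.0) * alpha.get((v1, v_yt), 0.0)
    return scores

def clustering_influence(Z):
    """Clustering-based scoring (Sec. 4.2.2): average similarity of each
    embedding to all others, diagonal zeroed. We row-normalize Z so that
    Z @ Z.T is a true cosine-similarity matrix (assumes >= 2 rows)."""
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    S = Zn @ Zn.T
    np.fill_diagonal(S, 0.0)
    return S.sum(axis=1) / (len(S) - 1)

def top_k(scores, k):
    """Pick the k highest-scoring candidate nodes to wire to a trigger node."""
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

In use, one would draw trigger-node features with `generate_trigger_features`, score the second-hop auxiliary candidates with either influence function, and connect each new trigger node to the `top_k` candidates per auxiliary type.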
In HeteroBA, we select SimpleHGN [13] as $\mathcal{M}$ for three primary reasons: it directly yields both node embeddings and attention scores, it has comparatively fewer parameters than other heterogeneous models, and it requires no explicit metapath definition, thereby mitigating potential biases from metapath design.

4.2.1 Attention-based strategy. We leverage the learned attention coefficients $\alpha(u, v)$ from the surrogate model $\mathcal{M}$ to quantify the importance of each node in $\mathcal{V}^{(2)}_{\text{aux}}$ in influencing $\mathcal{V}_{y_t}$. Specifically, for each target node $v_{y_t} \in \mathcal{V}_{y_t}$, we consider its first-hop trigger-type neighbors in $\mathcal{V}^{(1)}_{t_{tr}}$ via edges of type $r_{t_p,t_{tr}}$, and subsequently aggregate contributions from their second-hop neighbors in $\mathcal{V}^{(2)}_{\text{aux}}$ through edges $r_{t_{tr},t_b}$.

For an auxiliary-type node $v^{(2)}_{\text{aux}} \in \mathcal{V}^{(2)}_{\text{aux}}$, its influence on $v_{y_t}$ is determined by the product of the first-layer attention value $\alpha(v^{(2)}_{\text{aux}}, v^{(1)}_{t_{tr}})$ and the second-layer attention value $\alpha(v^{(1)}_{t_{tr}}, v_{y_t})$:
$$I\!\left(v^{(2)}_{\text{aux}}, v_{y_t}\right) = \sum_{v^{(1)}_{t_{tr}} \in \mathcal{V}^{(1)}_{t_{tr}}(v_{y_t})} \alpha\!\left(v^{(2)}_{\text{aux}}, v^{(1)}_{t_{tr}}\right) \cdot \alpha\!\left(v^{(1)}_{t_{tr}}, v_{y_t}\right), \qquad (6)$$
where $\mathcal{V}^{(1)}_{t_{tr}}(v_{y_t})$ is the set of first-hop trigger-type neighbors of $v_{y_t}$. Summing over all $v_{y_t} \in \mathcal{V}_{y_t}$, we compute the total importance score of each auxiliary-type node:
$$I\!\left(v^{(2)}_{\text{aux}}\right) = \sum_{v_{y_t} \in \mathcal{V}_{y_t}} I\!\left(v^{(2)}_{\text{aux}}, v_{y_t}\right). \qquad (7)$$
Nodes in $\mathcal{V}^{(2)}_{\text{aux}}$ are ranked by $I(v^{(2)}_{\text{aux}})$ in descending order, and the top-ranked nodes in each auxiliary type are selected to connect with the newly inserted trigger nodes. This ensures that adversarial information is efficiently propagated toward $\mathcal{V}_{y_t}$, enhancing backdoor attack effectiveness while maintaining the structural consistency of the heterogeneous graph.

4.2.2 Clustering-based strategy. In the other strategy, we leverage the node embeddings obtained from the surrogate model $\mathcal{M}$ to identify structurally and semantically cohesive nodes within $\mathcal{V}^{(2)}_{\text{aux}}$. Specifically, we extract the embedding representations of all second-hop auxiliary-type neighbors $\mathcal{V}^{(2)}_{\text{aux}}$ and employ a clustering-based selection strategy to determine which nodes should connect to the newly inserted trigger nodes.

Given the embedding matrix $Z \in \mathbb{R}^{|\mathcal{V}^{(2)}_{\text{aux}}| \times d}$, where each row $\mathbf{z}_i$ is the $d$-dimensional embedding of node $v^{(2)}_{\text{aux},i} \in \mathcal{V}^{(2)}_{\text{aux}}$, we first compute the pairwise cosine similarity matrix
$$S = Z \cdot Z^{T}, \qquad (8)$$
where the entry $S_{ij}$ represents the cosine similarity between nodes $v^{(2)}_{\text{aux},i}$ and $v^{(2)}_{\text{aux},j}$. To ensure that self-similarity does not dominate the selection process, we set the diagonal elements of $S$ to zero:
$$S_{ii} = 0, \qquad \forall i \in \{1, 2, \ldots, |\mathcal{V}^{(2)}_{\text{aux}}|\}. \qquad (9)$$
For each node $v^{(2)}_{\text{aux},i}$, we compute its average similarity score, defined as
$$I\!\left(v^{(2)}_{\text{aux},i}\right) = \frac{1}{|\mathcal{V}^{(2)}_{\text{aux}}| - 1} \sum_{j \neq i} S_{ij}. \qquad (10)$$
Nodes in $\mathcal{V}^{(2)}_{\text{aux}}$ with higher $I(v^{(2)}_{\text{aux},i})$ scores exhibit stronger embedding similarity to the other nodes of the same type, indicating their centrality within structurally cohesive regions. We rank all nodes in descending order of $I(v^{(2)}_{\text{aux},i})$ and select the top-ranked nodes within each auxiliary type to connect with the newly inserted trigger nodes.

By enforcing connections with the most clustered and semantically aligned nodes, this strategy ensures that the inserted trigger nodes integrate seamlessly into the graph structure, improving stealthiness while maintaining attack effectiveness.

5 Experiments

In this section, we evaluate our proposed method on multiple benchmark datasets to investigate the following research questions:
RQ1: How effective is the attack?
RQ2: How stealthy is it?
RQ3: Do the two edge-generation strategies (attention-based and clustering-based) indeed improve attack performance?
RQ4: What is the relationship between the poison rate and attack effectiveness?
RQ5: Why do these particular auxiliary-type nodes effectively facilitate backdoor infiltration in a heterogeneous graph, and what crucial role do they play in the attack's success?

5.1 Experimental Settings

5.1.1 Datasets. We evaluate our method on three real-world heterogeneous datasets: DBLP, ACM, and IMDB [5]. DBLP consists of four entity types (authors, papers, terms, conferences), with authors categorized into three research areas (database, data mining, artificial intelligence). ACM includes papers from KDD, SIGMOD, SIGCOMM, MobiCOMM, and VLDB, categorized into three fields (database, wireless communication, data mining). IMDB contains movies, keywords, actors, and directors, with movies classified into action, comedy, and drama. The statistics of these datasets are shown in Table 2.

Table 2: Dataset Statistics

| Dataset | #Node Types | #Edge Types | #Nodes | #Edges | Primary Type |
|---|---|---|---|---|---|
| ACM | 3 | 4 | 11252 | 34864 | paper |
| IMDB | 3 | 4 | 11616 | 34212 | movie |
| DBLP | 4 | 6 | 26198 | 242142 | author |

5.1.2 Training settings. We conduct experiments using HAN [20], HGT [8], and SimpleHGN [13] as victim models, ensuring a fair comparison of backdoor attack performance under the same training and evaluation conditions. The dataset is divided into training, testing, and validation sets. The training set comprises 70% of the primary-type nodes $\mathcal{V}_{t_p}$, including both clean and poisoned nodes. Specifically, the poisoned training set (Poison Trainset) accounts for 5% of $\mathcal{V}_{t_p}$, serving to inject the trigger nodes that facilitate backdoor activation. The testing set constitutes 20% of $\mathcal{V}_{t_p}$, within which the poisoned testing set (Poison Testset) also accounts for 5%, allowing us to evaluate the attack's effectiveness during inference. The remaining 10% is allocated to the validation set, used for hyperparameter tuning and early stopping. The training parameters are provided in Appendix B.

5.1.3 Compared Methods. Since our work is the first to explore backdoor attacks on heterogeneous graphs, we adapt existing backdoor attack methods originally designed for homogeneous graphs, namely UGBA [3] and CGBA [25], to be compatible with heterogeneous graphs, and use the adapted versions as our baselines.

For UGBA, to ensure its applicability to heterogeneous graphs, we first convert the heterogeneous graph into a homogeneous graph. Following UGBA's bi-level optimization strategy, we generate and inject adversarial graph structures. After completing the attack in the homogeneous setting, we convert the graph back into its heterogeneous form. During this process, the nodes and edges newly inserted by UGBA are assigned random node and edge types to conform to the heterogeneous graph schema.

For CGBA, as it does not involve structural modifications, we directly adapt its feature perturbation strategy to heterogeneous graphs by applying it to the nodes in $\mathcal{V}^{(p)}$, ensuring its effectiveness in this setting.

5.1.4 Evaluation Metrics. The Attack Success Rate (ASR) [3] measures the probability that the backdoored model $f_b$ misclassifies a sample embedded with a trigger $g_t$ into the target class $y_t$. Formally, ASR is defined as
$$ASR = \frac{\sum_{i=1}^{n} \mathbb{1}\!\left( f_b(v_i) = y_t \right)}{n}, \qquad (11)$$
where $n$ denotes the number of poisoned test samples and $\mathbb{1}(\cdot)$ is the indicator function. A higher ASR indicates a more effective backdoor attack.

The Clean Accuracy Drop (CAD) [25] quantifies the degradation in classification accuracy of the backdoored model $f_b$ on clean samples compared to the clean model $f_c$.
It is defined as
$$CAD = Acc_{f_c}(\text{Clean}) - Acc_{f_b}(\text{Clean}), \qquad (12)$$
where $Acc_{f_c}(\text{Clean})$ and $Acc_{f_b}(\text{Clean})$ denote the prediction accuracies of the clean model $f_c$ and the backdoored model $f_b$ on clean samples, respectively. A lower CAD indicates that the backdoor attack preserves the original model's performance on clean data.

We introduce a method to calculate a Stealthiness Score for node-injection-based methods in heterogeneous graphs, providing a quantitative measure of the concealment of injected nodes. This score assesses the similarity between injected trigger nodes and clean nodes in both feature and structural aspects. Given the original graph $G$ and the poisoned graph $\tilde{G}$, we compute the feature similarity $\text{Sim}_{\text{feat}}$ and the structural similarity $\text{Sim}_{\text{struct}}$, then combine them to obtain the final score.

Feature similarity measures how closely the injected nodes' feature distribution matches that of clean nodes. Let $\mathcal{V}^{(\text{new})}_{t_{tr}}$ be the set of newly injected trigger nodes and $\mathcal{V}^{(\text{clean})}_{t_{tr}}$ the clean nodes of the same type. For each feature dimension $i$, we compute the Wasserstein distance $WD_i$ between the feature distributions of these two sets and define the average Wasserstein distance as
$$WD = \frac{1}{d} \sum_{i=1}^{d} WD_i. \qquad (13)$$
Feature similarity is then given by
$$\text{Sim}_{\text{feat}} = \frac{1}{1 + WD}. \qquad (14)$$
Structural similarity evaluates the degree consistency between injected and clean nodes. Let $\bar{d}_{trg}$ and $\bar{d}_{clean}$ be the average degrees of injected and clean nodes, respectively. The degree difference is defined as
$$\Delta d = \left| \bar{d}_{trg} - \bar{d}_{clean} \right|, \qquad (15)$$
and the structural similarity is computed as
$$\text{Sim}_{\text{struct}} = \frac{1}{1 + \Delta d}. \qquad (16)$$
The final stealthiness score is a weighted sum of both components:
$$\text{Stealthiness}(G, \tilde{G}) = w_1 \cdot \text{Sim}_{\text{feat}} + w_2 \cdot \text{Sim}_{\text{struct}}, \qquad (17)$$
where $w_1$ and $w_2$ are weighting factors (0.5 each by default). A higher score indicates that the injected nodes blend more naturally into the graph, enhancing attack stealthiness. Unlike previous works, which discuss stealthiness only qualitatively, our proposed Stealthiness Score provides a quantitative measure, enabling a more precise evaluation of attack concealment.

Table 3: Attack effectiveness on three datasets (ACM, DBLP, IMDB). The five sub-columns under ASR and CAD are HeteroBA-C / HeteroBA-A / HeteroBA-R / CGBA / UGBA.

| Dataset | Victim Model | Class | Trigger | ASR (C / A / R / CGBA / UGBA) | CAD (C / A / R / CGBA / UGBA) |
|---|---|---|---|---|---|
| ACM | HAN | 0 | author | 0.9983 / 0.3748 / 0.3416 / 0.9420 / 0.0664 | -0.0033 / -0.0028 / -0.0558† / 0.0160 / 0.0099 |
| ACM | HAN | 1 | author | 1.0000 / 0.7463 / 0.8275 / 0.5970 / 0.9783 | -0.0005 / 0.0480 / -0.0602† / 0.1109 / 0.0149 |
| ACM | HAN | 2 | author | 1.0000 / 0.7861 / 0.5224 / 0.8126 / 0.0498 | -0.0132 / -0.0193 / -0.0480† / -0.0375 / 0.0149 |
| ACM | HGT | 0 | author | 1.0000 / 1.0000 / 0.9751 / 0.9436 / 0.9867 | -0.0033 / -0.0044 / 0.0226 / -0.0022 / -0.0038 |
| ACM | HGT | 1 | author | 0.9569 / 0.9469 / 0.9851† / 0.8905 / 0.9851 | -0.0061 / -0.0028 / -0.0017 / 0.0122 / -0.0061 |
| ACM | HGT | 2 | author | 1.0000 / 1.0000 / 0.7977 / 0.9005 / 0.9469 | 0.0050 / -0.0027 / 0.0138 / 0.0083 / 0.0006 |
| ACM | SimpleHGN | 0 | author | 0.9967 / 1.0000 / 0.9536 / 0.9602 / 1.0000 | -0.0027 / 0.0000 / 0.0099 / -0.0022 / -0.0254 |
| ACM | SimpleHGN | 1 | author | 0.9950 / 1.0000 / 0.9967 / 0.9303 / 1.0000 | 0.0028 / 0.0033 / -0.0011 / 0.0000 / -0.0750 |
| ACM | SimpleHGN | 2 | author | 1.0000 / 1.0000 / 0.6965 / 0.9038 / 1.0000 | 0.0011 / 0.0011 / 0.0111 / 0.0033 / -0.0695 |
| DBLP | HAN | 0 | paper | 0.7849 / 0.7783 / 0.2167 / 0.8993 / 0.0673 | 0.0110 / 0.0104 / 0.0005† / 0.0038 / 0.0094 |
| DBLP | HAN | 1 | paper | 0.6043 / 0.6716 / 0.1576 / 0.9422 / 0.0541 | 0.0214 / 0.0411 / 0.0033 / 0.0022 / 0.0027 |
| DBLP | HAN | 2 | paper | 0.6749 / 0.5534 / 0.1855 / 0.9142 / 0.0312 | 0.0143 / 0.0011 / 0.0104 / 0.0088 / -0.0115 |
| DBLP | HGT | 0 | paper | 0.9343 / 0.9130 / 0.9950† / 0.9175 / 0.0788 | 0.0137 / 0.0165 / 0.0148 / -0.0027 / 0.0088 |
| DBLP | HGT | 1 | paper | 0.7980 / 0.9967 / 0.7537 / 0.9505 / 0.1117 | 0.0099 / 0.0137 / 0.0115 / 0.0016 / 0.0110 |
| DBLP | HGT | 2 | paper | 0.8588 / 0.8867 / 0.6650 / 0.9439 / 0.1790 | 0.0137 / 0.0131 / 0.0181 / 0.0077 / 0.0192 |
| DBLP | SimpleHGN | 0 | paper | 1.0000 / 1.0000 / 0.9984 / 0.9224 / 0.0263 | 0.0016 / 0.0033 / 0.0082 / 0.0077 / 0.0044 |
| DBLP | SimpleHGN | 1 | paper | 0.9754 / 1.0000 / 0.6897 / 0.9538 / 0.1347 | 0.0055 / 0.0033 / 0.0027† / -0.0016 / 0.0066 |
| DBLP | SimpleHGN | 2 | paper | 0.7307 / 0.9294 / 0.7093 / 0.9422 / 0.1084 | -0.0060 / 0.0071 / 0.0077 / 0.0011 / -0.0044 |
| IMDB | HAN | 0 | director | 0.9953 / 0.8006 / 0.6791 / 0.5618 / 0.2087 | 0.0307 / 0.0089 / 0.0265 / 0.0037 / 0.0364 |
| IMDB | HAN | 1 | director | 0.9984 / 0.8473 / 0.8458 / 0.4523 / 0.2991 | -0.0031 / -0.0192 / -0.0094 / -0.0119 / 0.0037 |
| IMDB | HAN | 2 | director | 1.0000 / 0.9174 / 0.9003 / 0.4992 / 0.3582 | 0.0068 / -0.0234 / -0.0068 / 0.0010 / 0.0067 |
| IMDB | HGT | 0 | director | 0.8473 / 0.9237 / 0.7975 / 0.4851 / 0.5109 | 0.0036 / 0.0062 / 0.0021 / -0.0104 / 0.0291 |
| IMDB | HGT | 1 | director | 0.9299 / 0.9283 / 0.8878 / 0.4147 / 0.7757 | 0.0182 / 0.0234 / -0.0146† / 0.0130 / 0.0026 |
| IMDB | HGT | 2 | director | 0.8894 / 0.9377 / 0.8193 / 0.4523 / 0.6807 | 0.0026 / -0.0026 / -0.0099† / -0.0015 / 0.0182 |
| IMDB | SimpleHGN | 0 | director | 0.9533 / 0.9813 / 0.7679 / 0.3881 / 0.8443 | -0.0047 / 0.0021 / 0.0015 / -0.0244 / 0.0005 |
| IMDB | SimpleHGN | 1 | director | 0.9502 / 0.9564 / 0.9486 / 0.3850 / 0.9595 | 0.0047 / 0.0109 / 0.0052 / -0.0130 / 0.0291 |
| IMDB | SimpleHGN | 2 | director | 0.9720 / 0.9642 / 0.8255 / 0.3474 / 0.9330 | -0.0052 / 0.0099 / -0.0166† / 0.0156 / 0.0078 |

5.2 Experiment results

5.2.1 Attack effectiveness. To answer RQ1, we evaluate HeteroBA's effectiveness against UGBA and CGBA on three heterogeneous graph datasets. Each experiment was repeated three times, with averaged results reported. Table 3 shows the key results, while Table 5 in the appendix provides additional data. Bold in the original table highlights the best ASR values among HeteroBA-C, HeteroBA-A, UGBA, and CGBA.

HeteroBA consistently achieves high Attack Success Rates (ASR), outperforming UGBA and CGBA across datasets and models. For example, on ACM, HeteroBA-A achieves an ASR of 1.0000 on HAN and HGT, while UGBA reaches only 0.0664 on HAN (class 0). On DBLP, HeteroBA-C achieves an ASR of 1.0000 on SimpleHGN (class 0), surpassing UGBA's 0.0263. On IMDB, HeteroBA-A achieves an ASR of 0.9813 on SimpleHGN (class 0), outperforming CGBA (0.3881) and UGBA (0.8443). These results highlight HeteroBA's superior attack effectiveness.

Despite its high ASR, HeteroBA introduces minimal Clean Accuracy Drop (CAD), often close to zero or negative, indicating little impact on clean data. Although UGBA achieves slightly lower CAD in some cases (e.g., DBLP, class 1, SimpleHGN), the difference is negligible. Overall, HeteroBA maintains clean data performance while achieving superior attack effectiveness.

5.2.2 Stealthiness analysis. For RQ2, we compare the Stealthiness Score of HeteroBA and UGBA across different datasets and trigger settings, as shown in Table 4. CGBA is not included in this comparison since it only perturbs node features without modifying the graph structure, making the Stealthiness Score inapplicable to it. The results show that HeteroBA consistently outperforms UGBA in stealthiness, indicating that its injected trigger nodes are less detectable. For instance, on ACM with author as the trigger, HeteroBA achieves significantly higher scores across all classes (up to 0.8603), while UGBA remains below 0.2251. On DBLP, where paper is used as the trigger, HeteroBA maintains an advantage, achieving scores around 0.6201 compared to UGBA's 0.4841. Similarly, on IMDB, when using director as the trigger, HeteroBA achieves a stealthiness score of up to 0.6865, outperforming UGBA's 0.6531. These results confirm that HeteroBA effectively integrates trigger nodes into the original graph through its sampled feature and edge generation strategies, improving attack stealthiness.

Table 4: Stealthiness Score of HeteroBA and UGBA.
| Dataset | Class | Trigger | HeteroBA | UGBA |
|---|---|---|---|---|
| ACM | 0 | author | 0.7715 | 0.2109 |
| ACM | 1 | author | 0.8603 | 0.2251 |
| ACM | 2 | author | 0.7226 | 0.1899 |
| ACM | 0 | field | 0.3937 | 0.0342 |
| ACM | 1 | field | 0.3815 | 0.0342 |
| ACM | 2 | field | 0.3539 | 0.0294 |
| DBLP | 0 | paper | 0.6161 | 0.5063 |
| DBLP | 1 | paper | 0.6201 | 0.4841 |
| DBLP | 2 | paper | 0.6179 | 0.4963 |
| IMDB | 0 | director | 0.6567 | 0.6492 |
| IMDB | 1 | director | 0.6711 | 0.6531 |
| IMDB | 2 | director | 0.6865 | 0.6297 |
| IMDB | 0 | actor | 0.7663 | 0.6304 |
| IMDB | 1 | actor | 0.7302 | 0.6252 |
| IMDB | 2 | actor | 0.7874 | 0.6090 |

5.2.3 Ablation study. To answer RQ3, we conducted an ablation study by replacing the edge-generation strategies in HeteroBA with a random connection strategy, denoted HeteroBA-R, to evaluate the impact of the clustering-based strategy (HeteroBA-C) and the attention-based strategy (HeteroBA-A). Where HeteroBA-R outperforms the other methods, we mark it with †. The results in Table 3 show that HeteroBA-R consistently exhibits a significant drop in Attack Success Rate (ASR) across the datasets. For example, on the ACM dataset with author as the trigger, HeteroBA-C achieves an ASR of 1.0000 on HGT (class 2), while HeteroBA-R reaches only 0.7977. Similar trends are observed on DBLP and IMDB, indicating that structured edge-generation strategies help improve ASR.

Figure 2: Attention distribution and embedding clustering. Panels: (a) ACM Paper Attn. Dist., (b) DBLP Term Attn. Dist., (c) IMDB Director Attn. Dist., (d) ACM Paper Clustering, (e) DBLP Term Clustering, (f) IMDB Movie Clustering.

In terms of Clean Accuracy Drop (CAD), HeteroBA-R does not show a significant advantage. Although its CAD is slightly lower in some cases (e.g., on ACM, HeteroBA-R's CAD on HAN (class 1) is -0.0602, lower than HeteroBA-C's -0.0005), the difference is minimal. This suggests that HeteroBA-C and HeteroBA-A can enhance ASR without significantly affecting clean data performance. In summary, the clustering-based and attention-based edge-generation strategies are crucial for improving attack effectiveness. Random edge selection reduces ASR and provides no significant benefit in maintaining clean data accuracy.

Figure 3: Comparison of attack success rates for (a) HeteroBA-A and (b) HeteroBA-C under different poison rates.

5.2.4 Impact of Poison Rate on Attack Effectiveness. In this experiment, we explore how the poison rate influences the attack success rate (ASR) across three representative scenarios to answer RQ4, as depicted in the left (HeteroBA-A) and right (HeteroBA-C) panels of Fig. 3. When the poison rate is only 0.01, the ASR is relatively low, but it increases markedly as the proportion of poisoned samples in the training set grows. This trend mirrors the phenomenon reported in [29], where a complex system undergoes a sudden collapse at a specific critical point. Once the poison rate exceeds that threshold, the backdoor attack success rate tends to spike sharply, indicating heightened model vulnerability to the backdoor trigger within that range.

Notably, in some cases even a very small poison rate (e.g., 0.01) can still yield a considerable level of attack effectiveness. This suggests that when the backdoor trigger is well aligned with the model architecture or the underlying data distribution, only a modest number of poisoned samples is required for the model to learn the backdoor features and achieve a high ASR. These findings underscore the insidious nature of backdoor attacks and highlight the pressing need for robust defensive strategies.

5.2.5 Data visualization. In Fig. 2, the "Attention Distribution" panels show that the victim model's attention scores are heavily concentrated on a few influential auxiliary-type nodes within the second-hop neighborhood, following a long-tailed pattern. These nodes capture most of the attention weight, while the majority of nodes contribute minimally.
In the "Embedding Cluster" panels, the highlighted nodes (those top-ranked by average cosine similarity) are positioned near dense cluster centers, reflecting strong semantic or structural similarity with neighboring nodes. These patterns reveal that a small set of auxiliary-type nodes plays a key role in propagating adversarial triggers. Their high attention scores and central cluster positions make them effective conduits for influencing target node representations. This enables more successful backdoor attacks, as the triggers align with the graph's inherent structure. These findings address RQ5 by showing that these key nodes can efficiently transmit adversarial signals to target nodes.

6 Conclusion and future work

In this paper, we introduced HeteroBA, the first backdoor attack framework tailored for HGNNs in node classification. By injecting trigger nodes and forming targeted connections with both primary and auxiliary neighbors through two distinct strategies, HeteroBA misleads the model into predicting a designated target class while maintaining clean data performance. Extensive experiments on benchmark datasets and various HGNN architectures show that HeteroBA achieves high attack success rates with strong stealthiness. Future work will explore extending the attack to other tasks, such as recommendation systems and graph classification, and developing effective defense strategies against backdoor attacks on heterogeneous graphs.

References

[1] Yang Chen, Zhonglin Ye, Haixing Zhao, and Ying Wang. 2023. Feature-Based Graph Backdoor Attack in the Node Classification Task. International Journal of Intelligent Systems 2023, 1 (2023), 5418398.
[2] Pengzhou Cheng, Zongru Wu, Wei Du, Haodong Zhao, Wei Lu, and Gongshen Liu. 2023. Backdoor attacks and countermeasures in natural language processing models: A comprehensive security review. arXiv preprint arXiv:2309.06055 (2023).
[3] Enyan Dai, Minhua Lin, Xiang Zhang, and Suhang Wang. 2023. Unnoticeable backdoor attacks on graph neural networks. In Proceedings of the ACM Web Conference 2023. ACM, New York, NY, 2263–2273.
[4] Saman Forouzandeh, Mehrdad Rostami, Kamal Berahmand, and Razieh Sheikhpour. 2024. Health-aware food recommendation system with dual attention in heterogeneous graphs. Computers in Biology and Medicine 169 (2024), 107882.
[5] Xinyu Fu, Jiani Zhang, Ziqiao Meng, and Irwin King. 2020. MAGNN: Metapath aggregated graph neural network for heterogeneous graph embedding. In Proceedings of the Web Conference 2020. ACM, New York, NY, 2331–2341.
[6] Honglin Gao and Gaoxi Xiao. 2024. Top k enhanced reinforcement learning attacks on heterogeneous graph node classification. arXiv preprint arXiv:2408.01964 (2024).
[7] Jun Hu, Bryan Hooi, and Bingsheng He. 2024. Efficient heterogeneous graph learning via random projection. IEEE Transactions on Knowledge and Data Engineering (2024).
[8] Ziniu Hu, Yuxiao Dong, Kuansan Wang, and Yizhou Sun. 2020. Heterogeneous graph transformer. In Proceedings of the Web Conference 2020. ACM, New York, NY, 2704–2710.
[9] Yiming Li. 2023. Poisoning-based backdoor attacks in computer vision. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 37. AAAI Press, Menlo Park, CA, 16121–16122.
[10] Wenfei Liang, Yanan Zhao, Rui She, Yiming Li, and Wee Peng Tay. 2024. FedSheafHN: Personalized Federated Learning on Graph-structured Data. arXiv preprint arXiv:2405.16056 (2024).
[11] Yunfei Liu, Xingjun Ma, James Bailey, and Feng Lu.
2020. Reflection backdoor: A natural backdoor attack on deep neural networks. In Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part X. Springer, 182–199.
[12] Yujia Liu, Kang Zeng, Haiyang Wang, Xin Song, and Bin Zhou. 2021. Content matters: A GNN-based model combined with text semantics for social network cascade prediction. In Pacific-Asia Conference on Knowledge Discovery and Data Mining. Springer, 728–740.
[13] Qingsong Lv, Ming Ding, Qiang Liu, Yuxiang Chen, Wenzheng Feng, Siming He, Chang Zhou, Jianguo Jiang, Yuxiao Dong, and Jie Tang. 2021. Are we really making much progress? Revisiting, benchmarking and refining heterogeneous graph neural networks. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. ACM, New York, NY, 1150–1160.
[14] Giulia Muzio, Leslie O'Bray, and Karsten Borgwardt. 2021. Biological network analysis with deep learning. Briefings in Bioinformatics 22, 2 (2021), 1515–1530.
[15] Trung-Kien Nguyen, Zemin Liu, and Yuan Fang. 2023. Link prediction on latent heterogeneous graphs. In Proceedings of the ACM Web Conference 2023. ACM, New York, NY, 263–273.
[16] Amirreza Salamat, Xiao Luo, and Ali Jafari. 2021. HeteroGraphRec: A heterogeneous graph-based neural network for social recommendations. Knowledge-Based Systems 217 (2021), 106817.
[17] Xuan Sheng, Zhaoyang Han, Piji Li, and Xiangmao Chang. 2022. A survey on backdoor attack and defense in natural language processing. In 2022 IEEE 22nd International Conference on Software Quality, Reliability and Security (QRS). IEEE, 809–820.
[18] George R Terrell and David W Scott. 1992. Variable kernel density estimation. The Annals of Statistics (1992), 1236–1265.
[19] Jianfei Wang, Cuiqing Jiang, Lina Zhou, and Zhao Wang. 2024. Representing and discovering heterogeneous interactions for financial risk assessment of SMEs. Expert Systems with Applications 247 (2024), 123330.
[20] Xiao Wang, Houye Ji, Chuan Shi, Bai Wang, Yanfang Ye, Peng Cui, and Philip S Yu. 2019. Heterogeneous graph attention network. In The World Wide Web Conference. ACM, New York, NY, 2022–2032.
[21] Bin Wu, Kuo-Ming Chao, and Yinsheng Li. 2024. Heterogeneous graph neural networks for fraud detection and explanation in supply chain finance. Information Systems 121 (2024), 102335.
[22] Zhaohan Xi, Ren Pang, Shouling Ji, and Ting Wang. 2021. Graph backdoor. In 30th USENIX Security Symposium (USENIX Security 21). 1523–1540.
[23] Zhaohan Xi, Ren Pang, Shouling Ji, and Ting Wang. 2021. Graph backdoor. In 30th USENIX Security Symposium (USENIX Security 21). 1523–1540.
[24] Sheng Xiang, Dawei Cheng, Chencheng Shang, Ying Zhang, and Yuqi Liang. 2022. Temporal and heterogeneous graph neural network for financial time series prediction. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management. ACM, New York, NY, 3584–3593.
[25] Xiaogang Xing, Ming Xu, Yujing Bai, and Dongdong Yang. 2024. A clean-label graph backdoor attack method in node classification task. Knowledge-Based Systems 304 (2024), 112433.
[26] Siyong Xu, Cheng Yang, Chuan Shi, Yuan Fang, Yuxin Guo, Tianchi Yang, Luhao Zhang, and Maodi Hu. 2021. Topic-aware heterogeneous graph neural network for link prediction. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management. ACM, New York, NY, 2261–2270.
[27] Bo Yan, Yang Cao, Haoyu Wang, Wenchuan Yang, Junping Du, and Chuan Shi. 2024.
Federated heterogeneous graph neural network for privacy-preserving recommendation. In Proceedings of the ACM on Web Conference 2024 . ACM, New York, NY, 3919–3929. [28] Zi Ye, Yogan Jaya Kumar, Goh Ong Sing, Fengyan Song, and Junsong Wang. 2022. A comprehensive survey of graph neural networks for knowledge graphs. IEEE Access 10 (2022), 75729–75741. [29] Yi Yu, Gaoxi Xiao, Jie Zhou, Yubo Wang, Zhen Wang, Jürgen Kurths, and Hans Joachim Schellnhuber. 2016. System crash as dynamics of complex networks. Proceedings of the National Academy of Sciences 113, 42 (2016), 11726–11731. [30] Xi Zeng, Fang-Yuan Lei, Chang-Dong Wang, and Qing-Yun Dai. 2024. Multi-view Heterogeneous Graph Neural Networks for Node Classification. Data Science and Engineering 9, 3 (2024), 294–308. [31] Chuxu Zhang, Dongjin Song, Chao Huang, Ananthram Swami, and Nitesh V Chawla. 2019. Heterogeneous graph neural network. In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining . ACM, New York, NY, 793–803. [32] Mengmei Zhang, Xiao Wang, Meiqi Zhu, Chuan Shi, Zhiqiang Zhang, and Jun Zhou. 2022. Robust heterogeneous graph neural networks against adversarial attacks. In Proceedings of the AAAI Conference on Artificial Intelligence . Vol. 36. AAAI Press, Menlo Park, CA, 4363–4370. [33] Mengmei Zhang, Xiao Wang, Meiqi Zhu, Chuan Shi, Zhiqiang Zhang, and Jun Zhou. 2022. Robust heterogeneous graph neural
|
https://arxiv.org/abs/2505.21140v1
|
networks against adversarial attacks. In Proceedings of the AAAI Conference on Artificial Intelligence . Vol. 36. AAAI Press, Menlo Park, CA, 4363–4370. [34] Zaixi Zhang, Jinyuan Jia, Binghui Wang, and Neil Zhenqiang Gong. 2021. Back- door attacks to graph neural networks. In Proceedings of the 26th ACM Symposium on Access Control Models and Technologies . ACM, New York, NY, 15–26. [35] He Zhao, Zhiwei Zeng, Yongwei Wang, Deheng Ye, and Chunyan Miao. 2024. HGAttack: Transferable Heterogeneous Graph Adversarial Attack. arXiv preprint arXiv:2401.09945 (2024). [36] Yanan Zhao, Xingchao Jian, Feng Ji, Wee Peng Tay, and Antonio Ortega. 2024. Generalized Graph Signal Reconstruction via the Uncertainty Principle. arXiv preprint arXiv:2409.04229 (2024). [37] Shichao Zhu, Chuan Zhou, Shirui Pan, Xingquan Zhu, and Bin Wang. 2019. Relation structure-aware heterogeneous graph neural network. In 2019 IEEE International Conference on Data Mining (ICDM) . IEEE, 1534–1539. [38] Zhihua Zhu, Xinxin Fan, Xiaokai Chu, and Jingping Bi. 2020. HGCN: A heteroge- neous graph convolutional network-based deep learning model toward collective classification. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining . ACM, New York, NY, 1161–1171. Conference acronym ’XX, June 03–05, 2018, Woodstock, NY Trovato et al. Table 5: Remaining attack results. 
Method order within each ASR and CAD block: HeteroBA-C, HeteroBA-A, HeteroBA-R, CGBA, UGBA. Daggers (†) mark values flagged in the original table.

Dataset  Model      Class  Trigger | ASR                                          | CAD
ACM      HAN        0      field   | 0.3167  0.7612  0.9021  0.9420  0.0697       | -0.0072  0.0508  -0.0419†  0.0160  0.0122
ACM      HAN        1      field   | 0.6667  0.6716  0.7960† 0.5970  0.1178       |  0.0066  0.0513  -0.0171†  0.1109  0.0187
ACM      HAN        2      field   | 0.0829  1.0000  0.9967  0.8126  0.0398       |  0.0006  0.1424  -0.0215  -0.0375  0.0000
ACM      HGT        0      field   | 0.7164  0.4859  0.6020  0.9436  0.9867       | -0.0105 -0.0105   0.0016  -0.0022 -0.0038
ACM      HGT        1      field   | 0.7861  0.8060  0.6733  0.8905  0.9851       |  0.0055 -0.0011  -0.0028   0.0122 -0.0061
ACM      HGT        2      field   | 0.6020  0.7131  0.8275  0.9005  0.9469       |  0.0039  0.0033   0.0055   0.0083  0.0006
ACM      SimpleHGN  0      field   | 0.9718  1.0000  0.9983  0.9602  1.0000       |  0.0094  0.0160  -0.0022  -0.0022 -0.0660
ACM      SimpleHGN  1      field   | 0.9735  1.0000  0.9884  0.9303  1.0000       | -0.0061 -0.0022   0.0033   0.0000 -0.0182
ACM      SimpleHGN  2      field   | 0.9685  1.0000  1.0000  0.9038  1.0000       |  0.0033  0.0066   0.0066   0.0033 -0.0701
IMDB     HAN        0      actor   | 0.2866  0.6854  0.5358  0.5618  0.2087       |  0.0208  0.0016   0.0099   0.0037  0.0364
IMDB     HAN        1      actor   | 0.9751  0.7212  0.5545  0.4523  0.2991       | -0.0187 -0.0187  -0.0041  -0.0119  0.0047
IMDB     HAN        2      actor   | 0.8178  0.8910  0.5997  0.4992  0.3832       | -0.0031 -0.0073   0.0088   0.0010  0.0114
IMDB     HGT        0      actor   | 0.6511  0.8006  0.6230  0.4851  0.5109       |  0.0140  0.0026   0.0052  -0.0104  0.0291
IMDB     HGT        1      actor   | 0.8022  0.8240  0.7227  0.4147  0.7757       |  0.0145  0.0244  -0.0078†  0.0130  0.0026
IMDB     HGT        2      actor   | 0.7290  0.8286  0.6075  0.4523  0.6807       |  0.0109  0.0145   0.0036  -0.0015  0.0182
IMDB     SimpleHGN  0      actor   | 0.8660  0.9829  0.7492  0.3881  0.8427       | -0.0005 -0.0047   0.0052  -0.0244 -0.0016
IMDB     SimpleHGN  1      actor   | 0.9346  0.8614  0.9455† 0.3850  0.9611       |  0.0015 -0.0062   0.0146  -0.0130  0.0322
IMDB     SimpleHGN  2      actor   | 0.8988  0.9408  0.8084  0.3474  0.9330       |  0.0047 -0.0057  -0.0031   0.0156  0.0047

A Time complexity analysis
The time complexity of the algorithm is primarily determined by the graph size, the number of target nodes for trigger insertion, and the selection strategy of auxiliary nodes. Let n = |V| denote the total number of nodes in the graph, and m = |E| the total number of edges. Define V_{t_p} as the set of nodes of type t_p, with size n_{t_p}. The set of target nodes for trigger insertion is denoted V^(p) ⊆ V_{t_p}, with size p = |V^(p)|.
Additionally, let V_aux = ∪_{t_b ∈ T_aux} V_{t_b} be the set of auxiliary-type nodes, with size n_aux. The algorithm first filters out nodes of the target type that do not match
a specific label and identifies relevant nodes in their neighborhood. A single-hop neighbor search typically has a complexity of at most O(m), depending on the adjacency structure. If multi-hop neighborhoods are considered, this step can be viewed as a breadth-first search (BFS) on the relevant subgraph, with an upper bound of O(n + m). Following this, the average degree information for different types of nodes is computed, which involves scanning specific sets of nodes or edges. Without index optimization, this step also has a complexity of O(n + m). During the trigger insertion process, each target node is assigned a new trigger node, and its feature values are sampled accordingly. If the features are discrete, a simple Bernoulli sampling method is applied, with complexity O(1). For continuous features using kernel density estimation (KDE), the complexity depends on whether the KDE model is pre-trained; if a pre-trained model is available, the complexity per sample is O(1) or O(log n). The newly inserted trigger node must be connected to the target node and to auxiliary nodes. To ensure proper connectivity, auxiliary nodes are selected using an attention-based or clustering-based sorting mechanism. If sorting is applied, the complexity can be as high as O(n_aux log n_aux). If clustering is used, the complexity depends on the specific clustering algorithm, but in most cases it remains close to O(n_aux log n_aux). Since this process is executed for each target node, the cumulative complexity becomes O(p · n_aux log n_aux). After selecting the auxiliary node set, adding the corresponding edges incurs linear complexity in the number of selected nodes.
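To illustrate the per-trigger feature-sampling step, here is a minimal Python sketch. The function names and the resample-plus-Gaussian-noise KDE sampler are our own illustration of the O(1)-per-sample behavior described above, not the paper's implementation:

```python
import random

def sample_binary_features(feature_probs, rng=None):
    """Bernoulli sampling for discrete trigger features: O(1) per dimension."""
    rng = rng or random.Random(0)
    return [1 if rng.random() < p else 0 for p in feature_probs]

def sample_continuous_feature(observed_values, bandwidth=0.1, rng=None):
    """KDE-style sampling: pick one observed value uniformly, then add
    Gaussian noise with the kernel bandwidth. With the values in memory
    this is O(1) per sample, matching the pre-trained-KDE case."""
    rng = rng or random.Random(0)
    center = rng.choice(observed_values)
    return rng.gauss(center, bandwidth)
```

Drawing from a stored sample plus kernel noise is the standard way to sample from a fitted KDE, which is why no per-sample density evaluation (and hence no extra cost) is needed.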
If the maximum number of connections per node is bounded by D, this step has a complexity of O(D).

Algorithm 1 HeteroBA Overall Algorithm
1:  Input: G = (V, E, X), t_p, t_tr, T_aux, y_t, V^(p) ⊆ V_{t_p}
2:  Output: G~ = (V~, E~, X~)
3:  Identify V_{¬y_t} ← {v ∈ V_{t_p} | y_v ≠ y_t}
4:  V'_{t_tr} ← Neigh(V_{¬y_t}, r_{t_p, t_tr})
5:  Compute average degrees {d_{t_b}} for edges (t_tr, t_b), ∀ t_b ∈ T_aux
6:  E^(new) ← ∅, X^(new) ← ∅
7:  for each v ∈ V^(p) do
8:      Insert a new trigger node u
9:      if feature is continuous: sample x_u^(new) via KDE
10:     else (binary): sample x_u^(new) via Bernoulli
11:     E^(new) ← E^(new) ∪ {(u, v), (v, u)}
12:     Let W ← {top-d_{t_b} in each V_aux (via attention/clustering)}
13:     E^(new) ← E^(new) ∪ {(u, w), (w, u) | w ∈ W}
14:     X^(new) ← X^(new) ∪ {x_u^(new)}
15: end for
16: V~ ← V ∪ {all new u}
17: E~ ← E ∪ E^(new)
18: X~ ← [X; X^(new)]  (stack the new feature rows under X)
19: return G~ = (V~, E~, X~)

Once all trigger nodes have been constructed, they must be merged into the original graph, including updating the node set, edge set, and feature matrix. The number of newly added nodes is p, and the number of new edges is typically at most O(pD), leading to a complexity of O(p) or O(pD). Overall, the total time complexity of the algorithm is given by: O(n + m) + O(p · n_aux log n_aux).

B Other training parameters
The training parameters are shown in Table 6.

C Other attack results
The remaining attack results are presented in Table 5.

D Pseudocode of HeteroBA
The pseudocode of HeteroBA is in Algorithm 1.

HeteroBA: A Structure-Manipulating Backdoor Attack on Heterogeneous Graphs

Table 6: Training Parameters and Model-Specific Hyperparameters

Key Training Parameters
  Loss function:      Cross Entropy
  Optimizer:          AdamW
  Epochs:             400
  Learning rate:      1e-3
  Scheduler:          OneCycleLR
  Dropout:            0.2
  Weight decay:       1e-4
  Gradient clipping:  1.0

Model-Specific Hyperparameters
  HGT:        Hidden Units: 64; Layers: 8; Heads: 4
  HAN:        Hidden Units: 64; Heads: 4
  SimpleHGN:  Hidden Units: 64; Heads: 8; Layers: 4
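The insertion loop of Algorithm 1 can be sketched in Python as follows. All identifiers and the data layout are illustrative assumptions; the attention/clustering ranking of auxiliary nodes is abstracted into a precomputed `aux_scores` map, and feature sampling is passed in as a callable:

```python
import heapq

def heteroba_insert_triggers(nodes, edges, features, target_nodes,
                             aux_scores, d_aux, feature_sampler):
    """Sketch of HeteroBA's insertion loop (Algorithm 1, lines 6-19):
    for each attacked target node, create one trigger node, sample its
    features, link it to the target, and link it to the top-d auxiliary
    nodes ranked by a precomputed attention/clustering score."""
    new_edges, new_features = [], {}
    next_id = max(nodes) + 1
    for v in target_nodes:
        u = next_id                      # fresh trigger-node id
        next_id += 1
        nodes.add(u)
        new_features[u] = feature_sampler()
        new_edges += [(u, v), (v, u)]    # bidirectional trigger-target link
        # top-d auxiliary nodes: the O(n_aux log n_aux) selection step
        top_aux = heapq.nlargest(d_aux, aux_scores, key=aux_scores.get)
        for w in top_aux:
            new_edges += [(u, w), (w, u)]
    edges.extend(new_edges)
    features.update(new_features)
    return nodes, edges, features
```

Using `heapq.nlargest` instead of a full sort keeps the selection cost at O(n_aux log d_aux) in practice, which is within the O(n_aux log n_aux) bound stated above.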
GGBond: Growing Graph-Based AI-Agent Society for Socially-Aware Recommender Simulation
Hailin Zhong1, Hanlin Wang1, Yujun Ye1, Meiyi Zhang1, Shengxin Zhu2,1
1 Faculty of Science and Technology, Beijing Normal-Hong Kong Baptist University, Zhuhai, China
2 Research Centers for Mathematics, Advanced Institute of Natural Sciences, Beijing Normal University, Zhuhai, China
{r130026215, r130026135, r130026180, r130026169}@mail.uic.edu.cn, shengxinzhu@uic.edu.cn

Abstract—Current personalized recommender systems predominantly rely on static offline data for algorithm design and evaluation, significantly limiting their ability to capture long-term user preference evolution and social influence dynamics in real-world scenarios. To address this fundamental challenge, we propose a high-fidelity social simulation platform integrating human-like cognitive agents and dynamic social interactions to realistically simulate user behavior evolution under recommendation interventions. Specifically, the system comprises a population of Sim-User Agents, each equipped with a five-layer cognitive architecture that encapsulates key psychological mechanisms, including episodic memory, affective state transitions, adaptive preference learning, and dynamic trust-risk assessments. In particular, we innovatively introduce the Intimacy–Curiosity–Reciprocity–Risk (ICR²) motivational engine grounded in psychological and sociological theories, enabling more realistic user decision-making processes. Furthermore, we construct a multilayer heterogeneous social graph (GGBond Graph) supporting dynamic relational evolution, effectively modeling users' evolving social ties and trust dynamics based on interest similarity, personality alignment, and structural homophily.
During system operation, agents autonomously respond to recommendations generated by typical recommender algorithms (e.g., Matrix Factorization, MultVAE, LightGCN), deciding whether to consume, rate, and share content while dynamically updating their internal states and social connections, thereby forming a stable, multi-round feedback loop. This innovative design transcends the limitations of traditional static datasets, providing a controlled, observable environment for evaluating long-term recommender effects.

Index Terms—AI-Agent, Simulation system, Recommender system, Social network.

I. INTRODUCTION
In recent years, personalized recommender systems have become widely deployed across digital platforms to alleviate information overload and deliver precise content matching. While such systems have significantly improved user experience and platform performance, mainstream research and evaluation practices remain heavily dependent on static historical logs. These static datasets provide only a snapshot of user behaviors, failing to reflect how user preferences evolve over long-term interactions or how social influences shape recommendation acceptance. Prior work has shown that ignoring such dynamics can lead to short-sighted optimization, filter bubbles, and even polarization effects in recommender outputs. Thus, enabling recommendation algorithms to understand, adapt to, and be evaluated under long-term behavioral dynamics and social interactivity remains a fundamental yet unresolved challenge in the field. To address these limitations, several simulation-based evaluation frameworks have been proposed (e.g., RecSim), aiming to test recommendation algorithms beyond static data. However, most of these simulators model users as reward-driven policy executors, lacking the ability to emulate realistic human decision-making processes that involve memory, emotion, evolving preferences, and social cognition.
Meanwhile, the majority of social recommendation models still rely on static adjacency graphs, which overlook key sociological features such as tie-strength drift, trust evolution, multi-context social layers, and indirect influence propagation [39]. These simplifications critically hinder our ability to study recommender systems in real-world interactive and socially complex envi-
https://arxiv.org/abs/2505.21154v1
ronments. To bridge these gaps, we argue for the construction of a high-fidelity simulation environment that jointly models cognitive user behavior, heterogeneous social structure, and causal feedback loops. In such an environment, each user should demonstrate rich internal decision-making behavior—affected by memory, affective state, and evolving interests—while simultaneously engaging in social interactions guided by intimacy, trust, and reciprocity. These interactions, in turn, should influence content diffusion and preference updates over time. A simulation platform of this nature would not only provide an expressive substitute for limited real-world datasets but also enable rigorous testing of recommendation algorithms under complex dynamics and long-term feedback.
In this work, we propose a novel high-fidelity simulation platform, Growth Graph-based Recommendation System via AI-Agent Social Bonds (GGBond), for recommender systems that models both user cognition and social interaction. The platform consists of two core components: (1) a population of simulated user agents with psychologically grounded cognitive architectures, and (2) a dynamically evolving multi-layer social graph (GGBond Graph) that supports heterogeneous semantics and multi-dimensional social relations. Each simulated agent is endowed with a five-layer architecture that includes memory, emotion, preferences, trust assessment, and natural language generation, enabling fine-grained modeling of internal decision processes.
Figure 1. GGBond System Architecture: Recommender Engine, Database, Social network, Agent.
The decision of whether to consume and share content is governed by a psychologically-inspired motivational engine (ICR²), which integrates intimacy (I), novelty (N), reciprocity potential (R), and risk perception (K) into a unified decision score.
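A minimal Python sketch of such an agent's state and its ICR² drive is given below. All attribute names and the mixing weights are illustrative placeholders, not identifiers from the paper:

```python
class SimUserAgent:
    """Illustrative skeleton of a Sim-User Agent's five-layer state."""
    def __init__(self, agent_id, personality):
        self.agent_id = agent_id
        self.personality = personality          # Big-Five vector in [0, 1]^5
        self.memory = []                        # episodic memory of past events
        self.valence, self.arousal = 0.0, 0.0   # affective state
        self.preferences = {}                   # evolving content preferences
        self.intimacy = {}                      # per-peer intimacy I
        self.trust = {}                         # per-peer trust estimate

    def motivation(self, I, N, R, K,
                   alpha=0.3, beta=0.3, gamma=0.2, delta=0.2):
        """ICR² drive: combine intimacy, novelty, reciprocity potential,
        and risk perception into one score (weights are illustrative)."""
        return alpha * I + beta * N + gamma * R - delta * K
```

The point of the sketch is the separation of state: individual cognition (memory, affect, preferences) and social cognition (intimacy, trust) are distinct stores that jointly feed the scalar motivation score.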
On the structural side, we enrich an anonymized real-world topology (e.g., Stanford Facebook graph) with inferred personality vectors using a structure-driven inference model, and construct layered graphs capturing interest similarity, personality alignment, and structural homophily. The system is executed under a discrete-time simulation engine. In each time step, a recommendation model (e.g., Matrix Factorization, MultVAE, or LightGCN) generates candidate content based on the current state of users and the social graph. Simulated agents autonomously decide whether to watch, rate, and share the content. These behaviors trigger updates to the agent's internal state and modify the social graph accordingly. All behavioral signals and structural dynamics are recorded, enabling multifaceted evaluation of recommender algorithms on metrics such as preference drift, diversity, and social influence. Additionally, these records serve as empirical validation of the simulation's behavioral realism. Compared to existing approaches, our proposed system makes the following structural contributions:
• We introduce a simulation paradigm that integrates human-like cognition and social evolution, enabling long-term recommender evaluation beyond static logs;
• We design a psychologically grounded agent architecture incorporating memory, affect, preference, and a novel ICR² motivational engine to simulate realistic user decisions;
• We construct a heterogeneous, multi-context, and dynamically updating social graph that reflects evolving intimacy and trust relationships; and
• We provide an extensible evaluation platform that supports the integration of representative recommendation algorithms and enables systematic analysis of their performance and social impact in realistic interactive settings.

II. GGBOND SYSTEM ARCHITECTURE
To simulate user behavior in a socially contextualized recommendation environment with high
fidelity, we design a virtual society framework composed of a population of Sim-User Agents and a multi-layer heterogeneous social graph called the GGBond Social Network. The system architecture is presented in Figure 1. This system is built not only to reproduce realistic cognitive and behavioral patterns of individuals under social influence, but also to serve as a controlled, repeatable testbed for evaluating the performance of mainstream recommendation algorithms in complex user scenarios. During initialization, we construct an anonymized social graph based on the Stanford Facebook dataset and enrich each node with Big-Five personality traits through a structure-driven inference framework. The resulting hybrid profile integrates structural features and psychological traits. On top of this, we define a multi-layer social network consisting of interest-based, personality-based, and structural homophily layers, all of which are fused into a unified layer for downstream reasoning. This network supports layered semantic modeling and serves as the basis for trust estimation, social propagation, and motivational reasoning. Each agent operates through a five-Module cognitive architecture. The language reasoning core (Module 0) generates natural language outputs for commentary and interaction; the individual cognition Module (Module 1) models memory, affective state, and evolving content preferences; the social cognition Module (Module 2) maintains multi-dimensional intimacy scores and trust estimations toward other agents; the motivational engine (Module 3) computes the internal drive C based on intimacy (I), novelty (N), reciprocity potential (R), and risk perception (K); and the behavior output Module (Module 4) executes watch, rate, and share actions, writing feedback back into upper Modules, forming a complete perception–decision–action loop. The system runs on a discrete time-step simulation engine.
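The discrete time-step loop can be sketched as follows. The `DemoAgent` class and all function names are illustrative stand-ins for the Sim-User Agent interface, not the paper's API:

```python
import random

class DemoAgent:
    """Illustrative stand-in for a Sim-User Agent."""
    def __init__(self, agent_id, threshold=0.5):
        self.agent_id = agent_id
        self.threshold = threshold   # motivation threshold for watching
        self.history = []            # consumed items (write-back target)

    def evaluate(self, item, rng):
        return rng.random()          # placeholder for the ICR² drive

    def update_state(self, item):
        self.history.append(item)    # feedback written back into agent state

def run_simulation(agents, recommender, steps, seed=0):
    """Discrete time-step loop: a scheduler activates agents sequentially;
    each agent perceives its candidate item, decides, and writes feedback
    back into its own state."""
    rng = random.Random(seed)
    log = []
    for t in range(steps):
        for agent in agents:                 # sequential scheduler
            item = recommender(agent, t)     # candidate from the recommender
            watched = agent.evaluate(item, rng) > agent.threshold
            if watched:
                agent.update_state(item)
            log.append((t, agent.agent_id, item, watched))
    return log
```

Activating agents one at a time inside each step mirrors the asynchronous update scheme described next, rather than a synchronous all-agents-at-once update.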
Each time step corresponds to an actionable social or system-triggered recommendation opportunity. A scheduler activates agents sequentially, allowing them to perceive incoming content, evaluate it, and autonomously decide whether to consume, rate, and propagate it. This asynchronous update scheme better reflects real-world behavioral heterogeneity than synchronous models and avoids unrealistic global coordination assumptions. Recommendation algorithms are embedded within this evolving simulation. At each time step, algorithms take as input the current state of the agents and social graph, generating personalized candidate recommendations. Agents, in turn, respond to these recommendations according to their current internal states. This interaction forms a closed evaluation loop, allowing us to monitor both traditional metrics (e.g., CTR, acceptance rate, diversity) and long-term behavioral consequences (e.g., preference drift, social influence spread). The system supports both periodic algorithmic interventions and context-sensitive triggers, such as increased activity periods or mismatched preference stages. Throughout the simulation, we collect a wide range of structural and behavioral metrics, including graph dynamics (e.g., density, community shifts), internal agent state distributions (e.g., affect variation, preference entropy), and recommendation interaction traces. A built-in write-back mechanism ensures that every decision made by the agent has forward-propagating consequences, enabling systematic analysis of long-term recommender interventions. The system currently supports three representative recommendation algorithms: Matrix Factorization, MultVAE, and LightGCN. These algorithms operate directly on the evolving user profiles and social graph, allowing for unified evaluation across personalization quality,
social robustness, and adaptation under long-term user drift. This dual evaluation paradigm not only benchmarks algorithmic performance in realistic environments but also serves as indirect validation of our simulation framework's behavioral and structural credibility. Through the coordinated design of internal cognitive agents and an external multi-layer social network, the system offers a scalable, interpretable, and behaviorally grounded simulation platform for research in personalized recommendation, social trust modeling, and emergent group behavior dynamics.

III. SOCIAL NETWORK ARCHITECTURE
This study presents an advanced social network architecture, meticulously crafted to synthesize anonymized structural attributes from the Stanford Facebook social graph [20] with nuanced personality profiles inferred from external behavioral datasets, such as MovieLens [10], Steam [16], and AmazonBooks [23], which offer rich textual review corpora for psychological analysis [33]. The fundamental challenge underpinning this integration arises from the inherent anonymization of the Stanford dataset—core demographic attributes including occupation, educational background, and age are withheld, effectively precluding direct personality alignment with external datasets. To navigate this complexity, we propose a robust two-stage, structure-driven inferential framework that capitalizes solely on the intrinsic topology of the social graph to predict latent Big-Five personality traits for each node. In the first stage (illustrated in Figure 2), we leverage textual reviews from AmazonBooks, Steam, and MovieLens datasets to extract Big-Five personality traits using a pre-trained RoBERTa model. Simultaneously, we derive behavioral statistics from user interactions in MovieLens, including activity level, diversity, rating deviation, and novelty preference. In the second stage (shown in Figure 3), the trained MLP model is applied to the Stanford network.
Here, structurally analogous features are extracted solely from network topology: node degree for social activity, entropy-based neighbor diversity for social diversity, betweenness centrality [7] for deviation from the mainstream, and PageRank scores [27] for novelty preference. These features serve as proxies for traditional psychological indicators within the social graph context [19]. The learned mapping enables the prediction of Big-Five traits for anonymized nodes.

A. Personality Prediction Model
To predict the Big-Five personality traits (Openness, Conscientiousness, Extraversion, Agreeableness, Neuroticism), we train a multi-task regression model using behavioral data extracted from the MovieLens dataset. Each user is represented by four interpretable features grounded in their historical interactions.
Figure 2. Big-5 Model Training Process.

Definition III.1 (Activity Level). Activity Level T^{act}_u denotes the total number of items rated by user u, capturing their engagement intensity:
T^{act}_u = \sum_{i \in I_u} 1, \quad (1)
where I_u is the set of items rated by user u.

Definition III.2 (Diversity). Diversity T^{div}_u measures the entropy of the genre distribution among all rated items, reflecting the breadth of a user's interests:
T^{div}_u = -\sum_{g \in G} p_{u,g} \log p_{u,g}, \quad p_{u,g} = \frac{|\{ i \in I_u \mid \mathrm{genre}_i = g \}|}{|I_u|}, \quad (2)
where G is the set of all genres.

Definition III.3 (Conformity Deviation). Conformity Deviation T^{conf}_u quantifies the deviation of user u's ratings from the global consensus:
T^{conf}_u = \frac{1}{|I_u|} \sum_{i \in I_u} (r_{ui} - \bar{R}_i)^2, \quad (3)
where r_{ui} is the rating given by user u to item i, and \bar{R}_i is the average rating for item i across all users.

Definition III.4 (Novelty Seeking). Novelty Seeking T^{nov}_u captures the user's tendency to interact with less popular (i.e., rare) items:
T^{nov}_u = \frac{1}{|I_u|} \sum_{i \in I_u} \frac{1}{\mathrm{pop}(i)}, \quad \mathrm{pop}(i) = |\{ u' \mid i \in I_{u'} \}|, \quad (4)
where pop(i) is the popularity of item i, defined as the number of users who rated it.

This trained model is subsequently applied to structural features derived from the Stanford social graph, enabling cross-domain personality prediction for anonymized nodes.

B. Structure-based Feature Engineering
Definition III.5 (Social Activity). The Social Activity (k_v) is defined as the node degree, representing the number of direct connections a node has in the social graph, analogous to user activity level.

Definition III.6 (Social Diversity). The Social Diversity (d_v) is computed as the entropy of anonymized occupation or education labels among a node's immediate neighbors, representing the variety in a node's social contacts:
d_v = -\sum_c p_c \log(p_c), \quad (5)
where p_c denotes the proportion of neighbors belonging to category c.

Definition III.7 (Deviation from Mainstream). The Deviation from Mainstream (c_v) is captured via betweenness centrality, measuring the extent to which a node acts as a bridge between distinct social communities, thus indicating non-conformity:
c_v = \sum_{s \neq v \neq t \in V} \frac{\sigma_{st}(v)}{\sigma_{st}}, \quad (6)
where \sigma_{st} is the total number of shortest paths between nodes s and t, and \sigma_{st}(v) is the number of these paths passing through node v.

Definition III.8 (Novelty Preference). The Novelty Preference (p_v) is measured by the node's PageRank score, reflecting a node's position relative to network prominence, with lower scores indicating a preference towards peripheral or less mainstream connections:
p_v = \alpha \sum_{u \in N(v)} \frac{p_u}{|N(u)|} + \frac{1 - \alpha}{|V|}, \quad (7)
where N(v) is the set of neighbors of node v, and \alpha is the damping factor (typically 0.85).

Finally, the structural feature vector x_v for each node v is defined as:
x_v = [k_v, d_v, c_v, p_v]. \quad (8)

C.
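Under Definitions III.1–III.4, the four behavioral features can be computed from a toy ratings table as follows. This is a sketch; all identifiers and the data layout are our own, not the paper's implementation:

```python
import math
from collections import defaultdict

def behavioral_features(ratings, genres, item_popularity, global_means):
    """Compute [activity, diversity, conformity, novelty] per user.
    `ratings` maps user -> {item: rating}; `genres`, `item_popularity`,
    and `global_means` are item-level lookup tables."""
    feats = {}
    for u, items in ratings.items():
        n = len(items)
        activity = n                                            # Eq. (1)
        genre_counts = defaultdict(int)
        for i in items:
            genre_counts[genres[i]] += 1
        diversity = -sum((c / n) * math.log(c / n)              # Eq. (2)
                         for c in genre_counts.values())
        conformity = sum((r - global_means[i]) ** 2             # Eq. (3)
                         for i, r in items.items()) / n
        novelty = sum(1.0 / item_popularity[i] for i in items) / n  # Eq. (4)
        feats[u] = [activity, diversity, conformity, novelty]
    return feats
```

A user who rates two items of different genres, for instance, gets diversity log 2, the maximum for two genres, matching the entropy form of Eq. (2).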
Personality Reasoning Process
To predict the Big-Five personality traits (Openness, Conscientiousness, Extraversion, Agreeableness, Neuroticism) for Stanford nodes, we trained a multi-task regression model using data derived from real benchmark datasets' user behaviors and personality traits extracted from textual reviews. Specifically, we utilized a pre-trained RoBERTa-based classifier on MovieLens textual data to label user personalities and trained a neural network f_\theta to map structural behavior features to personality vectors:
\hat{b}_v = f_\theta(x_v), \quad (9)
where \hat{b}_v denotes the predicted Big-Five personality vector for node v. The model was trained on real dataset features and evaluated based on the RMSE and Pearson correlation metrics on a hold-out set, achieving robust predictive performance (RMSE < 0.1, Pearson r > 0.6). The trained model was subsequently applied to predict the personality traits of Stanford network nodes, producing personality-enriched node representations without reliance on explicit personal data.
Figure 3. Big-5 Alignment Framework.

D. Big-Five Mapping
In the Stanford social network dataset, each node v is originally described by a set of anonymized social attributes and topological features, formally represented as:
t_u = [t_1, t_2, t_3, t_4], \quad (10)
where t_1 denotes the number of connected edges (degree), t_2 represents the anonymized occupation identifier, t_3 represents the anonymized age group identifier, and t_4 represents the anonymized education identifier. Following the Big-Five personality prediction process, we enrich this attribute vector by appending the predicted Big-Five personality traits, denoted as:
b_v = [b_1, b_2, b_3, b_4, b_5], \quad (11)
where each b_i corresponds to one dimension of the Big-Five personality (Openness, Conscientiousness, Extraversion, Agreeableness, Neuroticism), scaled within the range [0, 1].
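The two reported evaluation metrics, RMSE and Pearson correlation, can be computed as follows (a self-contained sketch using only the standard library):

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-square error between labels and predictions."""
    n = len(y_true)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / n)

def pearson(y_true, y_pred):
    """Pearson correlation coefficient r."""
    n = len(y_true)
    mx, my = sum(y_true) / n, sum(y_pred) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(y_true, y_pred))
    sx = math.sqrt(sum((a - mx) ** 2 for a in y_true))
    sy = math.sqrt(sum((b - my) ** 2 for b in y_pred))
    return cov / (sx * sy)
```

Together, RMSE < 0.1 bounds the absolute trait error on the [0, 1] scale, while r > 0.6 checks that the predictions track the relative ordering of users, which RMSE alone would not guarantee.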
The complete profile vector for node w thus becomes:
p_w = [t_u, b_v]. \quad (12)
This unified representation integrates structural social attributes and latent psychological traits, providing a comprehensive embedding for each node. Such enriched profiles facilitate downstream tasks, including personalized recommendation and social influence modeling, by co-optimizing structural and psychological dimensions within the network.

E. GGBond Social Network
Building upon the personality-enriched node representation p_u = [t_u, b_u], where t_u ∈ R^4 denotes anonymized structural attributes (e.g., degree, occupation, age, education) and b_u ∈ R^5 represents predicted Big-Five personality traits, the GGBond framework constructs a multilayer social graph G to simulate user interactions, trust propagation, and preference shifts. As visualized in Figure 4, the architecture is composed of several semantically distinct graph layers, each encoding a different dimension of social similarity or homophily.
Figure 4. GGBond Multi-layer Social Network Framework. Each colored plane represents a type of social edge (Interest, Personality, Structural, or Unified), with dotted connections modeling influence across layers. Agents (nodes) are embedded in all layers simultaneously. The aggregation and propagation mechanism across these layers enables personality drift and preference adaptation.

a) Interest Graph Layer: In the top (cyan) layer, edges are created based on user preference overlap. An undirected edge is formed between agents u and v if they share liked movie types, weighted by Jaccard similarity:
w^{int}_{uv} = \frac{|M_u \cap M_v|}{|M_u \cup M_v|}, \quad (13)
where M_u is the set of movie types positively rated by u. We then retain the top-5 most similar neighbors for each agent.

b) Personality Graph Layer: In the second (orange-pink) layer, agents are connected if they have similar psychological traits. The edge weight is defined using cosine similarity:
w^{pers}_{uv} = \cos(b_u, b_v) = \frac{b_u \cdot b_v}{\|b_u\| \cdot \|b_v\|}. \quad (14)

c) Structural Graph Layer: The third (purple-green) layer models homophily in terms of occupation and age category. Edge weights are sums of indicators of shared categories:
w^{struct}_{uv} = 1_{same}(t_{u,2}, t_{v,2}) + 1_{same}(t_{u,3}, t_{v,3}), \quad (15)

d) Unified Layer and Aggregation: The unified layer aggregates all previous edges using a weighted linear combination:
w_{uv} = \alpha \cdot w^{int}_{uv} + \beta \cdot w^{pers}_{uv} + \gamma \cdot w^{struct}_{uv}, \quad \text{with } \alpha + \beta + \gamma = 1. \quad (16)
This unified, weighted graph is used during social reasoning and multi-round preference simulation.

e) Multi-Round Interaction Protocol: During each round, agent u receives a candidate movie m from its neighbors N(u) and decides whether to accept it based on the semantic alignment and social propagation mechanisms described in the Agent Architecture section. Accepted movies are added to the agent profile, and agent embeddings are updated as described in Layer 1. In addition, personality drift is triggered by successful social influence; the formula is given in the Agent Architecture section.
Figure 5. Agent Architecture: Module 0 (GPT4 API), Module 1 (Individual cognition Module), Module 2 (Social cognition Module), Module 3 (Decision Module), Module 4 (Behavior Module).

f) Final Output: After T rounds, the updated agent profiles {p^(T)_u} and graph G^(T) are passed to downstream recommenders (e.g., MF, MultVAE, LightGCN) to generate final predictions. This layered social framework enables rich modeling of community structure, latent personality alignment, and iterative preference shaping over the social graph.

IV. AGENT ARCHITECTURE
The agent architecture is presented in Figure 5. Each simulated agent is designed as a fully autonomous entity equipped with the capabilities of perception, cognition, decision-making, and behavioral feedback. Its actions are
entirely driven by internal mechanisms, without reliance on external calls. The agent operates through a closed cognitive–motivational–behavioral loop: upon receiving an external stimulus—such as a system-generated recommendation or a peer-shared item—the agent invokes a series of internal cognition modules to encode and evaluate the environment, resulting in a set of psychologically and socially meaningful latent variables [37]. These variables are then passed to the motivational engine, which computes internal drive and determines whether and how the agent should respond. The agent may choose to watch the movie, assign a rating, and share it with others, while also writing feedback back into its memory, emotional state, and social ties, thereby enabling long-term behavioral adaptation. This autonomy is enabled by the coordinated operation of multiple functional subsystems, including episodic recall, affect regulation, preference learning, and social trust-risk inference. To systematically support these capabilities, we introduce a structured five-Module agent architecture that aligns with core components of behavior: language generation, individual cognition, social reasoning, motivational computation, and behavioral output. In the following sections, each Module is described in detail, corresponding to a specific stage in the internal decision pipeline; together they implement the cognitive loop described above [13]. We further summarize the operational logic of the agent in Algorithm 1.
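The motivation-gating step of the decision cycle in Algorithm 1 (dynamic threshold θ = θ0 − κ·V_t, then logistic watch probability σ((C − θ)/τ)) can be sketched in Python as follows; the values of κ and τ are illustrative placeholders, not calibrated parameters from the paper:

```python
import math
import random

def watch_probability(C, theta0, V_t, kappa=0.2, tau=0.5):
    """Dynamic threshold theta = theta0 - kappa * V_t, then the logistic
    watch probability sigma((C - theta) / tau). Positive valence V_t
    lowers the threshold, making consumption more likely."""
    theta = theta0 - kappa * V_t
    return 1.0 / (1.0 + math.exp(-(C - theta) / tau))

def decide_watch(C, theta0, V_t, rng=None):
    """Bernoulli draw on the watch probability (line 15 of the cycle)."""
    rng = rng or random.Random(0)
    return rng.random() < watch_probability(C, theta0, V_t)
```

When the drive C exactly equals the threshold, the watch probability is 0.5; the temperature τ controls how sharply the decision saturates on either side.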
Algorithm 1 Simulated Agent Decision Cycle
Require: Received recommendation m from source s at time t
Ensure: Agent response: watch/skip, rating r, share set S
1:  Module 1: Individual Cognition
2:  Retrieve past event history (s, m′, r′) from episodic memory
3:  Compute current emotional state (V_t, A_t) from past satisfaction M
4:  Update preference vector p_u
5:  Compute novelty score N ← Novelty(p_u, e_m, lang)
6:  Module 2: Social Cognition
7:  Retrieve intimacy score I ← Intimacy(u, s)
8:  Estimate reciprocity potential R ← Reciprocity(u, s)
9:  Evaluate risk perception K ← Risk(m, s, u)
10: Module 3: Motivation Evaluation
11: Compute dynamic threshold θ ← θ_0 − κ·V_t
12: Compute motivation score C ← α·I + β·N + γ·R − δ·K
13: Module 4: Behavior Execution
14: Compute watch probability p ← σ((C − θ)/τ_u)
15: if Sample(Bernoulli(p)) == watch then
16:   Predict expected rating r̂_{u,m}
17:   Sample final rating r ← r̂_{u,m} + ε
18:   Compute satisfaction M ← (r − r̂_{u,m})/4
19:   Select share targets S ← TopK(q_v = I_uv · R_uv · min(1, C))
20:   Update episodic memory with (s, m, r, M)
21:   Update emotional state (V_{t+1}, A_{t+1})
22:   Update I_uv, R_uv using M
23:   Generate natural language review via Module 0 (DeepSeek-R1)
24: else
25:   Record skip event and decay motivation-related traces
26: end if
27: return Action (watch/skip), r (if applicable), share set S

A. Module 0: Language Reasoning Core

The language reasoning core is the only module in our agent architecture that depends on a large language model. It is implemented via a lightweight wrapper around DeepSeek-R1, and its primary function is to generate natural language expressions—including movie reviews, brief ratings, and social sharing messages—once a behavioral decision has been finalized. The core design principle here is semantic generation decoupled from behavioral computation: this module does not participate in any decision-making or numerical inference and is invoked only after an action is determined by the upper Modules.
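As a concrete illustration, the core of this decision cycle (the Module 3 and Module 4 steps of Algorithm 1) can be sketched in Python. The coefficient defaults follow the (α, β, γ, δ) configuration reported in Module 3; the clamp of ratings to [1, 5], the σ_u default, and the fixed random seed are our own illustrative assumptions, not the paper's calibrated implementation.

```python
import math
import random

def decision_cycle(I, N, R, K, V_t, r_hat,
                   alpha=0.40, beta=0.35, gamma=0.20, delta=0.25,
                   theta0=0.5, kappa=0.3, tau=0.1, sigma_u=0.4, rng=None):
    """Sketch of Algorithm 1, Modules 3-4: motivation, threshold, action."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility (assumption)
    # Module 3: fuse intimacy, novelty, reciprocity, risk into motivation C
    C = alpha * I + beta * N + gamma * R - delta * K
    # Mood-modulated threshold: positive valence lowers resistance
    theta = theta0 - kappa * V_t
    # Module 4: soft decision rule with per-agent temperature tau
    p_watch = 1.0 / (1.0 + math.exp(-(C - theta) / tau))
    if rng.random() < p_watch:
        # Final rating = expected rating plus personal noise, clamped to [1, 5]
        r = min(5.0, max(1.0, r_hat + rng.gauss(0.0, sigma_u)))
        M = (r - r_hat) / 4.0  # satisfaction signal in [-1, 1]
        return {"action": "watch", "rating": r, "satisfaction": M}
    return {"action": "skip"}
```

Running the sketch with strong social cues (high I, N, R, low K) makes watching almost certain, while a risky recommendation under negative valence is skipped.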
The input to this module includes structured outputs from Module 4, such as the final rating r, metadata of the watched movie
(e.g., genre, keywords, language), the agent's current emotional state (valence), and the target social circle (e.g., friends, interest groups, technical communities). These inputs are formatted into controllable prompt templates that are passed to DeepSeek-R1 for generation. For example, when recommending a science fiction movie to a technical circle, the generated review tends to be objective and analytical, while in a friends circle the tone becomes more emotional and colloquial. This context-adaptive generation enhances realism and aligns with stratified language patterns observed in real-world social networks.

The module also supports multilingual generation to simulate the diversity and asymmetry of language in cross-linguistic social networks. In cases where there is a mismatch between the language of the movie and the agent's own language, additional burden cues (e.g., "too many subtitles" or "translation affects immersion") are automatically included in the output, thus reinforcing the language-related risk factor L_gap calculated in Module 2 [17].

B. Module 1: Individual Cognition Module

The individual cognition Module constitutes the core of an agent's internal state modeling. It maintains and dynamically updates the agent's personalized cognitive features, which include episodic memory, affective state, and long-term preferences. These features provide interpretable, temporally stable inputs to upper-Module decision processes. Specifically, this Module includes three subcomponents: the Episodic Memory, the Affective State Machine, and the Preference Model, corresponding to the experiential, emotional, and preference-related facets of human cognition.

a) Episodic Memory.: This module stores past movie-watching and social interaction events in the form of timestamped triples (s, m, r), where s is the source (e.g., a recommender or a friend), m denotes the movie, and r is the agent's actual rating.
Each event decays over time according to an exponential forgetting curve:

Definition IV.1.
w_t = exp(−λ_mem · Δt), (17)

where Δt is the elapsed time since the event and λ_mem is a tunable forgetting rate. This memory supports the dynamic computation of reciprocity (R) and risk (K) by tracking long-term satisfaction and content failures associated with specific sources.

b) Affective State Machine.: To reflect transient affective conditions, we model the agent's emotional state in the 2D Valence–Arousal space [32]. After each movie-watching event, the emotional state is updated based on the rating deviation (satisfaction):

M = (r − r̂_{u,m}) / 4, M ∈ [−1, 1], (18)

V_{t+1} = V_t + σ_V · M, A_{t+1} = A_t + σ_A · |M|, (19)

where V and A denote valence and arousal, respectively, and σ_V, σ_A are sensitivity coefficients. The valence value V_t directly modulates the decision threshold θ in Module 3, in accordance with affective decision-making theories such as Mood-as-Information [14].

c) Preference Model.: The agent's long-term interests are represented as an evolving embedding vector p_u ∈ R^d, updated incrementally through observed movie embeddings e_m using exponential smoothing:

p_u ← (1 − η) · p_u + η · e_m, (20)

where η is the preference update rate. The preference vector is used to calculate the subjective novelty score N for any candidate movie, incorporating both semantic distance and linguistic mismatch:

Definition IV.2.
N = min(1, 1 − cos(p_u, e_m) + λ_lang · 1_{lang_m ≠ lang_u}), (21)

This captures not only content novelty but also cross-linguistic cognitive load [17]. All submodules in Module 1 contribute real-time cognitive signals to the upper Modules. Their outputs are consumed by the motivation engine in
Module 3 and are subject to recursive updates through behavioral feedback handled in Module 4. This Module thus forms a closed adaptive loop, ensuring that each agent gradually accumulates experiences, evolves preferences, and regulates emotion—collectively shaping temporally coherent, person-like behavioral profiles.

C. Module 2: Social Cognition Module

The social cognition Module enables the agent to perceive, encode, and adapt to the surrounding social structure. It is responsible for modeling the agent's relationships across multiple social contexts (e.g., interest-based, professional, geographic), evaluating trust in others' recommendations, and quantifying the perceived risk of consuming a given content item. This Module includes two major components: the Multi-Layer Social Graph Manager and the Trust and Risk Assessor, which together generate the intimacy (I) and risk (K) factors for motivation scoring in Module 3.

a) Multi-Layer Social Graph Manager.: The agent's social connections are represented as a set of layered adjacency matrices {W^(ℓ)}, ℓ = 1, ..., L, where each layer ℓ corresponds to a distinct social circle type. The edge weight I^(ℓ)_uv quantifies the raw structural tie strength between agents u and v in that circle, based on interaction frequency, co-engagement, or proximity. These ties decay over time to reflect fading social interactions.

A structural intimacy score is computed by weighted aggregation across layers:

I^struct = Σ_{ℓ=1}^{L} γ_ℓ · I^(ℓ)_uv, (22)

where γ_ℓ is a decay factor representing inter-layer influence. To enrich this structural metric with homophily-informed features, we introduce two additional similarity components:

• Demographic similarity S_demo, based on age, gender, location, and language overlap.
The demographic similarity S_demo is computed as the normalized count of matching demographic attributes between agents u and v:

S_demo = (1/4) · (1_{age_u = age_v} + 1_{gender_u = gender_v} + 1_{location_u = location_v} + 1_{language_u = language_v}), (23)

where each indicator function 1_(·) returns 1 if the attribute matches and 0 otherwise. This score reflects the classical notion of demographic homophily, widely observed to enhance social affinity and communication efficiency [24].

• Preference similarity S_pref, combining cosine similarity of Big Five personality embeddings and Jaccard overlap of interest tags. The preference similarity S_pref captures both personality alignment and interest overlap between users:

S_pref = (1/2) · cos(b_u, b_v) + (1/2) · |T_u ∩ T_v| / |T_u ∪ T_v|, (24)

where b_u and b_v are the Big-Five personality vectors of users u and v, and T_u, T_v are their sets of interest tags. The first term measures psychological similarity via cosine similarity; the second term is the Jaccard similarity between interest sets. This formulation is supported by empirical findings that personality compatibility and shared interests facilitate trust and collaboration in social systems.

The final intimacy score is defined as:

Definition IV.3.
I = I^struct · [1 + λ_demo · S_demo + λ_pref · S_pref]. (25)

This formulation is grounded in sociological findings that homophily strongly predicts tie strength and social bonding [24], as well as computational studies showing that personality and interest similarity foster collaboration.

b) Trust and Risk Assessor.: This module computes a personalized risk score K for each movie recommendation event, integrating content-related uncertainty, cognitive burden, and social-contextual cues. The full risk function is:

Definition IV.4.
K = σ(0.4·L + 0.5·G + 0.4·w_t·V + 0.6·A + 0.3·L_gap − 0.8·w_t·Trust_s − 0.5·I + 0.5·P) + risk_base_u, (26)

where:
• L: movie length (longer implies higher opportunity cost).
• G: genre-based arousal risk (e.g., horror, war).
• V: variance in historical ratings (uncertainty).
• A: age mismatch between the user and the movie's age rating.
• L_gap: language mismatch
penalty [17].
• Trust_s: the recommender's historical approval rate.
• I: intimacy with the recommender, as computed above.
• P: the user's neuroticism level, reflecting risk aversion [26].

The sigmoid σ(·) normalizes its argument to [0, 1], and w_t comes from the exponential forgetting curve. Additionally, each agent has a static base risk level risk_base_u, encoding its default cautiousness.

c) Reciprocity Potential.: The agent also estimates the likelihood of emotionally or informationally rewarding feedback following a share. This reciprocity potential R is modeled as:

Definition IV.5.
R = 0.6·w_t·R_hist + 0.3·R_pot + 0.1·R_psn, (27)

where:
• w_t comes from the exponential forgetting curve;
• R_hist: rolling average of successful past shares to the target user [3];
• R_pot: cosine similarity in interest embeddings [43];
• R_psn: personality complementarity (e.g., an extrovert sharing with an introvert) [51].

All scores computed in this Module serve as direct inputs to the IC2Engine in Module 3. Moreover, these variables are updated based on behavior logs recorded in Module 4, enabling a continuous feedback loop between social perception and social interaction. This design ensures that each agent is capable of building, adjusting, and utilizing trust relationships, adapting to social risks, and making decisions that align with evolving social dynamics.

D. Module 3: Motivational Decision Module

The motivational decision Module is the behavioral core of the agent architecture. It receives psychological and social cues from lower Modules and integrates them into a scalar motivation score that drives action. This Module includes two components: the IC2Engine and the Reciprocity Regulator. The former computes the agent's internal motivation C from four interpretable subfactors, while the latter dynamically adjusts relationship strength and reciprocity expectation based on behavioral feedback, enabling long-term social adaptation.
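To make the two scoring rules of the Trust and Risk Assessor concrete, a minimal Python sketch follows. The numeric weights come from Definitions IV.4 and IV.5; the assumption that all cue inputs are pre-normalized to [0, 1] is ours, and the function names are illustrative.

```python
import math

def sigmoid(x):
    """Logistic squashing used in Definition IV.4."""
    return 1.0 / (1.0 + math.exp(-x))

def risk_score(L, G, V, A, L_gap, trust_s, I, P, w_t, risk_base_u):
    """Risk perception K (Definition IV.4).
    Cue inputs are assumed pre-normalized to [0, 1] (our assumption).
    Higher trust and intimacy reduce risk; the agent's static base
    risk level is added on top of the sigmoid-normalized cue term."""
    z = (0.4 * L + 0.5 * G + 0.4 * w_t * V + 0.6 * A + 0.3 * L_gap
         - 0.8 * w_t * trust_s - 0.5 * I + 0.5 * P)
    return sigmoid(z) + risk_base_u

def reciprocity_potential(R_hist, R_pot, R_psn, w_t):
    """Reciprocity potential R (Definition IV.5): time-decayed share
    history, interest similarity, and personality complementarity."""
    return 0.6 * w_t * R_hist + 0.3 * R_pot + 0.1 * R_psn
```

Note that a higher recommender trust lowers the risk score, matching the negative −0.8·w_t·Trust_s term in the definition.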
a) IC2Engine.: The Intimacy–Curiosity–Reciprocity–Risk (IC2) engine is designed to fuse four factors into a unified motivation score:

Definition IV.6.
C = α·I + β·N + γ·R − δ·K, (28)

where:
• I: intimacy with the recommending user (from Module 2).
• N: subjective novelty of the content (from Module 1).
• R: reciprocity potential (from Module 2).
• K: risk perception (from Module 2).

The coefficients (α, β, γ, δ) control the relative influence of each factor. We adopt a default configuration of (0.40, 0.35, 0.20, 0.25), following principles from Multi-Attribute Utility Theory (MAUT) and empirical behavioral studies. The linear form supports transparency and controllability in simulation environments.

The IC2Engine also incorporates a dynamic threshold θ to capture emotion-modulated decision flexibility:

θ = θ_0 − κ·V_t, (29)

where θ_0 is the agent's base decision threshold, V_t is the current valence (from Module 1), and κ is a sensitivity coefficient. This formulation is grounded in Mood-as-Information Theory [14], which postulates that positive emotions lower resistance to external stimuli, thereby increasing the likelihood of accepting recommendations.

Once the motivation C is computed, it is compared against θ. If C ≥ θ, the agent decides to take action (e.g., watch a movie); otherwise, the opportunity is skipped. The binary or probabilistic implementation of this choice is deferred to Module 4.

b) Reciprocity Regulator.: To support adaptive social learning, the system includes a mechanism to update social ties based on interaction outcomes. After an action is taken and the satisfaction score M is computed, the intimacy and reciprocity scores are adjusted as follows:

I_uv ← I_uv + ρ_I·M, R_uv ← R_uv + ρ_R·M, (30)

where ρ_I and ρ_R are update rates. A
positive M (indicating satisfaction) reinforces the social connection, while a negative M weakens it. This process emulates long-term relationship dynamics as explained by Social Exchange Theory [?], and supports emergent behavior adaptation over multiple decision cycles.

Together, the IC2Engine and the Reciprocity Regulator form a closed motivational loop that is sensitive to both internal and external conditions and capable of long-term self-adjustment. This Module enables agents to exhibit realistic behavioral selectivity, emotion-driven variability, and socially responsive learning, which are essential for generating credible, human-like behavior trajectories in multi-agent simulation environments.

E. Module 4: Behavioral Output Module

The behavioral output Module serves as the agent's external interface, responsible for converting motivational signals from Module 3 into concrete observable actions such as watching, skipping, scoring, and sharing. Additionally, it records behavioral traces in structured logs, triggers feedback updates to lower Modules, and generates natural language explanations via the language core (Module 0). This Module consists of two main components: the Action Strategist and the Explainable Logger.

a) Action Strategist.: The first task of the Action Strategist is to determine whether the agent will watch a recommended movie. Given the motivation score C and the internal decision threshold θ computed in Module 3, the probability of watching is defined using a soft decision rule:

Pr(watch) = σ((C − θ) / τ_u), (31)

where τ_u is the agent's individual temperature coefficient, modeling behavioral randomness. A lower τ_u leads to more deterministic decisions, while higher values introduce stochasticity. This setup supports heterogeneous behavioral tendencies across agents and aligns with the stochastic utility maximization framework.
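The Reciprocity Regulator update of Eq. (30) amounts to a small signed adjustment of the two tie scores. A minimal sketch, in which the update rates ρ_I, ρ_R and the clamp to [0, 1] are our illustrative assumptions:

```python
def update_ties(I_uv, R_uv, M, rho_I=0.05, rho_R=0.05):
    """Eq. (30): satisfaction M in [-1, 1] strengthens (M > 0) or
    weakens (M < 0) the intimacy and reciprocity scores.
    Clamping the scores to [0, 1] is an illustrative assumption."""
    def clamp(x):
        return min(1.0, max(0.0, x))
    return clamp(I_uv + rho_I * M), clamp(R_uv + rho_R * M)
```

Repeated positive outcomes thus gradually strengthen a tie, while repeated disappointments erode it, which is the long-term relationship dynamic the regulator is meant to capture.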
If the agent chooses to watch the movie, the final rating is generated by adding a personalized noise term to the predicted preference score:

r = r̂_{u,m} + ε, ε ∼ N(0, σ_u²), (32)

where r̂_{u,m} is the expected rating and σ_u controls subjective rating variability. The satisfaction score is then computed as:

M = (r − r̂_{u,m}) / 4, M ∈ [−1, 1], (33)

which serves as a signal for feedback updates in Modules 1 through 3.

Next, the agent determines whether to share the movie with peers. This is achieved by computing a priority score q_v for each candidate target v:

q_v = I_uv · R_uv · min(1, C), (34)

where I_uv is the intimacy, R_uv the reciprocity potential, and C the motivation score. The top-k candidates by q_v are selected to form the share set S, and the corresponding share edges are recorded into the social graph. These links are used for future tie updates and propagation.

b) Explainable Logger.: The logger module converts all decision signals and behavioral results into a structured log entry:

(C, I, N, R, K, r, M, θ, S, t)

where t denotes the current simulation timestep. This log provides full transparency into the agent's decision-making process and is stored for later analysis, evaluation, and visualization.

In parallel, the logger constructs a prompt from the log content and passes it to the language reasoning core (Module 0), which generates natural language movie reviews and recommendation messages. The generated texts reflect both the decision rationale (e.g., a high rating or an emotional reaction) and the tone appropriate for the target social circle (e.g., informal for friends, factual for professional peers). Examples include:

• "Loved the
visuals and plot twists—highly recommend!"
• "Too slow and predictable, not my thing, but others might enjoy it."

Finally, the logger triggers feedback propagation: the satisfaction score M is sent to Module 1 to update emotion and memory, to Module 2 to adjust trust and intimacy, and to Module 3 to tune reciprocity. The corresponding episode is also written into the episodic memory buffer. This completes the full decision–action–feedback loop, enabling the agent to adapt over time and refine its internal models and social strategies.

V. AGENT CONSISTENCY EXPERIMENT

To evaluate the plausibility of LLM-based agents in simulating human-like recommendation behavior, we designed a two-part assessment strategy. First, we propose the Agent Rating Consistency Experiment, wherein agents are prompted to rate a curated set of movies under controlled conditions. Specifically, agents instantiated with varying personas are presented with movie metadata (title, genre, plot, cast) and asked to assign a rating from 1 to 5. The resulting Agent Rating Distributions (ARD) are compared against empirical Human Rating Distributions (HRD) derived from real user data in the MovieLens dataset. We employ Earth Mover's Distance (EMD) and Kullback-Leibler Divergence to quantify distributional similarity; high similarity suggests alignment between agent and human evaluative tendencies.

To complement this, we refer to the comprehensive behavioral simulations presented by [45]. Their work systematically evaluates LLM agents across multiple human-aligned dimensions, including trust calibration, social influence susceptibility, and rational preference disclosure; their experimental conclusions support the broader behavioral fidelity of LLM agents in social recommendation contexts.

A.
Rating Consistency

a) Motivation.: To systematically assess how closely LLM-based agents simulate human evaluative behavior, we compare rating outputs across three subject types:

(1) Human Ratings: Empirical ground-truth ratings are sourced from the MovieLens dataset, where each movie has been rated by a diverse population of real users. These ratings are aggregated to form the reference Human Rating Distribution (HRD) per item.

(2) GGBond Agents: These agents are embedded in a dynamic social network structure and undergo three iterative rounds of interaction within the GGBond framework. Each round allows agents to exchange movie recommendations, receive social feedback, and update their personality traits and interest profiles based on their neighbors' influence. After convergence, each agent rates a set of target movies. The resulting scores constitute the GGBond Rating Distribution (GRD).

(3) Static Agents: These agents operate in isolation, without access to social influence or iterative feedback. Each static agent is initialized with a fixed persona (personality vector and interest vector) derived from the same initialization process as the GGBond agents, but they do not engage in multi-agent interaction. Their ratings form the Static Rating Distribution (SRD).

For each target movie, we collect ratings from all three sources and compute the distributional similarity between HRD and both GRD and SRD using Earth Mover's Distance [31] and Kullback-Leibler divergence [29], defined as follows:

b) Earth Mover's Distance (EMD).: Let P and Q denote two probability distributions over the same domain. The Earth Mover's Distance between P and Q is defined as the minimal cost required to transform P into Q, where the cost is
quantified as the amount of distribution "mass" that must be moved times the distance it has to be moved:

EMD(P, Q) = inf_{γ ∈ Γ(P,Q)} ∫_{X×X} ‖x − y‖ dγ(x, y), (35)

where Γ(P, Q) denotes the set of all joint distributions (transport plans) with marginals P and Q, respectively.

c) Kullback-Leibler Divergence (KL).: Given two discrete probability distributions P = {p_1, ..., p_n} and Q = {q_1, ..., q_n} over the same support, the KL divergence from Q to P measures the information loss when Q is used to approximate P:

KL(P ‖ Q) = Σ_{i=1}^{n} p_i · log(p_i / q_i), (36)

where it is assumed that q_i > 0 whenever p_i > 0. A lower divergence indicates higher alignment with human evaluative behavior.

d) Result.: To assess the alignment between agent-generated and human rating behaviors, we computed two standard distributional similarity metrics: KL Divergence and Earth Mover's Distance. Table I presents the evaluation results across three subjects: human raters (as ground truth), GGBond agents (socially interacting agents), and static agents (non-interactive).

Table I
DISTRIBUTIONAL SIMILARITY METRICS (LOWER IS BETTER)

Subject Type      KL Divergence   EMD
GGBond Agent      0.0108          0.0900
Static Agent      0.0750          0.4200

We hypothesized that GGBond agents, enriched by personality-driven and socially-informed interactions, would exhibit greater consistency with human rating distributions than static agents. This would suggest that social dynamics—modeled through structured agent interaction—play a meaningful role in shaping human-aligned evaluative behavior in recommendation contexts. As shown in Table I, the GGBond Agent exhibits significantly lower KL Divergence (0.0108) and EMD (0.0900) than the Static Agent (KL = 0.0750, EMD = 0.4200), indicating a stronger alignment between the GGBond Agent's rating behavior and that of real human users.
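For the discrete 1-to-5 rating distributions compared here, both metrics reduce to short sums. A minimal sketch (natural-log KL; for one-dimensional distributions with unit spacing between adjacent rating bins, the EMD equals the L1 distance between the CDFs; the unit-spacing assumption is ours):

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) for discrete distributions over the same support
    (Eq. 36); natural log, and q_i > 0 wherever p_i > 0 is assumed."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def emd_1d(p, q):
    """EMD for 1D discrete distributions with unit spacing between
    adjacent bins: the sum of absolute CDF differences."""
    total, cdf_diff = 0.0, 0.0
    for pi, qi in zip(p, q):
        cdf_diff += pi - qi   # running difference of the two CDFs
        total += abs(cdf_diff)
    return total
```

For identical distributions both metrics are zero; shifting all mass one rating bin away yields an EMD of exactly 1.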
These results suggest that incorporating social interactions—modeled via multi-round agent communication and personality adaptation in the GGBond framework—enables agents to develop more human-like evaluative tendencies. In contrast, static agents, which do not participate in any social feedback loop, exhibit flatter and less natural rating distributions. Figure 6 further visualizes the rating density curves for all three subject types. The GGBond Agent curve closely tracks the human curve, particularly in the modal region (scores 3–4), whereas the Static Agent curve is more uniformly distributed and deviates from realistic scoring tendencies.

Figure 6. Comparison of rating distributions across Human, GGBond Agent, and Static Agent.

B. Behavior Consistency

Extensive evidence from recent studies confirms that social structures and personality signals jointly drive human-like recommendation behavior. Graph-based social recommenders consistently exploit homophily and social influence: users who are socially connected tend to share similar preferences and gradually converge through interaction [35]. Personality-aware recommenders further improve cold-start accuracy by aligning item exposure with individual trait profiles [5]. Trust graphs and multi-agent collaborations have also been shown to enhance robustness and realism in simulated environments [4], [36]. Within GGBond, agents exchange recommendations over three interaction rounds. Such iterative communication closely mirrors the feedback-loop modeling that MacRec employs to improve recommendation quality via agent collaboration [4]. Recent work on large-scale social simulators like OASIS demonstrates that dynamic social graphs coupled with recommendation mechanisms reproduce realistic adoption patterns at scale
[48]. Likewise, SimUSER shows that LLM-based agents equipped with memory modules can generate user–RS interactions that faithfully match observed click statistics [?]. Finally, studies on graph-invariant learning and low-homophily settings reveal that even when explicit similarity signals are weak, leveraging high-order social paths markedly narrows the gap between model outputs and human behavior [15], [47]. These converging findings substantiate our observation that the GGBond agents, when embedded in a social graph, produce recommendation and rating patterns significantly closer to real users than isolated static agents.

VI. EVALUATION

A. Impact of Social Interaction Depth on Agent Behavior and Recommendation

a) Objective.: This experiment investigates how varying the depth of social interaction influences recommendation performance, agent behavioral dynamics, and simulated user satisfaction within the GGBond framework. Specifically, we examine whether increasing the number of interaction rounds (0–30; we report metrics at 0, 10, 20, and 30 rounds) leads to improved personalization and stable, human-aligned behavior patterns.

b) Experimental Setup.: We defined four experimental conditions:

Baseline (0 Rounds): Static agents with no social interaction; initial profiles are passed directly to the recommendation models.

GGBond-10Round / 20Round / 30Round: Agents engage in ten to thirty rounds of interaction, in which they exchange movie recommendations with neighbors, accept and evaluate recommendations and give feedback, update their personality and interest vectors based on accepted items, and propagate behavioral signals throughout the network.

At the end of every round, the profile data of every agent, including interaction and rating data, are fed into the recommender systems: Matrix Factorization (MF) [18], MultVAE [21], and LightGCN [11]. These models then generate top-20 ranked lists per agent.
Each agent evaluates the presented movies (based on content similarity and internal state) and accepts or rejects them accordingly. The evaluation metrics we use are Recall and NDCG [40]. Our work builds on prior studies highlighting the role of social influence in recommender systems [4], [22], [39].

c) Recall@20.: Recall@20 measures the proportion of relevant items (e.g., positively rated by the agent) that appear in the top-20 recommended list:

Recall@20 = (1/|V|) Σ_{u∈V} |R^(20)_u ∩ G_u| / |G_u|, (37)

where R^(20)_u is the top-20 recommendation list for agent u, and G_u is the set of relevant (positively rated) items of u.

d) NDCG@20.: NDCG@20 evaluates ranking quality by assigning higher scores to relevant items that appear earlier in the top-20 list:

NDCG@20 = (1/|V|) Σ_{u∈V} (1/IDCG^(20)_u) Σ_{i=1}^{20} I(r_{ui} ∈ G_u) / log_2(i + 1), (38)

where r_{ui} is the item at rank i for agent u, G_u is the ground-truth relevant set, and IDCG^(20)_u is the ideal DCG for agent u (i.e., when all relevant items are ranked at the top).

e) Recommendation Performance.: Table II, Figure 7, and Figure 8 present Recall@20 and NDCG@20 for each model under different interaction depths. All models exhibit improved performance as interaction depth increases, with MultVAE and LightGCN benefiting more from socially enriched profiles.

B. Post-Recommendation Agent Behavior

To further assess how social interactions affect downstream agent dynamics, we analyze four key behavioral indicators across all models and interaction rounds: personality change, satisfaction ratio, acceptance rate, and negative review rate.

Table II
RECOMMENDATION PERFORMANCE
AT SPECIFIC ROUNDS (RECALL@20 AND NDCG@20)

Model / Metric           0 Rounds   10 Rounds   20 Rounds   30 Rounds
MF - Recall@20           0.1502     0.1574      0.1612      0.1623
MF - NDCG@20             0.3560     0.3612      0.3652      0.3669
MultVAE - Recall@20      0.1592     0.1636      0.1668      0.1674
MultVAE - NDCG@20        0.3482     0.3543      0.3592      0.3606
LightGCN - Recall@20     0.1721     0.1783      0.1804      0.1813
LightGCN - NDCG@20       0.3845     0.3923      0.3968      0.3975

Figure 7. Recall@20 across different interaction rounds.

Definition VI.1 (Personality Change). Personality Change Δ_personality is defined as the average ℓ2 distance between each agent's final and initial Big-Five personality vectors:

Δ_personality = (1/|V|) Σ_{u∈V} ‖b^(final)_u − b^(init)_u‖_2. (39)

This metric captures the extent to which an agent's personality representation has evolved after social recommendation interactions.

Definition VI.2 (Satisfaction Ratio). The Satisfaction Ratio S_sat denotes the average score agents assign to liked items, reflecting their alignment with user preference:

S_sat = (1/|V|) Σ_{u∈V} (1/|A_u|) Σ_{m∈A_u} score_{u,m}, (40)

where A_u = {m | score_{u,m} ≥ 3} is the set of items liked by agent u (i.e., rated 3–5), and score_{u,m} ∈ {0, 1, 2, 3, 4, 5}.

Figure 8. NDCG@20 across different interaction rounds.

Definition VI.3 (Negative Review Rate). The Negative Review Rate N_neg measures the proportion of agents whose ratings for all recommended items fall below the acceptance threshold:

N_neg = |{u ∈ V | ∀m ∈ R_u, score_{u,m} ≤ 2}| / |V|, (41)

where R_u is the set of items recommended to agent u. A higher value indicates dissatisfaction or poor personalization.
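The two ranking metrics from Eqs. (37) and (38) can be sketched per agent as follows; the function names are illustrative, relevance is binary as in the definitions, and averaging over all agents is left to the caller:

```python
import math

def recall_at_k(recommended, relevant, k=20):
    """Per-agent Recall@k (Eq. 37): fraction of relevant items that
    appear in the top-k recommendation list."""
    hits = len(set(recommended[:k]) & relevant)
    return hits / len(relevant) if relevant else 0.0

def ndcg_at_k(recommended, relevant, k=20):
    """Per-agent NDCG@k (Eq. 38): binary-relevance DCG with log2
    position discount, normalized by the ideal DCG."""
    # i is 0-based, so rank i+1 gives discount 1/log2(i+2) = 1/log2(rank+1)
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(recommended[:k]) if item in relevant)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal > 0 else 0.0
```

A list that places every relevant item first scores an NDCG of 1.0, while pushing relevant items deeper into the list lowers the score without changing recall.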
Definition VI.4 (Acceptance Rate). The Acceptance Rate A_rate measures the proportion of recommended movies that were accepted (i.e., watched) by the agent population:

A_rate = (1/|V|) Σ_{u∈V} |A_u| / |R_u|, (42)

where A_u is the set of movies accepted (rated ≥ 3) by agent u, and R_u is the set of all movies recommended to u. This metric serves as a proxy for user engagement and the perceived relevance of the recommendations.

Table III
POST-RECOMMENDATION AGENT BEHAVIOR ACROSS INTERACTION ROUNDS

Model / Metric                        0 Rounds   10 Rounds   20 Rounds   30 Rounds
MF - Personality Change               0.000      0.142       0.186       0.189
MF - Satisfaction (S_sat)             3.01       3.56        3.88        3.95
MF - Negative Rate (N_neg)            0.306      0.248       0.194       0.181*
MF - Acceptance Rate (A_rate)         0.216      0.398       0.433       0.447
MultVAE - Personality Change          0.000      0.154       0.208       0.217
MultVAE - Satisfaction (S_sat)        3.01       3.72        3.94        4.01
MultVAE - Negative Rate (N_neg)       0.306      0.236       0.178       0.163
MultVAE - Acceptance Rate (A_rate)    0.216      0.421       0.462       0.474
LightGCN - Personality Change         0.000      0.161       0.223       0.234
LightGCN - Satisfaction (S_sat)       3.01       3.80        4.07        4.15*
LightGCN - Negative Rate (N_neg)      0.306      0.231       0.163       0.147
LightGCN - Acceptance Rate (A_rate)   0.216      0.438       0.479       0.505*

a) Discussion.: The results presented in Table III highlight the effectiveness of incorporating multi-round social interactions into recommendation workflows. Across all three models—MF, MultVAE, and LightGCN—agents exhibit notable improvements in both behavioral alignment and simulated satisfaction as the number of interaction rounds increases.

First, Personality Change steadily rises with interaction depth, indicating that agents' psychological representations are dynamically adapting based on
social feedback. This effect is most pronounced in LightGCN, suggesting that graph-based models are more sensitive to socially enriched profile updates and better capture latent preference drift over time.

Second, Satisfaction (S_sat) shows a clear upward trend, increasing from a flat baseline of 3.01 (i.e., neutral ratings) to 4.15 for LightGCN after 30 rounds. This illustrates that deeper social exposure not only improves recommendation relevance but also yields more positively perceived content, mimicking the organic satisfaction growth seen in real user systems.

Third, the Negative Rate (N_neg)—a proxy for user disengagement—consistently decreases across all models, demonstrating that agents increasingly find suitable content to consume. The steepest decline is again observed for LightGCN, affirming its capacity to retain engagement under socially dynamic contexts.

Finally, the Acceptance Rate (A_rate) improves significantly from 0.216 (static agents) to 0.505 for LightGCN after 30 rounds. This more-than-twofold increase in click-through behavior confirms that social iteration enables agents to internalize and act upon contextualized preferences, leading to higher interactivity and system responsiveness.

Collectively, these findings validate the GGBond framework's capability to enhance not only classical recommendation metrics but also user-centered behavioral realism through layered social interaction modeling.

VII. RELATED WORK

A. Recommendation Simulation Platforms

Simulation frameworks have become essential tools for evaluating and developing recommendation algorithms, allowing researchers to test recommender behaviors under controlled scenarios without costly real-user trials. Early platforms like RecSim [13] provided configurable environments for sequential user interactions.
Building upon this, RecSim NG [25] introduced a probabilistic programming approach with modular, differentiable components, enabling more flexible and scalable simulations. Recent advances have leveraged large language models (LLMs) to enhance the realism of user simulations. Agent4Rec [49] employs LLM-powered generative agents equipped with user profiles, memory, and action modules, simulating nuanced user behaviors and emotional responses. Similarly, RecAgent [42] integrates LLMs to model user interactions, including browsing, communication, and social media activities, providing a comprehensive simulation of user behaviors in recommender systems. KuaiSim [52] offers a comprehensive simulator supporting multi-behavior and cross-session user feedback, facilitating the evaluation of recommendation algorithms across various tasks. Additionally, SimUSER [2] introduces an agent framework that simulates human-like behavior using self-consistent personas and memory modules, enhancing the assessment of recommender systems. These simulation platforms collectively advance the field by providing more realistic and versatile environments for testing and improving recommendation algorithms.

B. LLM-Driven Agent Behavior

The integration of large language models (LLMs) into autonomous agents has significantly advanced their ability to perform complex, goal-directed behaviors. Park et al. introduced Generative Agents, which simulate human-like behaviors by equipping agents with long-term memory, planning, and reflection capabilities, resulting in emergent social interactions within a simulated environment [28]. Similarly, Voyager employs GPT-4 to autonomously navigate and master open-ended tasks in the Minecraft environment, demonstrating the adaptability of LLM-driven decision-making across diverse contexts [41]. Recent surveys have provided comprehensive overviews of LLM-based autonomous agents. Wang et al.
discuss the construction, application, and evaluation of such agents, highlighting their potential in various domains including social sciences, natural sciences,
and engineering [9]. Xi et al. further explore the rise and potential of LLM-based agents, proposing a general framework comprising brain, perception, and action components, and examining their applications in single-agent scenarios, multi-agent scenarios, and human-agent cooperation [6]. These studies collectively underscore the transformative impact of LLMs on autonomous agent behavior, enabling more sophisticated and human-like interactions across a range of applications.

C. User Behavior Modeling in Social Networks

Modeling user interactions and social influence within networks has long been a critical research area. Recent advancements leverage large language models (LLMs) to simulate nuanced information diffusion processes. LAID [12] integrates LLMs into diffusion modeling, enabling simulation of realistic message interpretation and propagation patterns within social graphs. On a larger scale, AgentSociety [30] employs thousands of LLM-driven agents to reproduce societal-level phenomena, such as opinion polarization and misinformation spread, providing insights that closely match empirical observations from real-world social platforms. Additionally, the S3 framework [8] utilizes LLM-empowered agents to simulate public opinion dynamics, offering a flexible platform for exploring various social scenarios and interventions. Furthermore, LLM-AIDSim [50] enhances traditional influence diffusion models by allowing agents to generate language-level responses, providing deeper insights into user-agent interactions. GenSim [38] introduces a general social simulation platform with LLM-based agents, supporting large-scale simulations and incorporating error-correction mechanisms to ensure more reliable and long-term simulations. These simulation platforms collectively advance the field by providing more realistic and versatile environments for testing and improving our understanding of user behavior and information diffusion in social networks.

D. Trust and Behavioral Alignment in AI Agents

Understanding and aligning AI agent behavior with human-like trust and conformity has gained significant attention. Xie et al. [44] evaluated LLM agents within classical economic trust games, discovering that advanced models like GPT-4 closely align with human trust decisions. Further, Argyle et al. [1] introduced the concept of algorithmic fidelity, demonstrating that properly conditioned LLMs can accurately emulate distinct human demographic responses, thus offering a powerful tool for modeling realistic user populations in experimental simulations. Recent studies have further explored the enhancement of trust in LLM-based AI automation agents. Schwartz et al. [34] analyzed the main aspects of trust in AI agents, identifying specific considerations and challenges relevant to this new generation of automation agents. Additionally, Yang et al. [46] proposed Behavior Alignment as a new evaluation metric to measure how well the recommendation strategies made by LLM-based conversational recommender systems are consistent with human recommenders', highlighting the importance of aligning AI behavior with human expectations to enhance user trust.

VIII. CONCLUSION

This work addresses the longstanding limitations of personalized recommender systems in modeling dynamic user behaviors and social interactions by proposing a novel high-fidelity simulation platform. Our system systematically tackles the challenges of long-term preference drift, social influence propagation, and cognitively realistic decision-making, which remain underexplored in current recommendation research. Specifically, we design a population of human-like intelligent agents endowed with multi-faceted internal cognition modules, including memory, affective state, personalized preference, and trust evaluation, to simulate granular and interpretable decision-making processes. In parallel, we construct a dynamic, multi-layer social graph (GGBond Graph) that captures heterogeneous, multi-circle social relations and their evolution over time. The entire system operates under a discrete-time simulation scheduler, coupling agent-level behavior with social network dynamics. We further integrate mainstream recommendation algorithms (e.g., Matrix Factorization, MultVAE, and LightGCN) into this framework, enabling rigorous evaluation under iterative feedback and socially embedded environments.

This platform bridges critical gaps in existing recommendation evaluation pipelines by supporting long-term interaction, cognitively plausible user modeling, and dynamic social structures. Theoretically, we introduce an IC2 motivational engine grounded in psychological and sociological principles, enhancing interpretability and realism in agent behavior. Methodologically, our extensible experimental infrastructure lays a solid foundation for future studies on social impact, fairness, and causal inference in recommender systems.

Naturally, some limitations remain. The current implementation of agent affect and memory dynamics could benefit from more fine-grained modeling, and the transferability of the simulation framework to real-world deployment scenarios still requires further investigation. Future work may focus on enhancing the complexity and fidelity of agent cognition models, exploring the long-term societal effects and ethical implications of recommendation mechanisms, and integrating the proposed framework with real systems to support socially responsible algorithm design.

REFERENCES

[1] Lisa P. Argyle, Ethan C. Busby, Nancy Fulda, Joshua Gubler, Christopher Rytting, and David Wingate. Out of one, many: Using language models to simulate human samples. Political Analysis, 31(3):337–351, 2023.
[2] Nicolas Bougie and Narimasa Watanabe.
SimUSER: Simulating user behavior with large language models for recommender system evaluation. arXiv preprint arXiv:2504.12722, 2024.
[3] Samuel Bowles and Herbert Gintis. A Cooperative Species: Human Reciprocity and Its Evolution. Princeton University Press, 2011.
[4] Shihao Cai, Jizhi Zhang, Keqin Bao, Chongming Gao, Qifan Wang, Fuli Feng, and Xiangnan He. Agentic feedback loop modeling improves recommendation and user simulation. SIGIR, 2025.
[5] Fabio Celli, Aleksandar Kartelj, Miljan Đorđević, Derwin Suhartono, Vladimir Filipović, Veljko Milutinović, Georgios Spathoulas, Alessandro Vinciarelli, Michal Kosinski, and Bruno Lepri. Twenty years of personality computing: Threats, challenges and future directions. arXiv preprint arXiv:2503.02082, 2025.
[6] Yuheng Cheng, Ceyao Zhang, Zhengwen Zhang, Xiangrui Meng, Sirui Hong, Wenhao Li, Zihao Wang, Zekai Wang, Feng Yin, Junhua Zhao, and Xiuqiang He. Exploring large language model based intelligent agents: Definitions, methods, and prospects. arXiv preprint arXiv:2401.03428, 2024.
[7] Linton C. Freeman. A set of measures of centrality based on betweenness. Sociometry, 40(1):35–41, 1977.
[8] Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, and Yong Li. S3: Social-network simulation system with large language model-empowered agents. arXiv preprint arXiv:2307.14984, 2023.
[9] Taicheng Guo, Xiuying Chen, Yaqi Wang, Ruidi Chang, Shichao Pei, Nitesh V. Chawla, Olaf Wiest, and Xiangliang Zhang. Large language model based multi-agents: A survey of progress and challenges. arXiv preprint arXiv:2402.01680, 2024.
[10] F. Maxwell Harper and Joseph A. Konstan. The MovieLens datasets: History and context. ACM Transactions on Interactive Intelligent Systems (TiiS), 5(4):1–19, 2015.
[11] Xiangnan
He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, and Meng Wang. LightGCN: Simplifying and powering graph convolution network for recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 639–648, 2020.
[12] Yuxuan Hu, Gemju Sherpa, Lan Zhang, Weihua Li, Quan Bai, Yijun Wang, and Xiaodan Wang. An LLM-enhanced agent-based simulation tool for information propagation. In Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI '24, 2024.
[13] Eugene Ie, Chih-Wei Hsu, Martin Mladenov, Vihan Jain, Sanmit Narvekar, Jing Wang, Rui Wu, and Craig Boutilier. RecSim: A configurable simulation platform for recommender systems. In Proceedings of the 13th ACM Conference on Recommender Systems, pages 228–236. ACM, 2019.
[14] Alice M. Isen. Positive affect, cognitive processes, and social behavior. In Advances in Experimental Social Psychology, volume 20, pages 203–253. Elsevier, 1987.
[15] Wei Jiang, Xinyi Gao, Guandong Xu, Tong Chen, and Hongzhi Yin. Challenging low homophily in social recommendation. In Proceedings of the ACM Web Conference 2024, pages 3476–3484, 2024.
[16] Wang-Cheng Kang and Julian McAuley. Self-attentive sequential recommendation. In 2018 IEEE International Conference on Data Mining (ICDM), pages 197–206. IEEE, 2018.
[17] Boaz Keysar, Sayuri L. Hayakawa, and Sun Gyu An. The foreign-language effect: Thinking in a foreign tongue reduces decision biases. Psychological Science, 23(6):661–668, 2012.
[18] Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8):30–37, 2009.
[19] Michal Kosinski, David Stillwell, and Thore Graepel. Private traits and attributes are predictable from digital records of human behavior. Proceedings of the National Academy of Sciences, 110(15):5802–5805, 2013.
[20] Jure Leskovec and Julian McAuley.
Learning to discover social circles in ego networks. Advances in Neural Information Processing Systems, 25, 2012.
[21] Dawen Liang, Rahul G. Krishnan, Matthew D. Hoffman, and Tony Jebara. Variational autoencoders for collaborative filtering. In Proceedings of the 2018 World Wide Web Conference, pages 689–698, 2018.
[22] Hao Ma, Dengyong Zhou, Chao Liu, and Michael R. Lyu. Learning to recommend with social trust ensemble. In Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 203–210, 2009.
[23] Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton van den Hengel. Image-based recommendations on styles and substitutes. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 43–52, 2015.
[24] Miller McPherson, Lynn Smith-Lovin, and James M. Cook. Birds of a feather: Homophily in social networks. Annual Review of Sociology, 27(1):415–444, 2001.
[25] Martin Mladenov, Craig Boutilier, and Eugene Ie. RecSim NG: Toward principled uncertainty modeling for recommender ecosystems. In Advances in Neural Information Processing Systems, volume 34, pages 14985–14997, 2021.
[26] Nigel Nicholson, Emma Soane, Mark Fenton-O'Creevy, and Paul Willman. Personality and domain-specific risk taking. Journal of Risk Research, 8(2):157–176, 2005.
[27] Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. The PageRank citation ranking: Bringing order to the web. Technical report, Stanford InfoLab, 1999.
[28] Joon Sung Park, Joseph C. O'Brien, Carrie
J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. Generative agents: Interactive simulacra of human behavior. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, pages 1–18. ACM, 2023.
[29] Fernando Pérez-Cruz. Kullback-Leibler divergence estimation of continuous distributions. In 2008 IEEE International Symposium on Information Theory, pages 1666–1670. IEEE, 2008.
[30] Jinghua Piao, Yuwei Yan, Jun Zhang, Nian Li, Junbo Yan, Xiaochong Lan, Zhihong Lu, Zhiheng Zheng, Jing Yi Wang, Di Zhou, et al. AgentSociety: Large-scale simulation of LLM-driven generative agents advances understanding of human behaviors and society. arXiv preprint arXiv:2502.08691, 2025.
[31] Yossi Rubner, Carlo Tomasi, and Leonidas J. Guibas. The earth mover's distance as a metric for image retrieval. International Journal of Computer Vision, 40:99–121, 2000.
[32] James A. Russell. A circumplex model of affect. Journal of Personality and Social Psychology, 39(6):1161, 1980.
[33] H. Andrew Schwartz, Johannes C. Eichstaedt, Margaret L. Kern, et al. Personality, gender, and age in the language of social media: The open-vocabulary approach. PLOS ONE, 8(9):e73791, 2013.
[34] Sivan Schwartz, Avi Yaeli, and Segev Shlomov. Enhancing trust in LLM-based AI automation agents: New considerations and future challenges. arXiv preprint arXiv:2308.05391, 2023.
[35] Kartik Sharma, Yeon-Chang Lee, Sivagami Nambi, Aditya Salian, Shlok Shah, Sang-Wook Kim, and Srijan Kumar. A survey of graph neural networks for social recommender systems. ACM Computing Surveys, 56(10):1–34, 2024.
[36] Paras Stefanopoulos, Ahad N. Zehmakan, and Sourin Chatterjee. A first principles approach to trust-based recommendation systems. arXiv preprint arXiv:2407.00062, 2024.
[37] Ron Sun. The CLARION cognitive architecture: Extending cognitive modeling to social simulation. Cognition and Multi-Agent Interaction, pages 79–99, 2006.
[38] Jiakai Tang, Heyang Gao, Xuchen Pan, Lei Wang, Haoran Tan, Dawei Gao, Yushuo Chen, Xu Chen, Yankai Lin, Yaliang Li, et al. GenSim: A general social simulation platform with large language model based agents. arXiv preprint arXiv:2410.04360, 2024.
[39] Jiliang Tang, Xia Hu, and Huan Liu. Social recommendation: a review. Social Network Analysis and Mining, 3(4):1113–1133, 2013.
[40] Hamed Valizadegan, Rong Jin, Ruofei Zhang, and Jianchang Mao. Learning to rank by optimizing NDCG measure. Advances in Neural Information Processing Systems, 22, 2009.
[41] Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291, 2023.
[42] Lei Wang, Jingsen Zhang, Hao Yang, Zhiyuan Chen, Jiakai Tang, Zeyu Zhang, Xu Chen, Yankai Lin, Ruihua Song, Wayne Xin Zhao, et al. User behavior simulation with large language model based agents. arXiv preprint arXiv:2306.02552, 2023.
[43] Yu Wang, Youcheng Wang, and Wen Wang. Impact of personal traits and social networks on online knowledge sharing: A case study of Facebook users. Computers in Human Behavior, 34:345–354, 2014.
[44] Chengxing Xie, Canyu Chen, Feiran Jia, Ziyu Ye, Shiyang Lai, Kai Shu, Jindong Gu, Adel Bibi, Ziniu Hu, David Jurgens, et al. Can large language model agents simulate human trust behavior? In The Thirty-eighth Annual Conference on Neural Information Processing Systems
, 2024.
[45] Feiyu Xu et al. Can large language model agents simulate human trust behavior? In Advances in Neural Information Processing Systems (NeurIPS), 2024.
[46] Dayu Yang, Fumian Chen, and Hui Fang. Behavior alignment: A new perspective of evaluating LLM-based conversational recommender systems. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2286–2290. ACM, 2024.
[47] Yonghui Yang, Le Wu, Yuxin Liao, Zhuangzhuang He, Pengyang Shao, Richang Hong, and Meng Wang. Invariance matters: Empowering social recommendation via graph invariant learning. arXiv preprint arXiv:2504.10432, 2025.
[48] Ziyi Yang, Zaibin Zhang, Zirui Zheng, Yuxian Jiang, Ziyue Gan, Zhiyu Wang, Zijian Ling, Jinsong Chen, Martz Ma, Bowen Dong, et al. OASIS: Open agents social interaction simulations on one million agents. arXiv preprint arXiv:2411.11581, 2024.
[49] An Zhang, Yuxin Chen, Leheng Sheng, Xiang Wang, and Tat-Seng Chua. On generative agents in recommendation. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1234–1243. ACM, 2023.
[50] Lan Zhang, Yuxuan Hu, Weihua Li, Quan Bai, and Parma Nand. LLM-AIDSim: LLM-enhanced agent-based influence diffusion simulation in social networks. Systems, 13(1):29, 2025.
[51] Min Zhang, Ji Liang, and Xiaofang Wu. Impact of introversion-extraversion personality traits on knowledge sharing: Evidence from virtual communities. Sustainability, 15(1):417, 2023.
[52] Kesen Zhao, Shuchang Liu, Qingpeng Cai, Xiangyu Zhao, Ziru Liu, Dong Zheng, Peng Jiang, and Kun Gai. KuaiSim: A comprehensive simulator for recommender systems. In Proceedings of the 17th ACM Conference on Recommender Systems, pages 456–466. ACM, 2023.
arXiv:2505.21156v1 [cs.SD] 27 May 2025

Model as Loss: A Self-Consistent Training Paradigm

Saisamarth Rajesh Phaye1, Milos Cernak1, Andrew Harper1
1 Audio Machine Learning, Logitech
(sphaye, mcernak, aharper)@logitech.com

Abstract

Conventional methods for speech enhancement rely on handcrafted loss functions (e.g., time or frequency domain losses) or deep feature losses (e.g., using WavLM or wav2vec), which often fail to capture subtle signal properties essential for optimal performance. To address this, we propose Model as Loss, a novel training paradigm that utilizes the encoder from the same model as a loss function to guide the training. The Model as Loss paradigm leverages the encoder's task-specific feature space, optimizing the decoder to produce output consistent with perceptual and task-relevant characteristics of the clean signal. By using the encoder's learned features as a loss function, this framework enforces self-consistency between the clean reference speech and the enhanced model output. Our approach outperforms pre-trained deep feature losses on standard speech enhancement benchmarks, offering better perceptual quality and robust generalization to both in-domain and out-of-domain datasets.

Index Terms: speech enhancement, noise reduction, deep feature loss, loss functions

1. Introduction

Speech enhancement has long been a challenging problem, with applications in telecommunication, hearing aids, and robust automatic speech recognition (ASR) [1, 2, 3, 4, 5]. A critical component in training enhancement models is the choice of loss function, which directly influences the quality and generalization of enhanced output [6, 7]. Conventional loss functions, such as time or spectrogram-domain losses [8], often do not fully capture the complex relationships between noisy and clean speech.
For instance, spectrogram loss treats all frequency bins equally, which can result in overemphasis of less perceptually relevant regions while underweighting crucial frequencies important for speech intelligibility [9]. Although there are perceptually sensitive losses such as mel-spectral losses [8] or PMSQE [9], they overly compress the signal, limiting the preservation of fine-grained details crucial for maintaining speech intelligibility, and thus achieve suboptimal performance [10]. Pre-trained deep feature losses [6], such as those derived from WavLM [11] or Wav2Vec [12], have gained popularity due to their ability to incorporate perceptual and contextual information into training objectives. Recent work by Babaev et al. [3] shows that WavLM's intermediate convolutional features have a high correlation to speech enhancement, as compared to its transformer layers. WavLM has also been shown to be superior to Wav2Vec 2.0 [13] when used as a loss function. However, these methods are often optimized for tasks such as ASR or phoneme recognition, which may not align with speech enhancement objectives. Such losses might prioritize linguistic content while ignoring residual noise components critical to the enhancement task. Moreover, these pre-trained neural networks, when used as loss functions, can suffer from limited sensitivity to noise, as the extracted features may focus on abstract representations rather than task-specific properties [14]. To address these limitations, we propose a novel training paradigm, Model as Loss (MAL), which uses the encoder of the same model as a loss function to guide the training of the decoder. It involves training
https://arxiv.org/abs/2505.21156v1
a model with conventional loss functions and then using the trained encoder's embeddings as a loss function for the next stage. This approach aligns the loss function with the downstream task, leveraging the encoder's ability to extract task-specific features while ensuring contextual and hierarchical understanding of the signal. Unlike traditional methods that rely on external pre-trained models or handcrafted losses, this paradigm exploits the encoder's specialization in processing noisy speech and its inherent ability to prioritize perceptually relevant signal components.

Our proposed method has several advantages. First, the encoder's feature space is inherently tailored to the enhancement task, capturing both global and local signal features crucial for noise suppression. Second, using the encoder as a loss function enforces a feedback loop that aligns training and inference dynamics, improving generalization to unseen noise types. Third, this approach ensures relevance across all frequency bins by leveraging the encoder's weighting of spectral components based on their contribution to the task, avoiding the pitfalls of uniformly weighted losses. Finally, the self-consistency of the model provides a robust foundation for optimizing the decoder, leading to superior performance and perceptual quality.

In this paper, we present a detailed analysis of the MAL paradigm, including its theoretical foundations and practical implementation. Through extensive experiments, we demonstrate that our approach outperforms conventional pre-trained deep feature losses and hand-crafted loss functions on standard speech enhancement benchmarks. The results highlight the potential of using the model itself as a loss function, offering a new perspective on loss design in machine learning.

2. Methodology

In the realm of speech enhancement, models typically comprise an encoder and one or more decoders [5].
For simplicity, we will refer to this setup as a single encoder-decoder system moving forward. The encoder's job is to extract relevant features from the noisy signal, which are then used by the decoder to synthesize the enhanced signal.

The usual approach for training encoder-decoder models involves minimizing a loss function L between the clean reference speech and the model output. For example, an L1 loss between the clean and enhanced signal in the spectral domain using the Short-time Fourier Transform (STFT) can be represented as:

L_{spectral} = \| STFT(y_{clean}) - STFT(y_{enhanced}) \|_1   (1)

where y_clean is the clean reference speech and y_enhanced is the model output when the noisy speech is used as input to the model. The objective is to minimize the difference between the model's output and the clean speech, typically measured on a spectrogram or multiple spectrograms of different resolutions.

Figure 1: Comparison of models (Baseline, Baseline_10epochs, Baseline_wavlm, Ours_mal-frozen-fe, Ours_mal-frozen, Ours_mal-dynamic) using iterative enhancement of a noisy spectrogram over 1, 10, and 100 enhancements. Each row represents the number of iterative enhancements applied, where the output of the previous enhancement step is used as input for the next step. The columns show the different models being compared.

Once we train a model with some loss L to convergence, we know that for any given pair (y_enhanced, y_clean), L will be minimal. However, this does not imply that any other
loss L_new will also be minimal. Our search is to find the ideal loss function L_ideal, such that, once trained to convergence, for any given mathematical function F, we get the minimum loss:

L_F = \| F(y_{clean}) - F(y_{enhanced}) \|_1   (2)

Babaev et al. [3] propose a Signal-to-Noise Ratio (SNR) Rule, which suggests that as more noise is added to speech (lowering the SNR), feature representations should move farther apart in the embedding space. Extending this idea, intuitively, once the model is trained to satisfy equation (2), y_enhanced and y_clean should be identical points in the embedding space. Assuming equation (2) holds, if we input y_clean to the trained model, it should return y_clean. Moreover, y_enhanced when used as input should again return y_enhanced, which is y_clean. This would ensure the stability of the model in its embedding space. However, this is not the case for most models. Figure 1 illustrates the impact of iteratively enhancing the noisy input multiple times with various trained models, which are described later in Section 3. All models show a clear degradation in speech quality with each iteration, suggesting a loss of fine-grained features.

Figure 2: An illustration of the Model as Loss paradigm, showcasing the L_mal-dynamic variation: the model is trained with spectral losses for N epochs; from epoch N to epoch N + M, the Model as Loss term is added to the spectral losses, with the loss-side encoder refreshed at each epoch.

Since the model being trained is itself a mathematical function, we propose using its encoder as a loss function. After training a model with traditional loss functions, the encoder inherently learns to represent the input data's structure in its feature space. This feature space encodes rich information beyond what traditional loss functions capture.
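To make the form of equations (1) and (2) concrete, here is a minimal NumPy sketch of an L1 loss computed in a transformed feature space. The `frame_stft` helper is a toy stand-in for a real STFT (the paper's actual setup uses a 960-point FFT with a Vorbis window), and the sinusoid-plus-noise signals are illustrative, not the authors' data.

```python
import numpy as np

def frame_stft(x, n_fft=256, hop=128):
    """Toy magnitude STFT: window each frame and take a real FFT."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop:i * hop + n_fft] * window for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=-1))

def l1_feature_loss(y_clean, y_enhanced, feature_fn):
    """Generic form of Eq. (2): L_F = || F(y_clean) - F(y_enhanced) ||_1."""
    return np.abs(feature_fn(y_clean) - feature_fn(y_enhanced)).mean()

rng = np.random.default_rng(0)
t = np.arange(4096) / 16000.0
clean = np.sin(2 * np.pi * 440.0 * t)               # toy "clean speech"
noisy = clean + 0.3 * rng.standard_normal(len(t))   # toy output with residual noise

# Eq. (1) is the special case where F is the STFT.
spectral_loss = l1_feature_loss(clean, noisy, frame_stft)
print(spectral_loss)                                # > 0: residual noise separates the signals
print(l1_feature_loss(clean, clean, frame_stft))    # 0.0: identical signals coincide in F-space
```

The MAL idea of equation (3) amounts to swapping `frame_stft` for the trained model's own encoder as `feature_fn`.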
Building on this, we introduce a novel training strategy: first, train the model with conventional losses, and then, using the trained encoder, add a loss term based on the encoder's latent feature space. This loss function compares the encoder's bottleneck embeddings of the enhanced output and the clean signal and can be expressed as:

L_{mal} = \| Encoder(y_{clean}) - Encoder(y_{enhanced}) \|_1   (3)

where Encoder represents the function that returns the bottleneck features, denoted as the MAL-encoder. This ensures that the decoder's output aligns with the clean signal not only at the spectral level but also in the encoder's learned feature space. With this formulation, the decoder is guided to preserve the rich feature representation learned by the encoder. As a result, the decoder learns to produce y_enhanced which, when fed back into the model, reproduces an output close to y_enhanced. This is evident in Figure 1, where the Ours_mal models, trained with the MAL paradigm, preserve more speech harmonics after 100 iterations. As shown in Figure 2, we train a model with conventional losses for N epochs. Then, for the next M epochs, we add the L_mal loss to further refine the model. Depending on how we use the MAL-encoder, there
are three possible L_mal variations:

1. L_mal-frozen-fe: Freeze the trained Encoder (FE) of the Nth epoch and use it as the MAL-encoder to train only the decoder for subsequent epochs.
2. L_mal-frozen: Use the trained encoder of the Nth epoch as the MAL-encoder for all subsequent epochs and train the full encoder-decoder model.
3. L_mal-dynamic: Use the trained encoder of the nth epoch as the MAL-encoder for the (n+1)th epoch for n ≥ N and train the full model. Hence, the MAL-encoder is updated with every epoch.

Table 1: NISQA and ScoreQ metrics are presented for both in-domain (In) and out-of-domain (Out) datasets. All proposed MAL-based models demonstrate superior performance, with Ours_mal-dynamic achieving the highest results across all NISQA metrics.

| Model / loss variants | NISQA Overall (In/Out) | Noisiness (In/Out) | Discontinuity (In/Out) | Coloration (In/Out) | Loudness (In/Out) | ScoreQ Natural (In/Out) | ScoreQ Synthetic (In/Out) |
|---|---|---|---|---|---|---|---|
| Baseline | 3.50 / 2.93 | 4.06 / 3.82 | 3.87 / 3.45 | 3.45 / 2.97 | 3.90 / 3.50 | 2.91 / 2.50 | 2.10 / 2.01 |
| Baseline_10epochs | 3.54 / 2.99 | 4.10 / 3.85 | 3.92 / 3.53 | 3.45 / 2.99 | 3.89 / 3.53 | 2.93 / 2.51 | 2.09 / 2.00 |
| Baseline_wavlm | 3.56 / 3.02 | 4.06 / 3.85 | 3.94 / 3.55 | 3.47 / 3.03 | 3.89 / 3.57 | 2.93 / 2.52 | 2.07 / 1.99 |
| Baseline_wavlm-fe | 3.52 / 2.97 | 4.07 / 3.86 | 3.88 / 3.49 | 3.45 / 2.97 | 3.91 / 3.54 | 2.94 / 2.52 | 2.11 / 2.02 |
| Ours_mal-frozen-fe | 3.65 / 3.12 | 4.14 / 3.93 | 4.01 / 3.60 | 3.57 / 3.10 | 3.98 / 3.62 | 3.00 / 2.60 | 2.11 / 2.02 |
| Ours_mal-frozen | 3.66 / 3.13 | 4.09 / 3.90 | 4.06 / 3.64 | 3.58 / 3.15 | 3.99 / 3.65 | 2.97 / 2.58 | 2.08 / 1.99 |
| Ours_mal-dynamic | 3.72 / 3.17 | 4.14 / 3.96 | 4.10 / 3.67 | 3.62 / 3.16 | 4.01 / 3.62 | 2.96 / 2.58 | 2.10 / 2.03 |
| Ours_wavlm-mal | 3.65 / 3.13 | 4.13 / 3.95 | 4.02 / 3.61 | 3.57 / 3.11 | 3.97 / 3.61 | 3.00 / 2.60 | 2.12 / 2.03 |
| DeepFilterNet2 [5] | 3.46 / 2.82 | 3.97 / 3.66 | 3.87 / 3.47 | 3.43 / 2.88 | 3.86 / 3.48 | 2.90 / 2.49 | 2.08 / 1.98 |
| Ours_mal-frozen-fe (ablation) | 3.62 / 3.01 | 4.12 / 3.87 | 3.97 / 3.51 | 3.55 / 3.04 | 3.97 / 3.59 | 2.95 / 2.56 | 2.10 / 2.00 |

Table 2: SIGMOS metrics are shown across in-domain (In) and out-of-domain (Out) datasets, while intrusive metrics are evaluated on the 2024 Urgent Challenge non-blind test set.
All proposed MAL-based models outperform the other models.

| Model / loss variants | SIGMOS Signal (In/Out) | Overall (In/Out) | Noise (In/Out) | Discontinuity (In/Out) | Coloration (In/Out) | PESQ (Out) | ESTOI (Out) | LSD (Out) | MCD (Out) |
|---|---|---|---|---|---|---|---|---|---|
| Baseline | 3.48 / 3.15 | 3.05 / 2.74 | 4.07 / 4.08 | 3.83 / 3.73 | 3.56 / 3.22 | 1.96 | 0.75 | 5.12 | 5.06 |
| Baseline_10epochs | 3.52 / 3.19 | 3.09 / 2.77 | 4.12 / 4.10 | 3.89 / 3.82 | 3.56 / 3.20 | 2.01 | 0.76 | 5.14 | 4.99 |
| Baseline_wavlm | 3.48 / 3.22 | 3.05 / 2.80 | 4.03 / 4.08 | 3.87 / 3.87 | 3.53 / 3.22 | 2.03 | 0.76 | 4.99 | 4.91 |
| Baseline_wavlm-fe | 3.49 / 3.18 | 3.06 / 2.77 | 4.04 / 4.10 | 3.86 / 3.78 | 3.55 / 3.21 | 1.99 | 0.76 | 4.99 | 5.02 |
| Ours_mal-frozen-fe | 3.57 / 3.31 | 3.14 / 2.88 | 4.10 / 4.12 | 3.92 / 3.93 | 3.60 / 3.30 | 2.03 | 0.77 | 5.02 | 4.88 |
| Ours_mal-frozen | 3.53 / 3.31 | 3.10 / 2.89 | 4.04 / 4.11 | 3.91 / 3.95 | 3.58 / 3.32 | 2.03 | 0.77 | 4.81 | 4.87 |
| Ours_mal-dynamic | 3.56 / 3.30 | 3.13 / 2.88 | 4.08 / 4.13 | 3.91 / 3.91 | 3.59 / 3.32 | 2.00 | 0.76 | 4.88 | 4.87 |
| Ours_wavlm-mal | 3.55 / 3.30 | 3.11 / 2.87 | 4.08 / 4.11 | 3.93 / 3.92 | 3.58 / 3.29 | 2.03 | 0.77 | 4.98 | 4.92 |

(Higher is better for SIGMOS, PESQ, and ESTOI; lower is better for LSD and MCD.)

Within this training paradigm, the encoder (applied to noisy input) functions solely
as a feature extractor, while the decoder becomes the primary model for synthesizing the enhanced output. The MAL-encoder, acting as a loss function, ensures that the decoder produces an output closely aligned with the features of clean audio. The combination of supervised learning (matching the clean signal) and self-supervised learning (consistency within the encoder's feature space) ensures that the decoder's outputs are both accurate and perceptually meaningful. The proposed method introduces a self-consistency feedback loop, where the decoder's output is evaluated not only against the clean signal but also through the encoder's feature space. Such a dual-objective structure reinforces meaningful learning at multiple levels of abstraction, leading to superior performance. This integration of self-supervised and supervised learning balances explicit and implicit learning objectives, making the model more robust in real-world scenarios.

3. Experimental setup

We base all our experiments on DeepFilterNet2, proposed by Schröter et al. [5]. It has an encoder that extracts relevant features and passes them into a first-stage decoder. The output of this decoder is passed into the deep filtering decoder, which predicts the deep filtering coefficients for each time frame. We train DeepFilterNet2 as the base model from the official GitHub repository [5], using a 960-point FFT, 480 hop size, Vorbis window, and 2-frame lookahead. Training is done on the English subset of the 48 kHz DNS4 dataset [15], using the same loss function Ldf as in the original article, combining multiple spectral losses. This model serves as the Baseline for our experiments. The experimental setup consists of five evaluation systems. The first three configurations include finetuning the baseline model with one of the three Lmal loss variations mentioned in Section 2.
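The dual-objective training described above (the original spectral loss Ldf plus the MAL feature-matching term Lmal, added in equal proportion) can be sketched in a few lines of NumPy. This is a simplified, hypothetical illustration, not the paper's implementation: `toy_encoder` stands in for the trained DeepFilterNet2 encoder, a plain MSE stands in for the multi-term spectral loss, and the L1 feature distance is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_encoder(x, w):
    # Stand-in for the trained (frozen) MAL-encoder; the real encoder is a
    # convolutional network, here approximated by a linear map + nonlinearity.
    return np.tanh(x @ w)

def mal_loss(enhanced, clean, w):
    # L_mal: distance between MAL-encoder features of the enhanced output
    # and of the clean reference (L1 feature distance is an assumption).
    return float(np.mean(np.abs(toy_encoder(enhanced, w) - toy_encoder(clean, w))))

def total_loss(enhanced, clean, w):
    # The paper adds L_mal in equal proportion to the DeepFilterNet2 loss L_df;
    # a plain MSE stands in for the spectral loss here.
    l_df = float(np.mean((enhanced - clean) ** 2))
    return l_df + mal_loss(enhanced, clean, w)

w = rng.standard_normal((64, 16)) * 0.1                  # frozen encoder weights
clean = rng.standard_normal((8, 64))                     # toy clean signals
enhanced = clean + 0.05 * rng.standard_normal((8, 64))   # imperfect decoder output
loss = total_loss(enhanced, clean, w)
```

A perfect decoder output (identical to the clean signal) drives both terms to zero, while any deviation is penalized in the signal domain and in the encoder's feature space simultaneously.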
In each model, the Lmal loss is added in equal proportion to the original DeepFilterNet2 loss function:

i) Ours_mal-frozen-fe, trained with Ldf + Lmal-frozen-fe
ii) Ours_mal-frozen, trained with Ldf + Lmal-frozen
iii) Ours_mal-dynamic, trained with Ldf + Lmal-dynamic

In the next two setups, we introduce a loss function, Lwavlm, which uses the final Conv-layer output from the pre-trained WavLM-Base-Plus (WavLM) [11], resembling equation (3). We again add this loss function in equal proportion to the original loss function, keeping the encoder either frozen or trainable:

iv) Baseline_wavlm, trained with Ldf + Lwavlm
v) Baseline_wavlm-fe, trained with Ldf + Lwavlm-fe

As an ablation experiment to evaluate whether Lwavlm-fe helps, we combine it with Lmal-frozen-fe and Ldf, adding all three in equal proportions, and denote this model as Ours_wavlm-mal. In all experiments, the Baseline model is finetuned for ten epochs and the best epoch is chosen. For consistency, we also finetune the baseline model for ten epochs without introducing any new losses. This is Baseline_10epochs.

3.1. Evaluation data

The evaluation was carried out on a total of 7404 samples drawn from multiple test sets. We divide the test sets into two domains:

• In-domain test set (3504 samples): Since the models are trained on DNS V4 training data, we aggregate the test samples from DNS Challenge V2 [2], V3 [15], and V5 [4], covering diverse acoustic scenarios such as mouse clicks, headset noise, speakerphone noise, and emotional speech.
• Out-of-domain test set (3900 samples): We aggregated fully unseen test sets from the 2024 and 2025 Urgent Challenges, combining the two non-blind and two blind sets [16].

This test setup enabled robust performance comparisons across diverse acoustic conditions.

3.2. Evaluation metrics

Models are evaluated using SIGMOS, NISQA v2.0, and ScoreQ (no-reference natural and synthetic MOS) metrics [17, 18, 19]. SIGMOS scores were computed with all enhanced samples normalized to a peak level of -10 dBFS to account for level dependency. For NISQA, enhanced samples were normalized to an active speech level of -26 dBFS. Intrusive metrics such as PESQ, ESTOI, log-spectral distance (LSD), and Mel-cepstral distance (MCD) were also calculated. We perform an ANOVA analysis and report only the metrics with statistical significance. For intrusive metrics, we use only the 2024 Urgent Challenge non-blind test set with its open-source evaluation pipeline [16].

4. Results

Tables 1 and 2 present all metrics, clearly demonstrating that the proposed models with Lmal losses outperform the others. Ours_mal-dynamic achieves the best performance across all NISQA metrics, while Ours_mal-frozen leads in all intrusive metrics. The Ours_mal-frozen-fe model outperforms the others in the SIGMOS Signal and Overall metrics, while performing comparably on the remaining SIGMOS metrics. Given that Ours_mal-frozen-fe is on par with Ours_wavlm-mal, using Lwavlm-fe with Lmal-frozen-fe offers no advantage over Lmal-frozen-fe alone. Notably, WavLM, with 95.1M parameters, is trained on 94,000 hours of speech data [11], while DeepFilterNet2, with only 2.31M parameters, is trained on 1,100 hours of speech data [5]. However, all models trained with Lmal outperform both models trained with the Lwavlm loss variants. Typically, finetuning leads to overfitting on in-domain data, resulting in degraded performance on out-of-domain samples [20].
This is particularly a risk when the MAL-encoder is optimized specifically for in-domain data. However, the results demonstrate that the Ours_mal models not only avoid overfitting but also significantly outperform all other models.

4.1. Ablation Experiments

A key factor is the quality of the MAL-encoder for Lmal-frozen or Lmal-frozen-fe, as the effectiveness of the loss function depends on how well the encoder extracts features for enhancement. To answer this, we used the publicly available pre-trained DeepFilterNet2 [5], which performs slightly worse than our trained Baseline. We finetuned the pre-trained model with Lmal-frozen-fe as previously described, resulting in Ours^ablation_mal-frozen-fe. Although Lmal-frozen-fe significantly improved performance, it still performed worse than when applied to the superior Baseline model (see Table 1). The better the encoder, the more effective Lmal-frozen-fe becomes. We also trained with Lmal-dynamic per batch instead of per epoch, resulting in slightly worse metrics (e.g., LSD 5.03 vs. 4.88), likely due to loss instability from frequent updates.

[Figure 3: NISQA MOS vs. number of enhancement iterations]

Self-consistency Experiment: We take the first 200 samples of the 2025 Urgent Challenge non-blind test set and iteratively enhance them 150 times with every model. Figure 3 shows how the average NISQA MOS initially improves as noise is removed but later
declines as speech quality degrades. Models trained with Lmal-frozen-fe or Lmal-dynamic better preserve speech, aligning with the self-consistency criterion, which Lmal-frozen lacks. In the 1st iteration, all MAL models achieve a high MOS, then reach a higher peak before converging to a higher MOS. This aligns with Figure 1, where MAL models preserve more speech harmonics than non-MAL models. Notably, the Baseline model preserves slightly more speech than Baseline_10epochs or Baseline_wavlm, despite having lower metrics (see Tables 1 and 2). Although the exact reason is unclear, we suspect that the latter two models are more aggressive than the Baseline in removing noise.

5. Conclusion

In this paper, we propose Model as Loss (MAL), a novel training paradigm that leverages the encoder of an encoder-decoder model as a loss function to guide optimization. By aligning the loss with the model's task-specific feature space, MAL overcomes the limitations of traditional handcrafted and pre-trained deep feature losses. This approach offers key advantages such as task-specific feature extraction, self-consistency, and enhanced contextual understanding of input signals. Additionally, MAL reduces dependency on deep-feature losses derived from pre-trained models while achieving comparable or superior performance. This is particularly essential in domains such as medical imaging [21] or specialized signal analysis [22], where pre-trained models are scarce. Experiments demonstrate that MAL improves both perceptual quality and task-specific performance in speech enhancement. Although our evaluation focused on speech enhancement, MAL's domain-agnostic design makes it applicable to tasks such as acoustic echo cancellation [23, 24], image denoising or super-resolution [25], and medical image analysis [26]. Ongoing research explores its broader potential.
We hope that this work inspires further innovation in loss functions and training methodologies, advancing machine learning applications.

6. Acknowledgements

The authors thank Paul Kendrik, Tijana Stojkovic, and Andy Pearce for their valuable feedback and insights. We also thank Sai Dhawal Phaye for discussions during the early stages of MAL, and Kanav Sabharwal for his feedback on the writing.

7. References

[1] J. Benesty, S. Makino, and J. Chen, Speech Enhancement. Springer Science & Business Media, 2006.
[2] C. K. Reddy, V. Gopal, R. Cutler, E. Beyrami, R. Cheng, H. Dubey, S. Matusevych, R. Aichner, A. Aazami, S. Braun et al., "The Interspeech 2020 deep noise suppression challenge: Datasets, subjective testing framework, and challenge results," arXiv preprint arXiv:2005.13981, 2020.
[3] N. Babaev, K. Tamogashev, A. Saginbaev, I. Shchekotov, H. Bae, H. Sung, W. Lee, H.-Y. Cho, and P. Andreev, "FINALLY: Fast and universal speech enhancement with studio-like quality," in The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. [Online]. Available: https://openreview.net/forum?id=18RdkSv9h9
[4] H. Dubey, A. Aazami, V. Gopal, B. Naderi, S. Braun, R. Cutler, A. Ju, M. Zohourian, M. Tang, H. Gamper, M. Golestaneh, and R. Aichner, "ICASSP 2023 deep noise suppression challenge," 2023. [Online]. Available: https://arxiv.org/abs/2303.11510
[5] H. Schröter, A. N. Escalante-B., T. Rosenkranz, and A. Maier, "DeepFilterNet2: Towards real-time speech enhancement on embedded devices for full-band audio," in 17th International Workshop on Acoustic
Signal Enhancement (IWAENC 2022), 2022. [Online]. Available: https://github.com/Rikorose/DeepFilterNet
[6] F. G. Germain, Q. Chen, and V. Koltun, "Speech denoising with deep feature losses," Proc. Interspeech 2019, pp. 2723-2727, 2018.
[7] S. Braun and I. Tashev, "A consolidated view of loss functions for supervised deep learning-based speech enhancement," in 2021 44th International Conference on Telecommunications and Signal Processing (TSP). IEEE, 2021, pp. 72-76.
[8] C. J. Steinmetz and J. D. Reiss, "auraloss: Audio focused loss functions in PyTorch," in Digital Music Research Network One-day Workshop (DMRN+15), 2020.
[9] J. Martín-Doñas, A. Gomez, J. Gonzalez Lopez, and A. Peinado, "A deep learning loss function based on the perceptual evaluation of the speech quality," IEEE Signal Processing Letters, vol. PP, pp. 1-1, 09 2018.
[10] M. Kolbæk, Z.-H. Tan, S. H. Jensen, and J. Jensen, "On loss functions for supervised monaural time-domain speech enhancement," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 28, pp. 825-838, 2020.
[11] S. Chen, C. Wang, Z. Chen, Y. Wu, S. Liu, Z. Chen, J. Li, N. Kanda, T. Yoshioka, X. Xiao, J. Wu, L. Zhou, S. Ren, Y. Qian, Y. Qian, J. Wu, M. Zeng, and F. Wei, "WavLM: Large-scale self-supervised pre-training for full stack speech processing," CoRR, vol. abs/2110.13900, 2021. [Online]. Available: https://arxiv.org/abs/2110.13900
[12] S. Schneider, A. Baevski, R. Collobert, and M. Auli, "wav2vec: Unsupervised pre-training for speech recognition," arXiv preprint arXiv:1904.05862, 2019.
[13] A. Baevski, H. Zhou, A. Mohamed, and M. Auli, "wav2vec 2.0: A framework for self-supervised learning of speech representations," CoRR, vol. abs/2006.11477, 2020. [Online]. Available: https://arxiv.org/abs/2006.11477
[14] S. Maiti, Y. Peng, T. Saeki, and S. Watanabe, "SpeechLMScore: Evaluating speech generation using speech language model," 2022. [Online]. Available: https://arxiv.org/abs/2212.04559
[15] H.
Dubey, V. Gopal, R. Cutler, A. Aazami, S. Matusevych, S. Braun, S. E. Eskimez, M. Thakker, T. Yoshioka, H. Gamper, and R. Aichner, "ICASSP 2022 deep noise suppression challenge," 2022. [Online]. Available: https://arxiv.org/abs/2202.13288
[16] W. Zhang, R. Scheibler, K. Saijo, S. Cornell, C. Li, Z. Ni, A. Kumar, J. Pirklbauer, M. Sach, S. Watanabe et al., "URGENT challenge: Universality, robustness, and generalizability for speech enhancement," arXiv preprint arXiv:2406.04660, 2024.
[17] N. C. Ristea, A. Saabas, R. Cutler, B. Naderi, S. Braun, and S. Branets, "ICASSP 2024 speech signal improvement challenge," 2024. [Online]. Available: https://arxiv.org/abs/2401.14444
[18] G. Mittag, B. Naderi, A. Chehadi, and S. Möller, "NISQA: A deep CNN-self-attention model for multidimensional speech quality prediction with crowdsourced datasets," Aug. 2021. [Online]. Available: http://dx.doi.org/10.21437/Interspeech.2021-299
[19] A. Ragano, J. Skoglund, and A. Hines, "SCOREQ: Speech quality assessment with contrastive regression," arXiv preprint arXiv:2410.06675, 2024.
[20] Y. Li, Y. Sun, K. Horoshenkov, and S. M. Naqvi, "Domain adaptation and autoencoder-based unsupervised speech enhancement," IEEE Transactions on Artificial Intelligence, vol. 3, no. 1, pp. 43-52, Feb. 2022. [Online]. Available: http://dx.doi.org/10.1109/TAI.2021.3119927
[21] L. Gondara, "Medical image denoising using convolutional denoising autoencoders," in 2016 IEEE 16th International Conference on Data Mining Workshops (ICDMW). IEEE, 2016, pp. 241-246.
[22] K.
Sabharwal, S. Ramesh, J. Wang, D. M. Divakaran, and M. C. Chan, "Enhancing LoRa reception with generative models: Channel-aware denoising of LoRa PHY signals," in Proceedings of the 22nd ACM Conference on Embedded Networked Sensor Systems, 2024, pp. 507-520.
[23] H. Zhang and D. Wang, "Neural cascade architecture for multi-channel acoustic echo suppression," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 30, pp. 2326-2336, 2022.
[24] S. Braun and M. L. Valero, "Task splitting for DNN-based acoustic echo and noise removal," in 2022 International Workshop on Acoustic Signal Enhancement (IWAENC). IEEE, 2022, pp. 1-5.
[25] C. Tian, L. Fei, W. Zheng, Y. Xu, W. Zuo, and C.-W. Lin, "Deep learning on image denoising: An overview," Neural Networks, vol. 131, pp. 251-275, 2020.
[26] Y. Li, B. Sixou, and F. Peyrin, "A review of the deep learning methods for medical images super resolution problems," IRBM, vol. 42, no. 2, pp. 120-133, 2021.
arXiv:2505.21160v1 [cs.LG] 27 May 2025

STEB: In Search of the Best Evaluation Approach for Synthetic Time Series

Michael Stenger, University of Wuerzburg, 97074 Wuerzburg, Germany, michael.stenger@uni-wuerzburg.de
Robert Leppich, University of Wuerzburg, 97074 Wuerzburg, Germany, robert.leppich@uni-wuerzburg.de
André Bauer, Illinois Institute of Technology, Chicago, US, abauer7@iit.edu
Samuel Kounev, University of Wuerzburg, 97074 Wuerzburg, Germany, samuel.kounev@uni-wuerzburg.de

Abstract

The growing need for synthetic time series, due to data augmentation or privacy regulations, has led to numerous generative models, frameworks, and evaluation measures alike. Objectively comparing these measures on a large scale remains an open challenge. We propose the Synthetic Time series Evaluation Benchmark (STEB), the first benchmark framework that enables comprehensive and interpretable automated comparisons of synthetic time series evaluation measures. Using 10 diverse datasets, randomness injection, and 13 configurable data transformations, STEB computes indicators for measure reliability and score consistency. It tracks running time and test errors, and features sequential and parallel modes of operation. In our experiments, we determine a ranking of 41 measures from the literature and confirm that the choice of upstream time series embedding heavily impacts the final score.

1 Introduction

Time series (TS) data is central to many domains and applications such as forecasting medical data [22] or human activity recognition [17]. Yet too often there is a lack of availability, quantity, and quality of data, for instance, due to privacy concerns. Data synthesis can be a solution, but it remains challenging in practice [14, 8]. A key step towards high-quality synthetic data is a comprehensive, reliable evaluation strategy. The fundamental problem is the lack of ground truth data, similar to unsupervised clustering.
To handle this complex, indirect assessment, the common approach is to use different measures to quantify various quality aspects [60]. However, with dozens of measures having been proposed in recent years, the selection of the "ideal" set of measures is particularly challenging [52]. For instance, seminal works [60, 59, 49] and recently proposed frameworks [45, 41] use different combinations with little overlap. Furthermore, the aspects of synthetic data quality considered typically differ [51]. As a consequence, comparability of generative performance is hindered, and the state of the art remains unknown. Previous works studied small groups of measures in a tailored analysis or developed frameworks to generate and evaluate synthetic data. However, to the best of our knowledge, there is no detailed, objective, and broad study on the effectiveness of these measures applied to TS. Critically, this raises serious concerns regarding the reliability of past evaluation results for TS generative models. Hence, we propose the Synthetic Time series Evaluation Benchmark (STEB), the first benchmark framework to conduct large-scale and multi-faceted analysis of synthetic TS evaluation measures. With STEB, we aim to narrow down the clutter of measures to a standardized set, drastically increasing the comparability of generative performance.

Preprint. Under review.

Scope. We focus on uni-/multivariate, real-valued TS. An evaluation measure is a function m: P(X) × P(X) → R, where P is the super-set operator and X the data space, e.g., time series. For a real dataset Dr ⊂ X and a synthetic dataset Ds ⊂ X, we call s = m(Dr, Ds) its score. In this context, "real" data is the initial
data to learn. Note that m assesses the generated data, not generators. Hence, we do not consider generator-dependent measures such as duality gap [50]. Similarly, we limit this study to quantitative measures with clear s and exclude, for instance, visualizations such as the popular t-SNE plot [60].

Contributions. With this work, we contribute to synthetic data research in three ways:

1. We design and implement STEB, a novel benchmark framework for comprehensive, interpretable, and automated analysis of quantitative synthetic TS measures.
2. We analyze 41 measures with respect to reliability, consistency, and running time. More specifically, we rank the measures in four aspects of quality and in running time.
3. We investigate the impact of the upstream TS embedding on the final score.

This paper is structured as follows. Section 2 puts our work in the context of related work. Section 3 introduces STEB and Section 4 the experiments on TS synthesis measures. Section 5 presents and discusses the results. Section 6 concludes the paper.

2 Related Work

With the increasing interest in synthetic data generation, different studies on evaluation measures were conducted. One noteworthy work examines three of the then most commonly used evaluation measures for image generation [53]. Their combined theoretical and empirical approach is tailored towards each measure. Key findings are that the behavior of different measures is often largely independent of each other and that synthetic data utility is application specific. Lucic et al. [34] presented an empirical study on GAN models and evaluation measures for image synthesis. Their focus is on the Fréchet inception distance (FID), analyzing bias, variance, robustness to mode dropping, different image embeddings, and the FID value range. In terms of experimental design, the closest work we know of is by Huang et al. [19]. They manipulate image data in different ways to study the behavior and efficiency of five evaluation measures w.r.t.
overfitting, mode collapse/dropping, discriminability, and robustness. Recently, Ismail-Fawaz et al. [20] conducted a comparison of eight measures for the evaluation of human motion generation and proposed a ninth. They analyzed each measure qualitatively, followed by a quantitative comparison using a conditional generator on one real dataset to controllably create human motions. STEB differs from the above works in many ways: (i) While these works are all (very) focused in their analysis, we compare a wide range of measures of diverse designs and purposes; (ii) STEB is designed for TS, whereas previous works target images and human motion; (iii) STEB is model agnostic with regard to the data generator and evaluation measure, unlike Lucic et al. [34] and Huang et al. [19], who focus on non-conditional GANs, or Theis et al. [53], who employ analysis specific to the measures; (iv) We utilize different hand-crafted data manipulations, while Ismail-Fawaz et al. [20] use a neural model to create the test synthetic data; (v) Most of the related work does not capture recent developments. STEB incorporates established and recent measures and can be extended to support further analysis techniques and future measures, continuously tracking the state of the art; (vi) STEB is more
fine-grained in its analysis, as it differentiates four aspects of synthesis quality, and it is more comprehensive in terms of experimental parameters. There are three related synthetic data benchmarks. Synthcity is a framework for benchmarking tabular, image, and TS data generators [45]. It incorporates multiple generators, evaluation measures, and datasets in an automated test pipeline. Similarly, TSGBench [2] and Time Series Generative Modeling (TSGM) [41] are frameworks for time series synthesis. While TSGBench specializes in benchmarking generators, TSGM is presented as a more general solution including application cases. These benchmarks mainly differ in the choice of integrated measures, generators, and datasets. They evaluate the respective generative models and accept the implemented measures as given. STEB, however, assesses the measures themselves in order to determine which to best include in tools such as Synthcity, TSGBench, or TSGM. The diversity of evaluation suites highlights the need for a systematic analysis tool for measures.

3 STEB: Synthetic Time Series Evaluation Benchmark

In this section, we introduce the approach and present the design of STEB, the first benchmark framework for analyzing evaluation measures for synthetic TS.

3.1 Controlled Distribution Modulation

The evaluation of synthetic data Ds is commonly centered around given data generators with unknown performance on a set of real data Dr. The absence of ground truth complicates analysis and comparison, leading to the development of complex evaluation measures. To this end, we change the perspective and select benchmark scenarios with an expected outcome to evaluate the performance of the measures themselves. Inspired by Lucic et al. [34], we construct such scenarios by replacing the "regular" TS generator G with a pseudo-generation method: Transformation T.
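Before the formal definition, the idea can be illustrated with a toy example. In this hypothetical sketch (not STEB's actual code), shifting the data by κ plays the role of the pseudo-generator, and an inverse-MAE-style measure is checked for monotonically decreasing scores along the modulation path:

```python
import numpy as np

def transform_shift(D, kappa):
    # Toy pseudo-generator: shift every value in the dataset by kappa.
    # (STEB's real transformations are the 13 listed in Section 3.2.)
    return D + kappa

def inv_mae(Dr, Ds):
    # Inverse-MAE-style measure (higher is better); the 1e-3 offset
    # avoids division by zero for identical datasets.
    return 1.0 / (1e-3 + float(np.mean(np.abs(Dr - Ds))))

rng = np.random.default_rng(42)
Dr = rng.standard_normal((100, 24, 2))  # n=100 series, length l=24, d=2 channels

# Modulation path kappa_0 < kappa_1 < kappa_2: a reliable measure
# should yield strictly decreasing scores s0 > s1 > s2.
scores = [inv_mae(Dr, transform_shift(Dr, k)) for k in (0.0, 0.5, 1.0)]
is_reliable = scores[0] > scores[1] > scores[2]
```

Repeating such checks across many transformations, datasets, and seeds, and counting how often the expected ordering holds, gives the kind of reliability indicator described in this section.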
Formally, T: R^{n×l×d} × [0,1] → R^{n×l×d} is a function taking a dataset of n real-valued TS of length l and dimension d, along with a scale factor κ specifying the transformation intensity. Its output is a "transformed" dataset D_T^κ. However, finding T such that the score s = m(Dr, D_T^κ) can be assessed directly and absolutely is challenging for the non-trivial case κ > 0. D_T^κ must be complex enough to be a realistic test case, yet the expected s must still be determinable for non-trivial input data. Additionally, different measures produce scores in varying ranges and optimization directions. To address this issue, we combine transformations with a second concept, the modulation of T via intensity κ. Intuitively, we modify Dr more and more, creating a series of ever more different synthetic data, and assess the score of each step relative to the others. More formally: Dr is a sample drawn from an underlying distribution P in R^{l×d}, and synthesizing Ds equates to generating new samples from P. A 1D simplification of P and the modulation process is depicted in Figure 1. κ allows us to create a sample D_T^κ of a shifted and distorted distribution P_T in the data space, ranging from P itself to a completely different distribution. Assuming m measures some aspect of similarity of the underlying distributions of two datasets, we expect s to get worse with increasing κ. This is the expected outcome we can test empirically. More specifically, we test if s0 > s1 > s2 > ... (assumption: higher is better) for s_i = m(Dr, D_T^{κ_i}) along the "modulation path"
κ0 < κ1 < κ2 < .... Using this condition, we compute a reliability indicator for m under different test cases, varying T, Dr, and the random seed. The average value across all tests serves as an approximation for the measure's reliability.

Example. Let

m_iMAE: D, D' ↦ ( 10^{-3} + (1/(nld)) Σ_{i,j,k} |D_{i,j,k} − D'_{i,j,k}| )^{-1}   (1)

be the inverse mean absolute error on ordered D, D'. Further, let T: D, κ ↦ {x + κ | x ∈ D} be a transformation increasing every scalar in every TS in D by the scale factor. We compute D_T^0 = T(D, 0), D_T^{0.5} = T(D, 0.5), D_T^1 = T(D, 1) and s0 = m_iMAE(D, D_T^0), s1 = m_iMAE(D, D_T^{0.5}), s2 = m_iMAE(D, D_T^1). As m_iMAE measures the inverse average distance between the scalars of two datasets and T increases these values with increasing κ, we find that s0 > s1 > s2. Hence, m_iMAE behaves as expected and we would assign a high reliability indicator. Note that the regular MAE would satisfy none of these inequalities, resulting in bad performance.

3.2 Transformations

In the following, we list all 13 transformations implemented in STEB and used in our experiments, chosen based on the diversity of data changes they force the measures to detect, their ability to allow gradual transformation with κ, a sensible running time, and the interpretability of induced changes.

Gaussian noise adds a matrix of random values sampled from a Gaussian distribution with 0 mean and κ/2 variance to the TS in Dr. To standardize the amount of noise for each dataset, we scale Dr to [0,1] before applying the noise and rescale it afterwards.

Label corruption corrupts the labeling of classified datasets by randomly swapping the labels of κ/10 of the instances. Only applicable to measures sensitive to labels.

Misalignment rotates the different channels of each TS in Dr randomly by p positions with probability κ, 1 ≤ p ≤ κ(l−1), and narrows the gap between the formerly first and last values. Only applicable to multivariate TS.

Figure 1: Depiction of the modulation concept.
By modulating parameter κ, we can influence the degree to which a transformation T impacts dataset Dr (resp. its underlying distribution P) to create the pseudo-synthetic dataset D_T with distribution P_T. For κ = 0, we get P_T^0 = P (black); for κ = 0.3, it might be P_T^1 (blue, dashed); and for κ = 0.9, it is P_T^2 (red, dotted).

Mode collapse simulates a mode collapse by sampling each class of Dr down by κ, replacing the dropped instances in each mode with noisy duplicates of the remaining time series. Only applicable to labeled data.

Mode dropping simulates mode dropping by replacing all TS of κ classes in Dr with TS from the remaining classes, proportional to their size. Only applicable to labeled data.

Moving average transforms Dr by applying a moving average to each channel of each TS. The filter used for averaging is a·l·κ + 1 wide, where a = 1/3 if l ≥ 30, else a = 1, and centered on the modified value.

Rare event drop probes the sensitivity to rare events, in this case, the smallest class in Dr. D_T is created by swapping out κ of its instances with ones of other classes taken from another substitute dataset Drs. This set provides additional real TS from the same distribution, exclusively accessed by transformations. Only applicable to labeled data.

Reverse substitution probes the sensitivity
to leaking real TS into the synthetic set. It starts with Drs and gradually adds up to ten Dr instances to D_T with increasing κ.

Salt & pepper adds noise to the data by replacing random values in Dr by 0 and 1, each with probability κ/2.

Segment leaking builds the output D_T by using Drs as a basis and replacing 30κ random segments from TS in Drs with segments from Dr. A segment is one channel of a TS subsequence and between l/4 and l/2 long.

STL decomposition transforms Dr by decomposing every channel c of every TS into season s, trend t, and residual r using LOESS [10], followed by its reconstruction via the linear combination c_new = (κu1 + 1)s + (κu2 + 1)t + (κu3 + 1)r with u1, u2, u3 ∼ U[−1,1].

Substitution replaces a fraction κ of the TS in Dr with instances from Drs at random.

Wavelet transform decomposes each TS in Dr with the discrete wavelet transform [27], rescales its scale component by κ, and inverts the transformation again. This gradually increases/removes the temporal structure of each channel while leaving the residuals intact.

Note that none of the presented transformations is expected to mimic synthetic time series produced by typical generative models, which are practically intractable. Instead, they are intended to check individual, tractable aspects of relevant measure behavior. Collectively, the transformations cover a wide and diverse range of measure behavior across hundreds of tests. This is comparable to classification or forecasting, where evaluation with fixed settings and dataset selection serves as an approximation of general model performance.

3.3 Benchmark Design

The purpose of our benchmark is to enable the comprehensive, interpretable, and automated comparison of TS evaluation measures. Its main design goals are extensible components, fast and resource-efficient execution, and flexibility in operation.
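Returning briefly to the transformations of Section 3.2: as a concrete illustration of how such a transformation can be implemented, the Gaussian-noise transformation with its [0,1] scaling step might be sketched as follows. This is a simplified, hypothetical sketch, not STEB's actual code; the κ/2 variance follows the description above.

```python
import numpy as np

def gaussian_noise_transform(Dr, kappa, rng):
    # Scale the dataset to [0, 1], add zero-mean Gaussian noise whose
    # variance grows with kappa (kappa/2, per the text), then rescale
    # back to the original range.
    lo, hi = Dr.min(), Dr.max()
    scaled = (Dr - lo) / (hi - lo)
    noisy = scaled + rng.normal(0.0, np.sqrt(kappa / 2.0), size=Dr.shape)
    return noisy * (hi - lo) + lo

rng = np.random.default_rng(7)
Dr = rng.standard_normal((50, 30, 1))  # 50 univariate series of length 30
D_identity = gaussian_noise_transform(Dr, 0.0, rng)  # kappa = 0: data unchanged
D_noisy = gaussian_noise_transform(Dr, 0.8, rng)     # kappa = 0.8: heavily distorted
```

Because the noise is injected in the normalized [0,1] space, the same κ induces a comparable relative distortion on datasets with very different value ranges, which is the standardization motivation stated in Section 3.2.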
To achieve this, STEB is based on the distribution modulation and transformation concepts, while being centered around the measures under test and being supported by several auxiliary components. The high-level architecture and data-/information-flow is depicted in Figure 2. Each component is described in the next section.

Figure 2: Architectural design of STEB. Input (top left) and output (bottom) are highlighted in orange, STEB components in blue. Datasets are referenced as D and the flow of data is indicated by arrows with filled heads. The open-headed arrows mark information flow such as scores, rankings, measurements, and error messages. Dashed arrows denote conditional data flow, where D_held-out depends on the measure and Drs on the transformation.

STEB works with two organizational units, experiments and tests. An experiment represents one run of the benchmark; it is initiated by the user and specified through a configuration (e.g., see Listing 1). A test is one pass through STEB from the Transformation component to Storage; it is characterized by a set of parameters including the input dataset, the measure to test, and the transformation to apply. An experiment includes multiple, often thousands of tests. It starts with an initialization via the Configuration & Management component, followed by the Preprocessing of all required real data. Afterwards,
https://arxiv.org/abs/2505.21160v1
the included tests are gradually processed, starting with the transformation to create D_T. If required by the measure, the test datasets are scaled to [0, 1] or embedded. Finally, the data flow arrives at the Measure component, where a score for D_train, D_T, and D_held-out is computed. The transformation, scaling/embedding, and measure steps are repeated for each κ. Upon completion, the different scores are collected and stored before a new test is selected by the Management component. If a test incurs an exception, it is recorded as a failure, its reason is logged, and the experiment resumes with the option to repeat the test at a later time. The Evaluation component is called once all tests are processed. Measures, embedders, transformations, and datasets are integrated via common interfaces, making these components extensible.

3.4 Components

Core STEB components are described below; supporting components for storage, caching, recovery, and logging are detailed in Appendix C.

Data Preprocessing. STEB implements a diverse group of ten datasets from different domains. A detailed list with description, source, and characteristics can be found in Appendix B.1. We implemented an automated preprocessing pipeline to prepare each dataset in the specific format required by each measure and transformation. It retrieves the specified data from its online source, removes outliers, interpolates missing values, equalizes TS lengths, extracts class labels, and calculates dataset statistics. Depending on the input requirements of transformation and measure, the real data is split into up to three (usually two) equally sized subsets D_train, D_rs, and D_held-out. D_train represents the data available to a potential generator, D_rs is the substitute data, and D_held-out simulates generator test data. The latter two are optional.

Scaling & Embedding. Many measures require further preprocessing of the input data.
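The split of the real data can be pictured as follows. This is a hypothetical sketch under our own naming; the actual pipeline additionally handles two-way splits and the preprocessing steps listed above.

```python
import numpy as np

def split_real_data(d_real, n_subsets=3, seed=0):
    """Partition the real dataset into equally sized subsets, e.g.
    (D_train, D_rs, D_held_out). Illustrative sketch, not STEB's code."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(d_real))
    # Truncate so all subsets have exactly the same size.
    size = len(d_real) // n_subsets
    return [d_real[idx[i * size:(i + 1) * size]] for i in range(n_subsets)]
```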
One group works well only on scaled values; another (overlapping) group operates on real-valued vectors and is not directly applicable to TS data (especially if it is multivariate). The scaling to the range [0, 1] is straightforward. To provide vector representations, this component has three embedders implemented. By default, the TS2Vec [61] representation model is employed. Alternatively, it features a non-DL-based embedder called Catch22 [33] and the trivial concatenation of feature channels, referred to as Concat, e_concat: R^{l×d} → R^{l·d}, as a baseline model.

Measure. STEB includes 44 measures collected over the past years from different backgrounds with diverse evaluation goals and varying levels of complexity. Two are qualitative, visualization-based measures, while the others are quantitative. They usually produce a score s ∈ R≥0, but there are outliers such as C2ST [31] with s ∈ {true, false}. Moreover, the measures differ in terms of input data. Three measures only consider synthetic data, here provided as D_T, while most (33) also consider the real data provided to the generator, D_train. The remaining eight require a third, real dataset unknown to the generator, D_held-out. Due to space limitations, we provide an alphabetical list of the 44 measures with descriptions and sources in Appendix A. For implementation details, we refer to the STEB code and documentation.

Evaluation. Once all tests in an experiment are concluded, whether successful or not, the evaluation can start. To this
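The two simplest preparation steps, min-max scaling and the Concat baseline, amount to the following. A minimal sketch under the l × d shape convention used above; function names are ours.

```python
import numpy as np

def minmax_scale(x):
    """Scale all values of a dataset to the range [0, 1]."""
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo) if hi > lo else np.zeros_like(x)

def concat_embed(ts):
    """Concat baseline e_concat: R^{l x d} -> R^{l*d}, i.e. a plain
    flattening of the feature channels into one vector."""
    return np.asarray(ts).reshape(-1)
```

Concat discards no information but ignores the temporal structure entirely, which is why it serves only as a baseline against the learned TS2Vec and feature-based Catch22 representations.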
end, the scores produced for the tested measures and the recorded running times (if available) are analyzed by this component based on three criteria: (i) the reliability of a measure to truthfully and accurately reflect the quality of a given synthetic dataset in its score; (ii) the consistency of the measure's scores with respect to changing parameters such as random seed or dataset; (iii) the measure's speed in computing a score for a given dataset. The evaluation results are aggregated in different tabular formats and diagrams, formatted, and statistically analyzed (see Section 5 and Appendix F). While recording running time is straightforward, the quantitative evaluation of reliability and consistency requires an understanding of quality. Following previous works [55, 1, 57], we break down quality into different aspects. Specifically, we consider four key aspects, which hereafter will be referred to as categories in the evaluation: Fidelity refers to the similarity of individual synthetic data instances to real ones, ensuring they exhibit realistic properties such as patterns, trends, and volatility in time series. Generalization is a generator's ability to create data beyond the training data itself or noisy versions thereof. Privacy aims to reduce or even eliminate the risk of disclosure of sensitive information in the data. Representativeness is the plausibility of D_s being a sample of P, which can be thought of as the closeness of the synthetic dataset to the real one in feature space. This implies an amount of diversity proportional to the real data and also extends to the utility of D_s for downstream tasks.

3.5 Determining Reliability and Consistency

To quantify reliability, we compute an indicator r_rel ∈ [0, 1] for each category and measure m.
We define the expected category-dependent behavior of m on transformation T with four options: Improve means we expect the score to get better with increasing κ; Worsen represents the expectation of a worsening score; Constant expects the score to remain largely unaffected by a changing κ; and N/A means that T is not applicable in this category. These expected behaviors can be confidently assigned a priori; the result is listed in Table 3. If the behavior is defined, we can compute a task-specific indicator r_rel(t) using m's scores s_0, ..., s_{k−1}, where k is the number of modulation steps.

Improve. Assuming s_i ∈ R and improvement means to increase s_i, we define r_rel as the fraction of score pairs (s_i, s_j), i < j, with s_i < s_j. If instead s_i is boolean, the performance is determined based on a point system, where points are assigned based on the positions of true and false along the modulation path and then normalized to [0, 1].

Worsen. If s_i ∈ R, we symmetrically use the fraction of pairs with s_i > s_j, whereas if s_i is boolean, we swap the points assigned for true and false.

Constant. For real-valued s_i, we expect all scores to be close to their median. We use the median instead of the mean for robustness. For boolean scores, r_rel(t) is the normalized number of unequal consecutive values.

m's reliability r_rel is the average value across all t. More details and an example are provided in Appendix D.1. We define the consistency r_con of m via the pairwise
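For the real-valued case, the Improve and Worsen indicators reduce to counting ordered score pairs. A minimal sketch of that counting (the boolean point system and the Constant case are omitted; see Appendix D.1 for the full definition):

```python
from itertools import combinations

def r_rel_improve(scores):
    """Fraction of score pairs (s_i, s_j), i < j, with s_i < s_j."""
    pairs = list(combinations(scores, 2))
    return sum(si < sj for si, sj in pairs) / len(pairs)

def r_rel_worsen(scores):
    """Symmetric case: fraction of pairs with s_i > s_j."""
    pairs = list(combinations(scores, 2))
    return sum(si > sj for si, sj in pairs) / len(pairs)
```

A strictly increasing score sequence over the modulation steps yields r_rel = 1 under Improve and 0 under Worsen.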
statistical difference between groups of r_rel indicators, where each group has either the same random seed or the same underlying dataset. The idea is that the behavior of m should depend on the relationship of synthetic and real data and not be impacted significantly by randomness or the real dataset alone. We use r_rel as a proxy. Please find additional information in Appendix D.2. The running times of measures and embedders are recorded separately. For each test, STEB tracks the time required for the measure to compute the score on a fully prepared dataset, that is, after transformation and embedding. If an embedding is required, the amounts of time needed for training (if applicable) and inference are tracked.

4 Experimental Design and Execution

In this section, we outline two different experiments conducted with STEB. For the modulation, we use eleven equally spaced steps, that is, κ = 0.0, 0.1, ..., 1.0, in both experiments. Similarly, we use all 10 datasets in each experiment. In terms of hardware, we used a server with an AMD EPYC 7763, 256 GB RAM, and an NVIDIA A100 running CUDA 12.4, capped at 40 GB per worker. All running times were recorded on this system. To speed up the computation, we utilize up to five workers running in parallel. Still, we impose a strict time limit of 120 minutes per test to avoid excessive running times.

4.1 Main Experiment: Ranking Measures

We analyze 41 of STEB's 44 implemented measures. Of the remaining three measures, two are qualitative and thus difficult to assess objectively, while one computes instance-level scores unsuited to being condensed into a dataset-level score. Each measure is evaluated in all four categories. The goal is a comparison regarding reliability, consistency, and running time to test which measures are best suited to evaluate a specific category and within what time frame. Importantly, the goal is to provide running time estimates rather than precise measurements. We use TS2Vec as the embedding and ten random seeds.
4.2 Side Experiment: TS Embedding Models

Embedder-dependent measures operate on embedded vector data instead of the original time series, adding an extra source of variation to the final score. This issue was previously raised in the review of [19]. We therefore examine the role of the embedding model in the synthesis evaluation process using STEB, providing empirical results for the 24 implemented embedder-dependent measures. We pairwise compare TS2Vec, Catch22, and Concat on a test-by-test basis. More specifically, we pair the tests of different embedders on the otherwise matching parameters dataset, transformation, and random seed to compute two metrics between the two matched score sets: the mean absolute percentage error (MAPE) and the Pearson correlation coefficient (PCC). Tests for which no matching is possible due to a failed test on one side are ignored.

5 Results

Given our experiment setup, the performed experiments resulted in extensive output. Each of the 63,601 successful tests produces at least eleven scores along with multiple running time measurements. We evaluate and present them in different ways, some of which are placed in Appendix F due to space constraints. Not all of
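The two comparison metrics can be sketched as follows. These are our own minimal versions; the cap at 10 mirrors the presentation in Table 1, and nonzero reference scores are assumed.

```python
import numpy as np

def mape(reference, other, cap=10.0):
    """Mean absolute percentage error between two matched score sets,
    capped at 10 (i.e. 1000 %). Assumes nonzero reference scores."""
    reference = np.asarray(reference, float)
    other = np.asarray(other, float)
    return float(min(cap, np.mean(np.abs((other - reference) / reference))))

def pcc(a, b):
    """Pearson correlation coefficient between two matched score sets."""
    return float(np.corrcoef(a, b)[0, 1])
```

A high MAPE combined with a high PCC indicates scores that move together but differ strongly in absolute value, which is exactly the pattern reported in Section 5.2.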
the 68,666 tests are successful and count toward evaluating the performance of the implemented measures. The success rate heavily depends on the resource and time demands of the measure, the size of the dataset, and the transformation applied. Most measures have a success rate of around 98%; the details are listed in Table 4. Reasons for failure include system or graphics memory overflow, measure-specific exceptions, and exceeded running time limits. A detailed list can be found in Table 5. General guidelines on how to use the results for measure selection are given in Appendix G; they remain applicable when new measures change the current rankings.

5.1 Main Experiment

The main experiment included 41,432 tests, of which 39,063 were successful. Unfortunately, for the measure DOMIAS, all but one test failed, mostly due to excessive GPU memory demands. Still, this observed high resource demand is already a valuable insight for potential users. For the remaining 40 measures, we calculated r_rel, r_con, and the average running time. We list the reliability indicators in Table 1 in alphabetical order, including the standard deviation observed between tests. A ranking by category is shown in Table 6. The rankings for fidelity and representativeness are overall similar, with α-Precision in the top position in both categories. ACS and autocorrelation perform surprisingly well in the generalization and privacy categories and not as well regarding fidelity and representativeness, for which they are actually intended. Generalization measures such as CT are further down in the ranking. More intuitive is the third position of the "novelty" measure authenticity in the privacy category. Across categories,

Table 1: Reliability indicator overview for experiment Main (left) and embedding comparison for the Embedders experiment. LEFT: For each measure, we list r_rel as the mean across different tests and the standard deviation in all four categories (Mean ± StD).
In the upper block (all but DOMIAS), we have highlighted the best (second best) indicator in bold (by underlining). DOMIAS is listed separately, as its results are based on one test only and are thus not statistically significant (see Table 4). RIGHT: For each of the three embedder pairs, we report the mean absolute percentage error (MAPE) and the Pearson correlation coefficient (PCC). MAPE is capped at 10 (1000%). For DOMIAS, no tests could be matched. All values are real numbers. Fidelity Generalization Privacy Representativeness Measure TS2Vec-Concat Catch22-Concat Catch22-TS2Vec MAPE PCC MAPE PCC MAPE PCC .416±.413 .726±.320 .676±.313 .347±.369 ACS .783±.305 .358±.381 .261±.311 .745±.314 α-Precision 9.005 0.672 1.872 0.664 2.838 0.370 .460±.385 .244±.247 .324±.244 .539±.347 ApEn .052±.124 .530±.442 .666±.391 .098±.189 Authenticity >10 0.939 >10 0.896 >10 0.815 .083±.165 .624±.417 .773±.306 .141±.225 Autocorrelation .608±.435 .253±.397 .210±.349 .599±.425 β-Recall >10 0.944 >10 0.891 >10 0.807 .660±.193 .566±.203 .475±.102 .603±.170 C2ST .404±.271 .387±.258 .320±.197 .396±.259 CAS .595±.455 .189±.348 .246±.366 .678±.406 Context-FID >10 0.003 >10 0.001 >10 0.915 .769±.324 .263±.390 .165±.311 .716±.360 Coverage >10 0.781 >10 0.782 >10 0.623 .158±.314 .432±.459 .447±.439 .115±.216 CT 1.826 0.807 1.682 0.555 2.682 0.751 .731±.368 .324±.415 .257±.353 .696±.362 Density >10 0.168 >10 −0.048 >10 −0.089 .641±.278 .504±.269 .438±.200 .590±.257 Detection_GMM >10 0.487 >10 0.338 >10 0.180 .659±.366 .244±.361
.209±.328 .673±.345 Detection_linear 0.101 0.884 0.416 0.521 0.431 0.499 .739±.246 .418±.300 .333±.212 .703±.236 Detection_MLP >10 0.290 0.337 0.517 0.471 0.243 .530±.424 .204±.326 .220±.331 .600±.401 Detection_XGB 0.100 0.968 0.093 0.981 0.113 0.975 .326±.274 .286±.229 .348±.204 .379±.252 Discr. score .594±.438 .188±.325 .222±.334 .656±.405 Distr. metric .576±.432 .215±.341 .266±.355 .650±.392 FBCA >10 0.577 >10 0.324 >10 0.587 .433±.350 .615±.314 .571±.303 .395±.316 ICD .556±.423 .286±.409 .222±.354 .504±.417 Impr. precision >10 0.530 >10 0.441 >10 0.563 .715±.339 .290±.386 .190±.308 .691±.344 Improved recall >10 0.736 >10 0.816 >10 0.584 .697±.301 .336±.296 .310±.273 .683±.298 INND .616±.421 .233±.346 .240±.328 .649±.393 JSD >10 0.609 >10 0.771 >10 0.378 .602±.408 .251±.337 .256±.319 .638±.380 KLD >10 0.396 >10 0.570 >10 0.281 .382±.458 .684±.418 .524±.436 .231±.389 Max-RTS >10 0.398 >10 0.706 >10 0.461 .600±.318 .394±.320 .356±.299 .600±.311 MTop-Div >10−0.006 1.012 0.529 >10−0.020 .472±.384 .291±.344 .197±.255 .414±.369 NDB >10 0.559 >10 0.614 >10 0.444 .295±.279 .098±.160 .075±.124 .313±.277 NDB-over/under >10 0.715 >10 0.739 >10 0.597 .713±.332 .333±.320 .283±.259 .689±.322 ONND .542±.258 .384±.207 .409±.186 .570±.229 Predictive score .601±.388 .300±.334 .305±.307 .656±.348 RTS 8.306 0.373 >10 0.839 >10 0.412 .532±.440 .158±.265 .163±.263 .584±.428 Sig-MMD .307±.348 .216±.252 .258±.256 .352±.346 Spatial corr. .630±.415 .616±.383 .506±.362 .578±.398 STS >10 0.256 1.837 0.547 1.979 0.209 .462±.382 .243±.241 .314±.229 .542±.342 Temporal corr. .551±.416 .324±.358 .330±.339 .586±.397 TRTS .429±.309 .332±.228 .415±.184 .501±.271 TSTR .773±.274 .448±.330 .337±.225 .713±.278 WCS .617±.404 .237±.330 .233±.302 .647±.376 WD >10 0.503 >10 0.874 >10 0.457 1.±.000 .000±.000 .000±.000 1.±.000 DOMIAS – the best measures exhibit a reliability indicator of approximately rrel= 0.75, which leaves room for improvement for future measures. 
However, r_rel has a high standard deviation, meaning that closely positioned measures cannot be ordered definitively. As expected, measures performing well in the categories fidelity and representativeness perform poorly in the categories generalization and privacy. This suggests that at least two measures should be used, but not necessarily one for each category. Comparing the embedder-dependent and direct measures, there is no clear indication that directly applied measures are better or worse than embedder-dependent ones. Considering the consistency results in Table 7, we observe pronounced differences in the impact of the dataset and the random seed. While the measures are overwhelmingly indifferent to randomness, the reliability of a measure depends more (and sometimes heavily) on the dataset. Particularly poor in this regard are ACS, ICD, and density. JSD, KLD, and FBCA stand out as rather consistent measures. Most measures demonstrate mediocre consistency, varying between categories but without clearly favoring one specifically. We report the average running times (per dataset) for measures in Table 8 and for embedders in Table 11. Note that the running times for measures do not include any preparation steps such as scaling or embedding. For embedders, these comprise training (where applicable), one inference for the real data, and one for the synthetic data. The fastest measures to compute are temporal and spatial correlation (≈ 0 s), which benefit from their radical subsampling approach. The measures in subsequent positions benefit from the separately recorded embedding. With the exception of autocorrelation computed for PTB Diagnostic ECG (≈ 660 s), the execution times are negligible up to rank 27, staying below
one minute. The more complex, often deep-learning-based measures run significantly longer, up to 8 minutes. Naturally, Concat is the fastest embedding, followed by Catch22, typically taking under 2 minutes, and TS2Vec, taking up to an hour. Still, the tables only reflect part of the picture. Tests are often stopped due to excessive running times, which underestimates the actual values for some measures.

5.2 Embedders Experiment

The results of this experiment are shown in Table 1 on the right. Measured in MAPE, the effects of changing the embedding are remarkable. The smallest MAPE is 0.093, while in 59 out of 61 cases it is over 100%, and in 51 cases it is even over 1000%. PCC is mostly positive and in 42 cases above 0.5. Hence, the scores are often correlated but very different in value. Surprisingly, there is no notable difference between the TS2Vec-to-Catch22 pair (middle columns) and the comparisons with the naive Concat embedding (left and right columns). This may be due to the already large differences between TS2Vec and Catch22. Generally, there appears to be no rule or schema, neither between measures nor between embedder pairs. However, we see that the chosen embedding has an enormous effect on the score of a measure in this experiment.

5.3 Discussion of Key Findings

In our reliability ranking, the measures α-Precision, ACS, autocorrelation, and again α-Precision take a close first place in the categories fidelity, generalization, privacy, and representativeness, respectively. However, relatively high standard deviations make a precise ranking impossible. Some positions, like α-Precision's in fidelity, are very intuitive; others, like autocorrelation's first place in privacy, are very surprising, suggesting previously unknown properties of these measures.
The side experiment on embedding models shows the strong influence of the chosen embedder on a measure's score, implying that generators should always be evaluated using the same embedder and motivating a dedicated analysis. This also motivates further research towards a suitable standardized model.

5.4 Limitations

STEB is mainly limited by the choice and combination of transformations. While it incorporates many diverse transformation designs, their connection is not fully explored, and other designs may be better suited to test certain categories. Naturally, this also limits the generalizability of the results with respect to all other potential generation methods. Furthermore, STEB currently tests four categories, while a broader range is necessary for complete coverage of desirable synthetic data properties. For instance, fairness [23] or splitting representativeness into diversity and utility could be investigated in future work. To accommodate new measures in the future, we designed STEB with extensibility in mind. As for the experimental design, not all potentially relevant options can be explored due to the combinatorial explosion of the number of test configurations. However, we are confident that we cover a wide and deep array of parameters.

6 Conclusion

Currently, the comprehensive comparison of synthetic TS quality remains challenging due to the hodgepodge of measures, little analysis of their efficacy, and generally lacking standardization in evaluation. To tackle these obstacles, we propose a novel benchmark for evaluating the performance of quality measures for synthetic