strong baselines in task success rate, execution efficiency, and communication cost. Ablation studies confirm the significance of each component, particularly the central role of desire modeling in achieving user-aligned behavior. Qualitative analyses further highlight FAMER's advantages in intent inference, planning precision, and communication efficiency.

References

[1] Stefano V Albrecht and Subramanian Ramamoorthy. A game-theoretic model and best-response learning method for ad hoc coordination in multiagent systems. arXiv preprint arXiv:1506.01170, 2015.
[2] Nolan Bard, Jakob N Foerster, Sarath Chandar, Neil Burch, Marc Lanctot, H Francis Song, Emilio Parisotto, Vincent Dumoulin, Subhodeep Moitra, Edward Hughes, et al. The hanabi challenge: A new frontier for ai research. Artificial Intelligence, 280:103216, 2020.
[3] Kevin Black, Noah Brown, Danny Driess, Adnan Esmail, Michael Equi, Chelsea Finn, Niccolo Fusai, Lachy Groom, Karol Hausman, Brian Ichter, et al. pi_0: A vision-language-action flow model for general robot control. arXiv preprint arXiv:2410.24164, 2024.
[4] Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, et al. Rt-2: Vision-language-action models transfer web knowledge to robotic control. arXiv preprint arXiv:2307.15818, 2023.
[5] Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Joseph Dabis, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Jasmine Hsu, et al. Rt-1: Robotics transformer for real-world control at scale. arXiv preprint arXiv:2212.06817, 2022.
[6] Micah Carroll, Rohin Shah, Mark K Ho, Tom Griffiths, Sanjit Seshia, Pieter Abbeel, and Anca Dragan. On the utility of learning about humans for human-ai coordination. Advances in Neural Information Processing Systems, 32, 2019.
[7] Shuo Chen, Ewa Andrejczuk, Zhiguang Cao, and Jie Zhang. Aateam: Achieving the ad hoc teamwork by employing the attention mechanism. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7095–7102, 2020.
[8] Brandon Cui, Hengyuan Hu, Luis Pineda, and Jakob Foerster. K-level reasoning for zero-shot coordination in hanabi. Advances in Neural Information Processing Systems, 34:8215–8228, 2021.
[9] Josef Dai, Xuehai Pan, Ruiyang Sun, Jiaming Ji, Xinbo Xu, Mickel Liu, Yizhou Wang, and Yaodong Yang. Safe rlhf: Safe reinforcement learning from human feedback. arXiv preprint arXiv:2310.12773, 2023.
[10] Danny Driess, Fei Xia, Mehdi SM Sajjadi, Corey Lynch, Aakanksha Chowdhery, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, Wenlong Huang, et al. Palm-e: An embodied multimodal language model. arXiv preprint arXiv:2303.03778, 2023.
[11] Weihua Du, Qiushi Lyu, Jiaming Shan, Zhenting Qi, Hongxin Zhang, Sunli Chen, Andi Peng, Tianmin Shu, Kwonjoon Lee, Behzad Dariush, et al. Constrained human-ai cooperation: An inclusive embodied social intelligence challenge. In Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2024.
[12] Hao-Shu Fang, Hongjie Fang, Zhenyu Tang, Jirong Liu, Chenxi Wang, Junbo Wang, Haoyi Zhu, and Cewu Lu. Rh20t: A comprehensive robotic dataset for learning diverse skills in one-shot. arXiv preprint arXiv:2307.00595, 2023.
[13] Jaime F Fisac, Monica A Gates, Jessica B Hamrick, Chang Liu, Dylan Hadfield-Menell, Malayandi Palaniappan, Dhruv Malik, S Shankar Sastry, Thomas L Griffiths, and Anca D Dragan. Pragmatic-pedagogic value alignment. In Robotics Research: the 18th International Symposium ISRR, pages 49–57. Springer, 2020.
[14] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, pages 2961–2969, 2017.
[15] Laura M Hiatt, Cody Narber, Esube Bekele, Sangeet S Khemlani, and J Gregory Trafton. Human modeling for human–robot collaboration. The International Journal of Robotics Research, 36(5-7):580–596, 2017.
[16] Hengyuan Hu, Adam Lerer, Brandon Cui, Luis Pineda, Noam Brown, and Jakob Foerster. Off-belief learning. In International Conference on Machine Learning, pages 4369–4379. PMLR, 2021.
[17] Hengyuan Hu, Adam Lerer, Alex Peysakhovich, and Jakob Foerster. "Other-play" for zero-shot coordination. In International Conference on Machine Learning, pages 4399–4410. PMLR, 2020.
[18] Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. Gpt-4o system card. arXiv preprint arXiv:2410.21276, 2024.
[19] Jiaming Ji, Tianyi Qiu, Boyuan Chen, Borong Zhang, Hantao Lou, Kaile Wang, Yawen Duan, Zhonghao He, Jiayi Zhou, Zhaowei Zhang, et al. Ai alignment: A comprehensive survey. arXiv preprint arXiv:2310.19852, 2023.
[20] Moo Jin Kim, Karl Pertsch, Siddharth Karamcheti, Ted Xiao, Ashwin Balakrishna, Suraj Nair, Rafael Rafailov, Ethan Foster, Grace Lam, Pannag Sanketi, et al. Openvla: An open-source vision-language-action model. arXiv preprint arXiv:2406.09246, 2024.
[21] Guohao Li, Hasan Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. Camel: Communicative agents for "mind" exploration of large language model society. Advances in Neural Information Processing Systems, 36:51991–52008, 2023.
[22] Jie Liu, Pan Zhou, Yingjun Du, Ah-Hwee Tan, Cees GM Snoek, Jan-Jakob Sonke, and Efstratios Gavves. Capo: Cooperative plan optimization for efficient embodied multi-agent cooperation. In International Conference on Learning Representations, 2025.
[23] Andrei Lupu, Brandon Cui, Hengyuan Hu, and Jakob Foerster. Trajectory diversity for zero-shot coordination. In International Conference on Machine Learning, pages 7204–7213. PMLR, 2021.
[24] Long Ma, Yuanfei Wang, Fangwei Zhong, Song-Chun Zhu, and Yizhou Wang. Fast peer adaptation with context-aware exploration. In International Conference on Machine Learning, 2024.
[25] Reuth Mirsky, William Macke, Andy Wang, Harel Yedidsion, and Peter Stone. A penny for your thoughts: The value of communication in ad hoc teamwork. In International Joint Conference on Artificial Intelligence, 2020.
[26] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
[27] Abby O'Neill, Abdul Rehman, Abhiram Maddukuri, Abhishek Gupta, Abhishek Padalkar, Abraham Lee, Acorn Pooley, Agrim Gupta, Ajay Mandlekar, Ajinkya Jain, et al. Open x-embodiment: Robotic learning datasets and rt-x models. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 6892–6903. IEEE, 2024.
[28] Xavier Puig, Kevin Ra, Marko Boben, Jiaman Li, Tingwu Wang, Sanja Fidler, and Antonio Torralba. Virtualhome: Simulating household activities via programs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8494–8502, 2018.
[29] Xavier Puig, Tianmin Shu, Shuang Li, Zilin Wang, Yuan-Hong Liao, Joshua B Tenenbaum, Sanja Fidler, and Antonio Torralba. Watch-and-help: A challenge for social perception and human-ai collaboration. In International Conference on Learning Representations, 2021.
[30] Neil Rabinowitz, Frank Perbet, Francis Song, Chiyuan Zhang, SM Ali Eslami, and Matthew Botvinick. Machine theory of mind. In International Conference on Machine Learning, pages 4218–4227. PMLR, 2018.
[31] Muhammad A Rahman, Niklas Hopner, Filippos Christianos, and Stefano V Albrecht. Towards open ad hoc teamwork using graph-based policy learning. In International Conference on Machine Learning, pages 8776–8786. PMLR, 2021.
[32] Manish Chandra Reddy Ravula. Ad-hoc teamwork with behavior-switching agents. In International Joint Conferences on Artificial Intelligence, 2019.
[33] Peter Stone, Gal Kaminka, Sarit Kraus, and Jeffrey Rosenschein. Ad hoc autonomous agent teams: Collaboration without pre-coordination. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 24, pages 1504–1509, 2010.
[34] DJ Strouse, Kevin McKee, Matt Botvinick, Edward Hughes, and Richard Everett. Collaborating with humans without human data. Advances in Neural Information Processing Systems, 34:14502–14515, 2021.
[35] Octo Model Team, Dibya Ghosh, Homer Walke, Karl Pertsch, Kevin Black, Oier Mees, Sudeep Dasari, Joey Hejna, Tobias Kreiman, Charles Xu, et al. Octo: An open-source generalist robot policy. arXiv preprint arXiv:2405.12213, 2024.
[36] Yiding Wang, Yuxuan Chen, Fangwei Zhong, Long Ma, and Yizhou Wang. Simulating human-like daily activities with desire-driven autonomy. In International Conference on Learning Representations, 2025.
[37] Yuanfei Wang, Xiaojie Zhang, Ruihai Wu, Yu Li, Yan Shen, Mingdong Wu, Zhaofeng He, Yizhou Wang, and Hao Dong. Adamanip: Adaptive articulated object manipulation environments and policy learning. In International Conference on Learning Representations, 2025.
[38] Yuanfei Wang, Fangwei Zhong, Jing Xu, and Yizhou Wang. Tom2c: Target-oriented multi-agent communication and cooperation with theory of mind. In International Conference on Learning Representations, 2022.
[39] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. In International Conference on Learning Representations (ICLR), 2023.
[40] Luyao Yuan, Xiaofeng Gao, Zilong Zheng, Mark Edmonds, Ying Nian Wu, Federico Rossano, Hongjing Lu, Yixin Zhu, and Song-Chun Zhu. In situ bidirectional human-robot value alignment. Science Robotics, 7(68):eabm4183, 2022.
[41] Ceyao Zhang, Kaijie Yang, Siyi Hu, Zihao Wang, Guanghe Li, Yihang Sun, Cheng Zhang, Zhaowei Zhang, Anji Liu, Song-Chun Zhu, et al. Proagent: Building proactive cooperative agents with large language models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 17591–17599, 2024.
[42] Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B Tenenbaum, Tianmin Shu, and Chuang Gan. Building cooperative embodied agents modularly with large language models. In International Conference on Learning Representations, 2024.

A Environment Details

HA-Desire is built upon VirtualHome [28], a widely adopted testbed for embodied multi-agent cooperation. In this work, we modify the simulation to specifically address the challenge of embodied agent-user adaptation with a focus on desire alignment. In the following sections, we provide further details on the human user agent and the tasks designed to evaluate this problem.
A.1 Human User

As discussed in Section 3, HA-Desire is designed to evaluate the ability of embodied agents to rapidly infer the underlying desires of human users and take actions to fulfill those desires. To achieve this, we integrate a proxy human user within the environment, which serves two main functions:

1. The human user determines the specific goal set from a set of potential goals, based on a vague task description and a sampled set of value attributes.
2. The human user responds to the agent's inquiries in natural language, ensuring that the goal set is not directly revealed. Instead, the user implies their goals by providing hints about object properties, reflecting their underlying preferences.

To implement these functions, we empower the human user with large language models (LLMs), specifically GPT-4o. The prompts used to generate goals and responses for communication are outlined below. We incorporate a Chain-of-Thought prompt at the end of each query to encourage the human user to think more thoroughly, leading to more accurate goal selection and communication. After the LLM generates results, we instruct it to extract the exact goals or messages, forming the final output. Several variables are included in the prompts, delimited with a "$"; during inference, these variables are replaced with context-specific information.

Goal Generation:

I am Bob, a human user living at home with a humanoid assistant named Alice. I have several personal value attributes (e.g., hungry, thirsty, alcoholic), each rated at one of three levels: Not, Somewhat, or Very. Given a specific set of my current attribute states, along with a high-level task description, your task is to select the most appropriate goal set from a given potential set of goals. For example, if I am Very thirsty, the goal set should include beverages. If I am Not SweetTooth, the goal set should include less sweet food objects. Please help me choose the best goal set to reflect my value attributes. Select $GOAL_CNT$ objects as the goal set. Provide your answer as a comma-separated list of object names. An example output is: cupcake, milk, pudding.
Value Attribute: $Value$
Task: $Task$
Potential Goals: $GOAL$
Answer: Let's think step by step.

Communication:

I am Bob, a human user living at home with a humanoid assistant named Alice. I need Alice to help me with a household task, which is described in a high-level instruction without specific goals provided. I have several personal value attributes (e.g., hungry, thirsty, alcoholic) that determine the goal, but this goal is only visible to me and not to Alice. Since Alice is unaware of the specific goal, she may ask me questions about it. However, I do not want to directly tell her the goal; instead, I want her to gradually understand my preferences and needs through interaction. Over time, I expect her to infer the goal on her own without needing to ask. The following Status shows the number of EPISODEs I have interacted with Alice. The larger the number is, the less willing I am to talk to Alice. If Alice proposes a goal or action that is incorrect, I can point out the mistake. If the dialogue progresses but the task is not progressing, I may be more inclined to correct her by hinting at one of the goals, but I will never reveal the entire goal set at once unless Alice herself proposes the exact whole goal set.
Task: $Task$
Status: This is the $EPISODE$-th time I interact with Alice.
Goal: $GOAL$
Progress: $PROGRESS$
Alice Previous Action: $ACTION_HISTORY$
Previous Dialogue History:
Alice: "Hi, I'll let you know if I find any goal objects and finish any subgoals, and ask for your instruction and clarification when necessary."
Bob: "Thanks! Let me know if you are uncertain about the goal objects."
$DIALOGUE_HISTORY$
Alice asks this time: $QUESTION$
Note:
1. The generated message should be accurate and brief. Use simple expressions more often. Do not generate repetitive messages.
2. Do not directly tell Alice the specific goal name at the first time (the most important). Instead, hint through some vague descriptions that reveal some properties of the goals, such as sweet, crunchy, alcoholic, etc.
3. Confirm Alice's correct guess (or partly correct guess). But if the guess contains too many objects compared to the goal set, I should not confirm any of the objects and hint at some descriptions instead. For example, suppose the correct goal set is [apple, orange]. If Alice guesses [bottle, banana, apple, orange, milk, chips] in one round, I should not confirm the goal as the guess contains too many objects. If Alice guesses [chips, apple], then I should confirm that apple is correct, even though the chips guess is wrong. If Alice guesses [apple, orange], I should say these two objects are exactly what I want.
4. Do NOT guess the location of objects or tell Alice where to find the goal objects.
5. Be aware of the number of EPISODEs: a larger number means lower communication willingness.
6. Even if the guess is incomplete, I should confirm the correct goals if there are any.
7. If Alice has guessed something correctly in previous dialogue, try to focus on the new goal objects in this message.
Please think step by step:
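To illustrate the inference-time substitution, here is a minimal sketch of filling the $-delimited placeholders before the prompt is sent to GPT-4o. The helper name and the sample values are our own illustration, not the paper's actual implementation.

```python
def fill_prompt(template: str, variables: dict) -> str:
    """Replace each $NAME$ placeholder in the prompt with its context-specific value."""
    for name, value in variables.items():
        template = template.replace(f"${name}$", str(value))
    return template

# Hypothetical values for the goal-generation prompt of the Snack task.
filled = fill_prompt(
    "Select $GOAL_CNT$ objects as the goal set.\n"
    "Value Attribute: $Value$\nTask: $Task$\nPotential Goals: $GOAL$",
    {
        "GOAL_CNT": 2,
        "Value": "Very Thirsty, Somewhat Hungry, Not SweetTooth",
        "Task": "Prepare Afternoon Snack",
        "GOAL": "cupcake, wine, milk, cereal, chips, apple, juice, "
                "pudding, creamybuns, chocolatesyrup",
    },
)
print(filled)  # the filled prompt is what gets sent to the LLM
```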
A.2 Tasks

As described in Section 5, we evaluate our method, along with baselines and ablations, on two tasks: Prepare Afternoon Snack and Set Up Dinner Table. The detailed settings for these tasks are provided below.

Snack. This task involves ten potential goals: cupcake, wine, milk, cereal, chips, apple, juice, pudding, creamybuns, and chocolatesyrup. The value space consists of five dimensions: Hungry, Thirsty, SweetTooth, Fruitarian, and Alcoholic, each of which takes one of three discrete levels: Not, Somewhat, or Very. The human user randomly samples values for these dimensions and then uses a language model to generate the corresponding goal set. The Snack-M level represents medium difficulty, with 2 goals and a maximum of 200 steps. The Snack-L level represents large difficulty, with 4 goals and a maximum of 300 steps. Performance is evaluated using three metrics: score, step count, and communication cost, as described in Section 5.

Table. This task includes eight potential goals: coffeepot, breadslice, cutleryknife, mug, plate, wineglass, cutleryfork, and waterglass. The value space consists of five dimensions: NeedRefresh, Thirsty, MeatLove, CaffeinTolerable, and Alcoholic. The value levels, number of goals, step limit, and evaluation metrics are identical to those in the Snack task.

The Snack and Table tasks serve as representative home assistance tasks in our evaluation. However, the VirtualHome simulator supports a wide range of object assets and activities, allowing for the easy extension of HA-Desire to additional household tasks. By defining the appropriate value space and potential goal set, new tasks can be seamlessly incorporated into the environment.
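To make the sampling step concrete, the sketch below draws one level per dimension for the Snack task. The dimension and level names are taken from the paper; the code itself, including the assumption that the draws are independent and uniform, is our illustration.

```python
import random

LEVELS = ("Not", "Somewhat", "Very")
SNACK_DIMENSIONS = ("Hungry", "Thirsty", "SweetTooth", "Fruitarian", "Alcoholic")

def sample_value_attributes(dimensions, rng=random):
    """Draw one discrete level per value dimension (independent uniform draws, as an assumption)."""
    return {dim: rng.choice(LEVELS) for dim in dimensions}

values = sample_value_attributes(SNACK_DIMENSIONS)
print(values)  # e.g., {'Hungry': 'Somewhat', 'Thirsty': 'Very', ...}
# The sampled attributes are rendered into the $Value$ field of the goal-generation
# prompt, and the LLM then selects the matching goal set.
```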
B Experiment Details

B.1 Computing Resources

The experiments were conducted on a workstation equipped with an NVIDIA GeForce RTX 4090 GPU and an Intel Core i9-13900K CPU. The large vision-language model used in this study is GPT-4o.

B.2 Baselines

Here, we provide the detailed prompts for CoELA and ProAgent, which are adapted from their original versions to help the agents account for uncertain goals.

CoELA Planning:

I'm $AGENT_NAME$, a humanoid home assistant. I'm in a hurry to finish the housework for my owner $OPPO_NAME$. I know the high-level instruction of the task, but I am not certain about the specific goal determined by $OPPO_NAME$. Given the potential goal, dialogue history, and my progress and previous actions, please help me infer and choose the best available action to achieve the underlying goal as soon as possible. Note that I can hold two objects at a time and there are no costs for holding objects. All objects are denoted as <name> (id), such as <table> (712).
Task Name: $Task$
Potential Goal: $GOAL_CNT$ object(s) determined by the human user from the set $GOAL$. Put them $REL_TARGET$
Progress: $PROGRESS$
Dialogue history:
Alice: "Hi, I'll let you know if I find any goal objects and finish any subgoals, and ask for your instruction and clarification when necessary."
Bob: "Thanks! Let me know if you are uncertain about the goal objects."
$DIALOGUE_HISTORY$
Previous actions: $ACTION_HISTORY$
Available actions: $AVAILABLE_ACTIONS$
Answer:

CoELA Communication:

I'm $AGENT_NAME$, a humanoid home assistant. I'm in a hurry to finish the housework for my owner $OPPO_NAME$. I know the high-level instruction of the task, but I am not certain about the specific goal determined by $OPPO_NAME$. Given the potential goal, dialogue history, and my progress and previous actions, please help me generate a short message to send to my owner $OPPO_NAME$ to help us achieve the underlying goal as soon as possible. Note that I can hold two objects at a time and there are no costs for holding objects. All objects are denoted as <name> (id), such as <table> (712).
Potential Goal: $GOAL_CNT$ objects determined by the human user from the set $GOAL$. Put them $REL_TARGET$
Progress: $PROGRESS$
Previous actions: $ACTION_HISTORY$
Dialogue history:
Alice: "Hi, I'll let you know if I find any goal objects and finish any subgoals, and ask for your instruction and clarification when necessary."
Bob: "Thanks! Let me know if you are uncertain about the goal objects."
$DIALOGUE_HISTORY$
Note: The generated message should be accurate and brief. Do not generate repetitive messages.

ProAgent:

I'm $AGENT_NAME$, a humanoid home assistant. I'm in a hurry to finish the housework for my owner $OPPO_NAME$. I know the high-level instruction of the task, but I am not certain about the specific goal determined by $OPPO_NAME$. Given the potential goal, my progress, and previous actions, please help me infer and choose the best available action to achieve the underlying goal as soon as possible. Note that I can hold two objects at a time and there are no costs for holding objects. All objects are denoted as <name> (id), such as <table> (712).
Potential Goal: $GOAL_CNT$ object(s) determined by the human user from the set $GOAL$. Put them $REL_TARGET$.
Important Instruction: $AGENT_NAME$ has previously achieved and found these subgoals. This is its success experience. You should focus only on actions that help achieve the goal items, i.e., those in the target set provided. Ignore or deprioritize any actions unrelated to acquiring or placing goal items. When reviewing previous successful experiences, only reuse or adapt steps that directly contribute to acquiring or placing goal items. For example, ignore exploration, object grabbing, or placing steps for items not included in the current potential goal set. When the current situation even partially matches any past success (e.g., similar object types, room layout, or goal structure), you should prioritize reusing or adapting the proven action sequences: $HISTORY_OF_SUCCESSFUL_SUBGOALS$
Progress: $PROGRESS$
Previous actions: $ACTION_HISTORY$
Belief State: $BELIEF_STATE$
Available actions: $AVAILABLE_ACTIONS$
Required Output Format:
- Analysis: [Infer and choose the best available action to achieve the underlying goal]
- Best Next Action: [Single most optimal action from available options]
- Intention for $OPPO_NAME$'s Underlying Goal: [Inference about the true goal]

B.3 Qualitative Analysis

Figure 7 provides a qualitative comparison between CoELA and FAMER. We present more details about the behavior of both methods in the Snack-L task, where the agent must infer 4 goals from a set of ten potential options: cupcake, wine, milk, cereal, chips, apple, juice, pudding, creamybuns, and chocolatesyrup.

For CoELA, the ground-truth goal set is: creamybuns, milk, juice, chips. As shown in Figure 7, the CoELA agent makes several mistakes during the task. Initially, the agent incorrectly interprets the user's message and grabs apple (step 1). It then retrieves chips (step 2) and places both apple and chips on the coffee table (step 3). However, the agent continues to treat apple as a valid goal, repeatedly placing it on the table in step 4. After some dialogue with the user, it begins searching for the other correct goal objects, successfully finding creamybuns (step 5), but erroneously selects cupcake (step 6) as a goal. Eventually, the agent locates milk and places all the goal objects on the table, but with two incorrect items (apple and cupcake) (steps 7 & 8).

For FAMER, the ground-truth goal set is: cereal, creamybuns, cupcake, apple. As shown in the figure, the FAMER agent efficiently infers the user's desires through a series of concise interactions. First, it grabs creamybuns (step 1) and cereal (step 2) and places them on the coffee table (step 3), since it can hold only two objects at a time. It then searches for apple (step 4) and cupcake (step 5), placing both objects on the table (step 6) and successfully completing the task.
Strengthening Proportionality in Temporal Voting

Bradley Phillips¹, Edith Elkind², Nicholas Teh¹ and Tomasz Wąs¹
¹University of Oxford, UK
²Northwestern University, USA

Abstract

We study proportional representation in the framework of temporal voting with approval ballots. Prior work adapted basic proportional representation concepts—justified representation (JR), proportional JR (PJR), and extended JR (EJR)—from the multiwinner setting to the temporal setting. Our work introduces and examines ways of going beyond EJR. Specifically, we consider stronger variants of JR, PJR, and EJR, and introduce temporal adaptations of more demanding multiwinner axioms, such as EJR+, full JR (FJR), full proportional JR (FPJR), and the Core. For each of these concepts, we investigate its existence and study its relationship to existing notions, thereby establishing a rich hierarchy of proportionality concepts. Notably, we show that two of our proposed axioms—EJR+ and FJR—strengthen EJR while remaining satisfiable in every temporal election.

1 Introduction

Proportional representation is a fundamental principle in the design of fair decision-making mechanisms, particularly in the framework of multiwinner voting, where the goal is to select a representative group of candidates based on the preferences of the electorate. For approval ballots, this principle is instantiated as the axiom of justified representation (JR) and its extensions, such as PJR/EJR/FJR/EJR+ and the core [Aziz et al., 2017, Brill and Peters, 2023, Peters et al., 2021, Sánchez-Fernández et al., 2017]. These notions are motivated by classical social choice contexts, from electing diverse panels and boards to ensuring the presence of minority voices in political bodies [Lackner and Skowron, 2023, Phragmén, 1895]. More recently, they have been shown to be relevant for machine learning (ML) systems that incorporate human feedback, optimize group-level utility, or seek fairness across populations. Examples include increasing diversity in recommendation systems and social media [Gawron and Faliszewski, 2024, Revel et al., 2025, Streviniotis and Chalkiadakis, 2022], ensuring balanced representation in committee-based decisions in blockchain governance [Boehmer et al., 2024, Burdges et al., 2020, Cevallos and Stewart, 2021], and offering representation guarantees in LLM-augmented democratic processes [Boehmer et al., 2025, Fish et al., 2024].

However, beyond static selection tasks, many ML applications involve decision-making over multiple rounds, with preferences that may change over time. For instance, curriculum learning constructs sequences of training tasks for reinforcement learning agents; temporal fairness guarantees can prevent the curriculum from overemphasizing early-stage objectives at the expense of later goals [Wu et al., 2024, Yang et al., 2021]. Similarly, streaming service catalogs and content recommendation systems periodically refresh their offerings: ensuring proportionality across time ensures that under-served genres or user communities are not systematically sidelined across updates [Chen et al., 2024]. In the domain of generative AI, blending outputs from multiple models—each with its own inductive biases—calls for proportional merging strategies that preserve diversity of generated content over time [Peters, 2024]. In these applications, proportional representation needs to be maintained not just in a single round, but across the entire time horizon.
This calls for an adaptation of the JR axioms to the temporal setting.

Recently, Bulteau et al. [2021], Chandak et al. [2024], and Elkind et al. [2025c] considered temporal voting with approval ballots, where the goal is to select one candidate per round. They adapted some of the more established justified representation axioms (namely, JR, PJR and EJR) to this setting, formulated temporal variants of popular multiwinner rules, e.g., Proportional Approval Voting [Kilgour, 2010], Method of Equal Shares [Peters and Skowron, 2020], or Greedy Cohesive Rule [Bredereck et al., 2019], and checked whether their outputs satisfy temporal JR axioms, as well as considered the complexity of verifying whether an outcome satisfies a given axiom. However, they did not attempt to extend more demanding proportionality axioms such as EJR+, FJR, and core stability to the temporal setting.

1.1 Our Contributions

In this work, we aim to identify the most ambitious justified representation-style axioms that can be satisfied in temporal voting, and, more broadly, to create a comprehensive map of the landscape of temporal JR axioms.

In Section 3, we focus on the EJR+ axiom [Brill and Peters, 2023]. We propose an adaptation that preserves the three main attractive features of its multiwinner variant: strengthening EJR, being verifiable in polynomial time, and being satisfiable by a polynomial-time computable voting rule. Our analysis in this section highlights the difficulties of extending proportionality axioms from the multiwinner setting to the temporal setting: the most straightforward variant of temporal EJR+ turns out to be unsatisfiable in general, and we explain why this is the case by uncovering a hidden layer of our axiomatic hierarchy.

In Section 4, we pursue the same approach for the FJR axiom [Peters et al., 2021], and show that an outcome satisfying temporal FJR always exists and can be found by a modification of the Greedy Cohesive Rule.

In Section 5, we look at the recently proposed notion of full proportional justified representation (FPJR) [Kalayci et al., 2025] that combines the FJR notion of group cohesion with a PJR-style collective guarantee. We adapt it to our setting and show that if the number of voters divides the number of rounds and each voter approves at least one candidate per round, an outcome that satisfies a strong version of FPJR (sFPJR) can be found in polynomial time by the Serial Dictatorship Rule. Importantly, EJR+ and FJR are stronger than EJR, and sFPJR is incomparable with EJR and the other two axioms. Thus, each of our results advances the frontier of satisfiable proportionality guarantees in the temporal setting.

Finally, in Section 6, we complete the picture of proportionality notions in the temporal setting by formulating temporal core stability axioms. For all axioms that we define, we establish that they satisfy the expected implications (e.g., temporal FJR implies temporal EJR, just like in the multiwinner setting); for cases where implications do not hold, we provide concrete counterexamples. Thus, our work establishes a rich hierarchy of proportionality concepts in temporal voting, illustrated in Figure 1. All missing proofs are included in the technical appendix.

[Figure 1 (diagram omitted): Axioms considered in our paper, covering the strong (sJR, sPJR, sEJR, sEJR+, sFJR, sFPJR, sCore), standard (JR, PJR, EJR, EJR+, FJR, FPJR, Core), and weak (wJR, wPJR, wEJR, wEJR+, wFJR, wFPJR, wCore) variants. A solid arrow from axiom A to axiom B means that A implies B; an absence of a path from A to B means that the implication does not hold (even if each voter approves exactly one candidate per round, denoted by $\mathcal{E}_{=1}$). A dashed arrow means implication for $\mathcal{E}_{=1}$, but not in general. Thick arrows denote implications established in this paper. The axioms on a green plain background can be satisfied in each election. The axioms on a yellow dotted background can be satisfied if each voter approves at least one candidate per round and the number of rounds is divisible by the number of voters, but not in general; for the ones on a red striped background, even in this special case there are instances where they are not satisfiable. For axioms on a white background, satisfiability remains open. The axioms introduced in this paper are written in bold.]

1.2 Related Work

Proportional representation in temporal voting was first studied by Bulteau et al. [2021], who extended two notions of proportional representation—JR and PJR—to the temporal setting. They showed that a JR outcome can be computed in polynomial time, even for the most demanding version of JR among those they considered. For PJR, they proved the existence of an outcome satisfying the axiom, but their constructive proof relies on an exponential-time algorithm. This work was extended by Chandak et al. [2024], who introduced a temporal variant of EJR and showed that several multiwinner voting rules that satisfy PJR or EJR in the standard model can be adapted to the temporal setting so as to satisfy the corresponding temporal variants of PJR and EJR. Subsequently, Elkind et al. [2025c] studied the complexity of verifying whether an outcome satisfies various proportionality axioms.

Elkind et al. [2024a] focused on the tradeoffs between the social welfare and other desirable properties in the temporal setting, such as strategyproofness and a weaker form of proportionality. Elkind et al. [2025b] considered similar computational questions, but in the setting where voters have disutility over candidates. A similar model has also been explored by Lackner [2020] and Lackner and Maly [2023] under the perpetual voting framework, where they consider temporal extensions of multiwinner voting rules and their axiomatic properties. We refer the reader to Elkind et al. [2024b] for a systematic review of recent work on temporal voting and related models.

An adjacent line of work looks into sequential committee elections, where an entire committee is elected at each round [Bredereck et al., 2020, 2022, Chen et al., 2024, Deltl et al., 2023, Zech et al., 2024]. These works focus on constraining the extent of changes to the committees chosen across rounds.

Multiple other subareas are similar in spirit to the temporal voting framework, but with additional constraints/restrictions. In apportionment with approval preferences, the goal is to allocate the seats of a fixed-size committee to parties based on voters' (approval) preferences over the parties [Brill et al., 2024, Delemazure et al., 2023]. This is equivalent to a restricted setting of temporal voting with static voter preferences. In fair scheduling, each agent's preference is a permutation, and so is the outcome [Elkind et al., 2022, Patro et al., 2022]. Our model differs from this setting in that we allow each project to be chosen more than once (both in agents' preferences and the outcome). Fair public decision-making is similar to the temporal voting model, but the focus is typically on weaker forms of proportionality [Alouf-Heffetz et al., 2022, Conitzer et al., 2017, Fain et al., 2018, Skowron and Górecki, 2022]. Finally, an adjacent but related framework is that of resource allocation over time [Allouah et al., 2023, Bampis et al., 2018, Elkind et al., 2025a], where items are sequentially (and privately) allocated to agents. While this setting can potentially be modeled using the temporal voting framework, the fairness notions typically considered are fundamentally different.

Another relevant work is that of Masařík et al. [2024], who considered a more general voting model incorporating feasibility constraints, and proposed proportionality axioms for this model. We include a detailed discussion of the connection between their concepts and some of ours in Appendix A.

2 Preliminaries

For every natural number $k \in \mathbb{N}$, we let $[k] = \{1, 2, \ldots, k\}$.

A temporal election is a tuple $E = (C, N, \ell, A)$, where $C$ is the set of candidates, $N$ is the set of $n$ voters, $\ell$ is the number of rounds, and $A = (a_i)_{i \in N}$ is an approval profile, where $a_i = (a_{i,1}, a_{i,2}, \ldots, a_{i,\ell})$ is the ballot of voter $i \in N$, i.e., a list of $\ell$ subsets of $C$; in round $r \in [\ell]$ voter $i$ approves the candidates in $a_{i,r} \subseteq C$. We denote the set of all temporal elections by $\mathcal{E}$, and the set of temporal elections in which in every round every voter approves at least (resp. exactly) one candidate by $\mathcal{E}_{\geq 1}$ (resp. $\mathcal{E}_{=1}$). We say that voters from a subset $S \subseteq N$ agree in a round $r \in [\ell]$ if there exists a candidate that they all approve in this round, i.e., $\bigcap_{i \in S} a_{i,r} \neq \emptyset$.

An outcome $o = (o_1, o_2, \ldots, o_\ell) \in C^\ell$ is a sequence of candidates chosen in each round. For a subset of rounds $R \subseteq [\ell]$ and an outcome $o$, we write $o_R = (o_r)_{r \in R}$ to denote the suboutcome with respect to $R$. The satisfaction of a subset of voters $S \subseteq N$ from a suboutcome $o_R$ is $\mathrm{sat}_S(o_R) = |\{r \in R : o_r \in \bigcup_{i \in S} a_{i,r}\}|$, i.e., the number of rounds in $R$ in which the selected candidate is approved by at least one voter from $S$. If $S = \{i\}$ for some $i \in N$, we drop the brackets and write $\mathrm{sat}_i(o_R)$.

A voting rule $f$ is a function that for every election $E$ outputs a non-empty set of outcomes $f(E)$.
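For concreteness, the following sketch fixes a representation of these objects (ours, not the paper's): `ballots[i][r]` is the set $a_{i,r}$, and the two helpers compute $\mathrm{sat}_i(o_R)$ and $\mathrm{sat}_S(o_R)$. Later sketches in this document reuse these helpers; the demo election is the one used in Proposition 3.4 below.

```python
def sat(o, ballots, i, rounds=None):
    """sat_i(o_R): rounds (default: all) in which voter i approves the selected candidate."""
    rounds = range(len(o)) if rounds is None else rounds
    return sum(1 for r in rounds if o[r] in ballots[i][r])

def sat_group(o, ballots, S, rounds=None):
    """sat_S(o_R): rounds in which the selected candidate is approved by someone in S."""
    rounds = range(len(o)) if rounds is None else rounds
    return sum(1 for r in rounds if any(o[r] in ballots[i][r] for i in S))

# 12 voters, 6 rounds (the election from Proposition 3.4 below), voters 0-indexed:
# round 1: voters 0-3 approve x, 4-7 approve y, 8-11 approve z;
# rounds 2-6: voter i approves her private candidate c_i.
ballots = [[{"x" if i < 4 else "y" if i < 8 else "z"}] + [{f"c{i}"}] * 5
           for i in range(12)]
o = ["x", "c0", "c1", "c2", "c3", "c4"]
print(sat(o, ballots, 0), sat_group(o, ballots, {4, 5}))  # -> 2 1
```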
Proportionality Axioms from Prior Work. The temporal voting setting can be seen as an extension of multiwinner voting, where, in a single round of voting, voters in $N$ report approvals $(a_i)_{i \in N}$ over candidates in $C$, and the goal is to select a set of $k$ winners, $W$. The satisfaction of a group of voters $S$ is then defined as the number of candidates in $W$ approved by at least one member of $S$. Key proportionality axioms for multiwinner voting [Aziz et al., 2017, Sánchez-Fernández et al., 2017] require $W$ to provide a certain level of satisfaction to each group of voters $S$ that agree on $t \geq 1$ candidates (in the sense that $|\bigcap_{i \in S} a_i| \geq t$). In particular, justified representation (JR^mw) requires the satisfaction to be at least $1$ if $k \cdot |S|/n \geq 1$; proportional justified representation (PJR^mw) strengthens this guarantee to $\min(t, \lfloor k \cdot |S|/n \rfloor)$; while extended justified representation (EJR^mw) requires at least one voter in $S$ to approve $\min(t, \lfloor k \cdot |S|/n \rfloor)$ members of $W$.

Bulteau et al. [2021] and Chandak et al. [2024] propose two versions of each of these axioms for the temporal setting. We will now present these axioms using the terminology of Elkind et al. [2025c]. (Footnote: In their conference paper, Chandak et al. [2024] use 'JR/PJR/EJR' for what we call 'weak JR/PJR/EJR' and 'strong JR/PJR/EJR' for what we call 'JR/PJR/EJR', but their arXiv version uses the same terminology as we do.) Note that the temporal setting is more restrictive as compared to the multiwinner setting, as we cannot select two candidates in the same round. The weakest version of these axioms only considers subsets of voters that agree in every round.

Definition 2.1. Given a temporal election $E = (C, N, \ell, A)$ and an outcome $o$, if for every subset of voters $S \subseteq N$ that agree in all $\ell$ rounds it holds that
• $\mathrm{sat}_S(o) \geq \min(1, \lfloor \ell \cdot |S|/n \rfloor)$, then $o$ provides weak justified representation (wJR);
• $\mathrm{sat}_S(o) \geq \lfloor \ell \cdot |S|/n \rfloor$, then $o$ provides weak proportional justified representation (wPJR);
• $\mathrm{sat}_i(o) \geq \lfloor \ell \cdot |S|/n \rfloor$ for some $i \in S$, then $o$ provides weak extended justified representation (wEJR).

Definition 2.1 offers no guarantees to groups that disagree even in a single round. A more flexible approach is to provide guarantees that scale with the number of rounds where the members agree.

Definition 2.2. Given a temporal election $E = (C, N, \ell, A)$ and an outcome $o$, if for each $t > 0$ and every subset of voters $S \subseteq N$ that agree in a size-$t$ subset of rounds it holds that
• $\mathrm{sat}_S(o) \geq \min(1, \lfloor t \cdot |S|/n \rfloor)$, then $o$ provides justified representation (JR);
• $\mathrm{sat}_S(o) \geq \lfloor t \cdot |S|/n \rfloor$, then $o$ provides proportional justified representation (PJR);
• $\mathrm{sat}_i(o) \geq \lfloor t \cdot |S|/n \rfloor$ for some $i \in S$, then $o$ provides extended justified representation (EJR).

For example, if $n = 8$, $\ell = 8$, and a pair of voters agrees in $t = 4$ rounds, PJR guarantees $\mathrm{sat}_S(o) \geq \lfloor 4 \cdot 2 / 8 \rfloor = 1$.

Chandak et al. [2024] show that every temporal election admits an outcome that provides EJR. Also, EJR implies PJR, PJR implies JR, and each axiom implies its weak counterpart (we say that an axiom $A_1$ implies axiom $A_2$, denoted $A_1 \Rightarrow A_2$, if every outcome that provides $A_1$ also provides $A_2$).
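The following brute-force sketch (ours, reusing the `sat` helper from the earlier sketch) checks EJR from Definition 2.2 by enumerating all voter groups; it is exponential in the number of voters and meant only for small instances. It suffices to test each group with $t$ equal to its total number of agreement rounds, since the guarantee $\lfloor t \cdot |S|/n \rfloor$ is largest there.

```python
from itertools import combinations

def agreement_rounds(S, ballots):
    """Rounds in which all voters in S approve a common candidate."""
    ell = len(ballots[0])
    return [r for r in range(ell) if set.intersection(*(ballots[i][r] for i in S))]

def provides_ejr(o, ballots):
    """Brute-force temporal EJR check (Definition 2.2)."""
    n = len(ballots)
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            t = len(agreement_rounds(S, ballots))
            demand = (t * size) // n  # floor(t * |S| / n)
            if demand > 0 and all(sat(o, ballots, i) < demand for i in S):
                return False  # S witnesses a violation of EJR
    return True
```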
3 Extended Justified Representation +

In the context of multiwinner voting, Brill and Peters [2023] have recently strengthened EJR^mw to a new axiom, which they call Extended Justified Representation + (EJR^mw+). The goal of this section is to adapt this axiom to the temporal setting. EJR^mw+ has three attractive features, which we aim to preserve in our temporal adaptation: (i) it implies EJR^mw (while the converse is not true), (ii) a committee that provides EJR^mw+ can be found in polynomial time, and (iii) it can be verified in polynomial time whether a given committee provides EJR^mw+.

EJR^mw+ requires that for every group of voters $S$ that jointly approve a candidate $c$, either $c$ is included in the winning committee $W$ or there is a voter $i \in S$ whose satisfaction from $W$ is at least $\lfloor k \cdot |S|/n \rfloor$. A direct translation of this axiom to the temporal setting gives us the following definition.

Definition 3.1. Given a temporal election $E = (C, N, \ell, A)$, an outcome $o$ provides strong extended justified representation + (sEJR+) if for every subset of voters $S \subseteq N$ and every round $r \in [\ell]$ with $\bigcap_{i \in S} a_{i,r} \neq \emptyset$ it holds that (i) $\mathrm{sat}_i(o) \geq \lfloor \ell \cdot |S|/n \rfloor$ for some $i \in S$, or (ii) $o_r \in \bigcap_{i \in S} a_{i,r}$.

Note that the second condition is phrased in terms of choosing an outcome from $\bigcap_{i \in S} a_{i,r}$ rather than selecting a specific candidate; this is because there may be several candidates approved by all members of $S$ in round $r$, but only one of them can be selected. In contrast, in the multiwinner setting, if $S$ agrees on multiple candidates, several such candidates can be included in the winning committee.

We labeled the concept introduced in Definition 3.1 as 'strong' for two reasons. First, there are elections (even in $\mathcal{E}_{=1}$) with no sEJR+ outcomes. Second, sEJR+ is fundamentally different from JR/PJR/EJR, as defined in Definition 2.2. To show this, we will now define strong JR/PJR/EJR following the approach of Definition 3.1, and explain how the resulting notions differ from JR/PJR/EJR.

Definition 3.2. Given a temporal election $E = (C, N, \ell, A)$ and an outcome $o$, if for each $t > 0$ and every subset of voters $S \subseteq N$ that agree in a size-$t$ subset of rounds it holds that
• $\mathrm{sat}_S(o) \geq \min(1, \lfloor \ell \cdot |S|/n \rfloor)$, then $o$ provides strong justified representation (sJR);
• $\mathrm{sat}_S(o) \geq \min(t, \lfloor \ell \cdot |S|/n \rfloor)$, then $o$ provides strong proportional justified representation (sPJR);
• $\mathrm{sat}_i(o) \geq \min(t, \lfloor \ell \cdot |S|/n \rfloor)$ for some $i \in S$, then $o$ provides strong extended justified representation (sEJR).

Together with sEJR+, strong JR/PJR/EJR form a natural hierarchy and imply their counterparts from Definition 2.2.
most one voter. Therefore, no outcome can provide sJR. The reason why sJR (and hence sEJR+) is hard to satisfy is that it can give strong guarantees to groups of voters based on agreement in just a few rounds. These rounds may be heavily contested, and selecting a candidate to satisfy one group might make it impossible to satisfy another group that agrees over the same rounds (such as groups 1–4, 5–8 and 9–12 in the proof of Proposition 3.4). In contrast, in the multiwinner setting, all kcommittee slots are equivalent. Now, the difference between sEJR and EJR is that the satisfaction guarantee to a group Sthat agrees introunds is ⌊ℓ· |S|/n⌋under to the former and ⌊t· |S|/n⌋under the latter. Can we modify the definition of sEJR+ in a similar way to obtain an axiom that can always be satisfied? The challenge is that the multiwinner variant of this axiom, EJRmw+, provides guarantees to voters in Sas long as they agree on one candidate, i.e., there is no ‘ t’ that can be used to replace ℓ. We would like to preserve this feature, but define the satisfaction guarantee so that it only takes into account the rounds with some level of agreement. To this end, we replace the size of the group |S|with a more ‘local’ measure, namely, the number of voters in Swho agree on a candidate in a given round, and focus on the rounds where this quantity is high. Definition 3.5. Given a temporal election E= (C, N, ℓ, A )andσ∈[n],τ∈[ℓ], we say that a subset of voters Sis(σ, τ)-cohesive if there exists a set of τrounds R⊆[ℓ]and a suboutcome oR= (or)r∈R such that for each r∈Rat least σvoters in Sapprove or. We say that an outcome oprovides extended justified representation + (EJR+) if for all σ∈[n],τ∈[ℓ], every (σ, τ)-cohesive subset of voters Sand every round r∈[ℓ]withT i∈Sai,r̸=∅it holds that (i)sati(o)≥ ⌊τ·σ/n⌋for some i∈S, or (ii) or∈T i∈Sai,r. 6 In the election from Proposition 3.4, the group S={1,2,3,4}is(4,1)-cohesive (as all members of Sapprove xin round 1) and (1,6)-cohesive (as at least one member of Sapproves (x, c1, . . . , c 1)), but 4·1<12,1·6<12, so EJR+ does not offer any representation guarantees to S. We will now argue that this is the ‘correct’ definition of EJR+ for the temporal setting. Our argument is threefold: we show that (a) EJR+ implies EJR and is implied by sEJR+, (b) EJR+ is polynomial-time verifiable, and (c) an EJR+ outcome always exists and can be found in polynomial time. Proposition 3.6. It holds that: (i) sEJR+ =⇒EJR+ and (ii) EJR+ =⇒EJR. Proposition 3.7. Given a temporal election Eand an outcome o, it can be checked in polynomial time whether oprovides EJR+. Proof. Consider an arbitrary temporal election E= (C, N, ℓ, A )and an outcome o. Our goal is to check if there are integers σ∈[n],τ∈[ℓ], a(σ, τ)-cohesive subset of voters S, and a round r∈[ℓ] withT i∈Sai,r̸=∅such that or̸∈T i∈Sai,rand for each i∈Sit holds that sati(S)<⌊τ·σ/n⌋. Our algorithm makes use of the observation that every superset of a (σ, τ)-cohesive | https://arxiv.org/abs/2505.22513v1 |
subset of voters is itself (σ, τ)-cohesive. We go over all possibilities for r∈[ℓ]andc∈C\{or}. For a given pair (r, c), we go over all λ∈[ℓ] and identify the set of all voters i∈Nwho (i) approve cin round rand (ii) have sati(o)< λ; denote this set by Sr,c,λ. If all voters in Sr,c,λapprove orin round r, we disregard this set. Otherwise, for each q∈[ℓ]letcqbe a candidate that receives the maximum number of approvals from voters in Sr,c,λ, and let αqbe the number of voters in Sr,c,λwho approve cqin round q. We then sort (αq)q∈[ℓ]is non-increasing order; denote the resulting list by (α′ q)q∈[ℓ]. Finally, we check whether there exists a q∈[ℓ]such that ⌊q·α′ q/n⌋ ≥λ. If this is the case, the respective set of qrounds witnesses that Sr,c,λis(α′ q, q)-cohesive. Moreover, voters in Sr,c,λ approve cin round r(soT i∈Sai,r̸=∅, butor̸∈T i∈Sai,r), yet for each voter i∈Sr,c,λits satisfaction is less than λ≤ ⌊q·α′ q/n⌋, i.e., Sr,c,λwitnesses that oviolates EJR+. In this case, our algorithm reports that odoes not provide EJR+. Our algorithm goes over mℓ2triples (r, c, λ ), and performs O(nmℓ +ℓlogℓ)operations for each such triple. Hence, it runs in time polynomial in the input size. Clearly, if our algorithm reports that EJR+ is violated, it provides an explicit witness. It remains to argue that if EJR+ is violated, our algorithm will be able to detect this. To see this, suppose that the violation of EJR+ is witnessed by a (σ, τ)-cohesive set Ssuch thatT i∈Sai,r̸=∅, butor̸∈T i∈Sai,r, and the satisfaction of each voter in Sis less than ⌊σ·τ/n⌋. Then there exists a candidate c∈T i∈Sai,r; note that c̸=r. Let Rbe the set of rounds witnessing that Sis(σ, τ)-cohesive. When our algorithm considers the triple (r, c, λ )withλ=⌊σ·τ/n⌋, it constructs the set Sr,c,λand by definition it holds that S⊆Sr,c,λ. Then, for each q∈Rwe have αq≥σ, since at least σvoters from Sapprove the same candidate in round q. Hence, the sequence (αq)q∈[ℓ]contains at least |R|=τvalues that are at least σ, and consequently our algorithm will be able to identify that EJR+ is violated. To prove that every temporal election has an EJR+ outcome, we show that the temporal variant of the ε-lsPA V rule (introduced by Aziz et al. [2018] in the multiwinner setting and adapted to the temporal setting by Chandak et al. [2024]) with ε= 1/(2ℓ2)always outputs EJR+ outcomes and runs in polynomial time. Definition 3.8. For a temporal election E= (C, N, ℓ, A ), the harmonic score of an outcome ois defined as sH(o, E) =P i∈NPsati(o) j=11 j.Theε-lsPA V rule outputs all outcomes oinCℓsuch that for every outcome o′∈Cℓthat agrees with oinℓ−1rounds it holds that sH(o, E) +ε≥sH(o′, E). Setting ε≥1/(2ℓ2)ensures that ε-lsPA V outputs an outcome in polynomial time (see Aziz et al. [2018] and Chandak et al. [2024]). Moreover, we show that ε-lsPA V provides EJR+ if we set ε <1/ℓ2. 7 Theorem 3.9. For every temporal election E= (C, N, ℓ, A ), each output of ε-lsPAV with ε <1/ℓ2 provides EJR+. Proof. Fix an arbitrary temporal election E= (C, N, ℓ, A ), and | https://arxiv.org/abs/2505.22513v1 |
let obe an output of ε-lsPA V on E. For a contradiction, assume that odoes not provide EJR+. Then there exist σ∈[n],τ∈[ℓ], a (σ, τ)-cohesive subset of voters S⊆N, a round r∈[ℓ], and a candidate c∈Cthat is approved by all voters in Sin round rsuch that c̸=or, and for every i∈Sit holds that sati(o)<⌊τ·σ/n⌋. If σ=|S|, this immediately implies that EJR is violated, which is a contradiction, as Chandak et al. [2024, Theorem 4.5] show that ε-lsPA V provides EJR for ε >1/ℓ2. Thus, let us assume that σ <|S|. LetRbe a set of τrounds witnessing that Sis(σ, τ)-cohesive, i.e., in each round q∈Rthere are σ voters in Sthat approve a common candidate. We can assume that r∈R, with cr=c. For each q∈R, letcqbe the respective candidate and let Sqbe the subset of voters in Swho approve cqin round q; note that|Sq| ≥σ. For each q∈R, let∆(oq, cq)be the change in the harmonic score obtained by replacing oqwithcqin round q. We have ∆(oq, cq) =X i∈N oq̸∈ai,q,cq∈ai,q1 sati(o) + 1−X i∈N oq∈ai,q,cq̸∈ai,q1 sati(o) ≥X i∈Sq oq̸∈ai,q1 sati(o) + 1−X i∈N\Sq oq∈ai,q,cq̸∈ai,q1 sati(o)(for every i∈Sqwe have cq∈ai,q) ≥X i∈Sq oq̸∈ai,q1 sati(o) + 1−X i∈N\Sq oq∈ai,q1 sati(o)(remove a condition in the second sum) =X i∈Sq1 sati(o) + 1−X i∈N\Sq oq∈ai,q1 sati(o)−X i∈Sq oq∈ai,q1 sati(o) + 1(add and subtract the same) =X i∈Sq1 sati(o) + 1−X i∈N oq∈ai,q1 sati(o)+X i∈Sq oq∈ai,q1 sati(o)−1 sati(o) + 1 (add and subtract the same) ≥X i∈Sq1 sati(o) + 1−X i∈N oq∈ai,q1 sati(o)(remove a non-negative term) ≥σ ⌊τ·σ/n⌋−X i∈N oq∈ai,q1 sati(o). (|Sq| ≥σandsati(o)<⌊τ·σ/n⌋fori∈Sq⊆S) Forq=randcr=c, we have |Sr|=|S| ≥σ+ 1, so we can strengthen the last transition by replacing σwithσ+ 1. We obtain ∆(or, c)≥σ+ 1 ⌊τ·σ/n⌋−X i∈N or∈ai,r1 sati(o). Next, we sum ∆(oq, cq)over all q∈R, including round r: X q∈R∆(oq, cq)≥τ·σ+ 1 ⌊τ·σ/n⌋−X q∈RX i∈N oq∈ai,q1 sati(o) ≥τ·σ+ 1 ⌊τ·σ/n⌋−X q∈RX i∈N oq∈ai,q1 sati(oR)(sati(o)≥sati(oR)) 8 =τ·σ+ 1 ⌊τ·σ/n⌋−X i∈NX q∈R oq∈ai,q1 sati(oR)(change the order of the summation) =τ·σ+ 1 ⌊τ·σ/n⌋−X i∈N oq∈ai,qfor some q∈R1 (by the definition of sati(oR)) ≥τ·σ+ 1 ⌊τ·σ/n⌋−n ≥n+n τ·σ−n =n τ·σ. Thus, by the pigeonhole principle, there exists a round q∈Rin which switching from oqtocqwould increase the PA V score by at least n/(τ2σ). Since n≥σandτ≤ℓ, we have n/(τ2σ)≥1/τ2≥1/ℓ2, which contradicts the assumption that owas an outcome of ε-lsPA V with ε <1/ℓ2. We note that ε-lsPA V always produces an outcome that satisfies condition (i) in Definition 3.5. Thus, we can obtain a stronger, but still satisfiable version of EJR+ by removing condition (ii). However, we decided to keep it for consistency with the multiwinner version of this axiom, i.e., EJRmw+.2 Finally, let us define a weaker version of the EJR+ axiom in the spirit of wEJR/wPJR/wJR formula- tions from Definition 2.1. Definition 3.10. Given a temporal election E= (C, N, ℓ, A ), we say that an outcome oprovides weak extended justified representation + (wEJR+) if for all σ∈[n], every (σ, ℓ)-cohesive subset of voters S and every round r∈[ℓ]withT i∈Sai,r̸=∅it holds that (i)sati(o)≥ ⌊ℓ·σ/n⌋for some i∈S, or (ii)or∈T i∈Sai,r. We then show that wEJR+ as defined above fits well into our | https://arxiv.org/abs/2505.22513v1 |
Proposition 3.11. It holds that: (i) EJR+ $\Rightarrow$ wEJR+ and (ii) wEJR+ $\Rightarrow$ wEJR.

4 Full Justified Representation

The next axiom that we would like to adapt to the temporal setting is full justified representation (FJR^mw) [Peters et al., 2021]. Similarly to EJR^mw, it aims to select a size-$k$ committee $W$ that provides guarantees to every subset of voters $S$ in proportion to its size $|S|$ and the level of agreement. However, when measuring agreement, it does not require all voters in $S$ to approve the same set of candidates. Instead, voters in $S$ go over all subsets of candidates $T$ with $|T| \leq k \cdot |S|/n$ and compute their satisfaction from $T$ as $\kappa^{mw}_S(T) = \min_{i \in S} |T \cap a_i|$, where $a_i$ is the approval ballot of voter $i$. FJR^mw then requires that for each choice of $T$, some voter in $S$ approves at least $\kappa^{mw}_S(T)$ members of $W$. FJR^mw is (strictly) stronger than EJR^mw: intuitively, it provides EJR-like guarantees to a larger collection of voter subsets, so every outcome that provides FJR^mw also provides EJR^mw, but the converse is not true. Moreover, every election has an FJR^mw outcome [Peters et al., 2021].

To translate FJR^mw to the temporal setting, a natural approach is to iterate over subsets of rounds $R$ (and the best possible selection of outcomes for these rounds) instead of subsets of candidates $T$. This yields the following definition.

Definition 4.1. Given a temporal election $E = (C, N, \ell, A)$, we say that an outcome $o$ provides strong full justified representation (sFJR) if for every subset of voters $S \subseteq N$, every subset of rounds $R \subseteq [\ell]$ such that $|R| = \lfloor \ell \cdot |S|/n \rfloor$, and every outcome $o'$, there is a voter $i \in S$ such that $\mathrm{sat}_i(o) \geq \min_{j \in S} \mathrm{sat}_j(o'_R)$. Equivalently, for every subset of voters $S$ there is an $i \in S$ such that
\[ \mathrm{sat}_i(o) \geq \max_{R \subseteq [\ell]:\, |R| = \lfloor \ell \cdot |S|/n \rfloor} \;\max_{o'_R \in C^R} \;\min_{j \in S} \mathrm{sat}_j(o'_R). \]

However, just as for sEJR+, the direct approach results in a definition that is too demanding: later in this section, we will show that sFJR implies sEJR, which means that some elections have no sFJR outcomes. A simple way to modify this definition is to consider the worst (rather than the best) subset of rounds $R$ for the set $S$ (while still allowing the voters in $S$ to choose outcomes for rounds in $R$).

Definition 4.2. Given a temporal election $E = (C, N, \ell, A)$, an outcome $o$ provides weak full justified representation (wFJR) if for every subset of voters $S \subseteq N$ there is a subset of rounds $R \subseteq [\ell]$ of size $|R| = \lfloor \ell \cdot |S|/n \rfloor$ such that for every other outcome $o'$ there is a voter $i \in S$ with $\mathrm{sat}_i(o) \geq \min_{j \in S} \mathrm{sat}_j(o'_R)$. Equivalently, for every subset of voters $S$, there is an $i \in S$ such that
\[ \mathrm{sat}_i(o) \geq \min_{R \subseteq [\ell]:\, |R| = \lfloor \ell \cdot |S|/n \rfloor} \;\max_{o'_R \in C^R} \;\min_{j \in S} \mathrm{sat}_j(o'_R). \]

The reason we placed the concept introduced in Definition 4.2 at the 'weak' level of our hierarchy is that requiring a way of jointly reaching a satisfaction threshold (via $o'_R$) in all subsets of rounds of size $\lfloor \ell \cdot |S|/n \rfloor$ is similar in spirit, but weaker than requiring total agreement in all rounds. Indeed, in a moment we will show that wFJR implies wEJR.

Finally, for an FJR axiom in the spirit of EJR/PJR/JR from Definition 2.2, we would like to move away from considering all (subsets of) rounds. To this end, when determining what $S$ 'deserves', we allow $S$ to designate a subset of rounds $T$, and then we pick the worst subset of rounds $R$, with the size of $R$ dependent on $|T|$ (rather than $\ell$). This way, if the voters in $S$ can reach an acceptable compromise over a large set of rounds $T$, they obtain strong satisfaction guarantees; however, if $T$ is small, they still get a (weaker) guarantee.

Definition 4.3. Given a temporal election $E = (C, N, \ell, A)$, we say that an outcome $o$ provides full justified representation (FJR) if for every subset of voters $S \subseteq N$ and every subset of rounds $T \subseteq [\ell]$ there exists a subset $R \subseteq T$ with $|R| = \lfloor |T| \cdot |S|/n \rfloor$ such that for every other outcome $o'$ there is a voter $i \in S$ with $\mathrm{sat}_i(o) \geq \min_{j \in S} \mathrm{sat}_j(o'_R)$. Equivalently, for every subset $S \subseteq N$ there is a voter $i \in S$ with $\mathrm{sat}_i(o) \geq \max_{T \subseteq [\ell]} \mu_S(T)$, where
\[ \mu_S(T) = \min_{R \subseteq T:\, |R| = \lfloor |T| \cdot |S|/n \rfloor} \;\max_{o'_R \in C^R} \;\min_{j \in S} \mathrm{sat}_j(o'_R). \]

The first argument that our definitions are 'correct' is that all of the expected implications hold.

Proposition 4.4. It holds that: (i) sFJR $\Rightarrow$ sEJR and FJR, (ii) FJR $\Rightarrow$ EJR and wFJR, and (iii) wFJR $\Rightarrow$ wEJR.

In the multiwinner setting, Peters et al. [2021] show that the Greedy Cohesive Rule (GCR) [Bredereck et al., 2019] always outputs FJR^mw outcomes. We will now show that a variant of GCR satisfies FJR in the temporal setting, providing further evidence that our definition of FJR is 'correct'. Our starting point is an adaptation of GCR to the temporal setting by Elkind et al. [2025c], who show that their variant of the rule satisfies EJR. We modify the algorithm of Elkind et al. [2025c], and show that our version satisfies FJR.
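Since GCR (Algorithm 1 below) repeatedly evaluates the demand $\mu_S(T)$ in lines 4–5, the following brute-force sketch of $\mu_S(T)$ (ours, exponential-time) may help make Definition 4.3 concrete on tiny instances; it reuses the ballots format from Section 2.

```python
from itertools import combinations, product

def mu(S, T, ballots, candidates):
    """Brute-force mu_S(T) from Definition 4.3, for a nonempty voter group S."""
    n = len(ballots)
    size = (len(T) * len(S)) // n  # floor(|T| * |S| / n)
    worst = None
    for R in combinations(sorted(T), size):
        # best suboutcome on R for the group: maximize the minimum satisfaction within S
        best = max(
            min(sum(1 for r, c in zip(R, oR) if c in ballots[j][r]) for j in S)
            for oR in product(candidates, repeat=size)
        )
        worst = best if worst is None else min(worst, best)
    return worst
```

Note that lines 4–5 of GCR additionally maximize over all groups $S$ and all round sets $T$ on top of this computation, which is one reason the rule does not run in polynomial time.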
A formal description of GCR is presented in Algorithm 1.

Algorithm 1: Greedy Cohesive Rule
Input: a temporal election $E = (C, N, \ell, A)$
1: $o \leftarrow (c, \ldots, c)$ for an arbitrary $c \in C$
2: $V \leftarrow N$, $p \leftarrow 1$
3: while $V \neq \emptyset$ do
4:   pick $S_p$ from $\arg\max_{S \subseteq V} \max_{T \subseteq [\ell]} \mu_S(T)$
5:   pick $T_p$ from $\arg\max_{T \subseteq [\ell]} \mu_{S_p}(T)$
6:   $V \leftarrow V \setminus S_p$, $p \leftarrow p + 1$
7: end while
8: $\pi \leftarrow$ a permutation of $[p]$ such that $|T_{\pi(1)}| \leq \cdots \leq |T_{\pi(p)}|$
9: $\mathcal{T} \leftarrow [\ell]$
10: for $q \in [p]$ do
11:   $i \leftarrow \pi(q)$
12:   $R \leftarrow$ an arbitrary subset of $\lfloor |T_i| \cdot |S_i| / n \rfloor$ rounds from $T_i \cap \mathcal{T}$
13:   pick $o'_R$ from $\arg\max_{o''_R \in C^{|R|}} \min_{j \in S_i} \mathrm{sat}_j(o''_R)$
14:   $o_R \leftarrow o'_R$, $\mathcal{T} \leftarrow \mathcal{T} \setminus R$
15: end for
16: return $o$

Intuitively, during the first stage (lines 1–7) GCR partitions the voters into sets $S_1, S_2, \ldots, S_p$: at each step $q \in [p]$ it picks a subset of voters $S_q$ with the highest 'demand' $\max_T \mu_S(T)$, adds it to the partition, and discards all voters in $S_q$. During the second stage (lines 8–16), it iterates through the sets $S_1, S_2, \ldots, S_p$ in order of the size of the associated subset of rounds $T$, from smallest to largest; when processing $S_q$, it selects outcomes for some of the rounds in $T_q$ so as to satisfy the 'demand' of voters in $S_q$.

Theorem 4.5. For every temporal election $E = (C, N, \ell, A)$, every output of GCR provides FJR.

Proof. By construction, the sets $S_1, S_2, \ldots, S_p$ are pairwise disjoint: for each $q \in [p]$, when we select $S_q$, we remove voters in $S_q$ from $V$, and therefore no set $S_i$ with $i > q$ can contain a voter from $S_q$. We will now argue that during the second stage of the algorithm (lines 8–16), when we consider the set $S_i$, the set of rounds $T_i$ contains $\lfloor |T_i| \cdot |S_i| / n \rfloor$ rounds whose outcomes have not been set in previous iterations. For readability, we will assume that $\pi(q) = q$ for all $q \in [p]$.

Suppose for a contradiction that this is not the case, and let $q \in [p]$ be the first index such that there are fewer than $\lfloor |T_q| \cdot |S_q| / n \rfloor$ rounds from $T_q$ available. This means that strictly more than $|T_q| - \lfloor |T_q| \cdot |S_q| / n \rfloor$ of the rounds in $T_q$ have been taken up in previous iterations, and hence
\[ \sum_{i=1}^{q} \left\lfloor \frac{|T_i| \cdot |S_i|}{n} \right\rfloor > |T_q|. \]
Since $|T_q| \geq |T_i|$ for all $i \leq q$, this inequality implies
\[ \frac{|T_q|}{n} \cdot \sum_{i=1}^{q} |S_i| \;\geq\; \sum_{i=1}^{q} \frac{|T_i| \cdot |S_i|}{n} \;\geq\; \sum_{i=1}^{q} \left\lfloor \frac{|T_i| \cdot |S_i|}{n} \right\rfloor \;>\; |T_q|. \]
Hence, $\sum_{i=1}^{q} |S_i| > n$, a contradiction with the fact that the sets $S_1, \ldots, S_q$ are pairwise disjoint.

Thus, when the algorithm processes $S_i$, it is presented with $\lfloor |T_i| \cdot |S_i| / n \rfloor$ rounds from $T_i$ and selects outcomes for these rounds so as to maximize the minimum satisfaction of voters in $S_i$ (line 13). Hence, the satisfaction of every $j \in S_i$ from $o$ is at least $\mu_{S_i}(T_i)$. By the choice of $T_i$, this means that $j$'s satisfaction is at least $\max_{T \subseteq [\ell]} \mu_{S_i}(T)$, i.e., the FJR condition is satisfied for all voters in $S_i$. In particular, none of the sets $S_1, \ldots, S_p$ can be a witness that FJR is violated.

It remains to show that no subset of voters $S \neq S_1, \ldots, S_p$ can witness that FJR is violated. Fix a subset $S \subseteq N$. Since $S_1, \ldots, S_p$ form a partition of $N$, the set $S$ must intersect with at least one of them. Let $i = \min\{q : S_q \cap S \neq \emptyset\}$, and let $j$ be some voter in $S \cap S_i$. As $j \in S_i$, her satisfaction from $o$ is at least $\max_{T \subseteq [\ell]} \mu_{S_i}(T)$. On the other hand, since Algorithm 1 picked $S_i$ over $S$ in line 4, we have $\max_{T' \subseteq [\ell]} \mu_S(T') \leq \max_{T' \subseteq [\ell]} \mu_{S_i}(T')$. Hence, $j$'s satisfaction from $o$ is at least $\max_{T' \subseteq [\ell]} \mu_S(T')$. As $j \in S$, the set $S$ cannot be a witness that FJR is violated. ∎
Theorem 4.5. For every temporal election $E = (C, N, \ell, A)$, every output of GCR provides FJR.

Proof. By construction, the sets $S_1, S_2, \ldots, S_p$ are pairwise disjoint: for each $q \in [p]$, when we select $S_q$, we remove the voters in $S_q$ from $V$, and therefore no set $S_i$ with $i > q$ can contain a voter from $S_q$.

We will now argue that during the second stage of the algorithm (lines 8–16), when we consider the set $S_i$, the set of rounds $T_i$ contains $\lfloor |T_i| \cdot |S_i|/n \rfloor$ rounds whose outcomes have not been set in previous iterations. For readability, we will assume that $\pi(q) = q$ for all $q \in [p]$. Suppose for a contradiction that this is not the case, and let $q \in [p]$ be the first index such that there are fewer than $\lfloor |T_q| \cdot |S_q|/n \rfloor$ rounds from $T_q$ available. This means that strictly more than $|T_q| - \lfloor |T_q| \cdot |S_q|/n \rfloor$ of the rounds in $T_q$ have been taken up in previous iterations, and hence
\[
\sum_{i=1}^{q} \frac{|T_i| \cdot |S_i|}{n} \ > \ |T_q|.
\]
Since $|T_q| \ge |T_i|$ for all $i \le q$, this inequality implies
\[
\frac{|T_q|}{n} \cdot \sum_{i=1}^{q} |S_i| \ \ge \ \sum_{i=1}^{q} \frac{|T_i| \cdot |S_i|}{n} \ > \ |T_q|.
\]
Hence $\sum_{i=1}^{q} |S_i| > n$, a contradiction with the fact that the sets $S_1, \ldots, S_q$ are pairwise disjoint.

Thus, when the algorithm processes $S_i$, it is presented with $\lfloor |T_i| \cdot |S_i|/n \rfloor$ rounds from $T_i$ and selects outcomes for these rounds so as to maximize the minimum satisfaction of the voters in $S_i$ (line 13). Hence, the satisfaction of every $j \in S_i$ from $o$ is at least $\mu_{S_i}(T_i)$. By the choice of $T_i$, this means that $j$'s satisfaction is at least $\max_{T \subseteq [\ell]} \mu_{S_i}(T)$, i.e., the FJR condition is satisfied for all voters in $S_i$. In particular, none of the sets $S_1, \ldots, S_p$ can be a witness that FJR is violated.

It remains to show that no subset of voters $S \neq S_1, \ldots, S_p$ can witness that FJR is violated. Fix a subset $S \subseteq N$. Since $S_1, \ldots, S_p$ form a partition of $N$, the set $S$ must intersect at least one of them. Let $i = \min\{q : S_q \cap S \neq \emptyset\}$, and let $j$ be some voter in $S \cap S_i$. As $j \in S_i$, her satisfaction from $o$ is at least $\max_{T \subseteq [\ell]} \mu_{S_i}(T)$. On the other hand, since Algorithm 1 picked $S_i$ over $S$ in line 4, we have $\max_{T' \subseteq [\ell]} \mu_S(T') \le \max_{T' \subseteq [\ell]} \mu_{S_i}(T')$. Hence, $j$'s satisfaction from $o$ is at least $\max_{T' \subseteq [\ell]} \mu_S(T')$. As $j \in S$, the set $S$ cannot be a witness that FJR is violated.

The running time of GCR is not polynomial in the input size, so it remains open whether an FJR outcome can be computed in polynomial time; this problem is also open in the multiwinner case.

Remark 4.6. The fact that there always exists an outcome that provides FJR is also implied by Theorem 11 of Masařík et al. [2024], because our notion of FJR is implied by their analogous notion in a more general setting. We discuss this connection in detail in Appendix A.

5 Full Proportional Justified Representation

In the definition of FJR, the guarantee provided to each group $S$ is of the form 'some voter in $S$ has high satisfaction', i.e., it is a lower bound on $\max_{i \in S} \mathrm{sat}_i(o)$; this approach is also used in the definition of EJR.
In contrast, the guarantee provided by PJR is of the form 'collectively, the voters in $S$ have high satisfaction', i.e., it is a lower bound on $\mathrm{sat}_S(o)$. One can combine the FJR approach to deciding what each group deserves with a PJR-style collective guarantee; we refer to the resulting notion as FPJR (the multiwinner version of FPJR was very recently proposed by Kalayci et al. [2025]). Based on the discussion in Sections 3 and 4, we define strong FPJR, FPJR, and weak FPJR.

Definition 5.1. Given a temporal election $E = (C, N, \ell, A)$, an outcome $o$ provides strong full proportional justified representation (sFPJR) (resp., full proportional justified representation (FPJR), or weak full proportional justified representation (wFPJR)) if for every $S \subseteq N$ we have $\mathrm{sat}_S(o) \ge \rho_s(S)$ (resp., $\mathrm{sat}_S(o) \ge \rho(S)$, or $\mathrm{sat}_S(o) \ge \rho_w(S)$), where
\[
\rho_s(S) = \max_{R \subseteq [\ell]\,:\,|R| = \lfloor \ell \cdot |S|/n \rfloor} \ \max_{o'_R \in C^R} \ \min_{i \in S} \mathrm{sat}_i(o'_R),
\]
\[
\rho(S) = \max_{T \subseteq [\ell]} \ \min_{R \subseteq T\,:\,|R| = \lfloor |T| \cdot |S|/n \rfloor} \ \max_{o'_R \in C^R} \ \min_{i \in S} \mathrm{sat}_i(o'_R),
\]
\[
\rho_w(S) = \min_{R \subseteq [\ell]\,:\,|R| = \lfloor \ell \cdot |S|/n \rfloor} \ \max_{o'_R \in C^R} \ \min_{i \in S} \mathrm{sat}_i(o'_R).
\]

Similarly to EJR, FPJR is implied by FJR and implies PJR on each level of our axiomatic hierarchy.

Proposition 5.2. It holds that: (i) sFJR $\Longrightarrow$ sFPJR $\Longrightarrow$ sPJR and FPJR, (ii) FJR $\Longrightarrow$ FPJR $\Longrightarrow$ PJR and wFPJR, and (iii) wFJR $\Longrightarrow$ wFPJR $\Longrightarrow$ wPJR.

Propositions 5.2 and 3.4 imply that some elections have no sFPJR outcomes. However, sFPJR turns out to be satisfiable for elections in $\mathcal{E}_{\ge 1}$ as long as the number of rounds $\ell$ is divisible by the number of voters $n$. (Footnote 3: This turns out to be the best we can hope for in terms of the satisfiability of the strong versions of our axioms, since for sEJR, and hence every axiom that implies it, there are elections even in this restricted case where it is unsatisfiable. We prove this in Appendix D.) Specifically, we show that in this case the simple Serial Dictatorship Rule (SDR) [Satterthwaite and Sonnenschein, 1981] outputs sFPJR outcomes. SDR operates by iterating through the rounds in $[\ell]$ while cycling through the voters in $N$: in each round $r \in [\ell]$ it picks an outcome approved by the current voter, i.e., it selects an arbitrary $o_r \in a_{r \bmod n,\, r}$ (for elections in $\mathcal{E}_{\ge 1}$ the set $a_{r \bmod n,\, r}$ cannot be empty, so such an $o_r$ has to exist).

Theorem 5.3. For every $n$-voter, $\ell$-round temporal election $E \in \mathcal{E}_{\ge 1}$ such that $n \mid \ell$, every output of SDR provides sFPJR.

Proof. Consider an election $E = (C, N, \ell, A) \in \mathcal{E}_{\ge 1}$ with $n \mid \ell$. Let SDR return the outcome $o$, and fix a subset of voters $S$. Since each voter selects the outcome of exactly $\ell/n$ rounds, we have $\mathrm{sat}_S(o) \ge \ell \cdot |S|/n$. Also, for every $R \subseteq [\ell]$, every $o'_R$, and every $i \in N$ we have $\mathrm{sat}_i(o'_R) \le |R|$, and hence $\rho_s(S) \le \lfloor \ell \cdot |S|/n \rfloor$. Thus, $\mathrm{sat}_S(o) \ge \rho_s(S)$, which is what we wanted to prove.

Interestingly, if an election with $n \mid \ell$ is additionally in $\mathcal{E}_{=1}$, then the outputs of SDR also provide wEJR. This is because for elections from $\mathcal{E}_{=1}$ we can show that wPJR (which, by Proposition 5.2, is implied by sFPJR) is equivalent to wEJR.

Proposition 5.4. For every temporal election $E \in \mathcal{E}_{=1}$, it holds that wPJR $\iff$ wEJR.
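To make the rule in Theorem 5.3 concrete, here is a minimal Python sketch of SDR (naming ours). With the paper's 1-indexed rounds, round $r$ is decided by voter $r \bmod n$, which becomes index r % n in the 0-indexed code below:

def sdr(N, ell, ballots):
    # Serial Dictatorship Rule: cycle through the voters, letting the
    # current voter dictate the current round. For elections in E_{>=1},
    # ballots[i][r] is a non-empty set, so the choice below always exists.
    n = len(N)
    return [next(iter(ballots[N[r % n]][r])) for r in range(ell)]

When $n$ divides $\ell$, every voter dictates exactly $\ell/n$ rounds, which is precisely the fact the proof of Theorem 5.3 exploits: $\mathrm{sat}_S(o) \ge \ell \cdot |S|/n \ge \rho_s(S)$ for every $S$.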
6 Core Stability

To complete the analysis of proportionality notions in the temporal setting, we consider the classic concept of core stability, which originates in cooperative game theory. Core stability for multiwinner voting was defined by Aziz et al. [2017], and it remains open whether all multiwinner elections admit outcomes in the core. This concept captures resistance to deviations: a subset of voters can deviate by selecting outcomes in a subset of rounds whose size is proportional to the size of the group, and an outcome is stable if there is no deviation that benefits all members of the group. Strong core stability, core stability, and weak core stability differ in how the voters are allowed to choose this subset of rounds.

Definition 6.1. Given a temporal election $E = (C, N, \ell, A)$, an outcome $o$ provides

• strong core stability (sCore) if for each $S \subseteq N$, each $R \subseteq [\ell]$ with $|R| = \lfloor \ell \cdot |S|/n \rfloor$, and each $o'_R \in C^R$ there is an $i \in S$ with $\mathrm{sat}_i(o) \ge \mathrm{sat}_i(o'_R)$;

• core stability (Core) if for each $S \subseteq N$ and each $T \subseteq [\ell]$ there is an $R \subseteq T$ with $|R| = \lfloor |T| \cdot |S|/n \rfloor$ such that for each $o'_R \in C^R$ there is an $i \in S$ with $\mathrm{sat}_i(o) \ge \mathrm{sat}_i(o'_R)$;

• weak core stability (wCore) if for each $S \subseteq N$ there is an $R \subseteq [\ell]$ with $|R| = \lfloor \ell \cdot |S|/n \rfloor$ such that for each $o'_R \in C^R$ there is an $i \in S$ with $\mathrm{sat}_i(o) \ge \mathrm{sat}_i(o'_R)$.

In the multiwinner voting setting, every outcome in the core provides FJR. This is also true in the temporal setting, at each level of our axiomatic hierarchy.

Proposition 6.2. It holds that: (i) sCore $\Longrightarrow$ sFJR and Core, (ii) Core $\Longrightarrow$ FJR and wCore, and (iii) wCore $\Longrightarrow$ wFJR.

The satisfiability of Core and wCore (but not sCore) remains open in the temporal setting.

7 Separation Results

Finally, we show that, apart from the implications already established, no further implication holds among the axioms in Figure 1. To rule out all remaining potential implications, we identify twelve specific pairs of axioms for which we construct outcomes that satisfy one notion but not the other. These counterexamples exhaust all possible implication relationships among the axioms we study.

Proposition 7.1. For any two axioms A, B in Figure 1, A implies B if and only if there exists a path (a sequence of arrows) from A to B.

8 Conclusion

We have explored several candidates for further strengthening all existing concepts studied in the temporal voting literature so far. We adapted EJR+, FJR, FPJR, and the Core from multiwinner elections to the temporal setting. We showed the polynomial-time computability of EJR+ outcomes, the existence of FJR outcomes, and the polynomial-time computability of sFPJR outcomes in a restricted special case. We also established non-existence results for stronger variants of these concepts, including the Core, and analyzed the implications (or lack thereof) between the considered axioms. Our work provides a comprehensive analysis of proportionality concepts in temporal voting.

Possible directions for future work include exploring domain restrictions for notions where proportionality is not guaranteed, or studying extensions of these concepts in the broader setting of participatory budgeting.

References

Amine Allouah, Christian Kroer, Xuan Zhang, Vashist Avadhanula, Nona Bohanon, Anil Dania, Caner Gocmen, Sergey Pupyrev, Parikshit Shah, Nicolas Stier-Moses, and Ken Rodríguez Taarup. Fair allocation over time, with applications to content moderation. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), pages 25–35, 2023.
Shiri Alouf-Heffetz, Laurent Bulteau, Edith Elkind, Nimrod Talmon, and Nicholas Teh. Better collective decisions via uncertainty reduction. In Proceedings of the 31st International Joint Conference on Artificial Intelligence (IJCAI), pages 24–30, 2022.

Haris Aziz and Barton E. Lee. The expanding approvals rule: Improving proportional representation and monotonicity. Social Choice and Welfare, 54(1):1–45, 2020.

Haris Aziz, Markus Brill, Vincent Conitzer, Edith Elkind, Rupert Freeman, and Toby Walsh. Justified representation in approval-based committee voting. Social Choice and Welfare, 48:461–485, 2017.

Haris Aziz, Edith Elkind, Shenwei Huang, Martin Lackner, Luis Sánchez-Fernández, and Piotr Skowron. On the complexity of extended and proportional justified representation. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI), pages 902–909, 2018.

Evripidis Bampis, Bruno Escoffier, and Sasa Mladenovic. Fair resource allocation over time. In Proceedings of the 17th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), pages 766–773, 2018.

Niclas Boehmer, Markus Brill, Alfonso Cevallos, Jonas Gehrlein, Luis Sánchez-Fernández, and Ulrike Schmidt-Kraepelin. Approval-based committee voting in practice: A case study of (over-)representation in the Polkadot blockchain. In Proceedings of the 38th AAAI Conference on Artificial Intelligence (AAAI), pages 9519–9527, 2024.

Niclas Boehmer, Sara Fish, and Ariel D. Procaccia. Generative social choice. In Proceedings of the 42nd International Conference on Machine Learning (ICML), 2025.

Robert Bredereck, Piotr Faliszewski, Andrzej Kaczmarczyk, and Rolf Niedermeier. An experimental view on committees providing justified representation. In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI), pages 109–115, 2019.

Robert Bredereck, Andrzej Kaczmarczyk, and Rolf Niedermeier. Electing successive committees: Complexity and algorithms. In Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI), pages 1846–1853, 2020.

Robert Bredereck, Till Fluschnik, and Andrzej Kaczmarczyk. When votes change and committees should (not). In Proceedings of the 31st International Joint Conference on Artificial Intelligence (IJCAI), pages 144–150, 2022.

Markus Brill and Jannik Peters. Robust and verifiable proportionality axioms for multiwinner voting. In Proceedings of the 24th ACM Conference on Economics and Computation (EC), page 301, 2023.

Markus Brill, Paul Gölz, Dominik Peters, Ulrike Schmidt-Kraepelin, and Kai Wilker. Approval-based apportionment. Mathematical Programming, 203:77–105, 2024.

Laurent Bulteau, Noam Hazon, Rutvik Page, Ariel Rosenfeld, and Nimrod Talmon. Justified representation for perpetual voting. IEEE Access, 9:96598–96612, 2021.

Jeff Burdges, Alfonso Cevallos, Peter Czaban, Rob Habermeier, Syed Hosseini, Fabio Lama, Handan Kilinç Alper, Ximin Luo, Fatemeh Shirazi, Alistair Stewart, and Gavin Wood. Overview of Polkadot and its design considerations. arXiv preprint arXiv:2005.13456, 2020.

Alfonso Cevallos and Alistair Stewart. A verifiably secure and proportional committee election rule. In Proceedings of the 3rd ACM Conference on Advances in Financial Technologies (AFT), pages 29–42, 2021.

Nikhil Chandak, Shashwat Goel, and Dominik Peters. Proportional aggregation of preferences for sequential decision making. In Proceedings of the 38th AAAI Conference on Artificial Intelligence (AAAI), pages 9573–9581, 2024.
Jiehua Chen, Christian Hatschka, and Sofia Simola. Multi-winner reconfiguration. In Proceedings of the 37th International Conference on Neural Information Processing Systems (NeurIPS), pages 85899–85931, 2024.

Vincent Conitzer, Rupert Freeman, and Nisarg Shah. Fair public decision making. In Proceedings of the 18th ACM Conference on Economics and Computation (EC), pages 629–646, 2017.
Théo Delemazure, Tom Demeulemeester, Manuel Eberl, Jonas Israel, and Patrick Lederer. Strategyproofness and proportionality in party-approval multiwinner elections. In Proceedings of the 37th AAAI Conference on Artificial Intelligence (AAAI), pages 5591–5599, 2023.

Eva Michelle Deltl, Till Fluschnik, and Robert Bredereck. Algorithmics of egalitarian versus equitable sequences of committees. In Proceedings of the 32nd International Joint Conference on Artificial Intelligence (IJCAI), pages 2651–2658, 2023.

Edith Elkind, Sonja Kraiczy, and Nicholas Teh. Fairness in temporal slot assignment. In Proceedings of the 15th International Symposium on Algorithmic Game Theory (SAGT), pages 490–507, 2022.

Edith Elkind, Tzeh Yuan Neoh, and Nicholas Teh. Temporal elections: Welfare, strategyproofness, and proportionality. In Proceedings of the 27th European Conference on Artificial Intelligence (ECAI), pages 3292–3299, 2024a.

Edith Elkind, Svetlana Obraztsova, and Nicholas Teh. Temporal fairness in multiwinner voting. In Proceedings of the 38th AAAI Conference on Artificial Intelligence (AAAI), pages 22633–22640, 2024b.

Edith Elkind, Alexander Lam, Mohamad Latifian, Tzeh Yuan Neoh, and Nicholas Teh. Temporal fair division of indivisible items. In Proceedings of the 24th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), pages 676–685, 2025a.

Edith Elkind, Tzeh Yuan Neoh, and Nicholas Teh. Not in my backyard! Temporal voting over public chores. In Proceedings of the 34th International Joint Conference on Artificial Intelligence (IJCAI), 2025b.

Edith Elkind, Svetlana Obraztsova, Jannik Peters, and Nicholas Teh. Verifying proportionality in temporal voting. In Proceedings of the 39th AAAI Conference on Artificial Intelligence (AAAI), pages 13805–13813, 2025c.

Brandon Fain, Kamesh Munagala, and Nisarg Shah. Fair allocation of indivisible public goods. In Proceedings of the 19th ACM Conference on Economics and Computation (EC), pages 575–592, 2018.

Sara Fish, Paul Gölz, David C. Parkes, Ariel D. Procaccia, Gili Rusak, Itai Shapira, and Manuel Wüthrich. Generative social choice. In Proceedings of the 25th ACM Conference on Economics and Computation (EC), page 985, 2024.

Grzegorz Gawron and Piotr Faliszewski. Using multiwinner voting to search for movies. Theory and Decision, 2024.

Yusuf Hakan Kalayci, Jiasen Liu, and David Kempe. Full proportional justified representation. In Proceedings of the 24th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), 2025.

D. Marc Kilgour. Approval Balloting for Multi-winner Elections, pages 105–124. Springer, 2010.

Martin Lackner. Perpetual voting: Fairness in long-term decision making. In Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI), pages 2103–2110, 2020.

Martin Lackner and Jan Maly. Proportional decisions in perpetual voting. In Proceedings of the 37th AAAI Conference on Artificial Intelligence (AAAI), pages 5722–5729, 2023.

Martin Lackner and Piotr Skowron. Multi-Winner Voting with Approval Preferences. Springer, 2023.

Tomáš Masařík, Grzegorz Pierczyński, and Piotr Skowron. A generalised theory of proportionality in collective decision making. In Proceedings of the 25th ACM Conference on Economics and Computation (EC), pages 734–754, 2024.

Gourab K. Patro, Prithwish Jana, Abhijnan Chakraborty, Krishna P. Gummadi, and Niloy Ganguly. Scheduling virtual conferences fairly: Achieving equitable participant and speaker satisfaction. In Proceedings of the 2022 ACM Web Conference (WWW), pages 2646–2656, 2022.
Dominik Peters. Proportional representation for artificial intelligence. In Proceedings of the 27th European Conference on Artificial Intelligence (ECAI), pages 27–31, 2024.

Dominik Peters and Piotr Skowron. Proportionality and the limits of welfarism. In Proceedings of the 21st ACM Conference on Economics and Computation (EC), pages 793–794, 2020.

Dominik Peters, Grzegorz Pierczyński, and Piotr Skowron. Proportional participatory budgeting with additive utilities. In Proceedings of the 35th International Conference on Neural Information Processing Systems (NeurIPS), pages 12726–12737, 2021.

Edvard Phragmén. Proportionella val. En valteknisk studie. In Svenska spörsmål 25. Lars Hökersbergs förlag, Stockholm, 1895.

Manon Revel, Smitha Milli, Tyler Lu, Jamelle Watson-Daniels, and Max Nickel. Representative ranking for deliberation in the public sphere. In Proceedings of the 42nd International Conference on Machine Learning (ICML), 2025.

Luis Sánchez-Fernández, Edith Elkind, Martin Lackner, Norberto Fernández, Jesús Fisteus, Pablo Basanta Val, and Piotr Skowron. Proportional justified representation. In Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI), pages 670–676, 2017.

Mark A. Satterthwaite and Hugo Sonnenschein. Strategy-proof allocation mechanisms at differentiable points. The Review of Economic Studies, 48(4):587–597, 1981.

Piotr Skowron and Adrian Górecki. Proportional public decisions. In Proceedings of the 36th AAAI Conference on Artificial Intelligence (AAAI), pages 5191–5198, 2022.

Errikos Streviniotis and Georgios Chalkiadakis. Preference aggregation mechanisms for a tourism-oriented Bayesian recommender. In Proceedings of the 23rd International Conference on Principles and Practice of Multi-Agent Systems (PRIMA), pages 331–346, 2022.

Jizhou Wu, Jianye Hao, Tianpei Yang, Xiaotian Hao, Yan Zheng, Weixun Wang, and Matthew E. Taylor. Portal: Automatic curricula generation for multiagent reinforcement learning. In Proceedings of the 38th AAAI Conference on Artificial Intelligence (AAAI), pages 15934–15942, 2024.

Yaodong Yang, Jun Luo, Ying Wen, Oliver Slumbers, Daniel Graves, Haitham Bou Ammar, Jun Wang, and Matthew E. Taylor. Diverse auto-curriculum is critical for successful real-world multiagent learning systems. In Proceedings of the 20th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), pages 51–56, 2021.

Valentin Zech, Niclas Boehmer, Edith Elkind, and Nicholas Teh. Multiwinner temporal voting with aversion to change. In Proceedings of the 27th European Conference on Artificial Intelligence (ECAI), pages 3236–3243, 2024.

A Relation to General Approval Voting

In this appendix, we discuss the relation between our EJR and FJR axioms and the analogous notions defined by Masařík et al. [2024] for a more general setting.

A.1 Notation for General Approval Voting

Let us first introduce notation for the general approval voting setting. A general approval election instance is a tuple $\mathcal{I} = (\mathcal{C}, \mathcal{N}, \mathcal{F}, \mathcal{A})$, where $\mathcal{C}$ is a set of candidates, $\mathcal{N}$ is a set of $n$ voters, $\mathcal{F} \subseteq 2^{\mathcal{C}}$ is a family of feasible subsets of the set of candidates, and $\mathcal{A} = (A_i)_{i \in \mathcal{N}}$ is the approval profile, where $A_i \subseteq \mathcal{C}$ is the ballot of voter $i \in \mathcal{N}$, i.e., the subset of candidates that she approves. We assume that the family $\mathcal{F}$ is closed under inclusion, i.e., if $Y \subseteq X \subseteq \mathcal{C}$ and we know that $X \in \mathcal{F}$, then also $Y \in \mathcal{F}$. In particular, this means that necessarily $\emptyset \in \mathcal{F}$.
Let $\bar{\mathcal{F}}$ be the set of maximal feasible subsets, i.e., subsets $X \in \mathcal{F}$ for which there is no $Y \in \mathcal{F}$ such that $X \subsetneq Y$. Then, a voting rule is a function $f$ that for every instance $\mathcal{I} = (\mathcal{C}, \mathcal{N}, \mathcal{F}, \mathcal{A})$ returns a non-empty subset of maximal feasible subsets of candidates $f(\mathcal{I}) \subseteq \bar{\mathcal{F}}$.

For a subset of voters $S \subseteq \mathcal{N}$ and a feasible subset of candidates $X \in \mathcal{F}$, the satisfaction of $S$ from $X$ is the number of candidates in $X$ that are approved by at least one voter in $S$, i.e., $\mathrm{sat}_S(X) = |\{c \in X : c \in \bigcup_{i \in S} A_i\}|$. If $S = \{i\}$ for some $i \in \mathcal{N}$, we drop the brackets and write simply $\mathrm{sat}_i(X)$.

To show that temporal voting is a special case of general approval voting, for every temporal election $E = (C, N, \ell, A)$ let us define a corresponding general approval voting instance $\mathcal{I}_E = (\mathcal{C}_E, \mathcal{N}_E, \mathcal{F}_E, \mathcal{A}_E)$. The set of candidates in $\mathcal{I}_E$ is the set of all pairs of candidates and rounds, i.e., $\mathcal{C}_E = \{(c, r) : c \in C, r \in [\ell]\}$. The set of voters is identical, i.e., $\mathcal{N}_E = N$. Next, for every voter $i \in N$, candidate $c \in C$, and round $r \in [\ell]$, we say that $i$ approves $(c, r)$ if and only if she approves candidate $c$ in round $r$, i.e., $\mathcal{A}_E = (A_i)_{i \in N}$, where $A_i = \{(c, r) \in \mathcal{C}_E : c \in a_{i,r}\}$. Finally, the feasible subsets of $\mathcal{C}_E$ are exactly those in which every round appears at most once, i.e., $\mathcal{F}_E = \{X \subseteq \mathcal{C}_E : (c, r) \in X \text{ and } (c', r) \in X \implies c = c'\}$.

Then, there is a one-to-one correspondence between the outcomes $o \in C^\ell$ of the temporal election $E$ and the maximal feasible subsets of $\mathcal{C}_E$, i.e., the subsets $X \in \mathcal{F}_E$ in which every round appears exactly once. In turn, all feasible subsets, not necessarily maximal, correspond to suboutcomes $o_R \in C^R$ for subsets of rounds $R \subseteq [\ell]$, where the selected candidates in the remaining rounds are left unspecified (including the suboutcome $o_\emptyset$, in which we do not specify a selected candidate in any round). Note that for every $R \subseteq [\ell]$, a suboutcome $o_R$ and the corresponding feasible set $X$ satisfy $\mathrm{sat}_S(o_R) = \mathrm{sat}_S(X)$ for every subset of voters $S \subseteq N$.
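The correspondence above is mechanical enough to state in code. The following Python sketch (names ours) builds the candidate set, the approval profile, and the feasibility predicate of $\mathcal{I}_E$ from a temporal election:

def to_general_approval(C, N, ell, ballots):
    # C_E: all (candidate, round) pairs; A_E: voter i approves (c, r)
    # iff c is in ballots[i][r]; F_E: a set X is feasible iff every
    # round occurs at most once among its pairs.
    CE = [(c, r) for c in C for r in range(ell)]
    AE = {i: {(c, r) for r in range(ell) for c in ballots[i][r]} for i in N}
    def feasible(X):
        rounds = [r for _, r in X]
        return len(rounds) == len(set(rounds))
    return CE, AE, feasible

Maximal feasible sets, in which every round occurs exactly once, are exactly the outcomes $o \in C^\ell$; general feasible sets are the suboutcomes $o_R$.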
A.2 EJR

Let us now discuss the relation between EJR as defined in Definition 2.2 and base extended justified representation (BEJR), introduced by Masařík et al. [2024] for the general approval voting setting. The definition of BEJR is as follows.

Definition A.1. Given a general approval election instance $\mathcal{I} = (\mathcal{C}, \mathcal{N}, \mathcal{F}, \mathcal{A})$, a group of voters $S \subseteq \mathcal{N}$ deserves a satisfaction level of $\gamma$ if for each feasible set $X \in \mathcal{F}$ it holds that either (i) there exists $Y \subseteq \bigcap_{i \in S} A_i$ with $|Y| \ge \gamma$ such that $X \cup Y \in \mathcal{F}$, or (ii) $|S|/n > \gamma/(|X| + \gamma)$. Then, a feasible outcome $W \in \mathcal{F}$ satisfies base extended justified representation (BEJR) if for every $\gamma \in \mathbb{N}$ and every group of voters $S \subseteq \mathcal{N}$ that deserves a satisfaction level of $\gamma$ there is a voter $i \in S$ for which $\mathrm{sat}_i(W) \ge \gamma$.

Intuitively, we decide what level of satisfaction $S$ deserves by looking at feasible sets $X$ that we can interpret as possible counterproposals to a solution that $S$ would like. In order to prove that it deserves a satisfaction of $\gamma$, the subset $S$ has to show that each such counterproposal $X$ is either compatible with the demands of $S$ (condition (i)) or too large (condition (ii)). To show the former, $S$ has to find its own proposal $Y$ of size at least $\gamma$, universally approved in $S$, that can be combined with $X$ into an outcome, i.e., $X \cup Y \in \mathcal{F}$. For the latter, the ratio between the size $|X|$ and the number of voters outside of $S$ (that is, all potential voters behind the counterproposal) must be larger than the ratio between $\gamma$ and the number of voters in $S$, i.e., $|X|/(n - |S|) > \gamma/|S|$. This inequality is equivalent to the condition $|S|/n > \gamma/(|X| + \gamma)$ that appears in the definition.

As we will show later in this subsection, in the temporal voting setting BEJR implies EJR (this observation was also made by Chandak et al. [2024]). However, the converse is not true, i.e., EJR is strictly weaker than BEJR. To see this, consider the following example.

Example A.2. Let $E = (C, N, \ell, A)$ be a temporal election with $\ell = 1$ round, 2 candidates $C = \{c_1, c_2\}$, 3 voters $N = \{1, 2, 3\}$, and an approval profile in which voters 1 and 2 approve candidate $c_1$ while voter 3 approves $c_2$, i.e., $a_{1,1} = a_{2,1} = \{c_1\}$ and $a_{3,1} = \{c_2\}$. Observe that the outcome $o = (c_2)$ satisfies EJR (and even sCore and sEJR+). Indeed, voter 3 has the maximal possible satisfaction, and for the subset of voters $S = \{1, 2\}$ we have $\lfloor \ell \cdot |S|/n \rfloor = \lfloor 2/3 \rfloor = 0$, hence they do not receive any satisfaction guarantee from EJR. However, according to BEJR, they do deserve a satisfaction of $\gamma = 1$. To see this, consider all possible counterproposals $X$, i.e., all feasible subsets of candidates in $\mathcal{C}_E$: $\emptyset$, $\{(c_1, 1)\}$, $\{(c_2, 1)\} \in \mathcal{F}_E$. For $X = \emptyset$ or $X = \{(c_1, 1)\}$, we can clearly find a set $Y = \{(c_1, 1)\}$ that is compatible with $X$, i.e., $Y \cup X \in \mathcal{F}_E$, and we have $Y \subseteq A_1 \cap A_2$ as well as $|Y| = 1 = \gamma$. On the other hand, if $X = \{(c_2, 1)\}$, then observe that $|S|/n = 2/3 > 1/2 = \gamma/(|X| + \gamma)$. Hence, $S$ deserves a satisfaction of 1, which is not met by $o$. Thus, the outcome $o$ violates BEJR.

Informally, the difference between BEJR and EJR can be explained as follows. For a subset of voters $S \subseteq N$ that agree in some $t$ rounds, EJR gives a satisfaction guarantee of $\gamma$ if the fraction of the voters that $S$ constitutes is at least the fraction that $\gamma$ is of $t$, i.e., $|S|/n \ge \gamma/t$ or, equivalently,
\[
\gamma \ \le \ t \cdot |S|/n. \qquad (1)
\]
For BEJR, however, it is enough that $S$ is large enough that no other group of voters can lay claim to $t - \gamma + 1$ rounds. So the fraction of voters inside $S$ has to be strictly larger than $\gamma/(t+1)$ (as then no other group constitutes a fraction of at least $(t - \gamma + 1)/(t + 1)$). This gives us the inequality $|S|/n > \gamma/(t+1)$ or, equivalently,
\[
\gamma \ < \ (t+1) \cdot |S|/n. \qquad (2)
\]
This is a significant difference, as in many cases $\gamma$ may fail Inequality (1) but at the same time satisfy Inequality (2); this is, for example, the case in Example A.2.
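The two guarantees are easy to compare numerically. The following sketch (integer arithmetic throughout; names ours) evaluates the largest $\gamma$ allowed by each inequality, instantiated with the data of Example A.2:

def hare_guarantee(t, s, n):
    # Inequality (1): largest integer gamma with gamma <= t*s/n
    return t * s // n

def droop_guarantee(t, s, n):
    # Inequality (2): largest integer gamma with gamma < (t+1)*s/n,
    # i.e. ceil((t+1)*s/n) - 1, via integer ceiling division
    return ((t + 1) * s + n - 1) // n - 1

# Example A.2: S = {1, 2} agrees in t = 1 round, with n = 3 voters
print(hare_guarantee(1, 2, 3))    # 0 -- EJR promises nothing
print(droop_guarantee(1, 2, 3))   # 1 -- BEJR demands one approved round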
The above difference is very similar to the difference between two formulations of the Proportionality for Solid Coalitions (PSC) axiom in the setting of multiwinner voting with ordinal preferences (see, e.g., Aziz and Lee [2020]). Roughly speaking, the axiom says that for every subset of voters $S$ whose members all agree that the candidates in a certain subset $X$ are better than every candidate outside of this subset, at least $\min(\eta, |X|)$ candidates from $X$ need to be selected into the committee. How large $\eta$ is depends on the formulation. In Hare-PSC, we set $\eta = \lfloor k \cdot |S|/n \rfloor$, where $k$ is the size of the committee and $n$ the total number of voters, which resembles the approach of EJR and Inequality (1). In Droop-PSC, we set $\eta = \lceil (k+1) \cdot |S|/n \rceil - 1$, i.e., we take the largest integer strictly smaller than $(k+1) \cdot |S|/n$, which matches Inequality (2). Following this distinction, we can define Droop-EJR for temporal voting as follows.

Definition A.3. Given a temporal election $E = (C, N, \ell, A)$, an outcome $o$ provides Droop extended justified representation (Droop-EJR) if for every $t > 0$ and every subset of voters $S \subseteq N$ that agree in at least $t$ rounds, there exists a voter $i \in S$ such that $\mathrm{sat}_i(o) \ge \lceil (t+1) \cdot |S|/n \rceil - 1$.

Observe that Droop-EJR implies EJR, as it gives every subset of voters $S \subseteq N$ the same or a larger satisfaction guarantee. We will now show that BEJR implies Droop-EJR, which then means that BEJR implies EJR as well. The converse statement, i.e., that Droop-EJR implies BEJR, is also true under the additional assumption that no voter approves all of the candidates in some round. We need this assumption because BEJR can give even higher satisfaction guarantees to groups of voters that universally approve all available candidates in some rounds.

Proposition A.4. For every temporal election $E = (C, N, \ell, A)$, every outcome $o$ that provides BEJR provides Droop-EJR as well. If additionally for every voter $i \in N$ and every round $r \in [\ell]$ it holds that $a_{i,r} \neq C$, then every outcome $o$ that provides Droop-EJR provides BEJR as well.

Proof. Let us start by showing that BEJR implies Droop-EJR. Take an arbitrary temporal election $E = (C, N, \ell, A)$ and an outcome $o$ that does not provide Droop-EJR. Thus, there is a set of voters $S \subseteq N$ that agree in a subset of rounds $R$ with $|R| = t$, and for each $i \in S$ we have $\mathrm{sat}_i(o) < \lceil (t+1) \cdot |S|/n \rceil - 1$. We will show that, according to BEJR, $S$ deserves a satisfaction guarantee of at least $\gamma = \lceil (t+1) \cdot |S|/n \rceil - 1$, which will mean that $o$ fails BEJR as well.

To this end, take an arbitrary subset $X \in \mathcal{F}_E$ such that $|S|/n \le \gamma/(|X| + \gamma)$. Let us denote the subset of rounds that appear in $X$ by $R_X = \{r : (c, r) \in X\}$. We want to show that there is a subset $Y \subseteq \mathcal{C}_E$, universally approved by the voters in $S$, such that $|Y| \ge \gamma$ and $X \cup Y \in \mathcal{F}_E$. Note that the last condition is equivalent to saying that $R_X$ and the subset of rounds that appear in $Y$ are disjoint.

Observe that $|S|/n \le \gamma/(|X| + \gamma)$ is equivalent to $(|X| + \gamma)/\gamma \le n/|S|$, from which we get $|X|/\gamma \le n/|S| - 1$. Thus,
\[
|X| \ \le \ \gamma \cdot \frac{n}{|S|} - \gamma \ = \ \left( \left\lceil (t+1) \cdot \frac{|S|}{n} \right\rceil - 1 \right) \cdot \frac{n}{|S|} - \gamma \ < \ (t+1) \cdot \frac{|S|}{n} \cdot \frac{n}{|S|} - \gamma \ = \ t + 1 - \gamma,
\]
where the equality uses $\gamma = \lceil (t+1) \cdot |S|/n \rceil - 1$ and the strict inequality uses the fact that $\lceil x \rceil < x + 1$ for every $x \in \mathbb{R}$. However, as $|X|$ needs to be an integer, this actually implies that $|X| \le t - \gamma$, and consequently that $|R_X \cap R| \le t - \gamma$. Hence, there is a subset of rounds $R_Y \subseteq R$ such that $R_Y \cap R_X = \emptyset$ and $|R_Y| = \gamma$. Since $S$ agrees in every round in $R$, for each round $r \in R_Y \subseteq R$ we can find a candidate $c_r$ such that $c_r \in \bigcap_{i \in S} a_{i,r}$. Then, we set $Y = \{(c_r, r) : r \in R_Y\}$, which meets all of the desired criteria. This means that $S$ deserves a satisfaction guarantee of $\gamma$ according to BEJR, which concludes the proof that BEJR implies Droop-EJR.

Now, let us show that Droop-EJR implies BEJR when no voter approves all of the candidates in some round.
To this end, take an arbitrary temporal election $E = (C, N, \ell, A)$ such that $a_{i,r} \neq C$ for every $i \in N$ and $r \in [\ell]$, and an arbitrary outcome $o$ that does not provide BEJR. This means that there exists a subset of voters $S$ that deserves a satisfaction guarantee of $\gamma$, but for every voter $i \in S$ it holds that $\mathrm{sat}_i(o) < \gamma$. We will show that $S$ receives the guarantee $\gamma$ from Droop-EJR as well. Let $R \subseteq [\ell]$ be the subset of all rounds in which the voters in $S$ agree, and let $t = |R|$. We will show that necessarily $\gamma \le \lceil (t+1) \cdot |S|/n \rceil - 1$, which will prove the thesis.

First, let us show that $t \ge \gamma$. To this end, take $X = \emptyset \in \mathcal{F}_E$. According to the definition of BEJR, since $S$ deserves satisfaction $\gamma$, there must be a subset of candidates $Y \subseteq \mathcal{C}_E$, universally approved in $S$, such that $|Y| \ge \gamma$ and $Y = Y \cup \emptyset \in \mathcal{F}_E$. By definition, for every $(c, r) \in Y$ it must hold that $c \in a_{i,r}$ for every $i \in S$. Hence, the voters in $S$ agree in at least $\gamma$ rounds. Thus, indeed $t \ge \gamma$.

Now, let us take an arbitrary subset of rounds $R_X \subseteq R$ such that $|R_X| = t + 1 - \gamma$. For every round $r \in R_X$, let $c_r \in C \setminus a_{i,r}$ be a candidate that is not approved by some voter $i \in S$ (by our assumption, such a candidate exists). Then, let $X = \{(c_r, r) : r \in R_X\}$.

Observe that now $|R \setminus R_X| < \gamma$. This means that there is no subset of rounds $R_Y \subseteq [\ell]$ with $|R_Y| \ge \gamma$ that would be disjoint from $R_X$ and in which all voters in $S$ agree. And since for every $(c, r) \in X$ there is a voter in $S$ that does not approve candidate $c$ in round $r$, there is no set $Y$ that could satisfy condition (i) of Definition A.1. Hence, it must be condition (ii) that holds, i.e., $|S|/n > \gamma/(|X| + \gamma) = \gamma/(t+1)$. Observe that this is equivalent to $\gamma < (t+1) \cdot |S|/n$. Now, for every positive real $x$, the largest integer strictly smaller than $x$ is $\lceil x \rceil - 1$. Thus, it must hold that $\gamma \le \lceil (t+1) \cdot |S|/n \rceil - 1$, which concludes the proof of the thesis.

A.3 FJR

Masařík et al. [2024] also introduced an axiom that is an analogue of FJR in the general approval voting setting.

Definition A.5. Given a general approval election instance $\mathcal{I} = (\mathcal{C}, \mathcal{N}, \mathcal{F}, \mathcal{A})$, a group of voters $S \subseteq \mathcal{N}$ is $(\alpha, \beta)$-BFJR-cohesive if for each feasible set $X \in \mathcal{F}$ it holds that either (i) there exists $Y \subseteq \mathcal{C}$ with $|Y| = \alpha$ such that $\mathrm{sat}_i(Y) \ge \beta$ for every $i \in S$, and $X \cup Y \in \mathcal{F}$, or (ii) $|S|/n > \alpha/(|X| + \alpha)$. Then, a feasible outcome $W \in \mathcal{F}$ satisfies base full justified representation (BFJR) if for every $\alpha, \beta \in \mathbb{N}$ and every $(\alpha, \beta)$-BFJR-cohesive group of voters $S \subseteq \mathcal{N}$ there is a voter $i \in S$ for which $\mathrm{sat}_i(W) \ge \beta$.

Observe that BFJR is a stronger axiom than BEJR, as BEJR gives the same guarantees but only to $(\gamma, \gamma)$-BFJR-cohesive groups. Analogously to what we proved in Proposition A.4, we can show that BFJR implies a stronger version of the FJR axiom, which we define as follows.

Definition A.6. Given a temporal election $E = (C, N, \ell, A)$, we say that an outcome $o$ provides Droop full justified representation (Droop-FJR) if for every subset of voters $S \subseteq N$ and every subset of rounds $T \subseteq [\ell]$ there exists a subset $R \subseteq T$ with
$|R| = \lceil (|T|+1) \cdot |S|/n \rceil - 1$ such that for every other outcome $o'$ there is a voter $i \in S$ with $\mathrm{sat}_i(o) \ge \min_{j \in S} \mathrm{sat}_j(o'_R)$. Equivalently, for every subset of voters $S$, there is an $i \in S$ such that
\[
\mathrm{sat}_i(o) \ \ge \ \max_{T \subseteq [\ell]} \ \min_{R \subseteq T\,:\,|R| = \lceil (|T|+1) \cdot |S|/n \rceil - 1} \ \max_{o'_R \in C^R} \ \min_{j \in S} \mathrm{sat}_j(o'_R).
\]

Note that Droop-FJR implies FJR, as for every subset of voters $S \subseteq N$ and every subset of rounds $T$, Droop-FJR gives the voters in $S$ at least as many rounds $R$ to use as standard FJR does. Thus, the minimum satisfaction from the suboutcome $o'_R$ cannot be smaller for Droop-FJR than for FJR. As we will now show, BFJR implies Droop-FJR in the temporal voting setting, which means that it implies FJR as well. We leave open the question whether, possibly under some additional assumptions, the converse implication is true, i.e., whether Droop-FJR implies BFJR.

Proposition A.7. For every temporal election $E = (C, N, \ell, A)$, every outcome $o$ that provides BFJR provides Droop-FJR as well.

Proof. Fix an arbitrary temporal election $E = (C, N, \ell, A)$ and an outcome $o$ that does not provide Droop-FJR. This means that there exists a subset of voters $S \subseteq N$ such that for every $i \in S$ it holds that $\mathrm{sat}_i(o) < \mu$, where
\[
\mu \ = \ \max_{T \subseteq [\ell]} \ \min_{R \subseteq T\,:\,|R| = \lceil (|T|+1) \cdot |S|/n \rceil - 1} \ \max_{o'_R \in C^R} \ \min_{j \in S} \mathrm{sat}_j(o'_R).
\]
Let us fix an arbitrary subset $T \subseteq [\ell]$ that attains the above maximum, and denote $t = |T|$ as well as $\alpha = \lceil (t+1) \cdot |S|/n \rceil - 1$. We will show that $S$ is $(\alpha, \mu)$-BFJR-cohesive, which will imply that $o$ does not provide BFJR either.

Take an arbitrary subset $X \in \mathcal{F}_E$ such that $|S|/n \le \alpha/(|X| + \alpha)$. Let us denote the subset of rounds in $X$ by $R_X = \{r : (c, r) \in X\}$. Our goal is to show that there is a subset $Y \subseteq \mathcal{C}_E$ such that $|Y| = \alpha$, $\mathrm{sat}_i(Y) \ge \mu$ for every $i \in S$, and $X \cup Y \in \mathcal{F}_E$. The last condition simply says that the subset of rounds in $Y$ is disjoint from $R_X$.

Now, $|S|/n \le \alpha/(|X| + \alpha)$ is equivalent to $(|X| + \alpha)/\alpha \le n/|S|$, which gives us $|X|/\alpha \le n/|S| - 1$. Further,
\[
|X| \ \le \ \alpha \cdot \frac{n}{|S|} - \alpha \ = \ \left( \left\lceil (t+1) \cdot \frac{|S|}{n} \right\rceil - 1 \right) \cdot \frac{n}{|S|} - \alpha \ < \ (t+1) \cdot \frac{|S|}{n} \cdot \frac{n}{|S|} - \alpha \ = \ t + 1 - \alpha,
\]
where the equality uses $\alpha = \lceil (t+1) \cdot |S|/n \rceil - 1$ and the strict inequality uses $\lceil x \rceil < x + 1$ for every $x \in \mathbb{R}$. But $|X|$ needs to be an integer, which means that actually $|X| \le t - \alpha$. Thus, $|R_X \cap T| \le t - \alpha$ as well, which implies that there is a subset of rounds $R_Y \subseteq T$ such that $R_Y \cap R_X = \emptyset$ and $|R_Y| = \alpha$. Then, observe that
\[
\max_{o'_{R_Y} \in C^{R_Y}} \ \min_{j \in S} \mathrm{sat}_j(o'_{R_Y}) \ \ge \ \min_{R \subseteq T\,:\,|R| = \lceil (|T|+1) \cdot |S|/n \rceil - 1} \ \max_{o'_R \in C^R} \ \min_{j \in S} \mathrm{sat}_j(o'_R) \ = \ \mu.
\]
In other words, there exists an outcome $o'$ such that for every voter $i \in S$ her satisfaction from $o'$ in the rounds $R_Y$ is at least $\mu$. We can thus take $Y = \{(o'_r, r) : r \in R_Y\}$, and it will hold that $\mathrm{sat}_i(Y) \ge \mu$ for every $i \in S$. Also, $|Y| = \alpha$ and, since $R_X$ and $R_Y$ are disjoint, $X \cup Y \in \mathcal{F}_E$. Thus, indeed $S$ is $(\alpha, \mu)$-BFJR-cohesive, which concludes the proof.

B Omitted Proofs from Section 3

Proposition 3.3. It holds that: (i) sEJR+ $\Longrightarrow$ sEJR, (ii) sEJR $\Longrightarrow$ sPJR and EJR, (iii) sPJR $\Longrightarrow$ sJR and PJR, and (iv) sJR $\Longrightarrow$ JR.

Proof. Consider a subset of voters $S$ that agree in $t$ rounds. We have $t \le \ell$ and $|S|/n \le 1$, and hence $\lfloor t \cdot |S|/n \rfloor \le \min(t, \lfloor \ell \cdot |S|/n \rfloor)$. This shows that sEJR implies EJR and sPJR implies PJR: in both cases, the strong version of the axiom provides a stronger guarantee to each group of voters. Similarly, sJR implies JR because $t \le \ell$.

We will now argue that sEJR+ implies sEJR. Consider an outcome $o$ that provides sEJR+, and a subset of voters $S$ who agree in a size-$t$ subset of rounds $R$.
We need to show that the satisfaction of some voter $i \in S$ is at least $\min(t, \lfloor \ell \cdot |S|/n \rfloor)$. Suppose first that for each round $r \in R$ the outcome $o_r$ is approved by all voters in $S$. Then the satisfaction of each voter in $S$ is at least $|R| = t \ge \min(t, \lfloor \ell \cdot |S|/n \rfloor)$. Otherwise, consider a round $r \in R$ such that not all voters in $S$ approve $o_r$. By the choice of $r$, there is some candidate $c$ that is approved by all voters in $S$ in round $r$. Then $o_r \neq c$, and hence sEJR+ implies that $\mathrm{sat}_i(o) \ge \lfloor \ell \cdot |S|/n \rfloor \ge \min(t, \lfloor \ell \cdot |S|/n \rfloor)$ for some $i \in S$.

It remains to show that sEJR implies sPJR and that sPJR implies sJR; this follows by the same arguments as in the multiwinner case.

Proposition 3.6. It holds that: (i) sEJR+ $\Longrightarrow$ EJR+ and (ii) EJR+ $\Longrightarrow$ EJR.

Proof. For part (i), consider an arbitrary election $E = (C, N, \ell, A)$ and an outcome $o$ that does not provide EJR+. We will show that $o$ does not provide sEJR+ either. By the definition of EJR+, there exist $\sigma \in [n]$, $\tau \in [\ell]$, a $(\sigma, \tau)$-cohesive subset of voters $S \subseteq N$, a round $r \in [\ell]$, and a candidate $c \in C$ approved by all voters in $S$ in round $r$ such that $o_r \neq c$ and for each voter $i \in S$ we have $\mathrm{sat}_i(o) < \lfloor \tau \cdot \sigma/n \rfloor$. Then, since $\tau \le \ell$ and $\sigma \le |S|$, for each voter $i \in S$ it holds that $\mathrm{sat}_i(o) < \lfloor \tau \cdot \sigma/n \rfloor \le \lfloor \ell \cdot |S|/n \rfloor$. Hence, $S$ is a witness that $o$ fails to provide sEJR+.

For part (ii), consider an arbitrary election $E = (C, N, \ell, A)$ and an outcome $o$ that does not provide EJR. We will show that $o$ does not provide EJR+ either. By the definition of EJR, there exist a $\tau \in [\ell]$, a set of rounds $R$ with $|R| = \tau$, and a subset of voters $S$ such that all voters in $S$ agree in each round $r \in R$, but for every $i \in S$ we have $\mathrm{sat}_i(o) < \lfloor \tau \cdot |S|/n \rfloor$. As $|S| \le n$, this means that $\mathrm{sat}_i(o) < \tau$, which implies that there is a round $r \in R$ such that some voter in $S$ does not approve $o_r$, and hence $o_r \notin \bigcap_{i \in S} a_{i,r}$. Let $\sigma = |S|$ and note that $S$ is $(\sigma, \tau)$-cohesive, as witnessed by $R$. Hence, $S$ witnesses that $o$ fails to provide EJR+.

Proposition 3.11. It holds that: (i) EJR+ $\Longrightarrow$ wEJR+ and (ii) wEJR+ $\Longrightarrow$ wEJR.

Proof. We will first show that EJR+ implies wEJR+. Consider an arbitrary election $E = (C, N, \ell, A)$ and an outcome $o$ that satisfies EJR+. For any $\sigma \in [n]$, $\tau \in [\ell]$, and any $(\sigma, \tau)$-cohesive subset of voters $S \subseteq N$, it holds that either there exists a voter $i \in S$ such that $\mathrm{sat}_i(o) \ge \lfloor \tau \cdot \sigma/n \rfloor$, or for every round $r \in [\ell]$ with $\bigcap_{i \in S} a_{i,r} \neq \emptyset$ we have $o_r \in \bigcap_{i \in S} a_{i,r}$. Consider the case $\tau = \ell$. For those $(\sigma, \ell)$-cohesive subsets of voters $S$ for which there exists a voter $i \in S$ such that $\mathrm{sat}_i(o) \ge \lfloor \tau \cdot \sigma/n \rfloor = \lfloor \ell \cdot \sigma/n \rfloor$, the first condition of wEJR+ is satisfied. Otherwise, for those $(\sigma, \ell)$-cohesive subsets of voters $S$ for which no such voter $i \in S$ exists, by the definition of EJR+ we know that for every round with $\bigcap_{i \in S} a_{i,r} \neq \emptyset$ we have $o_r \in \bigcap_{i \in S} a_{i,r}$. This satisfies the second condition of wEJR+. Hence, at least one of the two wEJR+ conditions always holds, so $o$ provides wEJR+.

It remains to show that wEJR+ implies wEJR. Consider an arbitrary election $E = (C, N, \ell, A)$ and an outcome $o$ that does not provide wEJR. We will show that $o$ does not provide wEJR+ either.
By the definition of wEJR, there exists a subset of voters $S$ such that all voters in $S$ agree in each round $r \in [\ell]$, but for every $i \in S$ we have $\mathrm{sat}_i(o) < \lfloor \ell \cdot |S|/n \rfloor$. As $|S| \le n$, this means that $\mathrm{sat}_i(o) < \ell$, which implies that there is a round $r \in [\ell]$ such that some voter in $S$ does not approve $o_r$, and hence $o_r \notin \bigcap_{i \in S} a_{i,r}$. Let $\sigma = |S|$ and note that $S$ is $(\sigma, \ell)$-cohesive. Hence, $S$ witnesses that $o$ fails to provide wEJR+.

C Omitted Proofs from Section 4

Proposition 4.4. It holds that: (i) sFJR $\Longrightarrow$ sEJR and FJR, (ii) FJR $\Longrightarrow$ EJR and wFJR, and (iii) wFJR $\Longrightarrow$ wEJR.

Proof. For part (i), we first show that sFJR implies sEJR. Consider an arbitrary election $E = (C, N, \ell, A)$ and an outcome $o$ that provides sFJR. Let $S \subseteq N$ be a subset of voters that agree in a set $T$ of $t$ rounds, $t > 0$, in election $E$ (note that if $t = 0$ then $S$ cannot be a witness that sEJR is violated).

First, suppose that $t \ge \lfloor \ell \cdot |S|/n \rfloor$. Let $T' \subseteq T$ be a subset of $T$ of size $|T'| = \lfloor \ell \cdot |S|/n \rfloor$. Then the voters in $S$ agree in every round $r \in T'$, so there exists an outcome $o'$ such that for all voters $i \in S$ it holds that $\mathrm{sat}_i(o'_{T'}) = \lfloor \ell \cdot |S|/n \rfloor$. Since $o$ provides sFJR, there exists a voter $i \in S$ such that
\[
\mathrm{sat}_i(o) \ \ge \ \max_{R \subseteq [\ell]\,:\,|R| = \lfloor \ell \cdot |S|/n \rfloor} \ \max_{o''_R \in C^R} \ \min_{j \in S} \mathrm{sat}_j(o''_R) \ \ge \ \min_{j \in S} \mathrm{sat}_j(o'_{T'}) \ = \ \lfloor \ell \cdot |S|/n \rfloor \ = \ \min(t, \lfloor \ell \cdot |S|/n \rfloor).
\]
Hence, sEJR is satisfied.

On the other hand, suppose that $t < \lfloor \ell \cdot |S|/n \rfloor$. We extend $T$ to a subset of rounds $T' \subseteq [\ell]$ such that $|T'| = \lfloor \ell \cdot |S|/n \rfloor$ and $T \subseteq T'$. Consider an outcome $o'$ that satisfies $o'_r \in \bigcap_{i \in S} a_{i,r}$ for all $r \in T$. Then for each $i \in S$ we have $\mathrm{sat}_i(o'_{T'}) \ge t$. Given that $o$ provides sFJR, by definition there exists a voter $i \in S$ such that
\[
\mathrm{sat}_i(o) \ \ge \ \max_{R \subseteq [\ell]\,:\,|R| = \lfloor \ell \cdot |S|/n \rfloor} \ \max_{o''_R \in C^R} \ \min_{j \in S} \mathrm{sat}_j(o''_R) \ \ge \ \min_{j \in S} \mathrm{sat}_j(o'_{T'}) \ \ge \ t \ = \ \min(t, \lfloor \ell \cdot |S|/n \rfloor).
\]
Hence, sEJR is satisfied in this case as well.

We will now show that sFJR implies FJR. Fix a temporal election $E$, and consider an outcome $o$ that does not provide FJR. Then there exists a subset of voters $S \subseteq N$ such that for all $i \in S$ it holds that $\mathrm{sat}_i(o) < \max_{T \subseteq [\ell]} \mu_S(T)$. This means that for some subset of rounds $T \subseteq [\ell]$ it holds that for each subset $R \subseteq T$ of size $\lfloor |T| \cdot |S|/n \rfloor$ there exists an outcome $o'$ such that $\mathrm{sat}_i(o) < \min_{j \in S} \mathrm{sat}_j(o'_R)$ for all voters $i \in S$. Let $R \subseteq T$ be one such subset of rounds. Extend it to a subset of rounds $R' \subseteq [\ell]$ such that $R \subseteq R'$ and $|R'| = \lfloor \ell \cdot |S|/n \rfloor$. Then for $R'$ and for all voters $i \in S$ we have
\[
\mathrm{sat}_i(o) \ < \ \min_{j \in S} \mathrm{sat}_j(o'_R) \ \le \ \min_{j \in S} \mathrm{sat}_j(o'_{R'}).
\]
Hence, $S$, $R'$, and $o'$ witness that $o$ does not satisfy sFJR.

For part (ii), we first show that FJR implies EJR. Take an arbitrary election $E = (C, N, \ell, A)$ and an outcome $o$ that provides FJR. Let $S \subseteq N$ be a subset of voters that agree in a set of rounds $T$ of size $|T| = t$ in election $E$. Consider an outcome $o'$ that satisfies $o'_r \in \bigcap_{i \in S} a_{i,r}$ for each $r \in T$. Observe that for every subset of rounds $R \subseteq T$ with $|R| = \lfloor t \cdot |S|/n \rfloor$ we have $\mathrm{sat}_j(o'_R) = \lfloor t \cdot |S|/n \rfloor$ for each $j \in S$. Consequently,
\[
\mu_S(T) \ = \ \min_{R \subseteq T\,:\,|R| = \lfloor t \cdot |S|/n \rfloor} \ \max_{o''_R \in C^R} \ \min_{j \in S} \mathrm{sat}_j(o''_R) \ \ge \ \min_{R \subseteq T\,:\,|R| = \lfloor t \cdot |S|/n \rfloor} \ \min_{j \in S} \mathrm{sat}_j(o'_R) \ \ge \ \lfloor t \cdot |S|/n \rfloor.
\]
Since $o$ provides FJR, there exists a voter $i \in S$ such that $\mathrm{sat}_i(o) \ge \max_{T' \subseteq [\ell]} \mu_S(T') \ge \mu_S(T) \ge \lfloor t \cdot |S|/n \rfloor$. Hence, $o$ provides EJR.

We will now argue that FJR implies wFJR.
Consider an outcome $o$ that provides FJR for a temporal election $E$. Then for every subset of voters $S$ there is a voter $i \in S$ with $\mathrm{sat}_i(o) \ge \max_{T \subseteq [\ell]} \mu_S(T)$. Thus, in particular, for $T = [\ell]$ we have
\[
\mathrm{sat}_i(o) \ \ge \ \mu_S([\ell]) \ = \ \min_{R \subseteq [\ell]\,:\,|R| = \lfloor \ell \cdot |S|/n \rfloor} \ \max_{o'_R \in C^R} \ \min_{j \in S} \mathrm{sat}_j(o'_R).
\]
This means that $o$ provides wFJR.

For part (iii), we will show that wFJR implies wEJR. Fix an election $E = (C, N, \ell, A)$ and an outcome $o$ that provides wFJR. Suppose there exists a subset of voters $S \subseteq N$ such that $\bigcap_{i \in S} a_{i,r} \neq \emptyset$ for all $r \in [\ell]$. We construct an outcome $o'$ so that $o'_r \in \bigcap_{i \in S} a_{i,r}$ for each $r \in [\ell]$. Then, for each subset of rounds $R \subseteq [\ell]$ such that $|R| = \lfloor \ell \cdot |S|/n \rfloor$, we have $\mathrm{sat}_j(o'_R) = \lfloor \ell \cdot |S|/n \rfloor$ for all $j \in S$. Since $o$ provides wFJR, there exists some voter $i \in S$ such that
\[
\mathrm{sat}_i(o) \ \ge \ \min_{R \subseteq [\ell]\,:\,|R| = \lfloor \ell \cdot |S|/n \rfloor} \ \max_{o''_R \in C^R} \ \min_{j \in S} \mathrm{sat}_j(o''_R) \ \ge \ \min_{R \subseteq [\ell]\,:\,|R| = \lfloor \ell \cdot |S|/n \rfloor} \ \min_{j \in S} \mathrm{sat}_j(o'_R) \ \ge \ \lfloor \ell \cdot |S|/n \rfloor.
\]
Hence, $o$ provides wEJR.

D Omitted Proofs from Section 5

We first state and prove the result that was implied by Footnote 3.

Proposition D.1. There exists an $n$-voter, $\ell$-round temporal election $E \in \mathcal{E}_{=1}$ such that $n \mid \ell$ and no outcome $o$ provides sEJR for $E$.

Proof. Consider an election $E = (C, N, \ell, A)$ with $\ell = 6$ rounds, a set of voters $N = [6]$, and a set of candidates $C = \{x, y, z, c_1, \ldots, c_6\}$. The voters' ballots are as follows (each $a_{i,r}$ is a singleton, so we omit the braces; rows are rounds, columns are voters):

         Voter
Round    1    2    3    4    5    6
1        x    x    y    y    z    z
2        x    x    y    y    z    z
3        c1   c2   c3   c4   c5   c6
4        c1   c2   c3   c4   c5   c6
5        c1   c2   c3   c4   c5   c6
6        c1   c2   c3   c4   c5   c6

Intuitively, in the first two rounds the voters form three cohesive blocks of size two, while in each of the remaining four rounds every voter supports her own dedicated candidate.

Observe that sEJR requires that every voter in $N$ receive a satisfaction of at least 1. Additionally, since each of the pairs of voters $(1,2)$, $(3,4)$, and $(5,6)$ agrees in two rounds, at least one voter from each of these pairs needs to receive a satisfaction of at least 2 according to sEJR. We will show that it is not possible to satisfy both of these requirements at the same time. Take an arbitrary outcome $o$. It either selects the same candidate in the first two rounds, or two different ones. Let us consider both cases.

Case 1: $o_1 = o_2$. Without loss of generality, assume that $o_1 = o_2 = x$. Then, in order to guarantee a satisfaction of at least 1 to each of the voters 3, 4, 5, and 6, we need to select each of the candidates $c_3$, $c_4$, $c_5$, and $c_6$ exactly once across the rounds 3, 4, 5, and 6. However, in this way no voter in the pair $(3,4)$ receives a satisfaction of at least 2. Thus, sEJR cannot be satisfied if $o_1 = o_2$.

Case 2: $o_1 \neq o_2$. Without loss of generality, assume that $o_1 = x$ and $o_2 = y$. In order to guarantee that one voter in each of the pairs $(1,2)$ and $(3,4)$ has a satisfaction of at least 2, we need to select candidate $c_1$ or $c_2$ in one of the rounds 3, 4, 5, or 6, and candidate $c_3$ or $c_4$ in another. Then, in the remaining two rounds we have to select candidates $c_5$ and $c_6$, once each, so as to guarantee that both voter 5 and voter 6 have a satisfaction of at least 1. However, in this way no voter from the pair $(5,6)$ has a satisfaction of at least 2. Therefore, it is not possible to satisfy sEJR in the case $o_1 \neq o_2$ either, which concludes the proof.
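The case analysis above can also be cross-checked mechanically. The Python sketch below (names ours) enumerates outcomes of the election from the proof and confirms that none provides sEJR. It restricts each round to candidates approved by somebody in that round, which is without loss of generality: replacing an unapproved winner with an approved one only raises satisfactions while the sEJR thresholds stay fixed.

from itertools import product

# ballots[i][r]: the unique candidate voter i approves in round r
ballots = [
    ["x", "x", "c1", "c1", "c1", "c1"],
    ["x", "x", "c2", "c2", "c2", "c2"],
    ["y", "y", "c3", "c3", "c3", "c3"],
    ["y", "y", "c4", "c4", "c4", "c4"],
    ["z", "z", "c5", "c5", "c5", "c5"],
    ["z", "z", "c6", "c6", "c6", "c6"],
]
n = ell = 6

def provides_sejr(o):
    for mask in range(1, 1 << n):                     # every subset S
        S = [i for i in range(n) if mask >> i & 1]
        t = sum(len({ballots[i][r] for i in S}) == 1  # rounds where S agrees
                for r in range(ell))
        need = min(t, ell * len(S) // n)
        sat = lambda i: sum(o[r] == ballots[i][r] for r in range(ell))
        if need > 0 and all(sat(i) < need for i in S):
            return False
    return True

per_round = [sorted({b[r] for b in ballots}) for r in range(ell)]
assert not any(provides_sejr(o) for o in product(*per_round))
print("verified: no outcome provides sEJR")

With $\ell = n = 6$ this amounts to a few hundred thousand checks and finishes in seconds.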
Proposition 5.2. It holds that: (i) sFJR $\Longrightarrow$ sFPJR $\Longrightarrow$ sPJR and FPJR, (ii) FJR $\Longrightarrow$ FPJR $\Longrightarrow$ PJR and wFPJR, and (iii) wFJR $\Longrightarrow$ wFPJR $\Longrightarrow$ wPJR.

Proof. For part (i), we first show that sFJR implies sFPJR. Take an arbitrary election $E = (C, N, \ell, A)$ and an outcome $o$ satisfying sFJR. For any subset of voters $S \subseteq N$, there exists an $i \in S$ such that
\[
\mathrm{sat}_i(o) \ \ge \ \max_{R \subseteq [\ell]\,:\,|R| = \lfloor \ell \cdot |S|/n \rfloor} \ \max_{o'_R \in C^R} \ \min_{j \in S} \mathrm{sat}_j(o'_R).
\]
Then, since $i \in S$, we must have $\mathrm{sat}_S(o) \ge \mathrm{sat}_i(o)$, and $o$ satisfies sFPJR as well.

Next, we show that sFPJR implies sPJR. Take an arbitrary election $E = (C, N, \ell, A)$ and an outcome $o$ satisfying sFPJR. Let $S \subseteq N$ be a subset of voters that agree in a set $T$ of $t$ rounds, $t > 0$, in election $E$ (note that if $t = 0$ then $S$ cannot be a witness that sPJR is violated).

First, suppose that $t \ge \lfloor \ell \cdot |S|/n \rfloor$. Let $T' \subseteq T$ be a subset of $T$ of size $|T'| = \lfloor \ell \cdot |S|/n \rfloor$. Then the voters in $S$ agree in every round $r \in T'$, so there exists an outcome $o'$ such that for all voters $i \in S$ it holds that $\mathrm{sat}_i(o'_{T'}) = \lfloor \ell \cdot |S|/n \rfloor$. Since $o$ provides sFPJR, by definition we have
\[
\mathrm{sat}_S(o) \ \ge \ \max_{R \subseteq [\ell]\,:\,|R| = \lfloor \ell \cdot |S|/n \rfloor} \ \max_{o''_R \in C^R} \ \min_{j \in S} \mathrm{sat}_j(o''_R) \ \ge \ \min_{j \in S} \mathrm{sat}_j(o'_{T'}) \ \ge \ \lfloor \ell \cdot |S|/n \rfloor \ = \ \min(t, \lfloor \ell \cdot |S|/n \rfloor).
\]
Hence, sPJR is satisfied.

On the other hand, suppose that $t < \lfloor \ell \cdot |S|/n \rfloor$. We extend $T$ to a subset of rounds $T' \subseteq [\ell]$ such that $|T'| = \lfloor \ell \cdot |S|/n \rfloor$ and $T \subseteq T'$. Consider an outcome $o'$ that satisfies $o'_r \in \bigcap_{i \in S} a_{i,r}$ for all $r \in T$. Then for each $i \in S$ we have $\mathrm{sat}_i(o'_{T'}) \ge t$. Given that $o$ provides sFPJR, by definition we have
\[
\mathrm{sat}_S(o) \ \ge \ \max_{R \subseteq [\ell]\,:\,|R| = \lfloor \ell \cdot |S|/n \rfloor} \ \max_{o''_R \in C^R} \ \min_{j \in S} \mathrm{sat}_j(o''_R) \ \ge \ \min_{j \in S} \mathrm{sat}_j(o'_{T'}) \ \ge \ t \ = \ \min(t, \lfloor \ell \cdot |S|/n \rfloor).
\]
Hence, sPJR is satisfied in this case as well.

We now show that sFPJR implies FPJR. Fix a temporal election $E = (C, N, \ell, A)$, and consider an outcome $o$ that does not satisfy FPJR. Then there exist a subset of voters $S \subseteq N$ and a subset of rounds $T \subseteq [\ell]$ such that for all subsets $R \subseteq T$ with $|R| = \lfloor |T| \cdot |S|/n \rfloor$ there exists an outcome $o'$ such that $\mathrm{sat}_S(o) < \min_{i \in S} \mathrm{sat}_i(o'_R)$. Take one such round subset $R \subseteq T$ that violates the FPJR condition. Any round subset $R' \subseteq [\ell]$ such that $R \subseteq R'$ and $|R'| = \lfloor \ell \cdot |S|/n \rfloor$ also violates the FPJR condition, because
\[
\mathrm{sat}_S(o) \ < \ \min_{i \in S} \mathrm{sat}_i(o'_R) \ \le \ \min_{i \in S} \mathrm{sat}_i(o'_{R'}).
\]
Note that $R'$ also violates the sFPJR condition, and hence $o$ does not satisfy sFPJR. As we assumed that $o$ violates FPJR, the desired result follows by contraposition.

For part (ii), we first show that FJR implies FPJR. Take an arbitrary election $E = (C, N, \ell, A)$ and an outcome $o$ satisfying FJR. For any subset of voters $S \subseteq N$, there exists an $i \in S$ such that
\[
\mathrm{sat}_i(o) \ \ge \ \max_{T \subseteq [\ell]} \ \min_{R \subseteq T\,:\,|R| = \lfloor |T| \cdot |S|/n \rfloor} \ \max_{o'_R \in C^R} \ \min_{j \in S} \mathrm{sat}_j(o'_R).
\]
Then, since $i \in S$, we must have $\mathrm{sat}_S(o) \ge \mathrm{sat}_i(o)$, and $o$ satisfies FPJR as well.

Next, we show that FPJR implies PJR. Take an arbitrary election $E = (C, N, \ell, A)$ and an outcome $o$ that provides FPJR. Let $S \subseteq N$ be a subset of voters that agree in a set of rounds $T$ of size $|T| = t$ in election $E$. Consider an outcome $o'$ that satisfies $o'_r \in \bigcap_{i \in S} a_{i,r}$ for each $r \in T$. Observe that for every subset of rounds $R \subseteq T$ with $|R| = \lfloor t \cdot |S|/n \rfloor$ we have $\mathrm{sat}_j(o'_R) = \lfloor t \cdot |S|/n \rfloor$ for each $j \in S$. Consequently,
\[
\mu_S(T) \ = \ \min_{R \subseteq T\,:\,|R| = \lfloor t \cdot |S|/n \rfloor} \ \max_{o''_R \in C^R} \ \min_{j \in S} \mathrm{sat}_j(o''_R) \ \ge \ \min_{R \subseteq T\,:\,|R| = \lfloor t \cdot |S|/n \rfloor} \ \min_{j \in S} \mathrm{sat}_j(o'_R) \ \ge \ \lfloor t \cdot |S|/n \rfloor.
\]
Since $o$ provides FPJR, we have $\mathrm{sat}_S(o) \ge \max_{T' \subseteq [\ell]} \mu_S(T') \ge \mu_S(T) \ge \lfloor t \cdot |S|/n \rfloor$. Hence, $o$ provides PJR.

We will now argue that FPJR implies wFPJR. Consider an outcome $o$ that provides FPJR for a temporal election $E$. Then for every subset of voters $S$ we have $\mathrm{sat}_S(o) \ge \max_{T \subseteq [\ell]} \mu_S(T)$. Thus, in particular, for $T = [\ell]$ we have
\[
\mathrm{sat}_S(o) \ \ge \ \mu_S([\ell]) \ = \ \min_{R \subseteq [\ell]\,:\,|R| = \lfloor \ell \cdot |S|/n \rfloor} \ \max_{o'_R \in C^R} \ \min_{j \in S} \mathrm{sat}_j(o'_R).
\]
This means that $o$ provides wFPJR.

For part (iii), we first show that wFJR implies wFPJR. Take an arbitrary election $E = (C, N, \ell, A)$ and an outcome $o$ satisfying wFJR. For any subset of voters $S \subseteq N$, there exists an $i \in S$ such that
\[
\mathrm{sat}_i(o) \ \ge \ \min_{R \subseteq [\ell]\,:\,|R| = \lfloor \ell \cdot |S|/n \rfloor} \ \max_{o'_R \in C^R} \ \min_{j \in S} \mathrm{sat}_j(o'_R).
\]
Then, since $i \in S$, we must have $\mathrm{sat}_S(o) \ge \mathrm{sat}_i(o)$, and $o$ satisfies wFPJR as well.

Next, we show that wFPJR implies wPJR. Fix an election $E = (C, N, \ell, A)$ and an outcome $o$ that provides wFPJR. Suppose there exists a subset of voters $S \subseteq N$ such that $\bigcap_{i \in S} a_{i,r} \neq \emptyset$ for all $r \in [\ell]$. We construct an outcome $o'$ so that $o'_r \in \bigcap_{i \in S} a_{i,r}$ for each $r \in [\ell]$. Then, for each subset of rounds $R \subseteq [\ell]$ such that $|R| = \lfloor \ell \cdot |S|/n \rfloor$, we have $\mathrm{sat}_j(o'_R) = \lfloor \ell \cdot |S|/n \rfloor$ for all $j \in S$. Since $o$ provides wFPJR, we have
\[
\mathrm{sat}_S(o) \ \ge \ \min_{R \subseteq [\ell]\,:\,|R| = \lfloor \ell \cdot |S|/n \rfloor} \ \max_{o''_R \in C^R} \ \min_{j \in S} \mathrm{sat}_j(o''_R) \ \ge \ \min_{R \subseteq [\ell]\,:\,|R| = \lfloor \ell \cdot |S|/n \rfloor} \ \min_{j \in S} \mathrm{sat}_j(o'_R) \ \ge \ \lfloor \ell \cdot |S|/n \rfloor.
\]
Hence, $o$ provides wPJR.

Proposition 5.4. For every temporal election $E \in \mathcal{E}_{=1}$, it holds that wPJR $\iff$ wEJR.

Proof. It is easy to see that wEJR implies wPJR: for every subset of voters $S \subseteq N$ that agrees in all rounds and every outcome $o$ satisfying wEJR, we have
\[
\mathrm{sat}_S(o) \ \ge \ \mathrm{sat}_i(o) \ \ge \ \lfloor \ell \cdot |S|/n \rfloor,
\]
where the first inequality follows from the fact that $i \in S$ and the second inequality follows from the definition of wEJR.

We now prove that wPJR implies wEJR for any election $E \in \mathcal{E}_{=1}$. Note that any subset of voters $S \subseteq N$ that agree in all $\ell$ rounds must necessarily have the same ballot. This means that for any outcome $o$ we have $\mathrm{sat}_S(o) = \mathrm{sat}_i(o)$ for every $i \in S$, giving us the desired wEJR guarantee $\mathrm{sat}_i(o) = \mathrm{sat}_S(o) \ge \lfloor \ell \cdot |S|/n \rfloor$.

E Omitted Proofs from Section 6

Proposition 6.2. It holds that: (i) sCore $\Longrightarrow$ sFJR and Core, (ii) Core $\Longrightarrow$ FJR and wCore, and (iii) wCore $\Longrightarrow$ wFJR.

Proof. For part (i), we first show that sCore implies sFJR. Take an arbitrary election $E = (C, N, \ell, A)$ and an outcome $o$ satisfying sCore. For any subset of voters $S \subseteq N$, any subset of rounds $R \subseteq [\ell]$ with $|R| = \lfloor \ell \cdot |S|/n \rfloor$, and any outcome $o'$, it follows that $\mathrm{sat}_i(o) \ge \mathrm{sat}_i(o'_R)$ for some voter $i \in S$. Given that $\mathrm{sat}_i(o'_R) \ge \min_{j \in S} \mathrm{sat}_j(o'_R)$ for every voter $i \in S$, the same voter also witnesses the sFJR condition: the amount of satisfaction that sCore guarantees a voter is at least as much as sFJR requires. Hence, $o$ satisfies sFJR.

We now show that sCore implies Core by considering an outcome $o$ that is not core stable. There then exist a subset of voters $S \subseteq N$ and a subset of rounds $T \subseteq [\ell]$ such that for every subset $R \subseteq T$ with $|R| = \lfloor |T| \cdot |S|/n \rfloor$ there exists an outcome $o'$ such that $\mathrm{sat}_i(o) < \mathrm{sat}_i(o'_R)$ for all voters $i \in S$. Fix one such subset of rounds $R \subseteq T$ such that $o'_R$ witnesses a violation of core stability, and consider a subset of rounds $R' \subseteq [\ell]$ such that $|R'| = \lfloor \ell \cdot |S|/n \rfloor$ and $R \subseteq R'$ (i.e., $R'$ is an extension of $R$). Given that $\mathrm{sat}_i(o'_R) \le \mathrm{sat}_i(o'_{R'})$ for all voters $i \in S$, and $\mathrm{sat}_i(o) < \mathrm{sat}_i(o'_R)$ since $o$ is not core
stable, it follows that $\mathrm{sat}_i(o) < \mathrm{sat}_i(o'_{R'})$, and so $o$ is not strongly core stable either. As our original assumption was that $o$ violated core stability, it follows by contraposition that if $o$ satisfies sCore, then $o$ satisfies Core.

For part (ii), consider an arbitrary election $E = (C, N, \ell, A)$ and an outcome $o$ satisfying Core. For any subset of voters $S \subseteq N$ and any subset of rounds $T \subseteq [\ell]$, there therefore exists a subset $R \subseteq T$ such that $|R| = \lfloor |T| \cdot |S|/n \rfloor$ and, for any outcome $o'$, there exists some voter $i \in S$ such that $\mathrm{sat}_i(o) \ge \mathrm{sat}_i(o'_R)$.

We first show that Core implies FJR. As in the proof that sCore implies sFJR, we observe that Core guarantees at least as much satisfaction as FJR, given that $\mathrm{sat}_i(o'_R) \ge \min_{j \in S} \mathrm{sat}_j(o'_R)$ for every voter $i \in S$; therefore $o$ automatically satisfies FJR too.

We now show that Core implies wCore. If we consider the satisfaction guarantee made by $o$ for $T = [\ell]$, we find that for any subset of voters $S \subseteq N$ there exists a subset of rounds $R \subseteq T = [\ell]$ such that $|R| = \lfloor \ell \cdot |S|/n \rfloor$ and, for any outcome $o'$, there exists some voter $i \in S$ such that $\mathrm{sat}_i(o) \ge \mathrm{sat}_i(o'_R)$. Notice that this is exactly the definition of wCore; hence $o$ satisfies wCore.

For part (iii), we show that wCore implies wFJR. Take an arbitrary election $E = (C, N, \ell, A)$ and an outcome $o$ satisfying wCore. For any subset of voters $S \subseteq N$, there exists a subset of rounds $R \subseteq [\ell]$ with $|R| = \lfloor \ell \cdot |S|/n \rfloor$ such that for any outcome $o'$ there exists a voter $i \in S$ with $\mathrm{sat}_i(o) \ge \mathrm{sat}_i(o'_R)$. Observe that for any voter $i \in S$ we have $\mathrm{sat}_i(o'_R) \ge \min_{j \in S} \mathrm{sat}_j(o'_R)$. It therefore follows that there exists a voter $i \in S$ such that $\mathrm{sat}_i(o) \ge \min_{j \in S} \mathrm{sat}_j(o'_R)$, where $R$ is the same subset of rounds that enabled $o$ to satisfy wCore; hence $o$ also satisfies wFJR.

F Omitted Proofs from Section 7

As mentioned in the discussion in Section 7, to rule out all remaining potential implications, we identify twelve specific pairs of axioms for which we construct outcomes that satisfy one notion but not the other. Therefore, to prove Proposition 7.1, it suffices to prove the following (which completes the 'only if' part; the 'if' part follows from our results in the previous sections).

Proposition F.1. There exist elections $E_1, E_2, E_3, E_4, E_5, E_6, E_7, E_8, E_9 \in \mathcal{E}_{=1}$ and $E_0 \in \mathcal{E}_{\ge 1}$, as well as outcomes $o_1, o_2, o_3, o_4, o_5, o_6, o_7, o_8, o_9$, and $o_0$ for the respective elections, such that

i) in $E_1$, outcome $o_1$ provides wCore & wEJR+ but not JR,
ii) in $E_2$, outcome $o_2$ provides Core & EJR+ but not sJR,
iii) in $E_3$, outcome $o_3$ provides sJR but not wPJR,
iv) in $E_4$, outcome $o_4$ provides sEJR+ but not wFPJR,
v) in $E_5$, outcome $o_5$ provides sFJR but not wCore,
vi) in $E_6$, outcome $o_6$ provides sCore but not wEJR+,
vii) in $E_7$, outcome $o_7$ provides sFPJR but not wFJR,
viii) in $E_8$, outcome $o_8$ provides sFPJR but not wEJR+,
ix) in $E_9$, outcome $o_9$ provides sFPJR but not EJR, and
x) in $E_0$, outcome $o_0$ provides sFPJR but not wEJR.

Proof. Let us consider each claim separately. For each election in $\mathcal{E}_{=1}$ we omit the curly braces when writing the voters' ballots. In the proof of Claim ii, we explain how any outcome of $E_2$ may be selected as $o_2$. In the proofs of the other nine claims, we show that
a specific outcome $o$ witnesses the desired result; in each case, we state the outcome $o$ explicitly.

Claim i: wCore and wEJR+ do not imply JR, even for $\mathcal{E}_{=1}$

Let $E_1 = (C, N, \ell, A) \in \mathcal{E}_{=1}$ be a temporal election with 4 candidates $C = \{a, b, c, d\}$, 6 voters $N = [6]$, $\ell = 4$ rounds, and the voters' ballots as presented in Table 1.

         Voter
Round    1    2    3    4    5    6
1        a    a    a    b    b    b
2        a    a    a    b    b    b
3        a    a    a    b    c    d
4        a    a    a    b    c    d

Table 1: The ballots of the voters in the proof of Proposition F.1, Claim i (rows are rounds, columns are voters).

Consider the outcome $o_1 = (a, a, a, a)$. First, let us show that $o_1$ provides wCore. Given that voters 1, 2, and 3 approve the winner in every round of $o_1$, none of these voters can be more satisfied by any other outcome of $E_1$, so any subset of voters containing one of these three voters cannot witness a violation of wCore. It therefore remains to check that no subset of voters $S \subseteq \{4, 5, 6\}$ can violate wCore. We now consider the three possible cases for the size of such a subset $S$. Let $o'$ be an arbitrary outcome that witnesses a violation of wCore.

If $|S| = 1$, then we find that $\lfloor \ell \cdot |S|/n \rfloor = \lfloor 4 \cdot 1/6 \rfloor = 0$, so any suboutcome $o'_R$ causing the violation would have $|R| = \lfloor \ell \cdot |S|/n \rfloor = 0$ rounds. Clearly, no voter can be more satisfied by a suboutcome over 0 rounds than by $o_1$, so wCore is not violated in this case.

If $|S| = 2$, then we calculate that $|R| = \lfloor \ell \cdot |S|/n \rfloor = 1$. Taking $R$ to be round 3, we find that there exists a round subset $R$ for which at least one of the two voters of $S$ must have a satisfaction of 0, given that no two voters of $\{4, 5, 6\}$ approve a common candidate in round 3. This voter is therefore not more satisfied by any proposed suboutcome $o'_R$ than by $o_1$, so wCore is not violated.

If $|S| = 3$, then we calculate that $|R| = \lfloor \ell \cdot |S|/n \rfloor = 2$. Taking $R$ to be rounds 3 and 4, we find that there exists a round subset $R$ for which at least one of the three voters of $S$ must have a satisfaction of 0, given that no two voters of $\{4, 5, 6\}$ approve a common candidate in either round. This voter is therefore not more satisfied by any proposed suboutcome $o'_R$ than by $o_1$, so again wCore is not violated.

This exhausts the possibilities for $|S|$, so the assumption that $o'$ witnesses a violation of wCore is contradicted. As $o'$ was an arbitrary outcome, we conclude that $o_1$ satisfies wCore.

We now show that $o_1$ provides wEJR+. First, we observe that voters 1, 2, and 3 receive utility $\ell$ from $o_1$; therefore, for any subset of voters $S \subseteq N$ containing any of these three voters, it holds that $\mathrm{sat}_i(o) = \ell \ge \lfloor \ell \cdot \sigma/n \rfloor$ for some $i \in S$, because $\sigma \le n$. This satisfies the first condition of wEJR+; hence, if a subset of voters witnessing a violation of wEJR+ exists, then this subset must itself be a subset of $\{4, 5, 6\}$. Given that no two voters from the subset $\{4, 5, 6\}$ agree on a common
Claim ii: Core and EJR+ do not imply sJR even for $\mathcal{E}_{=1}$

Let $E_2 = (C, N, \ell, A) \in \mathcal{E}_{=1}$ with $\ell = 6$ rounds, a set of voters $N = [12]$, a set of candidates $C = \{x, y, z, c_1, \dots, c_{12}\}$, and the ballots of voters as presented in Table 2.

| Round | V1 | V2 | V3 | V4 | V5 | V6 | V7 | V8 | V9 | V10 | V11 | V12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | x | x | x | x | y | y | y | y | z | z | z | z |
| 2–6 | $c_1$ | $c_2$ | $c_3$ | $c_4$ | $c_5$ | $c_6$ | $c_7$ | $c_8$ | $c_9$ | $c_{10}$ | $c_{11}$ | $c_{12}$ |

Table 2: The ballots of voters in the proof of Proposition F.1, Claim ii (in each of rounds 2–6, voter $i$ approves only candidate $c_i$).

Observe that this is the same election as the one we considered in the proof of Proposition 3.4. Thus, we already know that there is no sJR outcome in this election. On the other hand, we will prove that every outcome $o$ provides EJR+ and Core, as neither of these axioms gives any satisfaction guarantee in this election to any subset of voters.

For EJR+, let us consider for which values of $\sigma$ and $\tau$ we can have $(\sigma, \tau)$-cohesive subsets of voters $S \subseteq N$. Clearly, $\sigma$ can be at most 4, as in no round do more than 4 voters agree. However, if $\sigma > 1$, then $\tau$ has to be equal to 1, because there is only one round in which at least two voters agree. However, even for the maximum $\sigma = 4$, a subset of voters that is $(4, 1)$-cohesive receives a satisfaction guarantee of $\lfloor \tau \cdot \sigma/n \rfloor = \lfloor 1 \cdot 4/12 \rfloor = \lfloor 1/3 \rfloor = 0$. On the other hand, if $\sigma = 1$, then even for the maximum $\tau = \ell = 6$, we again get a satisfaction guarantee of $\lfloor \tau \cdot \sigma/n \rfloor = \lfloor 6 \cdot 1/12 \rfloor = \lfloor 1/2 \rfloor = 0$. Thus, indeed, EJR+ does not give any satisfaction guarantee in election $E_2$, which means that every outcome $o$ provides EJR+.

For Core, in order for it to give a positive satisfaction guarantee, there has to be a subset of voters $S \subseteq N$ and a subset of rounds $T \subseteq [\ell]$ such that for every $R \subseteq T$ with $|R| = \lfloor |T| \cdot |S|/n \rfloor$ there exists $o'_R \in C^R$ for which the minimum satisfaction in $S$ is positive, i.e., $\min_{i \in S} \mathrm{sat}_i(o'_R) > 0$. Otherwise, the voter yielding the minimum satisfaction equal to 0 would not benefit from the deviation to $o'_R$ from any other outcome $o$. Since $|R| \leq |T| \cdot |S|/n \leq \ell \cdot |S|/n \leq |S|/2$ and each candidate in rounds 2–6 gives a positive satisfaction to at most one voter, it is not possible to guarantee all voters in $S$ a positive satisfaction from $o'_R$ if $R$ does not contain round 1. However, as we | https://arxiv.org/abs/2505.22513v1 |
choose the worst $\lfloor |T| \cdot |S|/n \rfloor$ rounds from $T$ for $R$, in order to have round 1 in $R$ we would need $|S|/n = 1$, i.e., $S = N$. But if $S = N$, then $o'_R$ would have to give a positive satisfaction to 12 voters. This is not possible, as every suboutcome can give a summed satisfaction to all voters of at most 9 (a satisfaction of 4 can be given in the first round and 1 in each consecutive round). Thus, indeed, Core does not give a positive satisfaction guarantee to any subset of voters, which means that it is satisfied by every outcome $o$.

Claim iii: sJR does not imply wPJR even for $\mathcal{E}_{=1}$

Let $E_3 = (C, N, \ell, A) \in \mathcal{E}_{=1}$ be a temporal election with 2 candidates $C = \{a, b\}$, 2 voters $N = [2]$, $\ell = 4$ rounds, and the ballots of voters as presented in Table 3.

| Round | Voter 1 | Voter 2 |
|---|---|---|
| 1 | a | b |
| 2 | a | b |
| 3 | a | b |
| 4 | a | b |

Table 3: The ballots of voters in the proof of Proposition F.1, Claim iii.

Consider outcome $o_3 = (a, b, b, b)$. Clearly, $o_3$ provides sJR, as $\mathrm{sat}_i(o_3) \geq 1$ for every voter $i \in N$ (thus, there cannot exist a subset of voters $S$ who collectively receive a satisfaction of less than $\min(1, \lfloor \ell \cdot |S|/n \rfloor)$). It remains to show that $o_3$ violates wPJR. To see this, consider the singleton subset $S = \{1\}$. Clearly, voter 1 agrees with itself on candidate $a$ in every round of $E_3$. However, we find that $\lfloor \ell \cdot |S|/n \rfloor = \lfloor 4 \cdot 1/2 \rfloor = 2 > 1 = \mathrm{sat}_S(o_3)$. Therefore, $S$ is a witness of a violation of wPJR, which means that $o_3$ violates wPJR.

Claim iv: sEJR+ does not imply wFPJR even for $\mathcal{E}_{=1}$

Let $E_4 = (C, N, \ell, A) \in \mathcal{E}_{=1}$ be a temporal election with 3 candidates $C = \{x, y, z\}$, 7 voters $N = [7]$, $\ell = 7$ rounds, and the ballots of voters as presented in Table 4.

| Round | V1 | V2 | V3 | V4 | V5 | V6 | V7 |
|---|---|---|---|---|---|---|---|
| 1 | x | x | x | x | x | x | x |
| 2 | y | x | x | x | x | x | x |
| 3 | y | x | x | x | x | x | x |
| 4 | y | x | x | x | x | x | x |
| 5 | y | z | z | x | x | x | x |
| 6 | y | x | x | z | z | x | x |
| 7 | y | x | x | x | x | z | z |

Table 4: The ballots of voters in the proof of Proposition F.1, Claim iv.

Consider outcome $o_4 = (x, x, x, x, y, y, y)$. First, let us show that $o_4$ provides sEJR+. Observe that since the number of rounds is equal to the number of voters, sEJR+ requires that for every round $r \in [\ell]$ and every subset of voters $S$ such that $\bigcap_{i \in S} a_{i,r} \neq \emptyset$, either $o_r \in \bigcap_{i \in S} a_{i,r}$ or there exists a voter in $S$ with satisfaction greater than or equal to $|S|$. Observe that $\bigcap_{i \in S} a_{i,r} \neq \emptyset$ implies that there has to be a candidate $c \in C$ that is approved by all voters in $S$ in round $r$. Let $S' = \{i \in N : c \in a_{i,r}\}$ be the subset of all voters that approve $c$ in $r$. Then observe that if one of the conditions is satisfied by $S'$, it has to be satisfied by $S$ as well. Thus, we can exhaustively check, for every round $r \in [\ell]$ and every candidate $c \in C$ unselected in this round, whether either of the conditions is satisfied for the subset of all voters that approve $c$ in $r$. It turns out to be the case, which we summarize in Table 5. | https://arxiv.org/abs/2505.22513v1 |
On the other hand, we can prove that wFPJR is violated. Consider the subset of voters $S = \{2, 3, 4, 5, 6, 7\}$. To determine the satisfaction guarantee for this subset according to wFPJR, we take the worst possible subset of rounds $R$ of size $|R| = \lfloor \ell \cdot |S|/n \rfloor = \lfloor 7 \cdot 6/7 \rfloor = 6$, for which we fix a suboutcome $o'_R$ so as to maximize the minimum satisfaction of voters from $S$.

| Round | $x$ | $y$ | $z$ |
|---|---|---|---|
| 1 | selected | not approved by any voter | not approved by any voter |
| 2 | selected | $\mathrm{sat}_1(o_4) \geq 1$ | not approved by any voter |
| 3 | selected | $\mathrm{sat}_1(o_4) \geq 1$ | not approved by any voter |
| 4 | selected | $\mathrm{sat}_1(o_4) \geq 1$ | not approved by any voter |
| 5 | $\mathrm{sat}_4(o_4) \geq 4$ | selected | $\mathrm{sat}_2(o_4) \geq 2$ |
| 6 | $\mathrm{sat}_2(o_4) \geq 4$ | selected | $\mathrm{sat}_4(o_4) \geq 2$ |
| 7 | $\mathrm{sat}_2(o_4) \geq 4$ | selected | $\mathrm{sat}_6(o_4) \geq 2$ |

Table 5: Satisfaction of the sEJR+ conditions for each round $r \in [\ell]$ and each candidate in $C$.

We can easily check that by selecting candidate $x$ in each round, we guarantee a minimum satisfaction of 5 to every voter in $S$, no matter which 6 rounds we choose for $R$. This means that according to wFPJR, the satisfaction of subset $S$ for the outcome should be at least 5. However, $\mathrm{sat}_S(o_4) = 4$, as there are only 4 rounds in which the selected candidate is supported by at least one voter from $S$. Hence, wFPJR is indeed violated.

Claim v: sFJR does not imply wCore even for $\mathcal{E}_{=1}$

Let $E_5 = (C, N, \ell, A) \in \mathcal{E}_{=1}$ be a temporal election with 3 candidates $C = \{a, b, c\}$, 8 voters $N = [8]$, $\ell = 4$ rounds, and the ballots of voters as presented in Table 6.

| Round | V1 | V2 | V3 | V4 | V5 | V6 | V7 | V8 |
|---|---|---|---|---|---|---|---|---|
| 1 | a | a | a | a | b | a | c | c |
| 2 | a | a | a | a | a | b | c | c |
| 3 | a | a | a | a | a | a | c | c |
| 4 | a | a | a | a | a | a | c | c |

Table 6: The ballots of voters in the proof of Proposition F.1, Claim v.

Consider outcome $o_5 = (a, a, c, c)$. Let us first show that it satisfies sFJR. The satisfaction guarantee given by sFJR to a subset $S$ is the minimum satisfaction of any voter in $S$ under the best possible suboutcome $o'_R$ for any subset of rounds $R \subseteq [\ell]$ with $|R| = \lfloor \ell \cdot |S|/n \rfloor = \lfloor |S|/2 \rfloor$. Observe that under $o_5$, there are only two voters with satisfaction 1 and the rest of the voters have satisfaction 2. Thus, if sFJR were to be violated, we would need a subset of voters $S \subseteq N$ for which sFJR gives a satisfaction guarantee of at least 3. This means that the size of $S$ needs to be at least 6. Observe that voters 7 and 8 do not agree with any voter from $\{1, 2, 3, 4, 5, 6\}$ in any of the rounds. Hence, we can analyze these two groups of voters separately. However, combined with the fact that the size of $S$ needs to be at least 6, this means that the only subset of voters that we need to check is $S = \{1, 2, 3, 4, 5, 6\}$. Now, observe that there are no 3 rounds in which all voters from $S$ agree. Thus, for any subset of 3 rounds $R$, there is no suboutcome $o'_R$ that would give a satisfaction of at least 3 to each voter in $S$. Therefore, there is no subset $S$ witnessing a violation of sFJR, which means that $o_5$ provides sFJR. | https://arxiv.org/abs/2505.22513v1 |
On the other hand, to show that $o_5$ fails wCore, consider an arbitrary subset of 3 rounds $R$ and the suboutcome $o'_R$ on these rounds in which we select candidate $a$ each time. Observe that $\mathrm{sat}_i(o'_R) = 3 > 2 = \mathrm{sat}_i(o_5)$ for every $i \in \{1, 2, 3, 4\}$. Moreover, $\mathrm{sat}_i(o'_R) \geq 2 > 1 = \mathrm{sat}_i(o_5)$ for $i \in \{5, 6\}$. Thus, the subset of voters $S = \{1, 2, 3, 4, 5, 6\}$ is a witness that wCore is violated.

Claim vi: sCore does not imply wEJR+ even for $\mathcal{E}_{=1}$

Let $E_6 = (C, N, \ell, A) \in \mathcal{E}_{=1}$ be a temporal election with 4 candidates $C = \{a, b, c, d\}$, 7 voters $N = [7]$, $\ell = 7$ rounds, and the ballots of voters as presented in Table 7.

| Round | V1 | V2 | V3 | V4 | V5 | V6 | V7 |
|---|---|---|---|---|---|---|---|
| 1 | a | a | a | a | a | a | d |
| 2 | a | a | a | a | b | b | d |
| 3 | a | a | a | b | a | b | d |
| 4 | a | a | a | b | b | a | d |
| 5 | a | a | a | a | b | b | d |
| 6 | a | a | a | b | a | b | d |
| 7 | a | a | a | b | b | a | d |

Table 7: The ballots of voters in the proof of Proposition F.1, Claim vi.

Consider outcome $o_6 = (d, a, a, a, b, b, b)$. First, let us show that $o_6$ satisfies sCore. Since in each round voter 7 does not agree with any voter from $\{1, \dots, 6\}$ and $\mathrm{sat}_7(o_6) \geq 1 = \lfloor \ell \cdot |\{7\}|/n \rfloor$, for the purpose of checking whether sCore is satisfied we can focus on subsets $S \subseteq \{1, \dots, 6\}$ only. Observe that the satisfaction from outcome $o_6$ of each voter in the set $\{1, \dots, 6\}$ is equal to 3. Thus, if sCore were not satisfied, there would have to be a subset $S \subseteq \{1, \dots, 6\}$, a subset of rounds $R \subseteq [\ell]$ of size $|R| = \lfloor \ell \cdot |S|/n \rfloor = |S|$, and a suboutcome $o'_R$ such that $\mathrm{sat}_i(o'_R) \geq 4$ for every $i \in S$. Since $\mathrm{sat}_i(o'_R) \leq |R|$ for every $i \in N$, $R \subseteq [\ell]$ and $o'_R \in C^R$, we know that $S$ must be of size 4, 5, or 6. Let us consider each case separately.

Case 1, $|S| = 4$. When $S$ consists of 4 voters, in order for some $o'_R$ to give satisfaction 4 to all voters in $S$, it would have to hold that the voters in $S$ all agree in some 4 rounds. There are no 4 voters that all agree in some 4 rounds, hence no set $S$ of size 4 is a witness of an sCore violation.

Case 2, $|S| = 5$. Observe that election $E_6$ is symmetric with regard to voters 1, 2, 3 as well as 4, 5, 6. Hence, without loss of generality, we can focus on two sets of size 5, i.e., $S_1 = \{1, 2, 3, 4, 5\}$ and $S_2 = \{2, 3, 4, 5, 6\}$. In the first round, selecting candidate $a$ for suboutcome $o'_R$ would give a satisfaction of 1 to each voter in $S$. Hence, every subset $R$ that does not include round 1, and every corresponding suboutcome $o'_R$, would clearly be dominated by the suboutcome we obtain by exchanging any round in $R$ for round 1 and selecting $a$ there. Thus, let us assume that the first round is in $R$ and candidate $a$ is selected in that round. In the remaining rounds, we have to choose between selecting candidate $a$ or $b$ (selecting $c$ would never be beneficial). Let us say that we selected candidate $a$ in $t_a$ rounds and candidate $b$ in $t_b$ rounds. Observe that $|R| = 1 + t_a + t_b$. Then, for set $S_1 = \{1, 2, 3, 4, 5\}$, the summed satisfaction of voters 1, 2, and 3 would be equal | https://arxiv.org/abs/2505.22513v1 |
to $\mathrm{sat}_1(o'_R) + \mathrm{sat}_2(o'_R) + \mathrm{sat}_3(o'_R) = 3 + 3 t_a$. Since the satisfaction of each voter must be at least 4, we get that $t_a \geq 3$. On the other hand, the total satisfaction of the remaining voters is bounded by $\mathrm{sat}_4(o'_R) + \mathrm{sat}_5(o'_R) \leq 2 + t_a + 2 t_b = |R| + 1 + t_b = 6 + t_b$. In order to give a satisfaction guarantee of at least 4 to each of these voters, we need $t_b \geq 2$. However, if round 1 is in $R$ and $|R| = 5$, it is impossible to have both $t_a \geq 3$ and $t_b \geq 2$. Thus, $S_1$ (and any symmetric subset) is not a witness of an sCore violation.

For set $S_2 = \{2, 3, 4, 5, 6\}$, the total satisfaction of voters 2 and 3 under $o'_R$ can be expressed as $\mathrm{sat}_2(o'_R) + \mathrm{sat}_3(o'_R) = 2 + 2 t_a$. Thus, again we need $t_a \geq 3$ in order to guarantee a satisfaction of at least 4. For voters 4, 5, and 6, we have $\mathrm{sat}_4(o'_R) + \mathrm{sat}_5(o'_R) + \mathrm{sat}_6(o'_R) \leq 3 + t_a + 2 t_b = |R| + 2 + t_b = 7 + t_b$. This time we would need $t_b \geq 5$ in order to get a satisfaction guarantee of at least 4, which is clearly not possible if $t_a \geq 3$ and $|R| = 5$. Thus, $S_2$ (and any symmetric subset) is not a witness of an sCore violation, which concludes the case of $|S| = 5$.

Case 3, $|S| = 6$. Finally, consider the subset $S = \{1, 2, 3, 4, 5, 6\}$. Here, again, we can assume that round 1 is in $R$ and candidate $a$ is selected for $o'_R$ in that round. Let $t_a$ and $t_b$ be the respective numbers of times $a$ and $b$ are selected in the remaining rounds. Then, the summed satisfaction of voters 1, 2, and 3 is given by $\mathrm{sat}_1(o'_R) + \mathrm{sat}_2(o'_R) + \mathrm{sat}_3(o'_R) = 3 + 3 t_a$. To guarantee a satisfaction of at least 4 to all voters, we need $t_a \geq 3$. On the other hand, the summed satisfaction of voters 4, 5, and 6 can be bounded by $\mathrm{sat}_4(o'_R) + \mathrm{sat}_5(o'_R) + \mathrm{sat}_6(o'_R) \leq 3 + t_a + 2 t_b = |R| + 2 + t_b = 8 + t_b$. A satisfaction of at least 4 for all voters would imply that $t_b \geq 4$. However, $t_a \geq 3$ and $t_b \geq 4$ cannot both be met when $|R| = 6$, thus $S$ is not a witness that sCore is violated.

Since we have exhaustively shown that there is no witness to an sCore violation, $o_6$ indeed provides sCore. Finally, let us show that wEJR+ is violated. To this end, take $S = \{1, 2, 3, 4, 5, 6\}$ and observe that it is $(4, \ell)$-cohesive. Indeed, in every round, candidate $a$ is approved by at least 4 voters from $S$. Then, wEJR+ requires that either at least one voter in $S$ has a satisfaction of at least $\lfloor \ell \cdot 4/n \rfloor = \lfloor 7 \cdot 4/7 \rfloor = 4$ or candidate $a$ is selected in the first round. Neither of these conditions is satisfied by $o_6$, which means that wEJR+ is indeed violated.

Claim vii: sFPJR does not imply wFJR even for $\mathcal{E}_{=1}$

Let $E_7 = (C, N, \ell, A) \in \mathcal{E}_{=1}$ be a temporal election with 2 candidates $C = \{a, b\}$, 3 voters $N = [3]$, $\ell = 3$ rounds, and the ballots of voters as presented in Table 8.

| Round | Voter 1 | Voter 2 | Voter 3 |
|---|---|---|---|
| 1 | b | a | a |
| 2 | a | b | a |
| 3 | a | a | b |

Table 8: The ballots of voters in the proof of Proposition F.1, Claim vii. | https://arxiv.org/abs/2505.22513v1 |
Consider outcome $o_7 = (b, b, b)$. First, let us show that $o_7$ provides sFPJR. We notice that each voter has a satisfaction of 1 from $o_7$, and voter $i \in N$ approves the winning candidate in round $i$. Therefore, for any subset of voters $S$ we find that $\mathrm{sat}_S(o_7) = |S|$. Now, let $o'$ be an outcome and $R \subseteq [\ell]$ be a subset of rounds such that $o'_R$ witnesses a violation of sFPJR. This means that there must exist a subset of voters $S$ such that $\mathrm{sat}_S(o_7) < \mathrm{sat}_i(o'_R)$ for every voter $i \in S$. Given that $n = \ell$, we find that $\lfloor \ell \cdot |S|/n \rfloor = |S|$, but this gives rise to a contradiction because for every voter $i \in S$, $\mathrm{sat}_i(o'_R) \leq |R| = \lfloor \ell \cdot |S|/n \rfloor = |S| = \mathrm{sat}_S(o_7)$. Therefore, no outcome $o'$ and subset of rounds $R$ exist that can witness a violation of sFPJR, so $o_7$ satisfies sFPJR.

It remains to show that $o_7$ violates wFJR. To see this, consider the voter subset $S = N$. Given that $|S| = n$, it follows that $\lfloor \ell \cdot |S|/n \rfloor = \ell = 3$, so for any subset of rounds $R$ of an outcome that could witness a violation of wFJR, $|R| = \lfloor \ell \cdot |S|/n \rfloor = 3$. As the only subset of 3 rounds of $[\ell]$ is $[\ell]$ itself, it suffices to show that there exists an outcome $o'$ wherein $\min_{j \in S} \mathrm{sat}_j(o') > 1$, since $\mathrm{sat}_i(o_7) = 1$ for every voter $i \in N$. However, $o' = (a, a, a)$ is exactly this required outcome, given that $\mathrm{sat}_i(o') = 2$ for every voter $i \in N$, which completes the proof.

Claim viii: sFPJR does not imply wEJR+ even for $\mathcal{E}_{=1}$

Let $E_8 = (C, N, \ell, A) \in \mathcal{E}_{=1}$ be a temporal election with 3 candidates $C = \{a, b, c\}$, 4 voters $N = [4]$, $\ell = 4$ rounds, and the ballots of voters as presented in Table 9.

| Round | Voter 1 | Voter 2 | Voter 3 | Voter 4 |
|---|---|---|---|---|
| 1 | a | a | a | c |
| 2 | b | a | a | c |
| 3 | a | b | a | c |
| 4 | a | a | b | c |

Table 9: The ballots of voters in the proof of Proposition F.1, Claim viii.

Consider outcome $o_8 = (c, b, b, b)$. First, let us show that it satisfies sFPJR. According to sFPJR, for every subset of voters $S \subseteq N$, every subset of rounds $R \subseteq [\ell]$ with $|R| = \lfloor \ell \cdot |S|/n \rfloor$, and every suboutcome $o'_R$, subset $S$ should receive a satisfaction of at least $\min_{i \in S} \mathrm{sat}_i(o'_R)$. Observe that for every voter $i \in N$, suboutcome $o'_R$ can give $i$ a satisfaction of at most $|R|$. Hence, $\min_{i \in S} \mathrm{sat}_i(o'_R) \leq |R|$. On the other hand, since in every round of outcome $o_8$ a different voter approves the selected candidate, for every subset of voters $S \subseteq N$ we have $\mathrm{sat}_S(o_8) = |S|$. Combining both inequalities, we get that for every subset $S \subseteq N$ it holds that $\min_{i \in S} \mathrm{sat}_i(o'_R) \leq |R| = \lfloor \ell \cdot |S|/n \rfloor = |S| = \mathrm{sat}_S(o_8)$. Thus, $o_8$ indeed satisfies sFPJR.

Now, let us show that $o_8$ fails wEJR+. To this end, take the subset of voters $S = \{1, 2, 3\}$ and consider round 1. Observe that $\bigcap_{i \in S} a_{i,1} = \{a\} \neq \emptyset$. Moreover, $S$ is $(2, \ell)$-cohesive, as in every round at least 2 voters from $S$ approve candidate $a$. Thus, wEJR+ requires that either a candidate from $\bigcap_{i \in S} a_{i,1}$ is selected in the first round or there is a voter $i \in S$ with a satisfaction of at least $\lfloor \ell \cdot 2/n \rfloor = \lfloor 4 \cdot 2/4 \rfloor = 2$. However, $\mathrm{sat}_i(o_8) = 1$ for every $i \in S$ and $a \neq o_1$, so neither requirement is satisfied. Thus, wEJR+ is violated.

Claim ix: sFPJR does not imply EJR | https://arxiv.org/abs/2505.22513v1 |
even for $\mathcal{E}_{=1}$

Let $E_9 = (C, N, \ell, A) \in \mathcal{E}_{=1}$ be a temporal election with 2 candidates $C = \{a, b\}$, 8 voters $N = [8]$, $\ell = 8$ rounds, and the ballots of voters as presented in Table 10.

| Round | V1 | V2 | V3 | V4 | V5 | V6 | V7 | V8 |
|---|---|---|---|---|---|---|---|---|
| 1 | b | a | a | a | b | b | b | b |
| 2 | a | b | a | a | b | b | b | b |
| 3 | a | a | b | a | b | b | b | b |
| 4 | a | a | a | b | b | b | b | b |
| 5 | a | a | a | a | b | b | b | b |
| 6 | a | a | a | a | b | b | b | b |
| 7 | a | a | a | a | b | b | b | b |
| 8 | a | a | a | a | b | b | b | b |

Table 10: The ballots of voters in the proof of Proposition F.1, Claim ix.

Consider outcome $o_9 = (b, b, b, b, b, b, b, b)$. First, let us show that $o_9$ provides sFPJR. Given that voters 5, 6, 7 and 8 approve the winner in every round of $o_9$, none of these voters can be more satisfied by another outcome, so any subset of voters containing one of these four voters cannot witness a violation of sFPJR. It therefore remains to check that no subset of voters $S \subseteq \{1, 2, 3, 4\}$ can violate sFPJR. Observe that for each voter $i \in \{1, 2, 3, 4\}$, $\mathrm{sat}_i(o_9) = 1$, and for any two voters in the subset $\{1, 2, 3, 4\}$, there exists no round in outcome $o_9$ wherein both voters are satisfied by the winning candidate. Therefore, for any subset of voters $S \subseteq \{1, 2, 3, 4\}$, we find that $\mathrm{sat}_S(o_9) = |S|$.

Now assume that there exists an outcome $o'$ that witnesses a violation of sFPJR. We would therefore have that for some subset of voters $S \subseteq \{1, 2, 3, 4\}$, $\mathrm{sat}_S(o_9) < \mathrm{sat}_i(o'_R)$ for each $i \in S$, where $|R| = \lfloor \ell \cdot |S|/n \rfloor = |S|$ given that $n = \ell$. However, this is impossible because $\mathrm{sat}_i(o'_R) \leq |R| = |S| = \mathrm{sat}_S(o_9)$ for every voter $i \in S$. Therefore, no outcome can bear witness to a violation of sFPJR, and we deduce that $o_9$ satisfies sFPJR.

To show that $o_9$ violates EJR, we consider the subset of voters $S = \{1, 2, 3, 4\}$, which agree on candidate $a$ in rounds 5 to 8, hence $t = 4$. Thus, EJR requires that at least one voter from $S$ should receive a satisfaction of at least $\lfloor t \cdot |S|/n \rfloor = \lfloor 4 \cdot 4/8 \rfloor = 2$. However, for every voter $i \in S$ we find that $\mathrm{sat}_i(o_9) = 1$, thereby violating EJR.

Claim x: sFPJR does not imply wEJR for $\mathcal{E}_{\geq 1}$

Let $E_0 = (C, N, \ell, A) \in \mathcal{E}_{\geq 1}$ be a temporal election with 5 candidates $C = \{a, b, c, d, e\}$, 6 voters $N = [6]$, $\ell = 6$ rounds, and the ballots of voters as presented in Table 11.

| Voter | Round 1 | Round 2 | Round 3 | Rounds 4–6 |
|---|---|---|---|---|
| 1 | $\{a, b, c\}$ | $\{a, b, d\}$ | $\{a, c, d\}$ | $\{b, c, d, e\}$ |
| 2 | $\{a, b, c\}$ | $\{a, b, d\}$ | $\{a, c, d\}$ | $\{b, c, d, e\}$ |
| 3 | $\{a, b, c\}$ | $\{a, b, d\}$ | $\{a, c, d\}$ | $\{b, c, d, e\}$ |
| 4 | $\{a, b, c\}$ | $\{a, b, d\}$ | $\{a, c, d\}$ | $\{b, c, d, e\}$ |
| 5 | $\{a, b, c\}$ | $\{a, b, d\}$ | $\{a, c, d\}$ | $\{b, c, d, e\}$ |
| 6 | $\{a, b, c\}$ | $\{a, b, d\}$ | $\{a, c, d\}$ | $\{b, c, d, e\}$ |

Table 11: The ballots of voters in the proof of Proposition F.1, Claim x. | https://arxiv.org/abs/2505.22513v1 |
Consider outcome $o_0 = (b, c, d, e, e, e)$. First, let us show that $o_0$ provides sFPJR. Given that voters 4, 5 and 6 approve the winner in every round of $o_0$, none of these voters can be more satisfied by another outcome, so any subset of voters containing one of these three voters cannot witness a violation of sFPJR. It therefore remains to check that no subset of voters $S \subseteq \{1, 2, 3\}$ can violate sFPJR. Given that $\mathrm{sat}_i(o_0) = 2$ for every voter $i \in \{1, 2, 3\}$, any outcome $o'$ that could violate sFPJR would have to provide a satisfaction of at least 3 to every voter. This would require us to find a suboutcome that contains at least 3 rounds (i.e., $o'_R$ where $|R| \geq 3$), but as $n = \ell$, this ensures that $|S| \geq 3$, since $|R| = \lfloor \ell \cdot |S|/n \rfloor = |S|$. Therefore, the only subset of voters that we need to check is $S = \{1, 2, 3\}$. However, given that $\mathrm{sat}_S(o_0) = 3$ and any suboutcome of 3 rounds can provide a satisfaction of at most 3 to any voter, $S$ is not a witness of an sFPJR violation. Since there is no such witness, $o_0$ provides sFPJR.

It remains to show that $o_0$ violates wEJR. To see this, we again consider the subset of voters $S = \{1, 2, 3\}$. Every voter in $S$ agrees on candidate $a$ in all 6 rounds, hence in order for $o_0$ to satisfy wEJR, there must exist a voter $i \in S$ such that $\mathrm{sat}_i(o_0) \geq \lfloor \ell \cdot |S|/n \rfloor = \lfloor 6 \cdot 3/6 \rfloor = 3$. However, each voter $i \in S$ only receives a satisfaction of 2 from $o_0$, hence $o_0$ violates wEJR. | https://arxiv.org/abs/2505.22513v1 |
Evaluating Supervised Learning Models for Fraud Detection: A Comparative Study of Classical and Deep Architectures on Imbalanced Transaction Data

Chao Wang¹, Department of Computer Science, Rice University, Houston, United States, cw104@rice.edu
Chuanhao Nie¹, College of Computing, Georgia Institute of Technology, Atlanta, United States, cnie30@gatech.edu
Yunbo Liu¹,*, Department of Electrical and Computer Engineering, Duke University, Durham, United States. *Corresponding author: yunbo.liu954@duke.edu
¹ These authors contributed equally to this work.

Abstract: Fraud detection remains a critical task in high-stakes domains such as finance and e-commerce, where undetected fraudulent transactions can lead to significant economic losses. In this study, we systematically compare the performance of four supervised learning models—Logistic Regression, Random Forest, Light Gradient Boosting Machine (LightGBM), and a Gated Recurrent Unit (GRU) network—on a large-scale, highly imbalanced online transaction dataset. While ensemble methods such as Random Forest and LightGBM demonstrated superior performance in both overall and class-specific metrics, Logistic Regression offered a reliable and interpretable baseline. The GRU model showed strong recall for the minority fraud class, though at the cost of precision, highlighting a trade-off relevant for real-world deployment. Our evaluation emphasizes not only weighted averages but also per-class precision, recall, and F1-scores, providing a nuanced view of each model's effectiveness in detecting rare but consequential fraudulent activity. The findings underscore the importance of choosing models based on the specific risk tolerance and operational needs of fraud detection systems.

Keywords: Supervised Learning, Machine Learning & Deep Learning, Class Imbalance, Online Transactions, Fraud Detection

I. INTRODUCTION

Fraud continues to be a pervasive issue across industries such as banking, insurance, and healthcare, inflicting not only financial damage but also reputational harm [1]. In financial services alone, losses from fraudulent transactions reach into the billions annually, prompting a growing reliance on automated detection technologies. Traditional systems built on static rules and manual screening are increasingly inadequate—they often fail to detect emerging fraud strategies and tend to generate high false-positive rates, burdening investigative teams and delaying legitimate transactions.

In recent years, machine learning (ML) has taken center stage in the fight against fraud. Unlike rule-based systems, ML algorithms can adapt to changing behavior and uncover intricate patterns in transactional data. Among supervised approaches, models like Logistic Regression, Random Forests, and LightGBM have become standard choices due to their ease of use and strong empirical performance. However, a persistent challenge remains: class imbalance. Fraud cases typically account for only a tiny fraction of transactions, making it difficult for models to learn meaningful decision boundaries and causing traditional metrics like accuracy to overstate performance. To address this, researchers have turned to deep learning, particularly models that can process sequential or tokenized input formats. In this work, we explore the use of a Gated Recurrent Unit (GRU) network as a representative deep architecture capable of capturing complex temporal or categorical dependencies.
While deep models offer the potential to discover patterns missed by conventional methods, they often require more data and greater tuning effort, and come with trade-offs in interpretability and latency—factors that are critical in high-stakes domains like fraud prevention. | https://arxiv.org/abs/2505.22521v1 |
This study offers a comprehensive comparison of four supervised learning models—Logistic Regression, Random Forest, LightGBM, and GRU—applied to a large-scale, highly imbalanced online transaction dataset. Our emphasis is not only on overall model accuracy but also on each model's ability to detect fraudulent transactions specifically, which are rare but operationally vital to capture. We highlight how performance varies across models and discuss practical considerations such as false alarm rates, detection sensitivity, and resource implications. Our contributions include:

• A comparative analysis of classical machine learning and deep learning models for fraud detection, with detailed evaluation of both weighted and minority-class metrics.
• An examination of trade-offs between fraud recall and precision, underscoring the importance of model selection based on operational goals.
• A discussion of the interpretability-performance balance, illustrating where simpler models may be preferred and when more complex architectures like GRUs provide added value.

The remainder of this paper is organized as follows. Section 2 outlines the dataset, feature engineering, and modeling workflow. Section 3 presents the empirical results with both aggregate and fraud-specific evaluation. Section 4 offers a discussion on practical implications, limitations, and potential extensions. Section 5 concludes with key takeaways and directions for future research.

II. METHODS

A. Dataset and Preprocessing

In this project, we employed the publicly available IEEE-CIS Fraud Detection dataset, which mimics a real-world e-commerce transaction environment and was published in collaboration with Vesta Corporation. The dataset contains over 590,000 anonymized online payment transactions, each labeled as fraudulent or legitimate. This dataset includes a wide range of structured features designed to mimic real-world fraud detection scenarios. It contains transaction-level details such as TransactionAmt (transaction amount) and ProductCD (product type), as well as card1 through card6 (various card identifiers). In addition, it captures user and account metadata, including address-related fields like addr1 and addr2, as well as email domains such as P_emaildomain and R_emaildomain. The dataset also incorporates device and digital identity signals, including DeviceType, DeviceInfo, and over 30 anonymized id_ features that reflect browser versions, operating systems, and other behavioral fingerprints. Together, these features provide a comprehensive foundation for training models to identify fraudulent transaction patterns. A binary label (isFraud) indicates whether a given transaction was fraudulent. Benchmarking [2] provides standardized datasets and methodologies, which help to assess the effectiveness of different models under controlled conditions. Fraudulent transactions account for a small portion of the dataset (less than 4%), reflecting the inherent class imbalance found in real-world fraud detection problems. To ensure realistic evaluation and training integrity, we used the official train-test split from the Amazon Fraud Dataset Benchmark [3], resulting in 561,013 samples for training and 29,527 samples for testing.
In our project, the training data was further divided into 80% training and 20% validation subsets (448,810 and 112,203 samples respectively) using stratified sampling to maintain the class distribution. Feature dimensionality after preprocessing was standardized at 74 columns. For preprocessing, categorical features such as ProductCD, card1–card6, DeviceType, and email domains were label-encoded. Timestamp variables were converted into interpretable features such as hour of day, weekday, and month. Missing values were imputed using a constant-fill strategy, and irrelevant identifiers such as transaction IDs and entity IDs were removed. All processing was applied uniformly across training, validation, and test splits. | https://arxiv.org/abs/2505.22521v1 |
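A minimal sketch of the preprocessing steps described above follows. The constant fill value (-999) and the use of scikit-learn's LabelEncoder are our assumptions; the paper does not specify them. TransactionDT is the dataset's raw time offset in seconds.

```python
# Sketch of the described preprocessing: drop identifiers, derive time
# features, label-encode categoricals, constant-fill missing values.
import pandas as pd
from sklearn.preprocessing import LabelEncoder

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    df = df.drop(columns=["TransactionID"], errors="ignore")  # remove identifiers
    # Interpretable time features from the raw timestamp offset (seconds).
    df["hour"] = (df["TransactionDT"] // 3600) % 24
    df["weekday"] = (df["TransactionDT"] // (3600 * 24)) % 7
    # Label-encode categorical columns (in practice, fit encoders on the
    # training split and reuse them on validation/test to stay consistent).
    for col in df.select_dtypes(include="object").columns:
        df[col] = LabelEncoder().fit_transform(df[col].astype(str))
    return df.fillna(-999)  # constant-fill imputation for missing values
```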
B. Machine Learning Models

To systematically examine the trade-offs between model complexity, interpretability, and fraud detection performance under class imbalance, we evaluated three supervised learning algorithms: Logistic Regression and tree-based models, namely Random Forest (a bagging technique) and the Light Gradient Boosting Machine (LightGBM, a boosting technique). These models represent a spectrum of approaches—from transparent linear classification to sophisticated tree-based ensembles—and were selected to assess how well different algorithmic families adapt to the challenges of real-world fraud detection [4,5,6]. For each model, we conducted hyperparameter tuning using grid search over relevant parameter spaces, with the primary evaluation metric being the weighted F1 score on a held-out validation set.

Logistic Regression

Logistic Regression [7] serves as a baseline model with a linear classification boundary, offering high interpretability and ease of implementation. It estimates the likelihood of a transaction being fraudulent as a logistic function of a weighted sum of the input features. Although it lacks the flexibility to capture complex feature interactions or nonlinear decision boundaries, its transparency is advantageous in regulated domains like finance. To account for the class imbalance in our data, we enabled the class_weight='balanced' option, which adjusts the model's loss function to penalize misclassification of minority-class instances more heavily. We performed grid search over regularization strength (C), solver type (lbfgs or liblinear), and class weighting to identify the configuration that maximized the F1 score on the validation set.

Decision Trees

Decision Trees build a hierarchy of if-then rules by recursively partitioning the data to minimize node impurity, for example using the Gini index or information gain. They are capable of modeling nonlinear relationships and are highly interpretable, as the decision path for any prediction is explicitly defined. However, standalone trees are prone to overfitting, especially in high-dimensional datasets or when class distributions are skewed. In preliminary experiments, a single tree demonstrated poor generalization on the validation set, leading us to exclude it from further evaluation in favor of ensemble methods.

Random Forest: Bagging Techniques

Random Forest [8] is an ensemble learning method that constructs multiple decision trees using bootstrapped subsets of the data and aggregates their outputs via majority voting. This strategy reduces overfitting and enhances model stability while preserving the tree model's capacity to handle nonlinear interactions and heterogeneous feature types. We used the class_weight='balanced' parameter when training to ensure that the model gave appropriate emphasis to the minority (fraudulent) class. A grid search over key hyperparameters, such as tree depth, number of estimators, and minimum leaf size, was performed to optimize performance.
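The tuning protocol for the classical models can be sketched as below. The parameter grids shown are illustrative assumptions, not the exact grids used in the paper, and X_train/y_train are assumed to come from the preprocessing step; the paper tuned on a fixed validation split, so the 3-fold cross-validation here is a simplification.

```python
# Illustrative grid search for the classical models, scored by weighted F1
# as described in the paper.
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

candidates = [
    ("logreg", LogisticRegression(max_iter=1000), {
        "C": [0.01, 1, 100],
        "solver": ["lbfgs", "liblinear"],
        "class_weight": [None, "balanced"],
    }),
    ("random_forest", RandomForestClassifier(), {
        "n_estimators": [100, 200],
        "max_depth": [10, None],
        "min_samples_leaf": [1, 4],
        "class_weight": [None, "balanced"],
    }),
]

best = {}
for name, model, grid in candidates:
    search = GridSearchCV(model, grid, scoring="f1_weighted", cv=3, n_jobs=-1)
    search.fit(X_train, y_train)  # preprocessed training split (assumed)
    best[name] = search.best_estimator_
```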
Light Gradient Boosting Machine: Boosting Techniques

LightGBM [9,10] is a high-efficiency gradient boosting framework designed for scalable learning on large, sparse datasets. Unlike Random Forest, which builds trees in parallel, LightGBM grows trees sequentially, where each new tree is trained to correct the residuals of the ensemble built so far. It employs a leaf-wise tree growth strategy and histogram-based feature binning, which accelerates training and reduces memory usage. This model handles class imbalance through its gradient-based optimization and splitting heuristics. Hyperparameter tuning was performed over key settings such as the number of estimators, learning rate, maximum tree depth, number of leaves, and minimum child samples. The selected configuration optimized performance while maintaining computational efficiency. | https://arxiv.org/abs/2505.22521v1 |
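For concreteness, training LightGBM with the configuration later reported in Table 1 looks roughly like the following sketch, assuming the scikit-learn-style API of the lightgbm package and the same X_train/y_train splits as above.

```python
# LightGBM with the Table 1 configuration; eval_set monitors the validation
# split during boosting.
from lightgbm import LGBMClassifier

lgbm = LGBMClassifier(
    n_estimators=100,   # number of boosting rounds
    learning_rate=0.1,
    max_depth=10,
    num_leaves=50,      # leaf-wise growth is capped by this budget
)
lgbm.fit(X_train, y_train, eval_set=[(X_val, y_val)])
fraud_scores = lgbm.predict_proba(X_test)[:, 1]  # P(isFraud = 1)
```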
All supervised models were trained on the labeled training set, and hyperparameters were tuned using the validation set. Oversampling of the minority class was applied to mitigate imbalance effects.

C. Deep Learning Models

To complement the classical machine learning methods [11,12,13], we implemented a neural architecture capable of capturing complex patterns in sequence-based or tokenized categorical features: a recurrent network (GRU). It is fine-tuned for binary fraud classification, with performance evaluated using the weighted F1 score.

Gated Recurrent Unit (GRU)

The GRU model was implemented in PyTorch and consisted of an embedding layer, a GRU layer, and a fully connected output layer. It was designed to capture sequential dependencies across tokenized categorical features. To mitigate overfitting and handle class imbalance, dropout regularization was applied and a weighted cross-entropy loss was used, with class weights derived from the training distribution. Hyperparameter tuning was performed through random sampling across 10 iterations, exploring embedding dimensions (150–250), hidden units (256–768), learning rates (10⁻⁴–10⁻³), and training epochs (5–10). Each sampled configuration was trained using the Adam optimizer and evaluated on the validation set, with the weighted F1-score serving as the primary selection criterion. The GRU demonstrated the ability to learn from tokenized categorical inputs, offering complementary strengths to tree-based classifiers in fraud detection under imbalanced conditions.
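A sketch of the described GRU classifier is shown below: embedding, GRU, and linear head with dropout and class-weighted cross-entropy. The layer sizes follow the tuned configuration reported later (embedding 184, hidden 502, learning rate 1.47 × 10⁻⁴); the vocabulary size and dropout rate are our assumptions, and y_train is assumed to be the array of training labels.

```python
# Embedding -> GRU -> linear head, trained with weighted cross-entropy.
import torch
import torch.nn as nn

class FraudGRU(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 184, hidden: int = 502):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden, batch_first=True)
        self.drop = nn.Dropout(0.3)           # assumed dropout rate
        self.head = nn.Linear(hidden, 2)      # logits for {legit, fraud}

    def forward(self, tokens):                # tokens: (batch, seq_len) int ids
        _, h = self.gru(self.embed(tokens))   # h: (1, batch, hidden)
        return self.head(self.drop(h[-1]))    # use final hidden state

# Class weights derived from the training label distribution, as described.
weights = torch.tensor([1.0, float((y_train == 0).sum() / (y_train == 1).sum())])
criterion = nn.CrossEntropyLoss(weight=weights)
model = FraudGRU(vocab_size=5000)             # assumed vocabulary size
optimizer = torch.optim.Adam(model.parameters(), lr=1.47e-4)
```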
D. Evaluation Metrics

Given the severe class imbalance inherent in fraud detection, overall accuracy is often misleading, as a model biased toward the majority class can achieve high accuracy while failing to detect fraudulent cases. To provide a more meaningful evaluation, we assessed all models using four standard metrics: precision, recall, F1-score, and Area Under the Receiver Operating Characteristic Curve (AUC-ROC). Precision measures the proportion of predicted fraud cases that are actually fraudulent, while recall indicates the proportion of true fraud cases successfully identified. In the context of this study, recall is particularly important, as failing to detect a fraudulent transaction (a false negative) can carry high financial and security risks. At the same time, precision remains operationally relevant, as a high false-positive rate can overwhelm fraud analysts and reduce trust in the system. To balance these competing priorities, we relied on the F1-score as the primary model selection criterion, as it synthesizes both precision and recall into a single, interpretable metric. AUC-ROC was used to assess the overall discriminative ability of each model, capturing performance across varying decision thresholds. This evaluation framework was applied consistently across both traditional and deep learning models, ensuring comparability under imbalanced classification conditions. | https://arxiv.org/abs/2505.22521v1 |
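These metrics can all be computed with scikit-learn; a short sketch, assuming hard predictions y_pred and fraud-class scores y_score on the test set:

```python
# Per-class and weighted precision/recall/F1, plus AUC-ROC.
from sklearn.metrics import classification_report, roc_auc_score

print(classification_report(y_test, y_pred, target_names=["legit", "fraud"]))
print("AUC-ROC:", roc_auc_score(y_test, y_score))  # threshold-independent
```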
III. RESULTS

A. Hyperparameter Selection

For each supervised learning model, we conducted systematic hyperparameter tuning to identify configurations that optimized performance under class imbalance. Model-specific hyperparameters were selected based on the validation weighted F1-score, which balances precision and recall—two metrics critical in the context of fraud detection. Grid search was employed for the machine learning models as indicated in Section II. For Logistic Regression, we varied the regularization strength (C), solver type (liblinear vs. lbfgs), and class weighting (None vs. balanced). For Random Forest, the search space included the number of trees, maximum depth, and minimum leaf size, in addition to class weighting. LightGBM parameters were selected by adjusting the number of estimators, learning rate, tree depth, and number of leaves. In addition to tuning the classical models, we conducted a random hyperparameter search for the deep learning architecture to optimize its performance under the same evaluation framework. Due to the higher computational cost, a smaller number of configurations was explored, with parameter ranges tailored to the architecture type. The optimal configuration selected from the GRU random search consisted of an embedding dimension of 184, a hidden layer size of 502, a learning rate of 1.47 × 10⁻⁴, and 8 training epochs. This setup achieved the best validation F1-score during tuning and was used for final evaluation on the test set. The optimal settings for each model are summarized in Table 1. This tuning process ensured fair comparison across models and robust evaluation under the skewed class distribution typical of fraud detection tasks.

Table 1: Optimal Hyperparameter Configurations

| Model | Key Hyperparameters |
|---|---|
| Logistic Regression | C=100, solver=liblinear, penalty=l2, class_weight=None |
| Random Forest | n_estimators=200, max_depth=None, min_samples_leaf=4, class_weight=balanced |
| LightGBM | n_estimators=100, lr=0.1, max_depth=10, num_leaves=50, class_weight=None |
| GRU | embedding dimension = 184; hidden units = 502; learning rate = 1.47 × 10⁻⁴; epochs = 8 |

B. Model Performance

Table 2 summarizes the weighted average precision, recall, and F1-score for each model on the held-out test set, alongside AUC and 95% bootstrap confidence intervals. These metrics reflect aggregate performance across both classes, with weighting based on class support.

Table 2: Overall Model Performance

| Model | Precision (weighted) | Recall (weighted) | F1-Score (weighted) | AUC (95% CI) |
|---|---|---|---|---|
| Logistic Regression | 0.95 | 0.96 | 0.95 | 0.82 (0.81, 0.84) |
| Random Forest | 0.97 | 0.97 | 0.97 | 0.91 (0.90, 0.92) |
| LightGBM | 0.97 | 0.97 | 0.97 | 0.90 (0.89, 0.91) |
| GRU | 0.95 | 0.92 | 0.93 | 0.84 (0.83, 0.86) |

Random Forest and LightGBM achieved the highest overall scores, each recording a weighted F1-score of 0.97 and AUCs of 0.91 and 0.90, respectively. Their strong performance is attributed to their capacity to model complex feature interactions and adapt well to structured, imbalanced data. Logistic Regression, while simpler and linear, performed competitively with a weighted F1-score of 0.95 and an AUC of 0.82, offering an interpretable and computationally efficient baseline. The GRU model, a recurrent neural network trained on tokenized categorical sequences, achieved a weighted F1-score of 0.937 and an AUC of 0.84.
Despite its relatively compact architecture and smaller hyperparameter tuning budget, the GRU demonstrated solid generalization across classes and competitive overall performance. | https://arxiv.org/abs/2505.22521v1 |
All models showed high weighted precision and recall, largely due to their accuracy on the majority class (non-fraud). However, these aggregate metrics can mask performance disparities on the minority class, which is critical in fraud detection. Table 3 provides a focused comparison of each model's precision, recall, and F1-score for the minority (fraud) class, which comprised only 3.96% of the test data. These metrics offer a clearer view of how well each model handled the core challenge of fraud detection.

Table 3: Minority Class (Fraud) Performance

| Model | Precision (Fraud) | Recall (Fraud) | F1-Score (Fraud) |
|---|---|---|---|
| Logistic Regression | 0.69 | 0.10 | 0.18 |
| Random Forest | 0.79 | 0.40 | 0.53 |
| LightGBM | 0.76 | 0.38 | 0.50 |
| GRU | 0.28 | 0.54 | 0.37 |

While the GRU achieved the highest recall (0.54), capturing more actual fraud cases, it did so at the cost of precision (0.28), indicating a relatively high false-positive rate. Logistic Regression, by contrast, maintained high precision (0.69) but detected only 10% of true frauds, limiting its practical utility in fraud detection. Random Forest and LightGBM offered a more favorable balance between precision and recall, each achieving F1-scores near 0.50 on the fraud class. This contrast highlights the limitations of relying solely on weighted metrics in imbalanced classification. From a fraud detection standpoint, where missing fraudulent transactions may carry significant consequences, models like the GRU or Random Forest may be preferred over more conservative classifiers like Logistic Regression.

IV. DISCUSSION

A. Interpretation of Findings

This study systematically evaluated four supervised models—Logistic Regression, Random Forest, LightGBM, and a GRU-based neural network—on a highly imbalanced fraud detection dataset. The results underscore a clear performance gap between simple linear models and more complex architectures. Both Random Forest and LightGBM achieved top-tier performance in terms of overall weighted metrics and minority-class detection, confirming their ability to capture non-linear interactions and detect rare fraud patterns. Logistic Regression, while computationally efficient and interpretable, struggled with the imbalance, achieving high precision but extremely low recall on the fraud class. This indicates that it correctly identified a small number of fraud cases with confidence, but missed the majority. Its poor recall underscores the difficulty linear models face when fraudulent behavior does not manifest as linearly separable patterns.

The GRU, in contrast, demonstrated stronger fraud recall (0.54) than Logistic Regression and a comparable F1-score to the tree-based models, despite being trained on tokenized inputs with less handcrafted feature engineering. This highlights the GRU's ability to learn sequential or contextual representations from categorical fields. However, its relatively low fraud precision (0.28) suggests a higher false-positive rate—an important trade-off to consider depending on operational constraints. All models demonstrated high weighted F1-scores, largely driven by strong performance on the majority class. This reinforces the importance of looking beyond aggregate metrics and evaluating class-specific performance when working with imbalanced data. In our case, fraud recall was particularly prioritized, given the high cost of undetected fraudulent transactions.
B. Practical Implications

Our findings offer several practical insights for deploying fraud detection systems:

• Logistic Regression remains a viable option in regulated environments or audit-sensitive pipelines where model transparency is critical. However, it is best suited for environments with well-engineered features and relatively balanced data.
• Random Forest and LightGBM demonstrated strong out-of-the-box performance, scalability, and robustness under class imbalance. LightGBM, in particular, offers fast training and inference, making it well-suited for real-time fraud monitoring systems.
• The GRU model shows promise in learning from minimally processed, tokenized data. This can be advantageous in systems where raw categorical sequences are abundant but manually engineered features are sparse. With further tuning or hybridization (e.g., attention mechanisms), its practical utility may improve.

Given the trade-offs between precision and recall observed across all models, threshold adjustment, cost-sensitive training, or ensemble calibration strategies may be necessary to align model behavior with specific business objectives; a small sketch of threshold adjustment follows. | https://arxiv.org/abs/2505.22521v1 |
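The following illustrative sketch (ours, not from the paper) shows one form of threshold adjustment: picking the decision threshold on the validation set that maximizes fraud-class F1, instead of the default 0.5 cutoff.

```python
# Tune the fraud decision threshold on held-out validation scores.
import numpy as np
from sklearn.metrics import f1_score

def tune_threshold(y_val, val_scores):
    thresholds = np.linspace(0.05, 0.95, 19)
    f1s = [f1_score(y_val, val_scores >= t) for t in thresholds]
    return thresholds[int(np.argmax(f1s))]

# Usage: threshold = tune_threshold(y_val, model.predict_proba(X_val)[:, 1])
```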
C. Limitations and Future Works

This study has several limitations that warrant further exploration:

• The dataset, while rich and realistic, is public and partially anonymized, which may not fully reflect the diversity or evolving nature of fraud in proprietary financial systems.
• The deep learning component was limited to the GRU. Future work should investigate more advanced architectures such as transformer-based models (e.g., ALBERT, BERT), or hybrid models that integrate structured and unstructured features.
• While our evaluation focused on standard classification metrics, we did not incorporate cost-sensitive learning or business-impact-weighted metrics, which are critical in production environments where false positives and false negatives carry asymmetric costs.
• All models were trained in a static, batch-learning setting. In practice, fraud patterns change dynamically. Future research should consider online learning, continual learning, or adaptive drift-handling frameworks to better simulate real-time deployment scenarios.

REFERENCES

[1] E. W. T. Ngai, Y. Hu, Y. H. Wong, Y. Chen, and X. Sun, "The application of data mining techniques in financial fraud detection: A classification framework and an academic review of literature," Decision Support Systems, vol. 50, no. 3, pp. 559–569, Mar. 2011.
[2] Y. Xin, S. Luo, X. Liu, H. Zhou, X. Cheng, C. E. Lee, J. Du, H. Wang, M. Chen, T. Liu et al., "V-PETL Bench: A unified visual parameter-efficient transfer learning benchmark," Advances in Neural Information Processing Systems, vol. 37, pp. 80522–80535, 2024.
[3] P. Grover, J. Xu, J. Tittelfitz, A. Cheng, Z. Li, J. Zablocki, J. Liu, and H. Zhou, "Fraud dataset benchmark and applications," arXiv preprint arXiv:2208.14417, Aug. 2023. Available: https://arxiv.org/abs/2208.14417.
[4] Z. Ding, Z. Wang, Y. Zhang, Y. Cao, Y. Liu, X. Shen, Y. Tian, and J. Dai, "Trade-offs between machine learning and deep learning for mental illness detection on social media," Scientific Reports, 15 (14497), 2025.
[5] Y. Zhang, Z. Wang, Z. Ding, Y. Tian, J. Dai, X. Shen, Y. Liu, and Y. Cao, "Tutorial on using machine learning and deep learning models for mental illness detection," arXiv preprint arXiv:2502.04342, 2025.
[6] Z. Li, B. Wang, and Y. Chen, "A contrastive deep learning approach to cryptocurrency portfolio | https://arxiv.org/abs/2505.22521v1 |
with US Treasuries," Journal of Computer Technology and Applied Mathematics, vol. 1, no. 3, pp. 1–10, 2024.
[7] D. W. Hosmer and S. Lemeshow, Applied Logistic Regression, 2nd ed., New York, NY: John Wiley & Sons, Inc., 2000.
[8] L. Breiman, "Random forests," Machine Learning, vol. 45, no. 1, pp. 5–32, 2001.
[9] J. H. Friedman, "Greedy function approximation: A gradient boosting machine," Annals of Statistics, vol. 29, no. 5, pp. 1189–1232, 2001.
[10] G. Ke, Q. Meng, T. Finley, T. Wang, W. Chen, W. Ma, Q. Ye, and T.-Y. Liu, "LightGBM: A highly efficient gradient boosting decision tree," in Proc. 31st Int. Conf. Neural Information Processing Systems (NeurIPS), pp. 3149–3157, 2017.
[11] Y. Zhang, F. Wang, X. Huang, X. Li, S. Liu, and H. Zhang, "Optimization and application of cloud-based deep learning architecture for multi-source data prediction," arXiv preprint arXiv:2410.12642, Oct. 2024.
[12] X. Li, S. Liu, D. Yu, Y. Zhang, and X. Liu, "Predicting 30-day hospital readmission in Medicare patients: Insights from an LSTM deep learning model," in Proc. 2024 3rd Int. Conf. Cloud Comput., Big Data Appl. Softw. Eng. (CBASE), 2024, pp. 61–65, doi: 10.1109/CBASE64041.2024.10824316.
[13] L. Wang, C. Shi, S. Du, Y. Tao, Y. Shen, H. Zheng, and X. Qiu, "Performance review on LLM for solving LeetCode problems," in Proc. 2024 4th Int. Symp. Artificial Intelligence and Intelligent Manufacturing (AIIM), 2024, pp. 1050–1054, doi: 10.1109/AIIM64537.2024.10934280. | https://arxiv.org/abs/2505.22521v1 |
arXiv:2505.22525v1 [cs.CV] 28 May 2025

Thinking with Generated Images

Ethan Chern¹,⁴*† Zhulin Hu¹,⁴* Steffi Chern⁴* Siqi Kou¹ Jiadi Su³,⁴ Yan Ma³,⁴ Zhijie Deng¹ Pengfei Liu¹,²,⁴‡
¹Shanghai Jiao Tong University ²SII ³Fudan University ⁴Generative AI Research Lab (GAIR)
*Equal Contribution. †Partial work done at Bytedance Seed. ‡Corresponding author.

Abstract

We present Thinking with Generated Images, a novel paradigm that fundamentally transforms how large multimodal models (LMMs) engage with visual reasoning by enabling them to natively think across text and vision modalities through spontaneous generation of intermediate visual thinking steps. Current visual reasoning with LMMs is constrained to either processing fixed user-provided images or reasoning solely through text-based chain-of-thought (CoT). Thinking with Generated Images unlocks a new dimension of cognitive capability where models can actively construct intermediate visual thoughts, critique their own visual hypotheses, and refine them as integral components of their reasoning process. To implement Thinking with Generated Images, we introduce the native long-multimodal thought process, which enables unified LMMs to seamlessly generate intermediate visual thoughts, establish visual subgoals, and iteratively critique their visual hypotheses within a single, coherent reasoning process. This approach naturally performs test-time scaling across modalities. We demonstrate the effectiveness of our approach through two complementary mechanisms: (1) vision generation with intermediate visual subgoals, where models decompose complex visual tasks into manageable components that are generated and integrated progressively, and (2) vision generation with self-critique, where models generate an initial visual hypothesis, analyze its shortcomings through textual reasoning, and produce refined outputs based on their own critiques. Our experiments on vision generation benchmarks show substantial improvements over baseline approaches, with our models achieving up to 50% (from 38% to 57%) relative improvement in handling complex multi-object scenarios. Beyond these immediate performance gains, Thinking with Generated Images opens transformative possibilities for AI systems across diverse real-world applications. From biochemists exploring novel protein structures, and architects iterating on spatial designs, to forensic analysts reconstructing crime scenes, and basketball players envisioning strategic plays, our approach enables AI models to engage in the kind of visual imagination and iterative refinement that characterizes human creative, analytical, and strategic thinking. This work establishes a foundation for future research in multimodal cognition and complex visual reasoning tasks. We release our open-source suite at https://github.com/GAIR-NLP/thinking-with-generated-images.

Figure 1: Real-world tasks that require Thinking with Generated Images. These tasks often require visual foresight and imagination, which text-based thought alone cannot fully accomplish.

1 Introduction

"Thorough thought leads to victory; insufficient thought leads to defeat." —Sun Tzu

Human goal-oriented cognition (Simon and Newell, 1971; Newell, 2014; Xia et al., 2025) can be characterized as a goal-directed search through a problem space¹—a landscape of intermediate states connected by operators (action or thinking steps).
Large language models (LLMs) exhibit this same pattern: when prompted to write a chain-of-thought (CoT) (Wei et al., 2022), they traverse intermediate states, with performance improving with additional inference compute—a.k.a. test-time scaling (Brown et al., 2024; Snell et al., 2024). Yet, the text-only CoT process of LLMs captures only a partial view of cognitive search. Crucially, human cognition is natively multimodal. | https://arxiv.org/abs/2505.22525v1 |
Biochemists explore protein structures to discover new treatment approaches; forensic analysts verify crime scene reconstructions to establish evidence connections; architects revise spaces and light patterns to optimize building designs. Visual thinking creates unique combinations and novel connections between concepts (Hao et al., 2024; Barrault et al., 2024). This helps us glimpse new possibilities in our thinking—ideas and connections we wouldn't have discovered through text reasoning alone. An AI system that thinks or reasons only in the text modality is therefore constrained:² it cannot explore alternative therapeutic pathways as freely, verify criminal evidence at a glance, or revise design configurations in place³—capacities that underlie much of human creativity and analysis. Extending the thoughts of AI systems to multiple modalities unlocks new forms of interactivity: a model can generate not only intermediate text but also visual thoughts, decompose tasks into both text and visual subgoals, critique these intermediate steps, and iterate—ideally, all within a single, coherent thought process.

While standard large multimodal models (LMMs) or vision-language models (VLMs) can process visual information, these models are merely seeing images—processing them once during the forward pass rather than thinking with images more deeply. To address this limitation, previous works have explored leveraging multi-step or tool-augmented approaches (Zhang et al., 2023; Gupta and Kembhavi, 2023; Surís et al., 2023; Hu et al., 2024b; OpenAI, 2025; Su et al., 2025) to enable models to revisit or conduct simple operations on user-input images as intermediate steps toward solutions. However, these works on Thinking with Images only enable models to perform limited operations on images, constraining the thinking space of the model.

To establish a generalized, scalable model that can naturally think and reason across modalities while avoiding the limitations mentioned above, we propose the idea of "Thinking with Generated Images": an AI model (system) that spontaneously thinks and reasons across (e.g., text and vision) modalities. Rather than entirely relying on user-supplied images, which would constrain its thinking, the model proactively produces its own visual steps or subgoals toward solving problems when needed. Recent works have attempted to take the first steps toward Thinking with Generated Images, using either agentic (Guo et al., 2025; Fang et al., 2025; Ni et al., 2024) or single-model approaches (Li et al., 2025). We argue that improving AI system performance via agentic approaches is parallel to single-model approaches, but integrating various capabilities into a single, end-to-end generalist model represents the continued evolution toward superintelligence (Bubeck et al., 2023). More recent work (Li et al., 2025) attempted to leverage unified LMMs (models that can generate multimodal content simultaneously) to solve spatial reasoning tasks such as maze navigation. While taking a first step, their approach is limited to maze problem-solving, which is adequately solvable without visual intermediate steps (Dao and Vu, 2025). We argue that Thinking with Generated Images is best applied to complex tasks (as shown in Fig. 1), where the intermediate steps cannot be adequately represented in textual form alone. Thus, to demonstrate the potential of Thinking with Generated Images, in this paper, we focus on vision (image) | https://arxiv.org/abs/2505.22525v1 |
generation tasks. To effectively realize Thinking with Generated Images with an end-to-end, single-model approach and demonstrate its strong potential on vision generation tasks, we introduce the native long-multimodal thought process on unified autoregressive LMMs (Team, 2024a; Chern et al., 2024). The native long-multimodal thought process comprises interleaved multimodal tokens—words or subwords for text, patches for vision, frames for audio, and other domain-specific representations.⁴ It not only naturally enables models to spontaneously generate images in the thinking process, but also natively performs test-time scaling for better model capabilities; a toy illustration of such an interleaved sequence follows. | https://arxiv.org/abs/2505.22525v1 |
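The sketch below is our own illustration (not the paper's implementation) of what a native long-multimodal thought sequence could look like for an autoregressive unified LMM: text tokens and image-patch tokens interleaved in one stream, with sentinel tokens delimiting each generated visual thought. All token names here are hypothetical.

```python
# Toy interleaved thought stream: text tokens, then a generated visual draft,
# then a textual self-critique, then a refined visual thought.
BOI, EOI = "<begin_image>", "<end_image>"  # sentinels around visual thoughts

thought_stream = (
    ["The", "prompt", "needs", "a", "cat", "left", "of", "a", "dog", "."]
    + [BOI] + [f"<img_patch_{i}>" for i in range(1024)] + [EOI]   # visual draft
    + ["The", "cat", "is", "missing", ";", "regenerate", ":"]     # self-critique
    + [BOI] + [f"<img_patch_{i}>" for i in range(1024)] + [EOI]   # refined image
)
# Training and decoding treat this as one ordinary token sequence, so
# test-time scaling (longer thoughts, more visual drafts) falls out naturally.
```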
2023; Hu et al., 2024b; OpenAI, 2025). These approaches let the model revisit the (user-given) images multiple times or generate transformed versions (e.g., cropped, rotated, zoomed) as intermediate steps toward a solution. Thinking with images—rather than merely seeing them once—enables models to better tackle multi-step visual reasoning tasks such as spatial puzzles, complex visual question answering, and chart or diagram interpretation.

2.2 Thinking with Images vs. Thinking with "Generated" Images

Thinking with images alone is not sufficient—models are still constrained when limited to reasoning with fixed user-input images or slightly "transformed" versions of these images. The reasoning steps for tasks like design planning and physical-world reasoning would be lossy if they relied entirely on text and user-input image transformations. To attempt the first steps toward thinking with generated images, previous works have primarily relied on multi-component agentic approaches (Guo et al., 2025; Fang et al., 2025; Ni et al., 2024) that require multiple model passes across pipelined planning, (re)captioning, generation, critique, and reward components. Though demonstrating great potential, the complexity of these systems makes further generalization and scaling to diverse scenarios complicated. Additionally, established test-time scaling and post-training scaling methods like long-form CoT and reinforcement learning must be engineered separately for each module, a burden that adds unwarranted complexity. These limitations ultimately place a ceiling on the performance potential of such fragmented systems. More recent works (Li et al., 2025) have attempted single-model, end-to-end approaches. However, their approach limits its tasks to spatial reasoning such as maze problem-solving, where the maze itself and the intermediate states and steps are essentially discrete and can be entirely encoded with pure text descriptions (Dao and Vu, 2025). Such tasks are thus solvable without visual intermediate steps and don't fully unlock the potential of Thinking with Generated Images. To fully unlock the potential of Thinking with Generated Images, we focus on tasks that fundamentally require intermediate thoughts expressed in visual space. In this paper, we concentrate specifically on vision (image) generation, while demonstrating how our approach (the native long-multimodal thought process) enables models to inherently and deeply integrate vision into their reasoning processes. Unlike approaches that rely on fixed, user-input images, Thinking with Generated Images allows models to dynamically create and manipulate visual representations throughout their reasoning process. This fundamental shift unlocks new forms of interactivity and enables capabilities such as 3D modeling, visual foresight, design, and intuitive physics—domains where visual intermediate steps are not merely helpful but essential for effective problem-solving. A summary of the differences between our work and previous related works can be found in Fig. 3.

Figure 3: Comparison between related works on multimodal cognition.
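To make the contrast concrete, here is a minimal sketch of the shape of such a multi-component agentic pipeline; the component interfaces (planner, generator, critic) and the control flow are illustrative placeholders of ours, not the design of any specific system cited above.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Critique:
    acceptable: bool
    feedback: str

def agentic_visual_reasoning(
    prompt: str,
    planner: Callable[[str], str],
    generator: Callable[..., bytes],
    critic: Callable[[str, bytes], Critique],
    max_rounds: int = 3,
) -> bytes:
    """Pipeline sketch: every refinement round costs several separate model
    passes, and each module must be trained, tuned, and scaled on its own."""
    plan = planner(prompt)                        # pass 1: decompose the task
    image = generator(plan)                       # pass 2: render a candidate image
    for _ in range(max_rounds):
        review = critic(prompt, image)            # pass 3: critique the candidate
        if review.acceptable:
            break
        image = generator(plan, review.feedback)  # pass 4+: regenerate with feedback
    return image
```

The fragmentation this sketch exposes is exactly what a single end-to-end model avoids: in the native long-multimodal thought process introduced next, planning, generation, and critique are all emitted as one interleaved token sequence.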
2.3 Thinking with Generated Images with the Native Long-Multimodal Thought Process

We implement Thinking with Generated Images through the native long-multimodal thought process, which leverages unified autoregressive LMMs via effective supervised fine-tuning on carefully curated, high-quality samples. The native long-multimodal thought process offers multi-faceted benefits that transcend previous approaches: (1) | https://arxiv.org/abs/2505.22525v1 |
enables models to "think" across modalities by generating modality-specific tokens with "natively" embedded multimodal generation capabilities via a single forward pass; (2) performs diverse multimodal tasks natively via the generative paradigm; (3) provides natural test-time scaling across modalities via the generated long thought; and (4) facilitates future integration with post-training scaling techniques like reinforcement learning.

2.3.1 The Native Long-Multimodal Thought Process

The native long-multimodal thought process is a sequence of interleaved multimodal tokens. Formally, if we have $m$ modalities with vocabulary spaces $\mathcal{V}_1, \mathcal{V}_2, \ldots, \mathcal{V}_m$, we define the unified vocabulary as $\mathcal{V} = \mathcal{V}_1 \cup \mathcal{V}_2 \cup \cdots \cup \mathcal{V}_m$. Importantly, these vocabulary spaces need not be finite sets of discrete tokens (i.e., they are not constrained to autoregressive next-token-prediction LMMs); they could also be infinite sets of continuous tokens. A sequence of multimodal tokens is represented as $x = (x_1, x_2, \ldots, x_n)$, where $\forall i,\ x_i \in \mathcal{V}$. Several length or token constraints may be introduced to ensure reasonable interaction within the multimodal thought process. For example, vision tokens might be required to appear in fixed-size blocks (e.g., exactly 1024 consecutive tokens for each image representation), and special delimiter tokens may be introduced to explicitly mark the boundaries and transitions between different modality spaces within the unified vocabulary $\mathcal{V}$.

2.3.2 Unified Auto-Regressive Next-Token-Prediction LMMs

Developing the native long-multimodal thought process on top of unified auto-regressive next-token-prediction LMMs is a natural approach, since the model can intuitively generate the thought process token-by-token. However, the main technical challenge in implementing the native long-multimodal thought process on unified LMMs stems primarily from the developmental stage of the unified LMMs field itself. Such implementation requires models to inherently support interleaved multimodal generation, a frontier that remains less developed. Many current open-source models claiming to be "unified vision understanding and generation LMMs" typically lack native interleaved multimodal generation support, instead supporting only text or image outputs in isolation. Recent successes in closed-source models like GPT-4o and Gemini have demonstrated the tremendous potential of interleaved multimodal generation in real-world applications.

In this paper, we employ Anole (Chern et al., 2024; Team, 2024a) as our base model for generating the native long-multimodal thought process for test-time scaling on LMMs. Anole is a unified auto-regressive next-token-prediction LMM that is trained to directly predict the next multimodal (text or image) token. Anole presents several compelling advantages over alternative LMMs. First, Anole is pre-trained and post-trained directly on interleaved text-image tokens, enabling inherent capabilities for interleaved generation of multimodal tokens. Second, Anole's relatively efficient image representation encodes each image with 1024 tokens, making test-time scaling with the native long-multimodal thought process viable within a reasonable inference compute budget. Third, Anole's modeling strategy closely resembles that of state-of-the-art LLMs, allowing it to leverage the existing training and inference infrastructure of LLMs.
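To make the sequence structure concrete, the following is a minimal sketch that segments a generated unified-vocabulary token stream into text spans and fixed-size image blocks, using the delimiter conventions described in Appendix A (⟨BOI⟩ = 8197, ⟨EOI⟩ = 8196, exactly 1024 vision tokens per image); the parsing logic is our illustration, not the authors' code.

```python
BOI, EOI, IMG_LEN = 8197, 8196, 1024  # delimiter IDs and block size from Appendix A

def segment_thought(tokens: list[int]) -> list[tuple[str, list[int]]]:
    """Split an interleaved multimodal token stream into
    ('text', [...]) spans and ('image', [...]) blocks of exactly 1024 ids."""
    segments, text_buf, i = [], [], 0
    while i < len(tokens):
        if tokens[i] == BOI:
            if text_buf:
                segments.append(("text", text_buf))
                text_buf = []
            block = tokens[i + 1 : i + 1 + IMG_LEN]      # fixed-size vision block
            assert len(block) == IMG_LEN and tokens[i + 1 + IMG_LEN] == EOI
            segments.append(("image", block))
            i += IMG_LEN + 2                             # skip BOI, block, and EOI
        else:
            text_buf.append(tokens[i])
            i += 1
    if text_buf:
        segments.append(("text", text_buf))
    return segments
```

Each ('image', ...) block can then be handed to the visual tokenizer's decoder, while ('text', ...) spans go to the ordinary text detokenizer.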
2.3.3 Methodology for Data Curation

We deliberately curate our supervised fine-tuning (SFT) dataset, which consists of diverse vision (image) generation prompts, to ensure high-quality alignment (Team, 2024a). To induce LMMs to perform the native long-multimodal thought process, we carefully design and construct solution multimodal reasoning chains to elicit | https://arxiv.org/abs/2505.22525v1 |
LMMs' capabilities to spontaneously (1) critique their own generated visual steps and (2) generate intermediate visual subgoals. For more details, please refer to Section 3.

3 Experiments

To instantiate Thinking with Generated Images on vision generation tasks, we meticulously design two types of native long-multimodal thought processes: vision generation with intermediate visual subgoals and vision generation with self-critique. We explore how these two processes can help LMMs better conduct vision (image) generation via generative visual thinking.

Figure 4: Demonstration of our long-multimodal thought process on GenEval.

3.1 Vision Generation with Intermediate Visual Subgoals

Through the vision generation with intermediate visual subgoals process, LMMs are able to plan and achieve visual subgoals as intermediate steps toward the final solution, as presented in Equation (1). Specifically, in the image generation task, when presented with a complex prompt containing multiple objects, the model generates each object individually and then integrates them into the final output image to fulfill the requirements of the multi-object prompt. An example is shown in Fig. 4 for demonstration.

$[\text{Text Planning}] \to \underbrace{[\text{Visual SubGoal}_i] \to [\text{Text Planning and Reflection}]}_{\text{iteratively}} \to [\text{Final Visual Output}] \quad (1)$

3.2 Vision Generation with Self-Critique

Through the vision generation with self-critique process, LMMs are able to set a visual hypothesis, critique and reflect on that hypothesis, and finally refine it toward a better solution, as presented in Equation (2). Specifically, in the image generation task, after generating an image based on a prompt, the model spontaneously reflects on the image to assess whether it meets the prompt's requirements. It then attempts to improve the image further, generating a higher-quality version that better aligns with the prompt by incorporating both the previously generated image and its own reflections. An example is shown in Fig. 5 for demonstration.

$[\text{Initial Visual Hypothesis}] \to [\text{Text Planning and Reflection}] \to [\text{Final Visual Output}] \quad (2)$

Figure 5: Demonstration of our long-multimodal thought process on DPG-Bench.

3.3 Dataset

We carefully designed our synthetic SFT data-construction workflow to produce high-quality training samples that enable trained models to generate the two types of long-multimodal thought processes. Specifically, we developed a pipeline that constructs synthetic examples illustrating long-multimodal reasoning. Our pipeline can generate the two types of long-multimodal thought process mentioned above. After fine-tuning on this data, the LMMs become capable of producing end-to-end long-multimodal chains of thought. Since there is no single off-the-shelf LMM that supports dynamic test-time scaling, traditional distillation techniques are not applicable. Our full dataset-curation pipeline is illustrated in Fig. 6. We follow the rules of thumb below when collecting our synthetic data:
• High-quality image-generation prompts: To ensure diverse, high-quality outputs, we leverage several state-of-the-art models (including Deepseek-V3 (DeepSeek-AI et al., 2024), GPT-4o (Hurst et al., 2024), Claude3.7-Sonnet (Anthropic, 2025), and Qwen2.5-72B-Instruct (Yang et al., 2024)) to brainstorm complex prompts and then apply rule-based filtering to guarantee both variety and proper formatting.
• High-quality reflection-reasoning chains: We use QVQ-72B-Preview (Team, 2024b) to construct the core components of critique and reflection, conditioned on images.
• High-quality intermediate visual thoughts: We employ Flux1-dev (Labs, 2024) (for vision generation with intermediate subgoals) or Anole-7b (Chern et | https://arxiv.org/abs/2505.22525v1 |
al., 2024) (for vision generation with self-critique) to generate initial visual hypotheses from the prompts. We then refine or update these with Flux1-Redux (Labs, 2024) (a variant of Flux1-dev that accepts both image and text inputs).

Figure 6: Our data collection pipeline for Thinking with Generated Images.

3.3.1 High-Quality Image-Generation Prompts

We curate diverse, complex prompts that go beyond simple object depiction to include variations in color, orientation, and shape with multiple elements. We deduplicate prompts⁵ for data efficiency and reformat them into a unified structure optimized for our "thinking with generated images" paradigm. For vision generation with intermediate subgoals, we use Qwen3-32B (Yang et al., 2024) to break down complex visual prompts into simpler components that can be processed independently.

3.3.2 High-Quality Reflection-Reasoning Chains

The image critique step employs QVQ-72B-Preview (Team, 2024b) for its strong long-CoT reasoning capabilities. For vision generation with self-critique, it analyzes each prompt-image pair, assessing accuracy, identifying discrepancies, and suggesting improvements. For vision generation with intermediate visual subgoals, it takes a holistic view of the entire image sequence, outputting structured reasoning chains that try to reconstruct how a model could arrive at the final image through iterative decomposition. This process enables training our model to "think visually" by using images as a medium for solving tasks through reasoning.

3.3.3 High-Quality Intermediate Visual Thoughts

Our three-part image generation process creates visual representations of prompts. Initially, we use Anole-7b (Chern et al., 2024) (for vision generation with self-critique) or Flux1-dev (Labs, 2024) (for vision generation with intermediate subgoals) to generate first-round images. In the second stage, Flux1-Redux (Labs, 2024) refines images using a multimodal approach that incorporates the original prompt, first-stage image, and critique feedback to refine the image for the self-critique thought process. For the intermediate subgoals thought process, we instead use Flux1-Redux (Labs, 2024) conditioned on the first-round images (but based on the second component from the original prompt) to generate the second-round images. The third part of the image generation process continues by using Flux1-Redux (Labs, 2024) as the model (conditioned on both first- and second-round images) to generate a final image. Finally, QVQ-72B-Preview (Team, 2024b) performs quality control, filtering out examples where final generated images significantly deviate from the image-generation prompts.

3.4 Training

Loss Function Design There remains considerable room for optimization when training unified LMMs for visual generation tasks using only cross-entropy loss. To address image integrity issues arising from raster-based prediction, we introduce a reconstruction loss at the visual feature level. Specifically, we project the hidden states of the generated images back into the visual feature space and compute the mean squared error (MSE) loss against the corresponding features of the ground-truth image. This encourages the model to produce outputs with greater visual coherence and structural integrity. Our ablation studies demonstrate that the model achieves optimal performance on the GenEval benchmark (Ghosh et al., 2023) when the reconstruction loss is combined with the cross-entropy loss, using a weight of 1 for the reconstruction term.
Thus, in subsequent training, we adopt a composite loss function that combines cross-entropy with the reconstruction loss | https://arxiv.org/abs/2505.22525v1 |
(weighted at 1) to enhance the visual quality of the generated images. Further details on the auxiliary loss design and the effectiveness of this approach can be found in the Appendix.

Training Stages The training is conducted in two stages. In the first stage, we performed continued training on Anole-7b with the JourneyDB dataset (Sun et al., 2023) to enhance the model's fundamental vision generation capabilities. In the second stage, we train the model with the synthetic dataset constructed as described in Section 3.3. We fine-tuned two models: TwGI-Anole-7b-Obj., using the vision generation with intermediate subgoals dataset, and TwGI-Anole-7b-Crit., using the vision generation with self-critique dataset. These two trained models are capable of generating visual intermediate subgoals and self-critiquing visual hypotheses, respectively.

⁵Prompt deduplication is crucial when leveraging language models for prompt generation, as these models sample from probability distributions that inherently lead to clustering around common outputs. When repeatedly sampling from these models, we observed occasional cases of repetition.

3.5 Inference

Unlike standard VLMs or LLMs, for which inference is comparatively straightforward, vision generation on unified LMMs benefits significantly from classifier-free guidance (Ho and Salimans, 2022). Specifically, on top of the full conditions, unconditional inputs, and image conditions mentioned in (Team, 2024a), we add extra "original prompt conditions" and "negative conditions" to keep the intermediate visual steps faithful without being disturbed too much by the generated long-text thought. By balancing these conditions, we enable the model to benefit from the long-text thought without being overly distracted by potential noise inside the long thoughts.

3.6 Main Results

We benchmark the performance of TwGI-Anole-7b-Obj. and TwGI-Anole-7b-Crit. on GenEval (Ghosh et al., 2023) and DPG-Bench (Hu et al., 2024a). We benchmark TwGI-Anole-7b-Obj. against the base model Anole-7b (also continually pretrained on the JourneyDB dataset), exploring whether generating visual subgoals is beneficial for vision generation on complex prompts (i.e., in GenEval and DPG-Bench, image generation prompts that require generating at least two objects). Note that since some categories of GenEval require generating only a single object, we omit the comparison on these categories. We then benchmark whether the TwGI-Anole-7b-Crit. model can correct its initial visual hypothesis (i.e., TwGI-Anole-7b-Crit. (visual hypo.) in Tab. 1 and Tab. 2) and generate a better image generation result (i.e., TwGI-Anole-7b-Crit. (final) in Tab. 1 and Tab. 2).

| Model | Single Obj. | Two Obj. | Counting | Colors | Position | Color Attri. | Overall ↑ |
|---|---|---|---|---|---|---|---|
| LlamaGen (Sun et al., 2024) | 0.71 | 0.34 | 0.21 | 0.58 | 0.07 | 0.04 | 0.32 |
| LDM (Rombach et al., 2022) | 0.92 | 0.29 | 0.23 | 0.70 | 0.02 | 0.05 | 0.37 |
| Chameleon-7b (Team, 2024a) | – | – | – | – | – | – | 0.39 |
| SDv1.5 (Rombach et al., 2022) | 0.97 | 0.38 | 0.35 | 0.76 | 0.04 | 0.06 | 0.43 |
| Intermediate Visual SubGoals | | | | | | | |
| Anole-7b (Chern et al., 2024) | – | 0.38 | – | – | 0.04 | 0.04 | – |
| TwGI-Anole-7b-Obj. | – | 0.57 | – | – | 0.11 | 0.16 | – |
| Self-Critique on Visual Thoughts | | | | | | | |
| TwGI-Anole-7b-Crit. (visual hypo.) | 0.98 | 0.51 | 0.24 | 0.84 | 0.06 | 0.08 | 0.45 |
| TwGI-Anole-7b-Crit. (final) | 0.98 | 0.59 | 0.23 | 0.87 | 0.12 | 0.10 | 0.48 |

Table 1: Performance on GenEval. | https://arxiv.org/abs/2505.22525v1 |
3.7 Analysis

Generating intermediate visual thoughts is beneficial for vision generation. We show in Tab. 1 and Tab. 2 that TwGI-Anole-7b-Obj. consistently outperforms the baseline Anole-7b model across both GenEval and DPG-Bench. On GenEval, TwGI-Anole-7b-Obj. achieves a significant boost in the "Two Obj." category (0.57 vs. 0.38), indicating its improved ability to handle complex prompts involving multiple entities. It also shows notable improvements in position and color attribute alignment, suggesting a stronger capacity for precise spatial and visual compositional reasoning. Similarly, on DPG-Bench, TwGI-Anole-7b-Obj. yields substantial gains in the "Entity", "Attribute", and "Relation" categories, reflecting its enhanced understanding of fine-grained visual semantics. These improvements validate our hypothesis that breaking down visual tasks into intermediate subgoals enables LMMs to reason more systematically and generate higher-quality outputs.

| Model | Global | Entity | Attribute | Relation | Other | Overall ↑ |
|---|---|---|---|---|---|---|
| Intermediate Visual SubGoals | | | | | | |
| Anole-7b (Chern et al., 2024) | 74.16 | 70.31 | 69.42 | 80.12 | 43.60 | 58.32 |
| TwGI-Anole-7b-Obj. | 74.47 | 78.06 | 76.17 | 83.21 | 42.00 | 68.44 |
| Self-Critique on Visual Thoughts | | | | | | |
| TwGI-Anole-7b-Crit. (visual hypo.) | 77.81 | 71.45 | 69.24 | 78.34 | 58.00 | 62.83 |
| TwGI-Anole-7b-Crit. (final) | 76.90 | 75.70 | 73.88 | 81.86 | 55.60 | 67.14 |

Table 2: Performance on DPG-Bench.

The native long-multimodal thought process enables models to correct and refine their own visual hypotheses. The results on self-critique on visual thoughts, as shown in Tab. 1 and Tab. 2, demonstrate the effectiveness of enabling models to reflect on and revise their own visual outputs. The TwGI-Anole-7b-Crit. model achieves a notable increase in performance after the self-critique step, improving the overall GenEval score from 0.45 to 0.48 and the DPG-Bench score from 62.83 to 67.14. This indicates that the ability to introspectively analyze generated images—through textual reasoning chains grounded in visual feedback—allows the model to identify mismatches, hallucinations, or missing elements, and subsequently correct them. The effectiveness of this visual feedback loop reflects a form of inter-modality synergy, where the visual and textual modalities iteratively guide each other. We further include qualitative examples in the Appendix to illustrate how critique-driven revisions result in visually and semantically improved generations.

4 Conclusion

In this paper, we introduce the concept of Thinking with Generated Images and implement it with the native long-multimodal thought process on autoregressive next-token-prediction LMMs. While we focus on the text and vision modalities, our core idea can be extended to diverse modalities. As more capable LMMs continue to emerge, particularly in the open-source domain, we anticipate that, with the Thinking with Generated Images paradigm, future AI models will explore protein structures or revise building designs as naturally as they write a poem.

5 Limitations and Future Directions

5.1 Limitations

In this paper, we demonstrate Thinking with Generated Images on Anole-7b (Chern et al., 2024). As mentioned in the introduction, the developmental stage (especially in the open-source realm) of unified LMMs is still evolving. With stronger unified LMMs emerging in the future, we anticipate more powerful or even emergent capabilities via the Thinking with Generated Images paradigm.
Also, while in this paper we demonstrate our approach on autoregressive next-token-prediction LMMs, we anticipate our core idea can be applied to diffusion-based LMMs or mixed autoregressive/diffusion LMMs. We leave | https://arxiv.org/abs/2505.22525v1 |
the exploration of different architectures to future work.

5.2 Future Directions

Better Benchmarking on Thinking with Generated Images Current vision generation benchmarks for unified LMMs focus on standard image generation tasks. In this paper, we also use standard image generation benchmarks (Ghosh et al., 2023; Hu et al., 2024a). However, with the increased inherent capabilities of LMMs and emergent capabilities coming to light, real-world tasks such as those illustrated in Fig. 1 and Fig. 2 will become increasingly feasible for LMMs via Thinking with Generated Images. We need more realistic benchmarks to evaluate these models.

Test-Time and Post-Training Scaling on Unified LMMs Our work represents the first step toward test-time scaling on unified LMMs. As stronger unified LMMs emerge, we believe that test-time scaling and post-training scaling will become more viable, effective, and worth exploring.

Efficient Vision Representation for LMMs Efficient vision representation will be essential for achieving scalable test-time and post-training scaling in the vision modality. Recent research demonstrates that images can be effectively represented with as few as 32 or even 16 tokens/patches (Yu et al., 2025; Zhou et al., 2024). We believe this line of work is a promising direction for future research.

6 Acknowledgment

We express our sincere gratitude to Mingxuan Wang for his valuable support on this work. This project is supported by the SJTU SEIEE - ByteDance Large Language Model Joint Laboratory and SII.

References

[1] Anthropic. 2025. Claude 3.7 sonnet system card. Technical report, Anthropic.
[2] Loïc Barrault, Paul-Ambroise Duquenne, Maha Elbayad, Artyom Kozhevnikov, Belen Alastruey, Pierre Andrews, Mariano Coria, Guillaume Couairon, Marta R Costa-jussà, David Dale, et al. 2024. Large concept models: Language modeling in a sentence representation space. arXiv preprint arXiv:2412.08821.
[3] Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V Le, Christopher Ré, and Azalia Mirhoseini. 2024. Large language monkeys: Scaling inference compute with repeated sampling. arXiv preprint arXiv:2407.21787.
[4] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. 2023. Sparks of artificial general intelligence: Early experiments with gpt-4.
[5] Ethan Chern, Jiadi Su, Yan Ma, and Pengfei Liu. 2024. Anole: An open, autoregressive, native large multimodal models for interleaved image-text generation. arXiv preprint arXiv:2407.06135.
[6] Alan Dao and Dinh Bach Vu. 2025. Alphamaze: Enhancing large language models' spatial intelligence via grpo.
[7] DeepSeek-AI, Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, et al. 2024. Deepseek-v3 technical report. CoRR, abs/2412.19437. | https://arxiv.org/abs/2505.22525v1 |
[8] Rongyao Fang, Chengqi Duan, Kun Wang, Linjiang Huang, Hao Li, Shilin Yan, Hao Tian, Xingyu Zeng, Rui Zhao, Jifeng Dai, et al. 2025. Got: Unleashing reasoning capability of multimodal large language model for visual generation and editing. arXiv preprint arXiv:2503.10639.
[9] Dhruba Ghosh, Hannaneh Hajishirzi, and Ludwig Schmidt. 2023. Geneval: An object-focused framework for evaluating text-to-image alignment. Advances in Neural Information Processing Systems, 36:52132–52152.
[10] Ziyu Guo, Renrui Zhang, Chengzhuo Tong, Zhizheng Zhao, Peng Gao, Hongsheng Li, and Pheng-Ann Heng. 2025. Can we generate images with cot? let's verify and reinforce image generation step by step. arXiv preprint arXiv:2501.13926.
[11] Tanmay Gupta and Aniruddha Kembhavi. 2023. Visual programming: Compositional visual reasoning without training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14953–14962.
[12] Shibo Hao, Sainbayar Sukhbaatar, DiJia Su, Xian Li, Zhiting Hu, Jason Weston, and Yuandong Tian. 2024. Training large language models to reason in a continuous latent space. arXiv preprint arXiv:2412.06769.
[13] Jonathan Ho and Tim Salimans. 2022. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598.
[14] Xiwei Hu, Rui Wang, Yixiao Fang, Bin Fu, Pei Cheng, and Gang Yu. 2024a. Ella: Equip diffusion models with llm for enhanced semantic alignment. arXiv preprint arXiv:2403.05135.
[15] Yushi Hu, Weijia Shi, Xingyu Fu, Dan Roth, Mari Ostendorf, Luke Zettlemoyer, Noah A Smith, and Ranjay Krishna. 2024b. Visual sketchpad: Sketching as a visual chain of thought for multimodal language models. arXiv preprint arXiv:2406.09403.
[16] Zhulin Hu, Yan Ma, Jiadi Su, Ethan Chern, and Pengfei Liu. 2025. Fine-tuning token-based large multimodal models: What works, what doesn't and what's next. In The Fourth Blogpost Track at ICLR 2025.
[17] Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. 2024. Gpt-4o system card. arXiv preprint arXiv:2410.21276.
[18] Black Forest Labs. 2024. Flux. https://github.com/black-forest-labs/flux.
[19] Chengzu Li, Wenshan Wu, Huanyu Zhang, Yan Xia, Shaoguang Mao, Li Dong, Ivan Vulić, and Furu Wei. 2025. Imagine while reasoning in space: Multimodal visualization-of-thought. arXiv preprint arXiv:2501.07542.
[20] Allen Newell. 2014. Reasoning, problem solving, and decision processes: The problem space as a fundamental category. In Attention and performance VIII, pages 693–718. Psychology Press.
[21] Fei Ni, Jianye Hao, Shiguang Wu, Longxin Kou, Jiashun Liu, Yan Zheng, Bin Wang, and Yuzheng Zhuang. 2024. Generate subgoal images before act: Unlocking the chain-of-thought reasoning in diffusion model for robot manipulation with multimodal prompts. In Proceedings of the IEEE/CVF Conference on Computer Vision | https://arxiv.org/abs/2505.22525v1 |
and Pattern Recognition, pages 13991–14000.
[22] OpenAI. 2025. Thinking with images.
[23] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684–10695.
[24] Herbert A Simon and Allen Newell. 1971. Human problem solving: The state of the theory in 1970. American Psychologist, 26(2):145.
[25] Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. 2024. Scaling llm test-time compute optimally can be more effective than scaling model parameters. arXiv preprint arXiv:2408.03314.
[26] Zhaochen Su, Linjie Li, Mingyang Song, Yunzhuo Hao, Zhengyuan Yang, Jun Zhang, Guanjie Chen, Jiawei Gu, Juntao Li, Xiaoye Qu, and Yu Cheng. 2025. Openthinkimg: Learning to think with images via visual tool reinforcement learning.
[27] Keqiang Sun, Junting Pan, Yuying Ge, Hao Li, Haodong Duan, Xiaoshi Wu, Renrui Zhang, Aojun Zhou, Zipeng Qin, Yi Wang, et al. 2023. Journeydb: A benchmark for generative image understanding. Advances in Neural Information Processing Systems, 36:49659–49678.
[28] Peize Sun, Yi Jiang, Shoufa Chen, Shilong Zhang, Bingyue Peng, Ping Luo, and Zehuan Yuan. 2024. Autoregressive model beats diffusion: Llama for scalable image generation. arXiv preprint arXiv:2406.06525.
[29] Dídac Surís, Sachit Menon, and Carl Vondrick. 2023. Vipergpt: Visual inference via python execution for reasoning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 11888–11898.
[30] Chameleon Team. 2024a. Chameleon: Mixed-modal early-fusion foundation models. arXiv preprint arXiv:2405.09818.
[31] Qwen Team. 2024b. Qvq: To see the world with wisdom.
[32] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837.
[33] Shijie Xia, Yiwei Qin, Xuefeng Li, Yan Ma, Run-Ze Fan, Steffi Chern, Haoyang Zou, Fan Zhou, Xiangkun Hu, Jiahe Jin, Yanheng He, Yixin Ye, Yixiu Liu, and Pengfei Liu. 2025. Generative ai act ii: Test time scaling drives cognition engineering.
[34] An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, et al. 2024. Qwen2.5 technical report. CoRR, abs/2412.15115.
[35] Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Ehsan Azarnasab, Faisal Ahmed, Zicheng Liu, Ce Liu, Michael Zeng, and Lijuan Wang. 2023. Mm-react: Prompting chatgpt for multimodal reasoning and action. arXiv preprint arXiv:2303.11381.
[36] Qihang Yu, Mark Weber, Xueqing Deng, Xiaohui Shen, Daniel Cremers, and Liang-Chieh Chen. 2025. An image is worth 32 tokens for reconstruction and generation. Advances in Neural Information Processing Systems, 37:128940–128966.
[37] Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, and Alex Smola. 2023. | https://arxiv.org/abs/2505.22525v1 |
Multimodal chain-of-thought reasoning in language models. arXiv preprint arXiv:2302.00923.
[38] Chunting Zhou, Lili Yu, Arun Babu, Kushal Tirumala, Michihiro Yasunaga, Leonid Shamis, Jacob Kahn, Xuezhe Ma, Luke Zettlemoyer, and Omer Levy. 2024. Transfusion: Predict the next token and diffuse images with one multi-modal model. arXiv preprint arXiv:2408.11039.

A Experiment Details

A.1 Training Loss

This section details our loss function design, with particular emphasis on the visual modality in multimodal training samples. Given a batch of sequences that may contain multiple images interleaved with text, our model processes each sequence through a backbone transformer to obtain hidden representations: $h = f_{\text{trans}}(x; \theta)$, where $x$ represents the tokenized input sequence and $h \in \mathbb{R}^{B \times L \times D}$ denotes the hidden states for batch size $B$, sequence length $L$, and hidden dimension $D$. The model employs a unified multimodal vocabulary space where image and text tokens share the same embedding space. Specifically, vocabulary indices 4 through 8195 (8192 tokens total) correspond to visual codebook entries from the VQ-VAE, while the remaining indices represent text tokens. A multimodal language modeling head produces logits over this joint vocabulary: $z = f_{\text{mmlm}}(h; \theta_{\text{mmlm}})$, where $z \in \mathbb{R}^{B \times L \times V}$ and $V$ is the total vocabulary size encompassing both modalities. Our training objective combines two loss components:

Autoregressive Multimodal Loss: We compute the standard cross-entropy loss over all tokens (both text and image) in the sequence:

$\mathcal{L}_{\text{mm}} = -\frac{1}{|\mathcal{T}|} \sum_{(b,i) \in \mathcal{T}} \log p(x_{b,i+1} \mid x_{b,1:i})$

where $(b, i) \in \mathcal{T}$ denotes valid token positions, with $b \in \{1, \ldots, B\}$ being the batch index and $i \in \{1, \ldots, L_b - 1\}$ being the token position within sequence $b$. The probability $p(x_{b,i+1} \mid x_{b,1:i})$ is computed by applying softmax to the logits $z_{b,i} \in \mathbb{R}^V$ at position $i$, which are generated by the transformer conditioned on all previous tokens $x_{b,1:i}$. The set $\mathcal{T}$ excludes padded positions and positions where ground-truth labels are masked. This loss naturally handles both text token prediction and image token generation within the unified vocabulary space.

Visual Reconstruction Loss: For image tokens specifically, we extract visual features and compute a reconstruction objective. Each image in the input is demarcated by special tokens $\langle\text{BOI}\rangle$ (beginning-of-image, token ID 8197) and $\langle\text{EOI}\rangle$ (end-of-image, token ID 8196), with exactly 1024 tokens between them. For each image $j$ in the batch:

1. Extract the corresponding hidden states: $h^{(j)}_{\text{img}} \in \mathbb{R}^{1024 \times D}$.
2. Project to codebook space: $p^{(j)} = f_{\text{proj}}(h^{(j)}_{\text{img}}; \theta_{\text{proj}})$, where $f_{\text{proj}}$ is an additional trainable linear projection layer that maps from hidden dimension $D$ to codebook dimension $D'$, i.e., $f_{\text{proj}}: \mathbb{R}^D \to \mathbb{R}^{D'}$.
3. Retrieve the original codebook features: $c^{(j)} = f_{\text{vq}}(\text{image}_j; \theta_{\text{vq}})$, where $c^{(j)} \in \mathbb{R}^{1024 \times D'}$ represents the quantized codebook features for the $j$-th image.

The visual reconstruction loss is computed as the mean squared error (MSE) between the projected hidden states and the original codebook features:

$\mathcal{L}_{\text{rec}} = \frac{1}{N_{\text{img}}} \sum_{j=1}^{N_{\text{img}}} \frac{1}{1024 \cdot D'} \lVert p^{(j)} - c^{(j)} \rVert_2^2$

where $N_{\text{img}}$ is the total number of images across all sequences in the batch. Note that if a batch contains no images, $\mathcal{L}_{\text{rec}} = 0$. The total loss is:

$\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{mm}} + \lambda \mathcal{L}_{\text{rec}}$

Previous works have made initial attempts to train autoregressive next-token-prediction LMMs beyond cross-entropy loss (Li et al., 2025; Hu et al., 2025).
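The composite objective is straightforward to implement. Below is a minimal PyTorch-style sketch under the definitions above; the helper names (proj, image_mask, codebook_feats) are our illustrative assumptions, and the flat averaging in the MSE term is a simplification of the per-image 1/(1024·D') normalization.

```python
import torch
import torch.nn.functional as F

def composite_loss(logits, labels, hidden, codebook_feats, proj, image_mask, lam=1.0):
    """L_total = L_mm + lam * L_rec: cross-entropy over the unified vocabulary
    plus an MSE reconstruction term on image-token positions."""
    # Autoregressive multimodal loss: position i predicts token i+1;
    # ignore_index marks padded/masked positions (the complement of the set T).
    l_mm = F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),
        labels[:, 1:].reshape(-1),
        ignore_index=-100,
    )
    # Visual reconstruction loss: project hidden states at image-token positions
    # into the codebook space and compare against the ground-truth VQ features.
    if image_mask.any():
        p = proj(hidden[image_mask])            # (num_image_tokens, D')
        l_rec = F.mse_loss(p, codebook_feats)   # mean over tokens and dimensions
    else:
        l_rec = torch.zeros((), device=logits.device)
    return l_mm + lam * l_rec
```

With lam=1.0 this matches the weighting that the ablation below finds optimal.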
To rigorously benchmark the effectiveness of incorporating the visual reconstruction loss, we fine-tuned Anole-7b on 50,000 images from JourneyDB and evaluated on the GenEval benchmark. We investigated different | https://arxiv.org/abs/2505.22525v1 |
values of $\lambda$ and compared against the token discrepancy loss $\mathcal{L}_{\text{td}}$ proposed by Li et al. (2025). Note that we did not apply classifier-free guidance, to ensure a fair and simple comparison. Results in Tab. 3 demonstrate that $\lambda = 1$ yields optimal performance, improving GenEval scores by approximately 3 points. Neither $\mathcal{L}_{\text{td}}$ alone nor the combination $\mathcal{L}_{\text{mm}} + \lambda \mathcal{L}_{\text{rec}} + \mu \mathcal{L}_{\text{td}}$ surpassed our proposed objective. Thus, we adopt $\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{mm}} + \mathcal{L}_{\text{rec}}$ with $\lambda = 1$ for all experiments in this paper.

| Loss function | Single Obj. | Two Obj. | Counting | Colors | Position | Color Attri. | Overall ↑ |
|---|---|---|---|---|---|---|---|
| Baseline: $\mathcal{L}_{\text{mm}}$ | 0.84 | 0.20 | 0.11 | 0.59 | 0.02 | 0.04 | 0.30 |
| Reconstruction Loss Variants | | | | | | | |
| $\mathcal{L}_{\text{mm}} + 0.5 \cdot \mathcal{L}_{\text{rec}}$ | 0.86 | 0.21 | 0.15 | 0.63 | 0.04 | 0.06 | 0.32 |
| $\mathcal{L}_{\text{mm}} + \mathcal{L}_{\text{rec}}$ | 0.86 | 0.26 | 0.12 | 0.61 | 0.04 | 0.08 | 0.33 |
| $\mathcal{L}_{\text{mm}} + 5 \cdot \mathcal{L}_{\text{rec}}$ | 0.86 | 0.22 | 0.12 | 0.58 | 0.03 | 0.04 | 0.31 |
| Token Discrepancy Loss Variants | | | | | | | |
| $\mathcal{L}_{\text{mm}} + 0.5 \cdot \mathcal{L}_{\text{td}}$ | 0.84 | 0.24 | 0.11 | 0.58 | 0.02 | 0.05 | 0.31 |
| $\mathcal{L}_{\text{mm}} + \mathcal{L}_{\text{td}}$ | 0.84 | 0.21 | 0.13 | 0.60 | 0.03 | 0.05 | 0.31 |
| $\mathcal{L}_{\text{mm}} + 5 \cdot \mathcal{L}_{\text{td}}$ | 0.80 | 0.18 | 0.10 | 0.56 | 0.02 | 0.04 | 0.28 |
| Combined Loss | | | | | | | |
| $\mathcal{L}_{\text{mm}} + \mathcal{L}_{\text{td}} + \mathcal{L}_{\text{rec}}$ | 0.88 | 0.22 | 0.13 | 0.56 | 0.06 | 0.03 | 0.31 |

Table 3: Evaluation results on GenEval with different loss functions.

A.2 Training Stages Details

Tab. 4 shows the hyperparameters used in our training experiments. For stage-1 training, we train on approximately 4M text-image pairs. For stage-2 training, we use approximately 5k samples for intermediate visual subgoals and 40k samples for self-critique on visual thoughts, respectively.

| Hyperparameters | Intermediate Visual Subgoals (Stage 1 / Stage 2) | Self-Critique on Visual Thoughts (Stage 1 / Stage 2) |
|---|---|---|
| Learning rate | 1.0×10⁻⁵ / 1.0×10⁻⁵ | 1.0×10⁻⁵ / 1.0×10⁻⁵ |
| LR scheduler | Linear / Linear | Linear / Linear |
| Gradient clip | 1.0 / 1.0 | 1.0 / 1.0 |
| Optimizer | AdamW (β₁ = 0.9, β₂ = 0.999) | AdamW (β₁ = 0.9, β₂ = 0.999) |
| Training steps | 5K / 2K | 5K / 26K |
| Batch size | 1536 / 8 | 1536 / 8 |

Table 4: Hyperparameters for training vision generation with intermediate visual subgoals and vision generation with self-critique. Note that vision generation with intermediate visual subgoals and vision generation with self-critique share the same stage-1 model. The stage-1 model is then separately fine-tuned for generating the different types of native long-multimodal thought process.

A.3 Inference Details

Classifier-free guidance (CFG) is crucial for enhanced image generation quality. We apply the following CFG settings to ensure balanced conditioning, enabling the model to concurrently leverage text-based thoughts, previous visual hypotheses, visual subgoals, and original prompts. Specific configurations are shown in Tab. 5.

| CFG scale | Intermediate Visual Subgoals (subgoals) | Intermediate Visual Subgoals (final) | Self-Critique on Visual Thoughts |
|---|---|---|---|
| Full conditions | 5.0 | 2.0 | 1.5 |
| Image conditions | 0.0 | 1.2 | 0.8 |
| Negative conditions | 3.0 | 3.0 | 3.0 |
| Prompt conditions | 0.0 | 5.0 | 5.0 |

Table 5: CFG scale for vision generation with intermediate visual subgoals and vision generation with self-critique.

B Case Study

B.1 Vision generation with intermediate visual subgoals

Fig. 7 shows cases of vision generation with intermediate subgoals. The model demonstrates its ability to decompose complex visual generation tasks into manageable sub-components. When generating broccoli and vase, the model first generates a high-quality image of a broccoli, focusing on realistic textures and vibrant green colors with detailed florets. It then generates a simple vase. In the final step, it combines these elements to create the complete image as requested by the prompt. Similarly, when tasked with generating both a pizza and a bench, the | https://arxiv.org/abs/2505.22525v1 |
model first creates each object separately, producing a detailed pizza with visible toppings, and then a wooden bench. The final image successfully integrates both elements, demonstrating the model's ability to handle multiple distinct objects in a single composition. Generating intermediate visual subgoals allows the model to focus on completing each subgoal before producing the final output, resulting in a higher-quality solution.

Figure 7: Vision generation with intermediate visual subgoals on GenEval.

B.2 Vision generation with self-critique

Fig. 8 shows cases of vision generation with self-critique. The model demonstrates its ability to analyze initial visual hypotheses and iteratively improve them through reflection. For generating a yin-yang symbol with a tiger head, the model initially generates an imperfect yin-yang symbol featuring a tiger. Through self-critique, it recognizes the need to make the tiger head more stylized and better integrated. In the final version, the model successfully implements the corrections, with the tiger head now seamlessly blended into the yin-yang design. For generating a 4K-resolution portrait of "Blind Ambition," the model first produces a split-view sketch with rough annotations, uneven shading, and a cluttered background that breaks the required central symmetry and leaves the character's gaze only loosely obscured. Upon self-critique, it recognizes that the composition must be unified and centrally framed, that the metaphorical blindness needs either a clear blindfold or shadowed hollows, and that the background should shift from sketchy walls to a neutral, atmospheric void while textures demand deeper, more subtle rendering. The final image successfully centers a lone figure in a balanced stance, with warm steel armor carved by refined gradients, and a misty, minimalist backdrop that underscores the character's solitary, driven ambition.

Figure 8: Vision generation with self-critique on DPG-Bench.

B.3 Failure Cases

The left half of Fig. 9 shows a failure case of vision generation with intermediate subgoals. While the model successfully generates individual components, it fails to combine them in the final output. When tasked with generating microwave and bench, the model first produces a microwave placed on a kitchen counter, then generates a bench in a park setting. However, in the final step, the model fails to meaningfully combine both elements. This failure illustrates that while the model can correctly generate individual subgoals (the microwave and the bench separately), it sometimes struggles with the spatial reasoning required to integrate disparate objects that don't naturally belong together. We anticipate this issue will be mitigated with stronger base models or further post-training. The right half of Fig. 9 shows a failure case of vision generation with self-critique. When prompted to generate a photo of a tv remote, the model initially produces a retro TV set instead, completely misinterpreting the request. Through self-critique, the model correctly identifies this error and provides detailed suggestions: remove the TV, focus on the remote either held in hand or on a surface, and ensure the remote is the main subject. Despite this accurate analysis and reasonable corrective suggestions, the second attempt still shows a TV set rather than a TV remote. This failure demonstrates | https://arxiv.org/abs/2505.22525v1 |
a disconnect between the model's analytical capabilities and its generative execution—while the self-critique mechanism can accurately identify problems and propose solutions, the model sometimes fails to implement its own corrections in subsequent generations. The model can reason about what's wrong but cannot translate this understanding into proper image generation. Similarly, we anticipate this issue will be mitigated with stronger base models or further post-training.

Figure 9: Example failure cases of thinking with generated images. Either the intermediate images are generated correctly but the model fails to combine them in the end, or the critique is correct but the model does not generate the corresponding corrected image. | https://arxiv.org/abs/2505.22525v1 |
arXiv:2505.22531v1 [cs.LG] 28 May 2025

Training RL Agents for Multi-Objective Network Defense Tasks

A Preprint

Andres Molina-Markham¹, Luis Robaina¹, Sean Steinle¹, Akash Trivedi¹, Derek Tsui¹, Nicholas Potteiger¹, Lauren Brandt¹, Ransom Winder¹, Ahmed Ridley²
¹The MITRE Corporation, ²NSA

January 31, 2025

ABSTRACT

Open-ended learning (OEL), which emphasizes training agents that achieve broad capability over narrow competency, is emerging as a paradigm to develop artificial intelligence (AI) agents that achieve robustness and generalization. However, despite promising results that demonstrate the benefits of OEL, applying OEL to develop autonomous agents for real-world cybersecurity applications remains a challenge. We propose a training approach, inspired by OEL, to develop autonomous network defenders. Our results demonstrate that, as in other domains, OEL principles can translate into more robust and generalizable agents for cyberdefense. To apply OEL to network defense, it is necessary to address several technical challenges. Most importantly, it is critical to provide a task representation approach over a broad universe of tasks that maintains a consistent interface over goals, rewards, and action spaces. This way, the learning agent can train with varying network conditions, attacker behaviors, and defender goals while being able to build on previously gained knowledge. With our tools and results, we aim to fundamentally impact research that applies AI to solve cybersecurity problems. Specifically, as researchers develop gyms and benchmarks for cyberdefense, it is paramount that they consider diverse tasks with consistent representations, such as those we propose in our work.

Keywords: Reinforcement Learning · Open-ended Learning · Cybersecurity

1 Introduction

Reinforcement Learning (RL) has been demonstrated to be useful for developing agents that solve complex tasks with high performance. Naturally, this has led researchers to hypothesize that RL may be used to develop agents to defend the next generation of computer networks with increasingly complex requirements and sophisticated adversaries. However, unlike other problems where RL has been applied successfully, network defense is not characterized by a single task with a fixed set of rules. Our work explores the applicability of Open-ended Learning (OEL) [34] to offer a solution to this dichotomy, enabling autonomous agents to learn a broad set of behaviors that result in more secure and reliable computer networks, even when information is imperfect and goals are underspecified. Open-ended Learning has emerged as a promising learning process for training agents to achieve multiple goals that require mastering several tasks. Moreover, OEL expects agents to exhibit useful behaviors when agents face situations that are unknown at design time [34].

Approved for Public Release; Distribution Unlimited. Public Release Case Number 25-0177. This technical data deliverable was developed using contract funds under Basic Contract No. W56KGU-18-D-0004. The view, opinions, and/or findings contained in this report are those of The MITRE Corporation and should not be construed as an official Government position, policy, or decision, unless designated by other documentation. ©2025 The MITRE Corporation. ALL RIGHTS RESERVED.
Figure 1: Overview of dynamic task selection: the outer loop adjusts the training-task distribution based on the latest agent policy evaluation, following predefined rules (e.g., a curriculum heuristic that increases task complexity). The agent trains for multiple iterations, with each evaluation round informing the next. | https://arxiv.org/abs/2505.22531v1 |
This kind of learning paradigm is well aligned with the requirements for training agents to defend networks. Cyberdefenders need to reconfigure networks, balancing multiple goals and subtasks that must adapt to service demands and evolving threats. Nonetheless, despite its potential, applying OEL in the context of cyberdefense is challenging. Crucially, OEL requires adequate representations of goals, states, and actions [34, 12] that enable agents to learn complex behavior by becoming competent in a collection of skills or subtasks. This effectively requires representations and learning architectures that allow agents to use previously learned skills to acquire new skills. Providing a framework that presents a learning agent with a broad set of tasks is insufficient. If action spaces, state spaces, and goal representations are totally disconnected between two tasks, we can hardly expect that an agent learns how mastering one task relates to mastering the other. Thus, like previous work by Stooke et al. [41] in the context of physical 3D worlds, a key component of our approach is to define a representation for network defense tasks that can be procedurally generated while preserving the necessary consistency between tasks to enable agents to learn task relationships. Though previous work has proposed partial answers to do this effectively in other domains [41, 33, 43, 6, 27], it is not yet understood how to accomplish this in the context of training agents to defend computer networks. Key ideas enabling our approach include the use of action representations [11] to maintain consistency of action spaces, and the specification of goals and metrics using the Planning Domain Definition Language (PDDL). Correspondingly, these two ideas allow agents to learn consistent action semantics over a task universe, and enable the representation of broad sets of goals and metrics that have well-defined relationships. Importantly, full domain dynamics need not be specified in PDDL. Defining all aspects of network tasks using PDDL is intractable for realistic network defense problems. The results of network defender actions are non-deterministic, and preconditions are not fully observable. Adequate representations for network defense tasks must be complemented by an approach to sample these tasks and present them to the agent during training. Our work discusses the benefits of difficulty-based curriculums (similar to automated domain randomization [32]), as well as the role of task diversity in seeking policy robustness and generalization. Similar to previous work outside network defense, we show that presenting a learning agent with a collection of subtasks with varying focus and difficulty results in policies that perform better at solving complex network defense tasks, when compared to an approach where the agent interacts with a single environment and performs a single task (i.e., the end goal). Arriving at a useful policy with this approach requires experiencing fewer timesteps, and the resulting policy is more robust; namely, it performs better under conditions not seen during training. Our results support the hypothesis that the OEL paradigm may be more suitable | https://arxiv.org/abs/2505.22531v1 |
for learning network defense tasks than approaches that only train by interacting with a single instance of an adversary. Our work does not fully answer the general question of what distribution of tasks will produce the best agent. However, we provide insights about aspects of tasks and curriculums that are relevant for network defense. Specifically, in the context of defending computer networks, it is important that the behavior of defending agents be robust to changes in the network (e.g., size, topology, congestion); changes in the behavior of normal users (gray agents); the presence of adversaries with different goals (e.g., ransomware vs. stealing data); and changes (to a degree) in the procedures (behavior) employed by the adversary. Ultimately, our recommendation is that future developments of gyms for network defense [5, 40, 29, 26, 2] consider the problem of broad task representation. Our tools and insights provide a foundation in that regard. We also advocate for the consideration of broader sets of network defense tasks when developing benchmarks to evaluate the robustness of network defenses. The issues of generalization in machine learning solutions for network security problems are an increasingly relevant topic [21, 9].

Contributions The following are the most important contributions of this work:
• We present evidence to support the idea that OEL presents a promising learning paradigm to develop autonomous network defenders. Importantly, when an agent learns to master a broad set of tasks, the knowledge of the agent can be generalized to master a task from the same universe of tasks without having faced that specific task before.
• We present a concrete approach for defining a universe of network defense tasks, and demonstrate how this results in training an agent that learns a policy that can generalize to a reasonably complex network defense task.
• We explore the extent to which this kind of "flexible" learning may be an important step toward learning robust policies for network defense whose performance does not fall apart when the adversary deviates from its basic behavior.

2 Related Work

Our work is related to two main research areas: (1) research pursuing techniques to train agents to perform a broad set of tasks; and (2) research focusing on the specific aspects of applying reinforcement learning to network defense. With regard to the first area, our work is related to research in meta-learning [13, 14, 35, 16, 23], curriculum learning [10, 36], and open-ended learning [34, 41]. With regard to the second area, our contributions are complementary to prior work developing autonomous network defenders [24, 5, 37, 26, 17] or autonomous red agents [42, 44, 18]. Multiple approaches have been developed, in the context of robotics, with the general goal of training robots to perform complex tasks. Eysenbach et al. [14] studied how to hierarchically compose pretrained skills to learn complex tasks. Finn et al. [16] introduced the notion of online meta-learning to quickly adapt to new tasks. Other work has focused on the role of task selection for fast, effective, and robust learning. Our work is more closely related to these works, such as curriculum learning [10] and open- | https://arxiv.org/abs/2505.22531v1 |
ended learning [41, 33, 43, 6, 27]. We share the general goal of achieving robust and generalizable learning. However, our work has an emphasis on adversarial settings. In adversarial settings, an adversary controls aspects of a task definition (i.e., by controlling aspects of the environment), and therefore we study the extent to which RL policies must be robust to various forms of task variation. Several research efforts have pursued the general problem of building gyms to train network defenders via reinforcement learning [24, 5, 37, 26, 29]. Kanagawa et al. [24] touch on the issue of generalization. However, the environments in their Rogue-Gym are based on roguelike video games. In contrast, our work has sought to be directly applicable to the problem of defending realistic computer networks. Other works have similarly prioritized relevance to network defense [5, 37, 26]. CyberBattleSim [40] was created to be an experimental platform to train automated agents through RL and to research and investigate how they operate in enterprise network environments, using a high-level simulated abstraction of computer networks and cybersecurity concepts. The Cyber Operations Research Gym (CybORG) [5, 37] was an environment developed to support the creation of agents capable of autonomous cyber operations, with the expectation of training in simulation and the expressed ambition to address the gap between this and validation for deployment to real environments. CyGIL [26] is an architecture providing an environment of network operations for emulated RL training, which also includes the integration of state-of-the-art industry tools to increase agent capability. The Primary-level AI Training Environment (PrimAITE) [28] is a simulator gym for training and assessing AI for cyberdefense against modeled cyberattacks, where training uses RL reward functions that balance countering adversaries and maintaining successful operations. Yawning Titan [2] is a simulator that stands as a different approach for simulation toward developing cyber defenses, focusing on models for causal inference and using dynamic causal Bayesian optimization (DCBO) to inform blue actions. FARLAND (the framework for advanced RL for autonomous network defense) [29] enables a practical approach to create RL-enabled autonomous network defense agents and to evaluate their robustness against attacks. RL is a typical approach for training defenders, with wide possibilities for implementation. In Bates et al. [8], the focus was on reshaping how defenders are rewarded, expanding on the basis of giving penalties for adversarial success to inquire into the effects of the relative magnitude of penalties, of introducing positive rewards, and of establishing agent curiosity, which motivates agent exploration of lesser-known states; that work determined that the precise techniques used matter greatly to ensure improvements instead of degradation. Research has also examined specific approaches of RL against different cyber use cases. In [7], RL was pitted against other machine learning approaches (CN2, SVM) in an intrusion detection context. In [19], the use case considered was intrusion response, where the authors employed a simulation system in which to train defenders through their RL method of Threshold Fictitious Self-Play, which were in turn evaluated in an emulation system, | https://arxiv.org/abs/2505.22531v1 |
In [19], the use case considered was intrusion response: the authors employed a simulation system to train defenders through their RL method of Threshold Fictitious Self-Play, and the resulting defenders were in turn evaluated in an emulation system, although this emulation system was designed around intrusion response specifically. Unlike our work, which includes an abstraction that allows transitioning from simulation to emulation, many approaches remain within a simulation environment. Feng and Xu [15] couched cyber state dynamics as a two-player zero-sum mathematical game and learned an optimal cyber defense strategy for it using an actor-critic neural network trained via DRL. Applebaum et al. [4] performed an analysis of tabular Q-learning as a baseline algorithmic approach to autonomous cyberdefense in a simulated cyber environment expanding from Microsoft's CyberBattleSim. Zhu et al. [45] provided an approach for network defense agents trained with an algorithm combining DDQN and dueling DQN in the CybORG framework, with the future intent of expanding into experiments that include emulation. In [31, 30], RL was used to learn defenses for intrusion response simulations in an environment using attack graphs specified in the Meta Attack Language, where the future ambition was to bridge the gap between simulation and real environments.

Our approach is also related to previous work that suggested defining universes of tasks to train cyberdefenders [29]. However, that work did not specifically study how dynamic task selection impacts learning results, or how to concretely implement network defense representations that provide consistency during training. Our paper proposes concrete implementation approaches and provides empirical evidence to justify our hypotheses.

Reinforcement learning has also been proposed to drive the behavior of red agents [40, 42, 44, 18]. Janisch et al. [22] introduced a red-focused gym, Network Attack Simulator & Emulator (NASimEmu), for DRL training of offensive penetration agents that can withstand and adapt to novel scenarios, allowing transfer to new settings with different network topology, size, and configuration, and supporting seamless deployment from simulation training to emulation. Like all this work, our work is directly concerned with the practical aspects of applying RL to network defense. However, the rationale and approach of red-agent research are different. Our approach relies on sampling tasks from a universe to achieve generalizability and to facilitate progress during training. In contrast, the goal of red applications is not to find single agents that master multiple tasks. In their approach, it is sufficient to find any number of red agents that induce at least one security violation. Moreover, research into attacks can extend to examining and mitigating attacks on the controls in cyberphysical systems managed by RL. Havens et al. [20] mitigate, online, the bias introduced by an attack on a DRL system by optimizing a master policy that can detect attacks and choose between subpolicies appropriate to either the nominal state or a state perturbed by an adversary.

3 Approach

In Section 4, we discuss the performance of an agent dealing with a relatively complex network defense task. This agent was trained to master simpler tasks, both to ensure that the agent continues to learn and to end with a robust agent that can effectively perform network defense tasks that are similar, but not identical, to tasks that the agent has performed before.
This section describes our approach to defining network defense tasks and universes of them. In addition, we discuss our strategy for presenting new tasks to a learning agent to achieve both progress and generalization.

3.1 Network Defense Tasks

A network defense task in our framework must adequately capture the security and quality of service goals of a concrete enterprise network. In other words, a concrete task encodes various quality of service (QoS) requirements of the network, as well as the dynamics of the enterprise network, which in turn depend on the behavior of the users of the network, the computer applications in the network, and how the network is configured to process and deliver packets.

3.1.1 Task Representation

Concretely, our framework defines a network defense task as a pair (N, G), where N defines the network dynamics and G defines the security and liveness goals of the network defender.

The network dynamics N consist minimally of (I, {πgi}, {πrj}, f): an initial network configuration; a set of policies that drive the behavior of gray agents; a set of policies that drive the behavior of red agents; and a function that transforms an action representation, which is typically a high-level, abstract description of a defender action, into an action that can be executed in the simulation. I includes a description of the set of initial hosts in the network, how they are connected, what software is installed on each host, and what packet forwarding rules initially apply. I also determines where red remote agent tools (RATs) are installed. Policies in {πgi} and {πrj} can be implemented via traditional programs, stochastic policies, or planners. The dynamics of an RL task also depend on the sets of states and actions, the observation model, and the state transitions. Crucially, f is a key component of the task definition. As we elaborate in Section 3.1.2, to be able to implement OEL ideas we need stable observation and action spaces. For this, we rely on action representations [11] that are mapped (by f) to complex actions that can be executed in the environment.

G is specified as a set of conditions that are checked during and at the end of an episode, and a real-valued function (reward function) of these conditions. For example, when using sparse rewards, G is specified as a function of the number of times conditions (related to security or quality of service) were observed during an episode. These conditions may include, among others, network hosts reporting an inability to transfer a file from one host to another, or hosts with a red RAT being captured in a honey network. Section 3.1.3 describes goals in our framework in more detail.

The components (N, G) of a network defense task capture aspects that are all likely to change in a real network. For that reason, it is necessary for an agent to learn robust policies that perform well for most tasks in a universe N × G, for a suitable set of network dynamics N and a set of security and liveness goals G (cf. § 3.1.3).
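As a minimal sketch of this representation (the type and field names here are illustrative, not the paper's implementation), a task could be encoded as:

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    # Illustrative stand-ins for the simulator's own structures (assumptions).
    Observation = Dict[str, float]      # what the defender observes
    SimAction = Dict[str, str]          # fully parameterized, executable action
    AbstractAction = str                # e.g., "isolate-host-flagged-by-nids"

    @dataclass
    class NetworkDynamics:
        """N = (I, {pi_g}, {pi_r}, f) from Section 3.1.1."""
        initial_config: dict             # I: hosts, links, software, routing, RAT placement
        gray_policies: List[Callable]    # policies driving normal users
        red_policies: List[Callable]     # policies driving adversaries
        action_map: Callable[[AbstractAction, Observation], SimAction]  # f

    @dataclass
    class DefenderGoal:
        """G: a boolean goal and a numeric metric over episode conditions."""
        goal: str    # PDDL boolean expression
        metric: str  # PDDL numeric expression to minimize

    @dataclass
    class NetworkDefenseTask:
        dynamics: NetworkDynamics  # N
        goal: DefenderGoal         # G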
The behavior of authorized users in a network and the sets of tactics, techniques, and procedures that dictate the behavior of an adversary can be very broad. However, when developing an RL network defender, U = N × G should not be arbitrary. Instead, we argue that the definition of U and the selection of tasks in U, during training and evaluation, should be important considerations for the development of said defender.

As expected, the performance of an RL network defender also depends on what it can observe and the actions it can take. To ensure a consistent interface for the RL defender, varying the task parameters (N, G) across the universe of tasks N × G must be done in a way that maintains a standardized set of possible actions and observations, as described below.

3.1.2 Network Dynamics

Similarly to Stooke et al. [41], when we train a network defender, we fix the sets of actions A and observations O for a task universe U. Fixing O is generally not difficult, as representations of state do not change as frequently. These would only need to be adjusted, for example, when new sensors or data sources are added. On the other hand, fixing A presents a technical challenge because defender actions are more accurately characterized as parametric actions, such as: isolate host with IP 192.168.1.103; re-image host with hostname webserver1.example.com; or start routing packets from hosts with IPs in range R1 to hosts with IPs in range R2 through the network function with IP 192.168.100.1. With this parametric representation, the set of defender actions grows as the network scales. For example, a network with 200 hosts results in an action space of about 3,000 possible actions.

To address this challenge, we use action representations with heuristics that depend on the state of the network. For example, the actions above could be described as non-parametric actions, such as: isolate-host-flagged-by-nids; reimage-webserver; or start-deep-packet-inspection-in-zone-flagged-by-nids. Because there is no unique way to transform these non-parametric actions into parametric actions, our approach requires specifying such a transformation f in the definition of the dynamics N of the network task.

Note that fixing the action and observation spaces does not mean that the dynamics are fixed. The initial conditions and the policies of gray and red agents (I, {πgi}, {πrj}) can still vary during training. In addition, different network defense tasks in a curriculum can also specify different goals for the defender agent, as we describe next.
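Before turning to goals, the following sketch illustrates how such a transformation f might look (a hypothetical illustration; the heuristics and state fields are not from the paper):

    # Sketch of f: expand a non-parametric action representation into a
    # concrete parametric action using heuristics over the network state.
    def f(abstract_action: str, state: dict) -> dict:
        if abstract_action == "isolate-host-flagged-by-nids":
            # Heuristic: isolate the host most recently flagged by the NIDS.
            target = max(state["nids_flags"], key=lambda h: h["flagged_at"])
            return {"op": "isolate", "host_ip": target["ip"]}
        if abstract_action == "reimage-webserver":
            return {"op": "reimage", "hostname": state["webserver_hostname"]}
        return {"op": "noop"}  # trivial "do nothing" action

Because f consults the current state, the defender's small, fixed action set can remain stable while the underlying parametric actions scale with the network.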
3.1.3 Goals

One typical way to encode the targeted behavior of RL agents is through the specification of reward functions. However, it is not feasible to manually tailor reward functions to encode each behavior in a large set of desired behaviors. This presents a technical challenge that we address by encoding a reward as a function of a goal and a metric that helps to optimize for desirable characteristics of solutions. Specifically, in our framework, each network defense task defines a boolean goal and a numeric metric that combine terms using PDDL syntax. These terms describe a goal that must be satisfied at the end of an episode, and a metric that is evaluated at the end of the episode (when using a sparse reward) or at every step (when using a dense reward).

With these goal-metric pairs, we defined a sparse reward function as follows, where x is the result of evaluating the numeric expression corresponding to the metric (a non-positive value, with more penalties making x more negative), and g is the boolean expression corresponding to the goal:

    r(x) = (1/2) e^(x/2) + 0.5   if g is satisfied
    r(x) = (1/2) e^(x/2)         if g is not satisfied

It is possible to specify other reward functions from a goal-metric pair. We selected this approach because it provides intuitive and desirable properties. The reward function is bounded between 0 and 1, where 1 represents the most desirable outcome. When x is close to zero (indicating fewer penalties), r(x) approaches 1 if the goal is satisfied and at most 0.5 if the goal is not satisfied. Conversely, as x decreases (indicating more penalties), r(x) converges to 0.5 if the goal is satisfied and to 0 if the goal is not satisfied. This approach offers a more intuitive and interpretable reward signal compared to other methods, where reward values may drop to extreme ranges [1], which can be difficult to interpret. For instance, using the definition of r(x) and the example goal-metric pair shown in Figure 3, we derive the reward curve illustrated in Figure 2. In our evaluation, the goal-metric pairs are expressed as ratios. For metrics that yield integer values of larger magnitude for x, alternative reward functions may be more appropriate to prevent r(x) from dropping to zero too quickly.

Figure 2: Reward curve derived from the goal-metric pair shown in Figure 3.
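In code, this reward can be computed directly from the goal's truth value and the metric's value (a minimal sketch, assuming a non-positive x as described above):

    import math

    def reward(x: float, goal_satisfied: bool) -> float:
        base = 0.5 * math.exp(x / 2.0)     # in (0, 0.5] for x <= 0
        return base + 0.5 if goal_satisfied else base

    # reward(0.0, True) == 1.0; as x -> -inf, the reward approaches
    # 0.5 when the goal is satisfied and 0.0 when it is not.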
3.1.4 The Use of PDDL

Given PDDL's well-established role in defining planning problems and domains for symbolic planning and problem solving [39], we build on this familiarity for task representation in our approach. PDDL enables us to formally specify goals and metrics, mask actions, and create plans for bootstrapping. By leveraging PDDL, we gain a consistent and flexible approach to defining these core components of our network defense tasks, which in turn enhances training efficiency and supports the development of robust RL agents. Below, we detail the three key areas where PDDL is used in this work:

Goal-Based Rewards: As mentioned in Section 3.1.3, manually defining reward functions for a wide range of behaviors is impractical. Although it is possible to use programming languages like Julia to implement reward functions by parsing agent actions and observed states, this approach often presents several challenges. For example, as the action and state spaces grow, parsing these actions and states into meaningful numeric rewards becomes increasingly cumbersome, making the system difficult to scale. To efficiently define a diverse set of goals and metrics, we instead use PDDL. The key advantage of using PDDL is the ability to crisply specify a broad range of goals and behaviors while minimizing the complexity and scaling issues that come with parsing state-action data manually. Additionally, PDDL simplifies the process of adapting reward functions as new tasks or behaviors emerge. We utilize PDDL to define boolean goals and numeric metrics for each network defense task, forming the core of our reward structure. The goal must be satisfied by the end of an episode, while the metric is evaluated either at the episode's conclusion (for sparse rewards) or at each step (for dense rewards). Figure 3 illustrates one example of a goal-metric pair. Additional examples and a comprehensive list of the terms used in our evaluation are available in Appendix 6.2.

    (:goal (or (and (not red-inactive)
                    (not real-compromise)
                    (declared-victory))
               (and (red-inactive)
                    (= nontrivial-blue-actions 0))))

    (:metric minimize (+ (/ nontrivial-blue-actions steps-to-survive)
                         (/ bad-qos-events (+ bad-qos-events good-qos-events))))

Figure 3: Example of a goal and a metric in PDDL. When an active red agent is detected, the agent should use decoys as a countermeasure to deceive the red agent into thinking it has successfully compromised a real asset on the network. This behavior is encoded by the requirement that the red agent must "declare victory" without the presence of a "real compromise." In the absence of an active red agent, the agent should remain passive (the "do nothing" action). The metric specifies minimizing the ratio of non-trivial actions to the total number of actions, plus the ratio of negative QoS events to the total number of QoS events.

Action Masking: The second application of PDDL is in action masking, which limits the RL agent's action space to only those actions that are actually available in the current state. Without this mechanism, the agent would have access to the entire action space A (described in Section 3.1.2), which is large and likely to include many actions that are not applicable to the current state. Many other RL network defense projects implement action filtering in a rule-based manner [25, 40, 37, 5], where constraints on the available actions are manually defined based on the current state and environmental conditions. These rule-based methods, though effective in some settings, rely on hardcoded constraints that are specific to the environment, making them less scalable and adaptable to more complex or varied scenarios. While specifying the full domain dynamics in PDDL is not necessary, defining action preconditions in PDDL enables the use of its built-in functions to dynamically filter out inapplicable actions. This reduces the action set without needing manually defined rules for every possible constraint. As a result, this approach offers a more scalable and flexible solution for action filtering, well suited to managing complex and dynamic environments. By focusing on actions that can yield meaningful effects, this approach improves the agent's decision-making and accelerates the learning process.
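The sketch below conveys the idea of precondition-based masking; Python predicates stand in for PDDL precondition checks (the paper's implementation would evaluate the preconditions with a PDDL interpreter such as PDDL.jl [39]), and all state fields are illustrative:

    # Each action's precondition is a predicate over the current state.
    PRECONDITIONS = {
        "isolate-host-flagged-by-nids":
            lambda s: len(s["nids_flags"]) > 0,
        "migrate-suspicious-host-to-honeynet":
            lambda s: len(s["nids_flags"]) > 0 and s["honeynet_capacity"] > 0,
        "do-nothing":
            lambda s: True,
    }

    def action_mask(state: dict, actions: list) -> list:
        """Return 1 for actions whose preconditions hold in `state`, else 0."""
        return [int(PRECONDITIONS[a](state)) for a in actions]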
Blue Agent Planning: The third key use of PDDL, which will be discussed in more detail in future work, is blue-agent planning. In this approach, PDDL is used to combine goal-based rewards and masked actions to generate example plans for the blue agent. These plans are then bootstrapped into the RL algorithm, providing the agent with valuable learning experiences that guide its training. By leveraging these PDDL structures, we can create plans that accelerate the agent's training and enhance its performance. Future work will explore and elaborate on additional aspects and impacts of this planning.
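Since the paper defers the details of this planning to future work, the following sketch only conveys the general idea under our own assumptions: a symbolic planner solves a task's PDDL goal, and the resulting state-action trace seeds the RL learner as demonstrations (planner.plan and the env interface are hypothetical):

    def bootstrap_demonstrations(task, env, planner):
        """Collect a demonstration trajectory from a symbolic plan."""
        demos = []
        state = env.reset(task)
        for action in planner.plan(task.goal, state):  # symbolic plan steps
            next_state, reward, done, _ = env.step(action)
            demos.append((state, action, reward, next_state, done))
            state = next_state
            if done:
                break
        return demos  # e.g., added to a replay buffer as demonstrations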
3.2 Task Selection

There are several challenging questions related to task selection, and, while we do not offer complete solutions in this paper, we hope that our framework and insights guide future work. First, it is important to know what strategy should guide the selection of tasks during training to achieve optimal progress. Similarly, what task selection strategy would lead to greater generalization? The answers to both most likely depend on the training algorithm that is used. An answer to the first question would result in the most efficient curriculum to solve specific tasks. An answer to the second question would yield a strategy to train more generally capable agents. Finally, since our goal is to train robust network defenders, we also care about a third question: how do we select testing tasks? Here, we outline our current task selection strategies; we evaluate them in Section 4.

Our approach for task selection is loosely related to Stooke et al.'s [41] and to Akkaya et al.'s [32]. Stooke et al. varied three properties of tasks (applied to worlds and agent goals) as agents learned: smoothness, vastness, and diversity. In contrast, Akkaya et al. only sought to gradually increase the difficulty of tasks during training. Our notion of smoothness is similar to Stooke et al.'s. However, our research motivation includes more modest notions of diversity and vastness. As we present new tasks to learning agents, we keep small distances between tasks (within the universe of tasks). Concretely, these are the attributes of network dynamics that may vary during training:

1. Size of a network, controlled by two parameters: the number of hosts in a network and the number of subnets
2. Number of initially compromised hosts (with a red RAT from the beginning)
3. The basic behavior of a red agent, consistent with one of the following goals: exfiltration, ransomware, DDoS, or DoS
4. Deviation from a known basic behavior along two dimensions: the time between actions consistent with the known basic behavior, and the amount of gray-like behavior driven by the red agent aimed at masking red behavior (we capture these two dimensions via parameters of probability distributions that affect the sampling of red actions)
5. The number of time steps that the blue agent must survive without incident
6. A goal-metric pair in a set of 43 pairs that range from simply detecting the presence of an attack, to using specific actions to mitigate the attack, to ultimately determining how to efficiently use any available actions to maximize QoS and satisfy more sophisticated security goals

The variation of these parameters, whose list can be naturally extended, helps to provide a somewhat diverse universe of tasks. However, while the number of possible instantiations of tasks is large, we cannot currently consider N vast. We leave for future work extending our architecture to draw many more TTPs from frameworks such as ATT&CK [38], and considering richer forms of injecting normal behavior.

Our task selection strategy, during training, is simply to increase the numbers (or cardinalities of sets) for each of the parameters that determine N as agents' mean scores reach a level, and to decrease these numbers as the agents' mean scores fall below a level. Figure 4 provides an example visualization of how the cardinalities for each parameter change as levels increase.
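A sketch of this difficulty-driven selection loop follows; the thresholds and schedule are illustrative, not the paper's values:

    import random

    def sample_task_params(level: int, schedule: dict) -> dict:
        """Sample each parameter between its level-0 minimum and the
        maximum allowed at the current level (cf. Figure 4)."""
        return {name: random.randint(lo, max_at_level[level])
                for name, (lo, max_at_level) in schedule.items()}

    def update_level(level: int, mean_score: float,
                     up: float = 0.8, down: float = 0.4) -> int:
        if mean_score >= up:              # mastered this level: harder tasks
            return level + 1
        if mean_score < down:             # struggling: easier tasks
            return max(level - 1, 0)
        return level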
To evaluate generalization, we present trained agents with a new task that they never saw during training: a red agent with a DoS goal and multiple levels of deviation from its basic behavior. We describe further details about the testing task in Section 4.

4 Evaluation

To assess the extent to which OEL may be a suitable learning paradigm to train autonomous cyberdefenders, we evaluate the following aspects. First, Section 4.1 evaluates the benefits of training with a broad universe of tasks, gradually scaling the complexity of the defense tasks that are presented to the learning agent. Then, Section 4.1 compares different strategies for dynamic task selection. Finally, Section 4.2 evaluates the degree to which it may be possible to achieve generalization and robustness via OEL.

This section also discusses relevant implementation details. Specifically, Section 4.3 explains how we created curriculums with multiple goal-metric pairs. We describe how we estimate the relative difficulty of each goal-metric pair, as well as how it is possible to use different approaches for representing goals as observations to guide the learning agent. Sections 4.4 and 4.5 also provide details about the algorithms and hyperparameters that we used in our experiments. (Source code to replicate our results is forthcoming.)

4.1 Scaling

With regard to scaling, we evaluate the extent to which using dynamic task selection during training results in a policy with higher performance and shorter training, when compared to an approach that only trains to master a task of fixed difficulty.

Figure 4: Maximum sampling values at different levels for dynamic task selection of a blue agent with no DoS. Parameters are sampled from a range between the maximum value at each level and the minimum (the value at level 0). Diversity of host behavior is calculated as gray traffic divided by two, then multiplied by gray agent diversity, with the division preventing plot flattening at larger y-axis values.

Figure 5: Comparing Defender Generalization Ability: Dynamic vs. Fixed Task Training Approaches

Specifically, we compare the performance of two policies obtained with two different task selection strategies, as the agent is learning:

Fixed task selection: In this case, the task definition remains constant throughout the training period, using only a task that is representative of the testing task.

Dynamic task selection: This strategy consists of dynamically adjusting the difficulty of the task as the agent is learning, until it eventually faces the target task.

Figure 6 compares the performance of policies trained with the two approaches, on multiple dimensions, as the agent trains. Dynamic task selection outperforms fixed task selection in many regards. Importantly, we can arrive at a higher performance with less training on a more diverse set of conditions.

We also evaluate the performance of the policies with several testing tasks after training has been completed. The testing tasks consist of defending against one of several attacks in a network with 200 hosts during 100 episodes, while maintaining a minimum quality of service. The policies of the red and gray agents are probabilistic, but the corresponding probability distributions (driving the behavior of the red and gray agents) are fixed. Figure 5 compares the testing performance of policies after 5M training simulation steps. On tasks at the highest difficulty, policies trained via dynamic task selection perform better with regard to mitigating attacks. However, training with fixed task selection provides higher QoS when there is not an active attack. Policies trained via dynamic task selection outperform those trained via fixed task selection in robustness and generalization, as we explain in the next sections.

Figure 6: Comparison of two policies: one with the fixed task selection strategy from Section 4.1, and another with dynamic task selection (also from Section 4.1). The marker colors indicate task difficulty during training, based on factors like hosts, subnets, and red/gray behavior variability. Figure 17 shows the comparisons across additional network components.

Figure 7: Comparison of task selection strategies: one based on increasing difficulty levels, another selecting tasks uniformly at random, and a third using small variations. While "smooth" task updates yield higher average rewards, the difficulty of tasks mastered is greater with the difficulty-based approach. Marker colors represent task difficulty during training, derived from hosts, subnets, and the variability of red and gray behaviors. Figure 18 shows additional plots comparing these strategies across various network components.

As we discuss in Section 3.2, during training, we select tasks aiming to gradually increase the difficulty of a task. This section examines how this concrete strategy can result in a better performing policy. For this, we compare the performance of the resulting policy guided by difficulty with policies trained via alternative strategies. Figure 7 illustrates the performance of the strategies that we describe next.
Difficulty driven: One of the policies is obtained via our difficulty-driven task selection.

Uniformly random: A second policy is obtained by presenting diverse tasks (drawn from the same universe of tasks), but selecting their parameters uniformly at random.

Smooth changes: A third policy is obtained by varying tasks by a small amount, but without being guided by difficulty. Concretely, training starts from a task chosen arbitrarily; when the agent masters this task, the training task is updated by varying one of several aspects by the minimum amount (e.g., increasing the number of hosts in a subnet by 1). To control for the potential influence of the initial task in this case, we train the policy several times and report the average performance of all the policies trained following the same approach.

As we can see in Figure 7, given a computing budget, the difficulty-driven strategy masters harder tasks, with larger networks. Sampling uniformly tends to present an agent with tasks of medium difficulty, while unguided smooth updates are unlikely to arrive at the highest complexity. While more work is needed to fully understand how to effectively select tasks during training, our empirical results suggest that smooth updates with a bias toward harder tasks could be a reasonable strategy in the context of training agents for network defense.

4.2 Generalization

Next, we evaluate the extent to which an agent trained with dynamic task selection is able to generalize to similar tasks, even if the agent never experienced those tasks during training. Concretely, we compare the performance of an agent trained to perform only one task (to mitigate a DoS attack) with the performance of an agent that is trained experiencing a variety of tasks, including the mitigation of an exfiltration attack and a ransomware attack. However, the agent trained with dynamic task selection never experienced a DoS attack (with a single flooding attacker).

Figure 8: Performance comparison of two policies: one trained with a fixed task to mitigate a traditional exfiltration attack, and the other trained with dynamic task selection, where the blue agent mitigates an exfiltration attack while the red agent interleaves gray-like actions between traditional red actions. Marker colors represent task difficulty during training, derived from hosts, subnets, and the variability of red and gray behaviors. Figure 19 shows additional plots comparing the two policies across other network components.

In this part of our evaluation, each agent needs to defend against a DoS attack in a network with 200 hosts during 100 episodes, while maintaining a minimum quality of service. As we can see in Figure 5, the policy trained with dynamic task selection outperforms the policy that was trained to mitigate a DoS. This result suggests that, in addition to arriving at a higher performance with shorter training, the diversity of tasks also helps to arrive at more capable agents.

Another form of generalization is related to obtaining policies that are robust to variations in adversarial behavior intended to evade detection. We examine the prospect of obtaining policies that are more robust against this type of obfuscation via our approach.
For this, we evaluate the performance of a policy trained with dynamic task selection on a task similar to the one we describe in Section 4.1. However, this time, the red agent gradually deviates from its original behavior. Figure 8 compares the performance of two policies. As we can see, the performance during training is similar for these two strategies. However, the agent trained via dynamic task selection is ultimately trained against a more sophisticated red TTP.

4.3 Curriculums with Goals

As we explained in Section 3.1.3, our experiments include network defense tasks where the agent must learn to master multiple tasks, corresponding to 43 goal-metric pairs. While it is not easy to determine a total order of these goal-metric pairs that corresponds to their difficulty, we compared how difficult it is for an agent to master tasks with each goal-metric pair in a subset of the 11 pairs that are most distinct, based on their goals and metrics. In this comparison, all other parameters were sampled uniformly at random from consistent sets. For example, the number of subnets in a task ranged from 2 to 5, and the number of hosts in each subnet ranged from 5 to 20. Figure 9 shows the mean reward as the agent learns to master tasks in each of the corresponding 11 sets. The goal-metric pairs included were: 1, 22, 26, 30, 34, 38, 39, 40, 41, 42, and 43. For example, goal-metric pair 1 simply requires the blue agent to learn to trigger any non-trivial action when there is an active red agent and to do nothing when a red agent is not active; there are no other requirements about QoS or about minimizing unnecessary expensive actions. Goal-metric pair 39 requires the blue agent to learn to use honey networks to mitigate an active attack and to minimize the number of nontrivial blue actions. Goal-metric pair 43 further requires the agent to learn to use honey networks to convince the red agent that it has successfully compromised a real asset, and it requires the agent to minimize the number of nontrivial blue actions and the number of negative QoS events. The descriptions of all these goals are included in Appendix 6.2.

Based on the difficulty of mastering these goals, we selected 6 that the agent was able to master reasonably well and used them in a curriculum to measure how five different representations for goal-metric pairs resulted in higher performing policies. The representations include a discrete representation, a one-hot representation, and three dense representations. For dense representations, we use a trainable embedding layer to transform discrete blue goal IDs into a compact, dense vector representation. This embedding technique enables the treatment of discrete goal IDs within a high-dimensional continuous space. Throughout the training process, back-propagation adjusts the parameters of the goal embedding layer within our neural network to optimize the model's overall performance.
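A minimal sketch of the dense goal representation follows (PyTorch, with illustrative sizes; the paper's network architecture is given in Section 4.5):

    import torch
    import torch.nn as nn

    class GoalConditionedEncoder(nn.Module):
        def __init__(self, num_goals=43, goal_dim=8, obs_dim=15):
            super().__init__()
            # Trainable embedding: discrete goal ID -> dense vector,
            # adjusted by back-propagation along with the rest of the net.
            self.goal_embedding = nn.Embedding(num_goals, goal_dim)
            self.net = nn.Sequential(
                nn.Linear(obs_dim + goal_dim, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
            )

        def forward(self, obs, goal_id):
            g = self.goal_embedding(goal_id)             # (batch, goal_dim)
            return self.net(torch.cat([obs, g], dim=-1))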
Figures 10 and 11 show that the one-hot representation results in higher performance, but the gains are not significant. The one-hot representation drops entropy faster, leading to more deterministic behavior. However, further experimentation is needed to determine whether a dense representation would be better when the cardinality of the set of goal-metric pairs is higher.

Figure 9: Mean reward for configurations, each fixing one of 11 goal-metric pairs.

Figure 10: Mean reward for several different goal-metric pair representations.

Figure 11: Fraction of episodes that result in a real compromise.

4.4 Training

In our experiments, we varied the following aspects of a task T = (N, G). With respect to the network dynamics in N, we varied the number of subnets in a network, the number of hosts in each subnet, the amount of gray-agent diversity, the amount of gray-traffic volume, and the specific TTP that drives the behavior of the red agent. For the robustness experiments, we also varied the probability that a red agent deviated from its original behavior, the deviation diversity, and the deviation volume. The last two parameters are similar to the amount and diversity of gray traffic. With respect to the initial conditions, we varied the number and identities of the initially compromised hosts, as well as the placement of special hosts in the network. Special hosts include those that have digital content that the adversary seeks to exfiltrate and/or servers that the adversary seeks to compromise.

In this case, the goal balanced a set of QoS and security subgoals. These were encoded in a sparse reward function that reported the desirability of an outcome at the end of each episode. This QoS-aware sparse reward accounted for the following factors. The result adds 1 when the blue agent wins and subtracts 1 when the blue agent loses. Depending on the red TTP, the blue agent only wins if the red agent is deceived and forced to compromise a fake target; this implies that preventing a compromise only by isolating compromised hosts does not result in a blue win, with one exception: the reward accounts for the possibility that there is no compromise (the red agent is inactive), in which case the best outcome is achieved by not disrupting QoS. QoS contributes a number between -1 and 1 based on a normalized account of good QoS events and bad QoS events. Additionally, the reward penalizes unnecessary blue actions (host isolations and crown-jewel relocations); penalizations contribute a number in the interval [-1, 0] for each type of blue action. As we describe in Appendix 6, our implementation normalizes this reward function to return values in the [0, 1] interval.
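The following sketch shows one way to combine these factors (the combination and normalization constants are our assumptions; only the ranges above come from the paper):

    def qos_aware_sparse_reward(red_active, blue_won,
                                good_qos, bad_qos,
                                isolation_penalty, relocation_penalty):
        """isolation_penalty and relocation_penalty are each in [-1, 0]."""
        outcome = (1.0 if blue_won else -1.0) if red_active else 0.0
        qos = (good_qos - bad_qos) / max(good_qos + bad_qos, 1)  # in [-1, 1]
        raw = outcome + qos + isolation_penalty + relocation_penalty  # in [-4, 2]
        return (raw + 4.0) / 6.0   # min-max normalized to [0, 1]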
When the learning agent acts, it observes the following:

• the number of attempts to relocate a protected host during an episode;
• the number of attempts to isolate a host during an episode;
• the number of isolated hosts during an episode;
• the numeric ID of the last action;
• the number of log events associated with a jewel search since the previous blue action;
• the number of log events associated with passive discovery attempts since the previous blue action;
• the number of log events associated with active discovery attempts since the previous blue action;
• the numbers of failed HTTP, SCP, and SSH connections reported by hosts since the previous blue action;
• the numbers of successful HTTP, internal SCP, SSH, and external SSH connections reported by hosts since the previous blue action; and
• a metric that summarizes QoS since the previous blue action.

Depending on the task, the blue agent needed to mitigate an exfiltration, a DDoS, a DoS, or a ransomware attack by performing one of the following actions: isolating a protected host, isolating a suspicious host, relocating a protected host, reimaging a suspicious host, or migrating a suspicious host to a honey network. Using action representations [11], we map a small number of actions to a larger set of parametric actions where the subject host of the action needs to be specified. This strategy already provides some learning benefits, as we illustrate in Figure 12. The plot shows that, given two training settings that differ only in their action spaces, learning is more difficult when the action space is too large.

Figure 12: Comparison of two learning strategies that differ only in the available actions: one uses action representations, while the other allows the agent to select actions based on the target host. Learning stagnates with a large action space. In both strategies, the observation spaces, reward functions, and network dynamics are the same. Figure 20 shows additional plots comparing these strategies across other network components.

4.5 Algorithm and Hyperparameters

We use the Proximal Policy Optimization (PPO) implementation from RLlib with the hyperparameters in Table 1. We selected these hyperparameters through a combination of experimentation and the recommendations of Andrychowicz et al. [3]. For example, we use generalized advantage estimation (GAE), we applied observation normalization to the observation space, and we used separate value and policy networks. To min-max scale the observation features across tasks in the task space, we scaled the minimum and maximum bounds according to the number of hosts and the horizon of the episode. This ensures that the observation space remains consistent, as we discuss in Section 3.1.2.

Table 1: PPO Agent Training Hyperparameters

    Hyperparameter                     Value
    gamma                              0.99
    lr                                 0.0001
    fcnet_activation                   ReLU
    use_critic                         True
    use_gae                            True
    lambda_ (GAE lambda parameter)     1.0
    vf_share_layers                    False
    fcnet_hiddens (policy)             [128, 128, 128, 128]
    fcnet_hiddens (value)              [256, 256, 256, 256]
    kl_coeff                           0.5
    kl_target                          0.01
    train_batch_size                   4000
    sgd_minibatch_size                 128
    num_sgd_iter                       30
    shuffle_sequences                  True
    vf_loss_coeff                      1.0
    entropy_coeff                      0.005
    entropy_coeff_schedule             None
    clip_param                         0.3
    vf_clip_param                      1.0
    grad_clip                          500
    evaluation_duration_unit           episodes
    evaluation_duration                20
    evaluation_config_explore          True
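As a sketch, the Table 1 values map onto RLlib's dict-style PPO configuration roughly as follows (exact keys vary across RLlib versions, and the environment name is a placeholder):

    ppo_config = {
        "env": "NetworkDefenseEnv",   # placeholder for the simulator env
        "gamma": 0.99,
        "lr": 1e-4,
        "use_critic": True,
        "use_gae": True,
        "lambda": 1.0,
        "kl_coeff": 0.5,
        "kl_target": 0.01,
        "train_batch_size": 4000,
        "sgd_minibatch_size": 128,
        "num_sgd_iter": 30,
        "shuffle_sequences": True,
        "vf_loss_coeff": 1.0,
        "entropy_coeff": 0.005,
        "clip_param": 0.3,
        "vf_clip_param": 1.0,
        "grad_clip": 500,
        "model": {
            "fcnet_hiddens": [128, 128, 128, 128],  # policy network
            "fcnet_activation": "relu",
            "vf_share_layers": False,               # separate value network
        },
    }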
Dynamic task selection can also work in combination with other algorithms, such as soft actor-critic (SAC) and DQN. Figure 21, in the Appendix, compares the performance for a universe with tasks that vary the network dynamics but keep the goal-metric pair constant.

5 Discussion and Conclusion

This paper studies the role of training network defenders with task universes, as opposed to training with a single task. We highlight scalability and generalizability as some of the benefits of training and evaluating cybersecurity agents via the specification of comprehensive sets of tasks, rather than small sets of "representative" tasks. Furthermore, by presenting a broad set of tasks during training, it is possible to train agents to mitigate realistic threats. This represents an important milestone toward being able to delegate useful cybersecurity tasks to an autonomous agent.

However, while we show that training with dynamic task selection can result in shorter training and higher generalizability, more work is needed to identify optimal task selection strategies during training. Similarly, further research must address the problem of defining adequate task selection strategies to evaluate policies in adversarial settings. Moreover, we believe that building protection mechanisms for RL-enabled decisions requires more than just training an agent with a broader task universe. In future work, we are investigating how dynamic task selection can be combined with RL algorithms that integrate probabilistic inference to infer relations between the current task and other tasks that the agent has already mastered. We hypothesize that this can help to minimize some forms of adversarial manipulation where an adversary obfuscates a well-known TTP to avoid detection.

Acknowledgments

This paper builds upon FARLAND [29], a tool developed through the collaborative efforts of researchers at MITRE and the NSA. We gratefully acknowledge the contributions of the authors of that work, as well as the many developers and reviewers whose expertise and dedication have shaped FARLAND into what it is today. Source code to replicate our results and build upon our approach is forthcoming.

References

[1] CAGE Challenge 4. (2024). https://codalab.lisn.upsaclay.fr/competitions/17672#results

[2] Alex Andrew, Sam Spillard, Joshua Collyer, and Neil Dhir. 2022. Developing Optimal Causal Cyber-Defence Agents via Cyber Security Simulation. In Workshop on Machine Learning for Cybersecurity (ML4Cyber).

[3] Marcin Andrychowicz, Anton Raichuk, Piotr Stańczyk, Manu Orsini, Sertan Girgin, Raphaël Marinier, Leonard Hussenot, Matthieu Geist, Olivier Pietquin, Marcin Michalski, Sylvain Gelly, and Olivier Bachem. 2021. What Matters for On-Policy Deep Actor-Critic Methods? A Large-Scale Study. In International Conference on Learning Representations. https://openreview.net/forum?id=nIAxjsniDzg

[4] Andy Applebaum, Camron Dennler, Patrick Dwyer, Marina Moskowitz, Harold Nguyen, Nicole Nichols, Nicole Park, Paul Rachwalski, Frank Rau, Adrian Webster, and Melody Wolk. 2022. Bridging Automated to Autonomous Cyber Defense: Foundational Analysis of Tabular Q-Learning. In Proceedings of the 15th ACM Workshop on Artificial Intelligence and Security. ACM, Los Angeles, CA, USA, 149–159. https://doi.org/10.1145/3560830.3563732

[5] Callum Baillie, Maxwell Standen, Jonathon Schwartz, Michael Docking, David Bowman, and Junae Kim. 2020. CybORG: An autonomous cyber operations research gym. arXiv preprint arXiv:2002.10667 (2020).
[6] David Balduzzi, Marta Garnelo, Yoram Bachrach, Wojciech Czarnecki, Julien Perolat, Max Jaderberg, and Thore Graepel. 2019. Open-ended learning in symmetric zero-sum games. In International Conference on Machine Learning. PMLR, 434–443.
[7] Ahmad Hoirul Basori and Sharaf Jameel Malebary. 2020. Deep Reinforcement Learning for Adaptive Cyber Defense and Attacker's Pattern Identification. In Advances in Cyber Security Analytics and Decision Systems. Springer, 15–26.

[8] Elizabeth Bates, Vasilios Mavroudis, and Chris Hicks. 2023. Reward Shaping for Happier Autonomous Cyber Security Agents. In Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security. ACM, Copenhagen, Denmark, 221–232. https://doi.org/10.1145/3605764.3623916

[9] Roman Beltiukov, Wenbo Guo, Arpit Gupta, and Walter Willinger. 2023. In Search of NetUnicorn: A Data-Collection Platform to Develop Generalizable ML Models for Network Security Problems. In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security (CCS '23). Association for Computing Machinery, New York, NY, USA, 2217–2231. https://doi.org/10.1145/3576915.3623075

[10] Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In ICML '09.

[11] Yash Chandak, Georgios Theocharous, James Kostas, Scott Jordan, and Philip Thomas. 2019. Learning Action Representations for Reinforcement Learning. In Proceedings of the 36th International Conference on Machine Learning (Proceedings of Machine Learning Research), Kamalika Chaudhuri and Ruslan Salakhutdinov (Eds.), Vol. 97. PMLR, 941–950. https://proceedings.mlr.press/v97/chandak19a.html

[12] Stephane Doncieux, David Filliat, Natalia Díaz-Rodríguez, Timothy Hospedales, Richard Duro, Alexandre Coninx, Diederik M Roijers, Benoît Girard, Nicolas Perrin, and Olivier Sigaud. 2018. Open-ended learning: a conceptual framework based on representational redescription. Frontiers in Neurorobotics 12 (2018), 59.

[13] Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, and Pieter Abbeel. 2016. RL^2: Fast Reinforcement Learning via Slow Reinforcement Learning. (2016). https://doi.org/10.48550/ARXIV.1611.02779

[14] Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, and Sergey Levine. 2018. Diversity is All You Need: Learning Skills without a Reward Function. (2018). https://doi.org/10.48550/ARXIV.1802.06070

[15] Ming Feng and Hao Xu. 2017. Deep reinforcement learning based optimal defense for cyber-physical system in presence of unknown cyber-attack. In 2017 IEEE Symposium Series on Computational Intelligence (SSCI). IEEE, Honolulu, HI, 1–8. https://doi.org/10.1109/SSCI.2017.8285298

[16] Chelsea Finn, Aravind Rajeswaran, Sham Kakade, and Sergey Levine. 2019. Online Meta-Learning. (2019). https://doi.org/10.48550/ARXIV.1902.08438

[17] Myles Foley, Chris Hicks, Kate Highnam, and Vasilios Mavroudis. 2022. Autonomous Network Defence using Reinforcement Learning. In Proceedings of the 2022 ACM on Asia Conference on Computer and Communications Security. 1252–1254.

[18] Rohit Gangupantulu, Tyler Cody, Paul Park, Abdul Rahman, Logan Eisenbeiser, Dan Radke, Ryan Clark, and Christopher Redino. 2022. Using cyber terrain in reinforcement learning for penetration testing. In 2022 IEEE International Conference on Omni-layer Intelligent Systems (COINS). IEEE, 1–8.

[19] Kim Hammar and Rolf Stadler. 2024. Learning Near-Optimal Intrusion Responses Against Dynamic Attackers. IEEE Transactions on Network and Service Management 21, 1 (Feb. 2024), 1158–1177. https://doi.org/10.1109/TNSM.2023.3293413

[20] Aaron Havens, Zhanhong Jiang, and Soumik Sarkar. [n. d.]. Online Robust Policy Learning in the Presence of Unknown Adversaries. ([n. d.]).
[21] A. S. Jacobs, R. Beltiukov, W. Willinger, R. A. Ferreira, A. Gupta, and L. Z. Granville. 2022. AI/ML and Network Security: The Emperor has no Clothes. In Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security (CCS '22). Association for Computing Machinery, New York, NY, USA.
[22] Jaromír Janisch, Tomáš Pevný, and Viliam Lisý. 2023. NASimEmu: Network Attack Simulator & Emulator for Training Agents Generalizing to Novel Scenarios. (Aug. 2023). http://arxiv.org/abs/2305.17246 arXiv:2305.17246 [cs].

[23] Dmitry Kalashnikov, Jacob Varley, Yevgen Chebotar, Benjamin Swanson, Rico Jonschkowski, Chelsea Finn, Sergey Levine, and Karol Hausman. 2021. MT-Opt: Continuous Multi-Task Robotic Reinforcement Learning at Scale. (2021). https://doi.org/10.48550/ARXIV.2104.08212

[24] Yuji Kanagawa and Tomoyuki Kaneko. 2019. Rogue-Gym: A new challenge for generalization in reinforcement learning. In 2019 IEEE Conference on Games (CoG). IEEE, 1–8.

[25] Li Li, Jean-Pierre S. El Rami, Adrian Taylor, James Hailing Rao, and Thomas Kunz. 2022. Enabling A Network AI Gym for Autonomous Cyber Agents. In 2022 IEEE International Conference on Computational Science and Computational Intelligence (CSCI). IEEE, 172–177.

[26] Li Li, Raed Fayad, and Adrian Taylor. 2021. CyGIL: A cyber gym for training autonomous agents over emulated network systems. arXiv preprint arXiv:2109.03331 (2021).

[27] Xiangyu Liu, Hangtian Jia, Ying Wen, Yujing Hu, Yingfeng Chen, Changjie Fan, Zhipeng Hu, and Yaodong Yang. 2021. Towards unifying behavioral and response diversity for open-ended learning in zero-sum games. Advances in Neural Information Processing Systems 34 (2021), 941–952.

[28] Chris McCarthy. 2023. PrimAITE. (2023). https://github.com/Autonomous-Resilient-Cyber-Defence/PrimAITE

[29] Andres Molina-Markham, Ransom K. Winder, and Ahmad Ridley. 2021. Network Defense is Not a Game. CoRR abs/2104.10262 (2021). https://arxiv.org/abs/2104.10262

[30] Jakob Nyberg and Pontus Johnson. 2023. Training Automated Defense Strategies Using Graph-based Cyber Attack Simulations. (April 2023). http://arxiv.org/abs/2304.11084 arXiv:2304.11084 [cs].

[31] Jakob Nyberg, Pontus Johnson, and Andras Mehes. 2022. Cyber threat response using reinforcement learning in graph-based attack simulations. In NOMS 2022-2022 IEEE/IFIP Network Operations and Management Symposium. IEEE, Budapest, Hungary, 1–4. https://doi.org/10.1109/NOMS54207.2022.9789835

[32] OpenAI, Ilge Akkaya, Marcin Andrychowicz, Maciek Chociej, Mateusz Litwin, Bob McGrew, Arthur Petron, Alex Paino, Matthias Plappert, Glenn Powell, Raphael Ribas, Jonas Schneider, Nikolas Tezak, Jerry Tworek, Peter Welinder, Lilian Weng, Qiming Yuan, Wojciech Zaremba, and Lei Zhang. 2019. Solving Rubik's Cube with a Robot Hand.

[33] Nicolas Perez-Nieves, Yaodong Yang, Oliver Slumbers, David H Mguni, Ying Wen, and Jun Wang. 2021. Modelling behavioural diversity for learning in open-ended games. In International Conference on Machine Learning. PMLR, 8514–8524.

[34] Vieri Giuliano Santucci, Pierre-Yves Oudeyer, Andrew Barto, and Gianluca Baldassarre. 2020. Intrinsically motivated open-ended learning in autonomous robots. (2020).

[35] Archit Sharma, Shixiang Gu, Sergey Levine, Vikash Kumar, and Karol Hausman. 2019. Dynamics-Aware Unsupervised Discovery of Skills. (2019). https://doi.org/10.48550/ARXIV.1907.01657

[36] Archit Sharma, Abhishek Gupta, Sergey Levine, Karol Hausman, and Chelsea Finn. 2021. Autonomous Reinforcement Learning via Subgoal Curricula. (2021). https://doi.org/10.48550/ARXIV.2107.12931

[37] Maxwell Standen, Martin Lucas, David Bowman, Toby J Richer, Junae Kim, and Damian Marriott. 2021. CybORG: A gym for the development of autonomous cyber agents. arXiv preprint arXiv:2108.09118 (2021).
[38] Blake E. Strom, Andy Applebaum, Douglas P. Miller, Kathryn C. Nickels, Adam G. Pennington, and Cody B. Thomas. 2018. MITRE ATT&CK: Design and Philosophy. (July 2018). https://www.mitre.org/publications/technical-papers/mitre-attack-design-and-philosophy

[39] Zhi-Xuan Tan. 2022. PDDL.jl: An Extensible Interpreter and Compiler Interface for Fast and Flexible AI Planning. (2022).
[40] Microsoft Defender Research Team, Christian Seifert, Michael Betser, William Blum, James Bono, and Kate Farris. 2021. CyberBattleSim. (2021). https://github.com/microsoft/cyberbattlesim

[41] Open Ended Learning Team, Adam Stooke, Anuj Mahajan, Catarina Barros, Charlie Deck, Jakob Bauer, Jakub Sygnowski, Maja Trebacz, Max Jaderberg, Michaël Mathieu, Nat McAleese, Nathalie Bradley-Schmieg, Nathaniel Wong, Nicolas Porcel, Roberta Raileanu, Steph Hughes-Fitt, Valentin Dalibard, and Wojciech Marian Czarnecki. 2021. Open-Ended Learning Leads to Generally Capable Agents. CoRR abs/2107.12808 (2021). https://arxiv.org/abs/2107.12808

[42] Khuong Tran, Ashlesha Akella, Maxwell Standen, Junae Kim, David Bowman, Toby Richer, and Chin-Teng Lin. 2021. Deep hierarchical reinforcement agents for automated penetration testing. arXiv preprint arXiv:2109.06449 (2021).

[43] Yaodong Yang, Jun Luo, Ying Wen, Oliver Slumbers, Daniel Graves, Haitham Bou Ammar, Jun Wang, and Matthew E Taylor. 2021. Diverse auto-curriculum is critical for successful real-world multiagent learning systems. arXiv preprint arXiv:2102.07659 (2021).

[44] Shicheng Zhou, Jingju Liu, Dongdong Hou, Xiaofeng Zhong, and Yue Zhang. 2021. Autonomous penetration testing based on improved deep q-network. Applied Sciences 11, 19 (2021), 8823.

[45] Zhengwei Zhu, Miaojie Chen, Chenyang Zhu, and Yanping Zhu. 2024. Effective defense strategies in network security using improved double dueling deep Q-network. Computers & Security 136 (Jan. 2024), 103578. https://doi.org/10.1016/j.cose.2023.103578

6 Appendix

6.1 Reward Functions

In order to guide the behavior of an RL network defender to align with defense and quality of service goals, it is crucial to carefully select a reward function for the task at hand. Choosing an appropriate reward function is a critical aspect of the RL design process and can greatly impact the success of the network defender. For the results in Section 4 we applied a sparse reward that depends on the goal and metric specified in PDDL. However, we explored other options as well, including:

• An outcome-based sparse reward that returns -1 if the attacker agent compromises the network, 1 if the attacker does not compromise the network by the end of the episode, and 0 otherwise.

• The QoS-aware sparse reward, which is similar to the outcome-based sparse reward but also accounts for quality of service (QoS) by penalizing, at the end of the episode, based on the number of CJ relocation attempts, host isolation attempts, and host-reported bad QoS events in the network (e.g., scp_failed, ssh_failed, http_failed). This reward has range in the interval (0, 8].

• The scaled-QoS-aware sparse reward, which is a min-max scaled version of the QoS-aware sparse reward with range in the interval (0, 1].

It has been shown that optimal behavior may be achieved via multiple rewards [?]. However, in practice, some of them result in substantial reductions in learning time [?]. As we described in Section 3.1.1, our reward function aims to be consistent across the space of possible tasks exposed to the RL agent. Therefore, we designed reward functions that are invariant to network size, red agent TTP, and the amount of network traffic.
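As a minimal sketch, the outcome-based variant listed above could be implemented as follows (the argument names are illustrative):

    def outcome_based_sparse_reward(compromised: bool, episode_over: bool) -> float:
        if compromised:
            return -1.0   # attacker compromised the network
        if episode_over:
            return 1.0    # episode ended without a compromise
        return 0.0        # episode still in progress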
to perform worse than goal sparse reward functions [ ?]. More importantly, as the number of tasks to master increases (i.e., as is the case in Open-ended Learning), it is simpler to implement sparse reward functions. In future work we explore the use of other types of less sparse rewards to accommodate interactions with human operators and to formulate long-horizon defense tasks as sequences of smaller sub tasks. 6.2 PDDL Terms for Goals and Metrics Figures 13 - 16 list the goal-metric pairs that we used in our evaluation. These use the following terms: • red-inactive: A boolean that is true when the red is inactive • declared-victory: A boolean that is true when the red agent declares victory •real-compromise: A boolean that is true when the red agent compromises a real asses (as opposed to a fake one) • good-qos-events: An integer that corresponds to the number of good QoS events •steps-to-survive: An integer that corresponds to the upper bound of necessary steps for the red agent to succeed without intervention from the blue agent • bad-qos-events: An integer that corresponds to the number of bad QoS events • isolate-actions: An integer that corresponds to the number of actions to isolate hosts • crown-jewel-relocations: An integer that corresponds to the number of actions to relocate a crown jewel • num-blue-actions: An integer that corresponds to the total number of blue actions •worst-contributor-isolations: An integer that corresponds to the number of actions attempting to isolate a suspicious host •crown-jewel-isolations: An integer that corresponds to the number of actions attempting to isolate a crown jewel •worst-contributor-reimages: An integer that corresponds to the number of actions attempting to reimage a suspicious host 17 Training RL Agents for Multi-Objective Network Defense Tasks A P REPRINT { " i d " : 1 , " o r i g i n a l _ i d " : 1 , " g o a l " : " ( : g o a l ( > n o n t r i v i a l − blue − a c t i o n s 0) ) " , " m e t r i c " : " ( : m e t r i c minimize ( 0 ) ) " } , { " i d " : 2 , " o r i g i n a l _ i d " : 2 , " g o a l " : " ( : g o a l ( > worst − c o n t r i b u t o r − i s o l a t i o n s 0) ) " , " m e t r i c " : " ( : m e t r i c minimize ( 0 ) ) " } , { " i d " : 3 , " o r i g i n a l _ i d " : 3 , " g o a l " : " ( : g o | https://arxiv.org/abs/2505.22531v1 |
a l ( > worst − c o n t r i b u t o r − i s o l a t i o n s 0) ) " , " m e t r i c " : " ( : m e t r i c minimize ( / n o n t r i v i a l − blue − a c t i o n s s t e p s −to − s u r v i v e ) ) " } , { " i d " : 4 , " o r i g i n a l _ i d " : 4 , " g o a l " : " ( : g o a l ( > worst − c o n t r i b u t o r − i s o l a t i o n s 0) ) " , " m e t r i c " : " ( : m e t r i c minimize ( qos − p e n a l t y ) ) " } , { " i d " : 5 , " o r i g i n a l _ i d " : 5 , " g o a l " : " ( : g o a l ( > worst − c o n t r i b u t o r − i s o l a t i o n s 0) ) " , " m e t r i c " : " ( : m e t r i c minimize (+ ( / n o n t r i v i a l − blue − a c t i o n s s t e p s −to − s u r v i v e ) ( qos − p e n a l t y ) ) ) " } , { " i d " : 6 , " o r i g i n a l _ i d " : 6 , " g o a l " : " ( : g o a l ( > crown − jewel − i s o l a t i o n s 0) ) " , " m e t r i c " : " ( : m e t r i c minimize ( 0 ) ) " } , { " i d " : 7 , " o r i g i n a l _ i d " : 7 , " g o a l " : " ( : g o a l ( > crown − jewel − i s o l a t i o n s 0) ) " , " m e t r i c " : " ( : m e t r i c minimize ( / n o n t r i v i a l − blue − a | https://arxiv.org/abs/2505.22531v1 |
{ "id": 7, "original_id": 7, "goal": "(:goal (> crown-jewel-isolations 0))", "metric": "(:metric minimize (/ nontrivial-blue-actions steps-to-survive))" },
{ "id": 8, "original_id": 8, "goal": "(:goal (> crown-jewel-isolations 0))", "metric": "(:metric minimize (qos-penalty))" },
{ "id": 9, "original_id": 9, "goal": "(:goal (> crown-jewel-isolations 0))", "metric": "(:metric minimize (+ (/ nontrivial-blue-actions steps-to-survive) (qos-penalty)))" },
{ "id": 10, "original_id": 10, "goal": "(:goal (> worst-contributor-reimages 0))", "metric": "(:metric minimize (0))" },
{ "id": 11, "original_id": 11, "goal": "(:goal (> worst-contributor-reimages 0))", "metric": "(:metric minimize (/ nontrivial-blue-actions steps-to-survive))" },
{ "id": 12, "original_id": 12, "goal": "(:goal (> worst-contributor-reimages 0))", "metric": "(:metric minimize (qos-penalty))" },
Figure 13: Goal-metrics 1-12.

{ "id": 13, "original_id": 13, "goal": "(:goal (> worst-contributor-reimages 0))", "metric": "(:metric minimize (+ (/ nontrivial-blue-actions steps-to-survive) (qos-penalty)))" },
{ "id": 14, "original_id": 14, "goal": "(:goal (> worst-contributor-relocations 0))", "metric": "(:metric minimize (0))" },
{ "id": 15, "original_id": 15, "goal": "(:goal (> worst-contributor-relocations 0))", "metric": "(:metric minimize (/ nontrivial-blue-actions steps-to-survive))" },
{ "id": 16, "original_id": 16, "goal": "(:goal (> worst-contributor-relocations 0))", "metric": "(:metric minimize (qos-penalty))" },
{ "id": 17, "original_id": 17, "goal": "(:goal (> worst-contributor-relocations 0))", "metric": "(:metric minimize (+ (/ nontrivial-blue-actions steps-to-survive) (qos-penalty)))" },
{ "id": 18, "original_id": 18, "goal": "(:goal (> worst-contributor-honeys 0))", "metric": "(:metric minimize (0))" },
{ "id": 19, "original_id": 19, "goal": "(:goal (> worst-contributor-honeys 0))", "metric": "(:metric minimize (/ nontrivial-blue-actions steps-to-survive))" },
{ "id": 20, "original_id": 20, "goal": "(:goal (> worst-contributor-honeys 0))", "metric": "(:metric minimize (qos-penalty))" },
{ "id": 21, "original_id": 21, "goal": "(:goal (> worst-contributor-honeys 0))", "metric": "(:metric minimize (+ (/ nontrivial-blue-actions steps-to-survive) (qos-penalty)))" },
{ "id": 22, "original_id": 22, "goal": "(:goal (and (not real-compromise) (> worst-contributor-isolations 0)))", "metric": "(:metric minimize (0))" },
{ "id": 23, "original_id": 23, "goal": "(:goal (and (not real-compromise) (> worst-contributor-isolations 0)))", "metric": "(:metric minimize (/ nontrivial-blue-actions steps-to-survive))" },
{ "id": 24, "original_id": 24, "goal": "(:goal (and (not real-compromise) (> worst-contributor-isolations 0)))", "metric": "(:metric minimize (qos-penalty))" },
) " } , Figure 14: Goal-metrics 13-24. 6.3 Curriculum Design Though the challenge of architecting a good curriculum is a research area in its own right, we have a few practical findings from our experience designing a curriculum. Tier-Based Structuring of Goals. Training independent policies for each goal proved to be an efficient way to establish tiers of difficulty. Our 43 goals naturally divided into four phases: 19 Training RL Agents for Multi-Objective Network Defense Tasks A P REPRINT { " i d " : 25 , " o r i g i n a l _ i d " : 25 , " g o a l " : " ( : g o a l ( and ( n o t r e a l −compromise ) ( > worst − c o n t r i b u t o r − i s o l a t i o n s 0) ) ) " , " m e t r i c " : " ( : m e t r i c minimize (+ ( / n o n t r i v i a l − blue − a c t i o n s s t e p s −to − s u r v i v e ) ( qos − p e n a l t y ) ) ) " } , { " i d " : 26 , " o r i g i n a l _ i d " : 26 , " g o a l " : " ( : g o a l ( and ( n o t r e a l −compromise ) ( > crown − jewel − i s o l a t i o n s 0) ) ) " , " m e t r i c " : " ( : m e t r i c minimize ( 0 ) ) " } , { " i d " : 27 , " o r i g i n a l _ i d " : 27 , " g o a l " : " ( : g o a l ( and ( n o t r e a l −compromise ) ( > crown − jewel − i s o l a t i o n s 0) ) ) " , " m e t r i c " : " ( : m e t r i c minimize ( / n o n t r i v i a l − blue − a c t i o n s s t e p s −to − s u r v i v e ) ) " } , { " i d " : 28 , " o r i g i n a l _ i d " : 28 , " g o a l " : " ( : g o a l ( and ( n o t r e a l −compromise ) | https://arxiv.org/abs/2505.22531v1 |
{ "id": 25, "original_id": 25, "goal": "(:goal (and (not real-compromise) (> worst-contributor-isolations 0)))", "metric": "(:metric minimize (+ (/ nontrivial-blue-actions steps-to-survive) (qos-penalty)))" },
{ "id": 26, "original_id": 26, "goal": "(:goal (and (not real-compromise) (> crown-jewel-isolations 0)))", "metric": "(:metric minimize (0))" },
{ "id": 27, "original_id": 27, "goal": "(:goal (and (not real-compromise) (> crown-jewel-isolations 0)))", "metric": "(:metric minimize (/ nontrivial-blue-actions steps-to-survive))" },
{ "id": 28, "original_id": 28, "goal": "(:goal (and (not real-compromise) (> crown-jewel-isolations 0)))", "metric": "(:metric minimize (qos-penalty))" },
{ "id": 29, "original_id": 29, "goal": "(:goal (and (not real-compromise) (> crown-jewel-isolations 0)))", "metric": "(:metric minimize (+ (/ nontrivial-blue-actions steps-to-survive) (qos-penalty)))" },
{ "id": 30, "original_id": 30, "goal": "(:goal (and (not real-compromise) (> worst-contributor-reimages 0)))", "metric": "(:metric minimize (0))" },
{ "id": 31, "original_id": 31, "goal": "(:goal (and (not real-compromise) (> worst-contributor-reimages 0)))", "metric": "(:metric minimize (/ nontrivial-blue-actions steps-to-survive))" },
{ "id": 32, "original_id": 32, "goal": "(:goal (and (not real-compromise) (> worst-contributor-reimages 0)))", "metric": "(:metric minimize (qos-penalty))" },
{ "id": 33, "original_id": 33, "goal": "(:goal (and (not real-compromise) (> worst-contributor-reimages 0)))", "metric": "(:metric minimize (+ (/ nontrivial-blue-actions steps-to-survive) (qos-penalty)))" },
{ "id": 34, "original_id": 34, "goal": "(:goal (and (not real-compromise) (> worst-contributor-relocations 0)))", "metric": "(:metric minimize (0))" },
{ "id": 35, "original_id": 35, "goal": "(:goal (and (not real-compromise) (> worst-contributor-relocations 0)))", "metric": "(:metric minimize (/ nontrivial-blue-actions steps-to-survive))" },
{ "id": 36, "original_id": 36, "goal": "(:goal (and (not real-compromise) (> worst-contributor-relocations 0)))", "metric": "(:metric minimize (qos-penalty))" },
Figure 15: Goal-metrics 25-36.

{ "id": 37, "original_id": 37, "goal": "(:goal (and (not real-compromise) (> worst-contributor-relocations 0)))", "metric": "(:metric minimize (+ (/ nontrivial-blue-actions steps-to-survive) (qos-penalty)))" },
{ "id": 38, "original_id": 38, "goal": "(:goal (and (not real-compromise) (> worst-contributor-honeys 0)))", "metric": "(:metric minimize (0))" },
{ "id": 39, "original_id": 39, "goal": "(:goal (and (not real-compromise) (> worst-contributor-honeys 0)))", "metric": "(:metric minimize (/ nontrivial-blue-actions steps-to-survive))" },
, { " i d " : 40 , " o r i g i n a l _ i d " : 40 , " g o a l " : " ( : g o a l ( and ( n o t r e a l −compromise ) ( > worst − c o n t r i b u t o r − honeys 0) ) ) " , " m e t r i c " : " ( : m e t r i c minimize ( qos − p e n a l t y ) ) " } , { " i d " : 41 , " o r i g i n a l _ i d " : 41 , " g o a l " : " ( : g o a l ( and ( n o t r e a l −compromise ) ( > worst − c o n t r i b u t o r − honeys 0) ) ) " , " m e t r i c " : " ( : m e t r i c minimize (+ ( / n o n t r i v i a l − blue − a c t i o n s s t e p s −to − s u r v i v e ) ( qos − p e n a l t y ) ) ) " } , { " i d " : 42 , " o r i g i n a l _ i d " : 42 , " g o a l " : " ( : g o a l ( and ( n o t r e a l −compromise ) ( red − d e c l a r e d − v i c t o r y ) ) ) " , " m e t r i c " : " ( : m e t r i c minimize ( 0 ) ) " } , { " i d " : 43 , " o r i g i n a l _ i d " : 43 , " g o a l " : " ( : g o a l ( and ( n o t r e a l −compromise ) ( red − d e c l a r e d − v i c t o r y ) ) ) " , " m e t r i c " : " ( : m e t r i c minimize (+ ( / n o n t r i v i a l − blue − a c t i o n s s t e p s −to − s u r v i v e ) ( qos − p e n a l t y ) ) ) " } Figure 16: Goal-metrics 37-43. 0.75 and a compromise rate of 10%, significantly outperforming the | https://arxiv.org/abs/2505.22531v1 |