text: string
source: string
strong baselines in task success rate, execution efficiency, and communication cost. Ablation studies confirm the significance of each component, particularly the central role of desire modeling in achieving user-aligned behavior. Qualitative analyses further highlight FAMER’s advantages in intent inference, planning p...
https://arxiv.org/abs/2505.22503v1
Girshick. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, pages 2961–2969, 2017. [15] Laura M Hiatt, Cody Narber, Esube Bekele, Sangeet S Khemlani, and J Gregory Trafton. Human modeling for human–robot collaboration. The International Journal of Robotics Research, 36(5-7):580–596, ...
https://arxiv.org/abs/2505.22503v1
Rabinowitz, Frank Perbet, Francis Song, Chiyuan Zhang, SM Ali Eslami, and Matthew Botvinick. Machine theory of mind. In International Conference on Machine Learning, pages 4218–4227. PMLR, 2018. [31] Muhammad A Rahman, Niklas Hopner, Filippos Christianos, and Stefano V Albrecht. Towards open ad hoc teamwork using g...
https://arxiv.org/abs/2505.22503v1
fulfill those desires. To achieve this, we integrate a proxy human user within the environment, which serves two main functions: 1. The human user determines the specific goal set from a set of potential goals, based on a vague task description and a sampled set of value attributes. 2. The human user responds to the ag...
https://arxiv.org/abs/2505.22503v1
The larger the number is, the less willing I am to talk to Alice. If Alice proposes a goal or action that is incorrect, I can point out the mistake. If the dialogue progresses but the task is not progressing, I may be more inclined to correct her by hinting at one of the goals, but I will never reveal the enti...
https://arxiv.org/abs/2505.22503v1
of which takes one of three discrete levels: Not, Somewhat, or Very. The human user randomly samples values for these dimensions and then uses a language model to generate the corresponding goal set. The Snack-M level represents the medium difficulty, with 2 goals and a maximum of 200 steps. The Snack-L level represent...
https://arxiv.org/abs/2505.22503v1
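The goal-set sampling described above is easy to prototype. Below is a minimal sketch, assuming hypothetical value-dimension names and a placeholder prompt template (the excerpt does not list the actual attributes or the exact prompt used):

import random

# Hypothetical value dimensions; the paper's exact attribute names are not
# shown in this excerpt, so these are placeholders for illustration.
VALUE_DIMENSIONS = ["health_conscious", "budget_conscious", "variety_seeking"]
LEVELS = ["Not", "Somewhat", "Very"]

def sample_value_attributes(rng: random.Random) -> dict:
    """Sample one of the three discrete levels for each value dimension."""
    return {dim: rng.choice(LEVELS) for dim in VALUE_DIMENSIONS}

def build_goal_generation_prompt(task_description: str, attributes: dict) -> str:
    """Compose a prompt for a language model that turns the vague task
    description plus the sampled attributes into a concrete goal set."""
    attr_text = ", ".join(f"{k}: {v}" for k, v in attributes.items())
    return (f"Task: {task_description}\n"
            f"User value attributes: {attr_text}\n"
            "Generate the specific goal set consistent with these values.")

rng = random.Random(0)
print(build_goal_generation_prompt("Prepare some snacks.", sample_value_attributes(rng)))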
and previous actions, please help me generate a short message to send to my owner $OPPO_NAME$ to help us achieve the underlying goal as soon as possible. Note that I can hold two objects at a time and there are no costs for holding objects. All objects are denoted as <name> (id), such as <table> (712). Potential...
https://arxiv.org/abs/2505.22503v1
must infer 4 goals from a set of ten potential options: cupcake, wine, milk, cereal, chips, apple, juice, pudding, creamybuns, and chocolatesyrup. For CoELA, the ground truth goal set is: creamybuns, milk, juice, chips. As shown in Figure 7, the CoELA agent makes several mistakes during the task. Initially, the agent i...
https://arxiv.org/abs/2505.22503v1
arXiv:2505.22513v1 [cs.GT] 28 May 2025 Strengthening Proportionality in Temporal Voting Bradley Phillips1, Edith Elkind2, Nicholas Teh1 and Tomasz Wąs1 1University of Oxford, UK 2Northwestern University, USA Abstract We study proportional representation in the framework of temporal voting with approval ballots. Prior ...
https://arxiv.org/abs/2505.22513v1
[2021], Chandak et al. [2024], and Elkind et al. [2025c] considered temporal voting with approval ballots, where the goal is to select one candidate per round. They adapted some of the more established justified representation axioms (namely, JR, PJR and EJR) to this setting, formulated temporal variants of popular m...
https://arxiv.org/abs/2505.22513v1
extended two notions of proportional representation—JR and PJR—to the temporal setting. They showed that a JR outcome can be computed in polynomial time, even for the most demanding version of JR among those they considered. For PJR, they proved the existence of an outcome satisfying the axiom, but their constructive p...
https://arxiv.org/abs/2505.22513v1
voting with static voter preferences. In fair scheduling, each agent's preference is a permutation, and so is the outcome [Elkind et al., 2022; Patro et al., 2022]. Our model differs from this setting in that we allow each project to be chosen more than once (both in agents' preferences and the outcome). Fair public d...
https://arxiv.org/abs/2505.22513v1
a certain level of satisfaction to each group of voters S that agree on t ≥ 1 candidates (in the sense that |⋂_{i∈S} a_i| ≥ t). In particular, justified representation (JR_mw) requires the satisfaction to be at least 1 if k·|S|/n ≥ 1, proportional justified representation (PJR_mw) strengthens this guarantee to min(t, ⌊k·|S|/n⌋), whil...
https://arxiv.org/abs/2505.22513v1
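The multiwinner guarantees quoted above are simple arithmetic in k, n, |S|, and t. A small worked sketch (function names are ours, for illustration only):

from math import floor

def jr_mw_guarantee(k, group_size, n):
    # JR_mw: the group is owed satisfaction at least 1 whenever k*|S|/n >= 1.
    return 1 if k * group_size / n >= 1 else 0

def pjr_mw_guarantee(k, group_size, n, t):
    # PJR_mw: strengthens the guarantee to min(t, floor(k*|S|/n)).
    return min(t, floor(k * group_size / n))

# Example: committee size k = 5, n = 100 voters, a group of 40 voters agreeing on t = 3 candidates.
print(jr_mw_guarantee(5, 40, 100))      # 1
print(pjr_mw_guarantee(5, 40, 100, 3))  # min(3, floor(2.0)) = 2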
gives us the following definition. Definition 3.1. Given a temporal election E = (C, N, ℓ, A), an outcome o provides strong extended justified representation+ (sEJR+) if for every subset of voters S ⊆ N and every round r ∈ [ℓ] with ⋂_{i∈S} a_{i,r} ≠ ∅ it holds that (i) sat_i(o) ≥ ⌊ℓ·|S|/n⌋ for some i ∈ S, or (ii) o_r ∈ ⋂_{i∈S} a_{i,r}. Note that t...
https://arxiv.org/abs/2505.22513v1
most one voter. Therefore, no outcome can provide sJR. The reason why sJR (and hence sEJR+) is hard to satisfy is that it can give strong guarantees to groups of voters based on agreement in just a few rounds. These rounds may be heavily contested, and selecting a candidate to satisfy one group might make it impossible...
https://arxiv.org/abs/2505.22513v1
subset of voters is itself (σ, τ)-cohesive. We go over all possibilities for r ∈ [ℓ] and c ∈ C \ {o_r}. For a given pair (r, c), we go over all λ ∈ [ℓ] and identify the set of all voters i ∈ N who (i) approve c in round r and (ii) have sat_i(o) < λ; denote this set by S_{r,c,λ}. If all voters in S_{r,c,λ} approve o_r in round r, we disregard thi...
https://arxiv.org/abs/2505.22513v1
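The verification loop sketched above admits a direct polynomial implementation. The sketch below is a simplified reading of it: for each round r, candidate c ≠ o_r, and threshold λ, it builds the maximal group S_{r,c,λ} and checks whether that group is large enough to deserve satisfaction λ under a ⌊ℓ·|S|/n⌋-style threshold. The paper's full procedure may include additional cohesiveness bookkeeping not shown in this excerpt:

from math import floor

def ejr_plus_violation_witness(approvals, outcome):
    """Search for a witness of an EJR+-style violation (simplified sketch).
    approvals[i][r] is the set of candidates voter i approves in round r;
    outcome[r] is the candidate selected in round r."""
    n = len(approvals)
    ell = len(outcome)
    sat = [sum(outcome[r] in approvals[i][r] for r in range(ell)) for i in range(n)]
    candidates = {c for i in range(n) for r in range(ell) for c in approvals[i][r]}

    for r in range(ell):
        for c in candidates - {outcome[r]}:
            for lam in range(1, ell + 1):
                # Maximal group of voters who approve c in round r and are still below lambda.
                group = [i for i in range(n) if c in approvals[i][r] and sat[i] < lam]
                if not group:
                    continue
                # If everyone in the group already approves o_r in round r,
                # condition (ii) of the axiom holds for every subgroup, so disregard.
                if all(outcome[r] in approvals[i][r] for i in group):
                    continue
                if floor(ell * len(group) / n) >= lam:
                    return r, c, group  # potential violation witness
    return None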
let o be an output of ε-lsPAV on E. For a contradiction, assume that o does not provide EJR+. Then there exist σ ∈ [n], τ ∈ [ℓ], a (σ, τ)-cohesive subset of voters S ⊆ N, a round r ∈ [ℓ], and a candidate c ∈ C that is approved by all voters in S in round r such that c ≠ o_r, and for every i ∈ S it holds that sat_i(o) < ⌊τ·σ/n⌋. If σ = |S|, thi...
https://arxiv.org/abs/2505.22513v1
axiomatic hierarchy. Proposition 3.11. It holds that: (i) EJR+ ⟹ wEJR+ and (ii) wEJR+ ⟹ wEJR. 4 Full Justified Representation The next axiom that we would like to adapt to the temporal setting is full justified representation (FJR_mw) [Peters et al., 2021]. Similarly to EJR_mw, it aims to select a size-k committee W that ...
https://arxiv.org/abs/2505.22513v1
o′_R) in all subsets of rounds of size ⌊ℓ·|S|/n⌋ is similar in spirit, but weaker than requiring total agreement in all rounds. Indeed, in a moment we will show that wFJR implies wEJR. Finally, for an FJR axiom in the spirit of EJR/PJR/JR from Definition 2.2, we would like to move away from considering all (subsets of) ro...
https://arxiv.org/abs/2505.22513v1
as to satisfy the ‘demand’ of voters in S_q. Theorem 4.5. For every temporal election E = (C, N, ℓ, A), every output of GCR provides FJR. Proof. By construction, the sets S_1, S_2, . . . , S_p are pairwise disjoint: for each q ∈ [p], when we select S_q, we remove voters in S_q from V, and therefore no set S_i with i > q can contai...
https://arxiv.org/abs/2505.22513v1
is a lower bound on max_{i∈S} sat_i(o); this approach is also used in the definition of EJR. In contrast, the guarantee provided by PJR is of the form ‘collectively, voters in S have high satisfaction’, i.e., it is a lower bound on sat_S(o). One can combine the FJR approach to deciding what each group deserves with a PJR-sty...
https://arxiv.org/abs/2505.22513v1
the classic concept of core stability , which originates in cooperative game theory. Core stability for multiwinner voting was defined by Aziz et al. [2017], and it remains open if all multiwinner elections admit outcomes in the core. This concept captures resistance to deviations: a subset of voters can deviate by sel...
https://arxiv.org/abs/2505.22513v1
, pages 25–35, 2023. Shiri Alouf-Heffetz, Laurent Bulteau, Edith Elkind, Nimrod Talmon, and Nicholas Teh. Better collective decisions via uncertainty reduction. In Proceedings of the 31st International Joint Conference on Artificial Intelligence (IJCAI) , pages 24–30, 2022. Haris Aziz and Barton E Lee. The expanding ap...
https://arxiv.org/abs/2505.22513v1
and Nisarg Shah. Fair public decision making. In Proceedings of the 18th ACM Conference on Economics and Computation (EC), pages 629–646, 2017. Théo Delemazure, Tom Demeulemeester, Manuel Eberl, Jonas Israel, and Patrick Lederer. Strategyproofness and proportionality in party-approval multiwinner elections. In Proce...
https://arxiv.org/abs/2505.22513v1
and speaker satisfaction. In Proceedings of the 2022 ACM Web Conference (WWW), pages 2646–2656, 2022. Dominik Peters. Proportional representation for artificial intelligence. In Proceedings of the 27th European Conference on Artificial Intelligence (ECAI), pages 27–31, 2024. Dominik Peters and Piotr Skowron. Propor...
https://arxiv.org/abs/2505.22513v1
also Y ∈ F. In particular, this means that necessarily ∅ ∈ F. Let F̄ be a subset of maximal feasible subsets, i.e., subsets X ∈ F for which there is no Y ∈ F such that X ⊊ Y. Then, a voting rule is a function f that for every instance I = (C, N, F, A) returns a non-empty subset of maximal feasible subsets of candidates f(I) ⊆ F̄. Fo...
https://arxiv.org/abs/2505.22513v1
proposal Y of size at least γ, universally approved in S, that can be combined with X in an outcome, i.e., X ∪ Y ∈ F. For the latter, the proportion between the size |X| and the number of voters outside of S (so all potential voters behind the counterproposal) must be larger than the ratio between γ and the number of voters i...
https://arxiv.org/abs/2505.22513v1
committee and n the total number of voters, which resembles the approach of EJR and Inequality (1). In Droop-PSC, we set η = ⌈(k+1)·|S|/n⌉ − 1, i.e., we take the largest integer strictly smaller than (k+1)·|S|/n, which matches Inequality (2). Following this distinction, we can define Droop-EJR for temporal voting as f...
https://arxiv.org/abs/2505.22513v1
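Since the two guarantees differ only in the threshold formula, a few lines suffice to compare them; the Hare-style threshold ⌊k·|S|/n⌋ is assumed from the EJR-style Inequality (1) referenced above:

from math import ceil, floor

def hare_threshold(k, group_size, n):
    # Hare-style guarantee (EJR / Inequality (1)): floor(k * |S| / n).
    return floor(k * group_size / n)

def droop_threshold(k, group_size, n):
    # Droop-style guarantee from the excerpt: the largest integer strictly
    # smaller than (k+1) * |S| / n, i.e. ceil((k+1) * |S| / n) - 1.
    return ceil((k + 1) * group_size / n) - 1

# Example: k = 10 rounds, n = 100 voters.
print(hare_threshold(10, 25, 100), droop_threshold(10, 25, 100))  # 2 2 (equal here)
print(hare_threshold(10, 28, 100), droop_threshold(10, 28, 100))  # 2 3 (Droop is stronger)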
voter that in some round approves all of the candidates. To this end, take an arbitrary temporal election E = (C, N, ℓ, A) such that a_{i,r} ≠ C for every i ∈ N and r ∈ [ℓ], and an arbitrary outcome o that does not provide BEJR. This means that there exists a subset of voters S that deserves a satisfaction guarantee γ, but for ev...
https://arxiv.org/abs/2505.22513v1
|R| = ⌈(|T|+1)·|S|/n⌉ − 1 and for every other outcome o′, there is a voter i ∈ S with sat_i(o) ≥ min_{j∈S} sat_j(o′_R). Equivalently, for every subset of voters S, there is i ∈ S such that sat_i(o) ≥ max_{T⊆[ℓ]} min_{R⊆T: |R|=⌈(|T|+1)·|S|/n⌉−1} max_{o′_R∈C^R} min_{j∈S} sat_j(o′_R). Note that Droop-FJR implies FJR as for every subset of voters S ⊆ N and ever...
https://arxiv.org/abs/2505.22513v1
S who agree in a size-t subset of rounds R. We need to show that the satisfaction of some voter i ∈ S is at least min(t, ⌊ℓ·|S|/n⌋). Suppose first that for each round r ∈ R the outcome o_r is approved by all voters in S. Then the satisfaction of each voter in S is at least |R| = t ≥ min(t, ⌊ℓ·|S|/n⌋). Otherwise, consider a round r ∈...
https://arxiv.org/abs/2505.22513v1
By definition of wEJR, there exists a subset of voters S such that all voters in S agree in each round r ∈ [ℓ], but for every i ∈ S we have sat_i(o) < ⌊ℓ·|S|/n⌋. As |S| ≤ n, this means that sat_i(o) < ℓ, which implies there is a round r ∈ [ℓ] such that some voter in S does not approve o_r, and hence o_r ∉ ⋂_{i∈S} a_{i,r}. Let σ = |S| and note that ...
https://arxiv.org/abs/2505.22513v1
voters S we have sat_i(o) ≥ max_{T⊆[ℓ]} µ_S(T). Thus, in particular, for T = [ℓ] we have sat_i(o) ≥ µ_S([ℓ]) = min_{R⊆[ℓ]: |R|=⌊ℓ·|S|/n⌋} max_{o′_R∈C^R} min_{j∈S} sat_j(o′_R). This means that o provides wFJR. For part iii), we will show that wFJR implies wEJR. Fix an election E = (C, N, ℓ, A) and an outcome o that provides wFJR. Suppose there exist...
https://arxiv.org/abs/2505.22513v1
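For tiny instances, the quantity µ_S(T) used in these arguments can be brute-forced straight from its definition. A sketch (exponential; illustration only), assuming the ⌊|T|·|S|/n⌋ round budget used in the excerpt:

from itertools import combinations, product
from math import floor

def mu_S(approvals, S, T, n):
    # Brute-force mu_S(T): the worst subset R of T with |R| = floor(|T|*|S|/n),
    # of the best suboutcome on R, of the worst-off voter in S.
    # approvals[i][r] is the set of candidates voter i approves in round r.
    size = floor(len(T) * len(S) / n)
    if size == 0:
        return 0
    candidates = sorted({c for i in S for r in T for c in approvals[i][r]})
    if not candidates:
        return 0
    worst_over_R = None
    for R in combinations(sorted(T), size):
        best = 0  # best min-satisfaction of S achievable by any suboutcome on R
        for o_R in product(candidates, repeat=size):
            worst_voter = min(sum(c in approvals[j][r] for r, c in zip(R, o_R)) for j in S)
            best = max(best, worst_voter)
        worst_over_R = best if worst_over_R is None else min(worst_over_R, best)
    return worst_over_R

# Tiny example: voters 0 and 1 agree in rounds 0 and 1 but split in round 2.
approvals = {0: [{'a'}, {'a'}, {'a'}], 1: [{'a'}, {'a'}, {'b'}], 2: [{'b'}, {'b'}, {'b'}]}
print(mu_S(approvals, S={0, 1}, T={0, 1, 2}, n=3))  # 1: any 2-round budget containing round 2 leaves one of them unsatisfied there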
to satisfy sEJR also in the case when o_1 ≠ o_2, which concludes the proof. Proposition 5.2. It holds that: (i) sFJR ⟹ sFPJR ⟹ sPJR and FPJR, (ii) FJR ⟹ FPJR ⟹ PJR and wFPJR, and (iii) wFJR ⟹ wFPJR ⟹ wPJR. Proof. For part i), we first show that sFJR implies sFPJR. Take an arbitrary election E = (C, N, ℓ, A) and an outcom...
https://arxiv.org/abs/2505.22513v1
⌊t·|S|/n⌋. Since o provides FPJR, we have that sat_S(o) ≥ max_{T′⊆[ℓ]} µ_S(T′) ≥ µ_S(T) ≥ ⌊t·|S|/n⌋. Hence, o provides PJR. We will now argue that FPJR implies wFPJR. Consider an outcome o that provides FPJR for a temporal election E. Then for every subset of voters S we have sat_S(o) ≥ max_{T⊆[ℓ]} µ_S(T). Thus, in particular, for T = [ℓ], ...
https://arxiv.org/abs/2505.22513v1
stable, it follows that sat_i(o) < sat_i(o′_{R′}) and so o is not strongly core stable either. As our original assumption was that o violated core stability, it follows by contraposition that o satisfying sCore implies that o satisfies Core. For part ii), consider an arbitrary election E = (C, N, ℓ, A) and an outcome o satisfyi...
https://arxiv.org/abs/2505.22513v1
a specific outcome o witnesses the desired result, where the underlined candidate in row r of the corresponding table indicates the candidate that was selected in round r of o. Claim i: wCore and wEJR+ do not imply JR even for E_{=1}. Let E_1 = (C, N, ℓ, A) ∈ E_{=1} be a temporal election with 4 candidates C = {a, b, c, d}, 6 vote...
https://arxiv.org/abs/2505.22513v1
candidate in rounds 3 or 4, it just remains to consider the three (1, ℓ)-cohesive singleton subsets of {4, 5, 6}. Let S′ be one such singleton subset. By noting that ⌊ℓ·σ/n⌋ = ⌊4·1/6⌋ = 0, it vacuously holds that sat_i(o_1) ≥ ⌊ℓ·σ/n⌋ for the singleton voter i ∈ S′, hence the first condition of wEJR+ is satisfied. We therefore con...
https://arxiv.org/abs/2505.22513v1
choose the worst ⌊|T|·|S|/n⌋ rounds from T to R, in order to have round 1 in R we would need to have |S|/n = 1, i.e., S = N. But if S = N, then o′_R would have to give a positive satisfaction to 12 voters. This is not possible as every suboutcome can give a summed satisfaction to all voters of at most 9 (satisfaction of 4 can be gi...
https://arxiv.org/abs/2505.22513v1
other hand, we can prove that wFPJR is violated. Consider a subset of agents S = {2, 3, 4, 5, 6, 7}. To determine the satisfaction guarantee for this subset according to wFPJR we take the worst possible subset of rounds R of size |R| = ⌊ℓ·|S|/n⌋ = ⌊7·6/7⌋ = 6 for which we fix a suboutcome o′_R so to Candidate Rounds x y z 1 selec...
https://arxiv.org/abs/2505.22513v1
other hand, to show that o_5 fails wCore, consider an arbitrary subset of 3 rounds R and a suboutcome o′_R on these rounds in which we select candidate a each time. Observe that sat_i(o′_R) = 3 ≥ 2 = sat_i(o_5) for every i ∈ {1, 2, 3, 4}. Moreover, sat_i(o′_R) ≥ 2 ≥ 1 = sat_i(o_5) for i ∈ {5, 6}. Thus, the subset of voters S = {1, 2, 3, 4, 5, 6} is a witn...
https://arxiv.org/abs/2505.22513v1
to sat_1(o′_R) + sat_2(o′_R) + sat_3(o′_R) = 3 + 3·t_a. Since the satisfaction of each voter must be at least 4, we get that t_a ≥ 3. On the other hand, the total satisfaction of the remaining voters is bounded by sat_4(o′_R) + sat_5(o′_R) ≤ 2 + t_a + 2·t_b = |R| + 1 + t_b = 6 + t_b. In order to give a satisfaction guarantee of at least 4 t...
https://arxiv.org/abs/2505.22513v1
of Proposition F.1, Claim vii. Consider outcome o_7 = (b, b, b). First, let us show that o_7 provides sFPJR. We notice that each voter has a satisfaction of 1 from o_7 and voter i ∈ N approves the winning candidate in round i. Therefore, for any subset of voters S we find that sat_S(o_7) = |S|. Now, let o′ be an outcome and R ⊆ [ℓ] be...
https://arxiv.org/abs/2505.22513v1
even for E_{=1}. Let E_9 = (C, N, ℓ, A) ∈ E_{=1} be a temporal election with 2 candidates C = {a, b}, 8 voters N = [8], ℓ = 8 rounds, and the ballots of voters as presented in Table 10. Consider outcome o_9 = (b, b, b, b, b, b, b, b). First, let us show that o_9 provides sFPJR. Given that voters 5, 6, 7 and 8 approve the winner in every...
https://arxiv.org/abs/2505.22513v1
c, d} {b, c, d, e} {b, c, d, e} {b, c, d, e}; 6: {a, b, c} {a, b, d} {a, c, d} {b, c, d, e} {b, c, d, e} {b, c, d, e}. Table 11: The ballots of voters in the proof of Proposition F.1, Claim x. Consider outcome o_0 = (b, c, d, e, e, e). First, let us show that o_0 provides sFPJR. Given that voters 4, 5 and 6 approve the ...
https://arxiv.org/abs/2505.22513v1
Evaluating Supervised Learning Models for Fraud Detection: A Comparative Study of Classical and Deep Architectures on Imbalanced Transaction Data Chao Wang 1 Department of Computer Science Rice University Houston, United States cw104@rice.edu Chuanhao Nie1 College of Computing Georgia Institute of Technology Atlanta, U...
https://arxiv.org/abs/2505.22521v1
critical in high-stakes domains like fraud prevention. This study offers a comprehensive comparison of four supervised learning models (Logistic Regression, Random Forest, LightGBM, and GRU) applied to a large-scale, highly imbalanced online transaction dataset. Our emphasis is not only on overall model accuracy but a...
https://arxiv.org/abs/2505.22521v1
standardized at 74 columns. For preprocessing, categorical features such as ProductCD, card1-card6, DeviceType, and email domains were label-encoded. Timestamp variables were converted into interpretable features such as hour of day, weekday, and month. Missing values were imputed using a constant-fill strategy...
https://arxiv.org/abs/2505.22521v1
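A sketch of the preprocessing steps described above. Column names such as TransactionDT, P_emaildomain, and R_emaildomain, as well as the fill constant, are assumptions in the style of a transaction-fraud schema; the excerpt only states that categoricals were label-encoded, timestamps were expanded into hour/weekday/month, and missing values were constant-filled:

import pandas as pd
from sklearn.preprocessing import LabelEncoder

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    """Label-encode categoricals, derive time features, constant-fill missing values."""
    df = df.copy()

    # Label-encode categorical columns (names are assumed, adjust to the actual dataset).
    categorical = ["ProductCD", "card1", "card2", "card3", "card4", "card5",
                   "card6", "DeviceType", "P_emaildomain", "R_emaildomain"]
    for col in categorical:
        if col in df.columns:
            df[col] = LabelEncoder().fit_transform(df[col].astype(str))

    # Convert the raw timestamp offset into interpretable time features.
    if "TransactionDT" in df.columns:
        ts = pd.to_datetime(df["TransactionDT"], unit="s", origin="2017-12-01")
        df["hour"] = ts.dt.hour
        df["weekday"] = ts.dt.weekday
        df["month"] = ts.dt.month

    # Constant-fill imputation for the remaining missing values (fill value assumed).
    return df.fillna(-999)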
is a high-efficiency gradient boosting framework designed for scalable learning on large, sparse datasets. Unlike Random Forest, which builds trees in parallel, LightGBM grows trees sequentially, where each new tree is trained to correct the residuals of the ensemble thus far. It employs a leaf-wise tree growth strat...
https://arxiv.org/abs/2505.22521v1
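A minimal training sketch in the spirit of the description above, run on synthetic imbalanced data; the hyperparameter values are illustrative and not the tuned settings from the study:

import lightgbm as lgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the preprocessed transaction features (the study uses
# a large, highly imbalanced real dataset).
X, y = make_classification(n_samples=20000, n_features=30, weights=[0.97, 0.03],
                           random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = lgb.LGBMClassifier(
    n_estimators=500,            # sequential boosting rounds, each correcting prior residuals
    learning_rate=0.05,
    num_leaves=63,               # leaf-wise growth: always split the leaf with the largest gain
    scale_pos_weight=(y_train == 0).sum() / (y_train == 1).sum(),  # up-weight the rare fraud class
    random_state=42,
)
model.fit(X_train, y_train, eval_set=[(X_val, y_val)], eval_metric="auc",
          callbacks=[lgb.early_stopping(stopping_rounds=50)])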
capturing performance across varying decision thresholds. This evaluation framework was applied consistently across both traditional and deep learning models, ensuring comparability under imbalanced classification conditions. III. RESULTS A. Hyperparameter Selection For each supervised learning model, we conducted syst...
https://arxiv.org/abs/2505.22521v1
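Continuing the synthetic sketch above, the threshold-aware evaluation can be reproduced with standard scikit-learn metrics; PR-AUC and per-class reports expose minority-class behavior that weighted aggregates can hide:

from sklearn.metrics import (average_precision_score, classification_report,
                             precision_recall_curve, roc_auc_score)

scores = model.predict_proba(X_val)[:, 1]                    # predicted fraud probabilities

print("ROC-AUC:", roc_auc_score(y_val, scores))
print("PR-AUC :", average_precision_score(y_val, scores))    # threshold-free minority-class view
print(classification_report(y_val, (scores >= 0.5).astype(int), digits=3))  # per-class precision/recall/F1

# Full precision-recall trade-off across decision thresholds.
precision, recall, thresholds = precision_recall_curve(y_val, scores)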
hyperparameter tuning budget, GRU demonstrated solid generalization across classes and competitive overall performance. All models showed high weighted precision and recall, largely due to their accuracy on the majority class (non-fraud). However, these aggregate metrics can mask performance disparities on the minorit...
https://arxiv.org/abs/2505.22521v1
fraud detection systems: • Logistic Regression remains a viable option in regulated environments or audit-sensitive pipelines where model transparency is critical. However, it is best suited for environments with well-engineered features and relatively balanced data. • Random Forest and LightGBM demonstrated strong o...
https://arxiv.org/abs/2505.22521v1
with US Treasuries," Journal of Computer Technology and Applied Mathematics, vol. 1, no. 3, pp. 1–10, 2024. [7] D. W. Hosmer and S. Lemeshow, Applied Logistic Regression, 2nd ed., New York, NY: John Wiley & Sons, Inc., 2000. [8] L. Breiman, “Random forests,” Machine Learning, vol. 45, no. 1, pp. 5–32, 2001. [9] J. ...
https://arxiv.org/abs/2505.22521v1
arXiv:2505.22525v1 [cs.CV] 28 May 2025 Thinking with Generated Images Ethan Chern1,4*† Zhulin Hu1,4* Steffi Chern4* Siqi Kou1 Jiadi Su3,4 Yan Ma3,4 Zhijie Deng1 Pengfei Liu1,2,4‡ 1Shanghai Jiao Tong University 2SII 3Fudan University 4Generative AI Research Lab (GAIR) Abstract We present Thinking with Genera...
https://arxiv.org/abs/2505.22525v1
. Biochemists explore protein structures to discover new treatment approaches; forensic analysts verify crime scene reconstructions to establish evidence connections; architects revise spaces and light patterns to optimize building designs. Visual thinking creates unique combinations and novel connections between conce...
https://arxiv.org/abs/2505.22525v1
generation tasks. To effectively realize Thinking with Generated Images with an end-to-end, single-model approach and demonstrate its strong potential on vision generation tasks, we introduce the native long-multimodal thought process on unified autoregressive LMMs (Team, 2024a; Chern et al., 2024). The native long-mul...
https://arxiv.org/abs/2505.22525v1
2023; Hu et al., 2024b; OpenAI, 2025). These approaches let the model revisit the (user-given) images multiple times or generate transformed versions (e.g., cropped, rotated, zoomed) as intermediate steps toward a solution. Thinking with images—rather than merely seeing them once—enables models to better tackle multi -...
https://arxiv.org/abs/2505.22525v1
enables models to “think” across modalities by generating modality-specific tokens with “natively” embedded multimodal generation capabilities via a single forward pass; (2) performs diverse multimodal tasks natively via the generative paradigm; (3) provides natural test-time scaling across modalities via the generated...
https://arxiv.org/abs/2505.22525v1
LMMs’ capabilities to spontaneously (1) critique their own generated visual steps and (2) generate intermediate visual subgoals. For more details, please refer to Section 3. 3 Experiments To instantiate Thinking with Generated Images on vision generation tasks, we meticulously design two types of native long multimodal...
https://arxiv.org/abs/2505.22525v1
al., 2024) (for vision generation with self-critique) to generate initial visual hypotheses from the prompts. We then refine or update these with Flux1-Redux (Labs, 2024) (a variant of Flux1-dev that accepts both image and text inputs). 3.4 Training Figure 6: Our data collection pipeline for Thinking with Generated I...
https://arxiv.org/abs/2505.22525v1
(weighted at 1) to enhance the visual quality of the generated images. Further details on the auxiliary loss design and the effectiveness of this approach can be found in the Appendix. Training Stages The training is conducted in two stages. In the first stage, we performed continued training on Anole-7b with the Journ...
https://arxiv.org/abs/2505.22525v1
Performance on GenEval. 3.7 Analysis Generating intermediate visual thoughts is beneficial for vision generation We show in Tab. 1 and Tab. 2 that TwGI-Anole-7b-Obj. consistently outperforms the baseline Anole-7b model across both GenEval and DPGBench. On GenEval, Anole-7b-Obj. achieves a significant boost in the “Two ...
https://arxiv.org/abs/2505.22525v1
the exploration on different architectures to future work. 5.2 Future Directions Better Benchmarking on Thinking with Generated Images Current vision generation benchmarks for unified LMMs focus on standard image generation tasks. In this paper, we also use standard image generation benchmarks (Ghosh et al., 2023; Hu e...
https://arxiv.org/abs/2505.22525v1
Minghua Zhang, Minghui Tang, Mingming Li, Ning Tian, Panpan Huang, Peiyi Wang, Peng Zhang, Qiancheng Wang, Qihao Zhu, Qinyu Chen, Qiushi Du, R. J. Chen, R. L. Jin, Ruiqi Ge, Ruisong Zhang, Ruizhe Pan, Runji Wang, Runxin Xu, Ruoyu Zhang, Ruyi Chen, S. S. Li, Shanghao Lu, Shangyan Zhou, Shanhuang Chen, Shaoqing Wu, Sheng...
https://arxiv.org/abs/2505.22525v1
and Pattern Recognition, pages 13991–14000. [22] OpenAI. 2025. Thinking with images. [23] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern reco...
https://arxiv.org/abs/2505.22525v1
Multimodal chain-of-thought reasoning in language models. arXiv preprint arXiv:2302.00923 . [38] Chunting Zhou, Lili Yu, Arun Babu, Kushal Tirumala, Michihiro Yasunaga, Leonid Shamis, Jacob Kahn, Xuezhe Ma, Luke Zettlemoyer, and Omer Levy. 2024. Transfusion: Predict the next token and diffuse images with one multi-moda...
https://arxiv.org/abs/2505.22525v1
values of λ and compared against the token discrepancy loss L_td proposed by (Li et al., 2025). Note that we did not apply classifier-free guidance to ensure a fair and simple comparison. Results in Tab. 3 demonstrate that λ = 1 yields optimal performance, improving GenEval scores by approximately 3 points. Neither L_td alone n...
https://arxiv.org/abs/2505.22525v1
model first creates each object separately, producing a detailed pizza with visible toppings, and then a wooden bench. The final image successfully integrates both elements, demonstrating the model’s ability to handle multiple distinct objects in a single composition. Generating intermediate visual subgoals allows the ...
https://arxiv.org/abs/2505.22525v1
a disconnect between the model’s analytical capabilities and generative execution—while the self-critique mechanism can accurately identify problems and propose solutions, the model sometimes fails to implement its own corrections in subsequent generations. The model can reason about what’s wrong but cannot translate t...
https://arxiv.org/abs/2505.22525v1
arXiv:2505.22531v1 [cs.LG] 28 May 2025 TRAINING RL AGENTS FOR MULTI-OBJECTIVE NETWORK DEFENSE TASKS A PREPRINT Andres Molina-Markham1, Luis Robaina1, Sean Steinle1, Akash Trivedi1, Derek Tsui1, Nicholas Potteiger1, Lauren Brandt1, Ransom Winder1, Ahmed Ridley2 1The MITRE Corporation 2NSA January 31, 2025 ABSTRACT Open...
https://arxiv.org/abs/2505.22531v1
multiple iterations, with each evaluation round informing the next. that are unknown at design time [34]. This kind of learning paradigm is well aligned to the requirements for training agents to defend networks. Cyberdefenders need to reconfigure networks, balancing multiple goals and subtasks that must adapt to serv...
https://arxiv.org/abs/2505.22531v1
for learning network defense tasks than approaches that only train by interacting with a single instance of an adversary. Our work does not fully answer the general question of what distribution of tasks will produce the best agent. However, we provide insights about aspects of tasks and curriculums that are relevant f...
https://arxiv.org/abs/2505.22531v1
ended learning [41, 33, 43, 6, 27]. We share the general goal of achieving robust and generalizable learning. However, our work has an emphasis on adversarial settings. In adversarial settings, an adversary controls aspects of a task definition (i.e., by controlling aspects of the environment), and therefore we study the ...
https://arxiv.org/abs/2505.22531v1
although this emulation system was designed around intrusion response specifically. Unlike our work, which includes an abstraction that allows transition from simulation to emulation, many approaches remain within a simulation environment. Feng and Xu [15] couched cyber state dynamics as a two-player zero-sum mathematica...
https://arxiv.org/abs/2505.22531v1
This section describes our approach to define network defense tasks and universes of them. In addition, we discuss our strategy to present new tasks to a learning agent to achieve both progress and generalization. 3.1 Network Defense Tasks A network defense task in our framework must adequately capture the security and...
https://arxiv.org/abs/2505.22531v1
of authorized users in a network and the sets of tactics, techniques, and procedures that dictate the behavior of an adversary can be very broad. However, when developing an RL network defender, U = N × G should not be arbitrary. Instead, we argue that the definition of U and the selection of tasks in U, during training a...
https://arxiv.org/abs/2505.22531v1
the end of an episode; and, a metric that is evaluated at the end of the episode (when using a sparse reward) or every step (when using a dense reward). With these goal-metric pairs we defined a sparse reward function as follows, where x is the result of evaluating the numeric expression corresponding to the metric, an...
https://arxiv.org/abs/2505.22531v1
boolean goals and numeric metrics for each network defense task, forming the core of our reward structure. The goal must be satisfied by the end of an episode, while the metric is evaluated either at the episode’s conclusion (for sparse rewards) or at each step (for dense rewards). Figure 3 illustrates one example of a...
https://arxiv.org/abs/2505.22531v1
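The exact reward formula is cut off in this excerpt, so the following is only a schematic sketch of how a goal-metric pair could be turned into a sparse reward: zero during the episode, a flat failure signal if the boolean goal is not met at the end, and the to-be-minimized metric value entering as a penalty otherwise. The paper's actual definition may differ:

def sparse_reward(episode_done: bool, goal_satisfied: bool, metric_value: float) -> float:
    """Schematic sparse reward for a goal-metric pair (assumed shape, not the paper's formula)."""
    if not episode_done:
        return 0.0                # sparse: no signal until the episode ends
    if not goal_satisfied:
        return -1.0               # boolean goal missed: flat failure signal
    return 1.0 - metric_value     # goal met: smaller (minimized) metric value, higher reward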
filter out inapplicable actions. This reduces the action set without needing manually defined rules for every possible constraint. As a result, this approach offers a more scalable and flexible solution for action filtering, well-suited for managing complex and dynamic environments. By focusing on actions that can yiel...
https://arxiv.org/abs/2505.22531v1
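A generic sketch of the action-filtering idea: instead of hand-written rules per constraint, inapplicable actions are masked out of the policy distribution. Treating applicability as an environment-provided boolean mask is an assumption; the excerpt does not specify the mechanism:

import numpy as np

def masked_action_distribution(logits: np.ndarray, applicable: np.ndarray) -> np.ndarray:
    """Set logits of inapplicable actions to -inf and renormalize the rest.
    `applicable` is a boolean mask over the full action set."""
    masked = np.where(applicable, logits, -np.inf)
    exp = np.exp(masked - masked[applicable].max())   # numerically stable softmax over valid actions
    return exp / exp.sum()

# Example: 5 actions, actions 1 and 3 are currently inapplicable.
logits = np.array([0.2, 1.5, -0.3, 0.9, 0.1])
mask = np.array([True, False, True, False, True])
probs = masked_action_distribution(logits, mask)      # masked actions receive zero probability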
aimed at masking red behavior. 5. The number of time steps that the blue agent must survive without incident. 6. A goal-metric pair from a set of 43 pairs that range from simply detecting the presence of an attack, to using specific actions to mitigate the attack, to ultimately determining how to efficiently use any available...
https://arxiv.org/abs/2505.22531v1
gray traffic divided by two, then multiplied by gray agent diversity, with the division preventing plot flattening at larger y-axis values. Figure 5: Comparing Defender Generalization Ability: Dynamic vs. Fixed Task Training Approaches Specifically, we compare the performance of two policies obtained with two different...
https://arxiv.org/abs/2505.22531v1
random: A second policy is obtained by presenting diverse tasks (drawing from the same universe of tasks), but selecting their parameters uniformly at random. Smooth changes: A third policy is obtained by varying tasks by a small amount, but without being guided by difficulty. Concretely, training starts from a task chosen ...
https://arxiv.org/abs/2505.22531v1
this, we evaluate the performance of a policy trained with dynamic task selection on a task similar to the one we describe in 4.1. However, this time, the red agent gradually deviates from its original behavior. Figure 8 compares the performance of two policies. As we can see, the performance during training is similar...
https://arxiv.org/abs/2505.22531v1
needed to determine if a dense representation would be better when the cardinality of the set of goal-metric pairs is higher. Figure 9: Mean reward for configurations, each fixing one of 11 goal-metric pairs. Figure 10: Mean reward for sever...
https://arxiv.org/abs/2505.22531v1
previous blue action; the number of failed HTTP connections reported by hosts since the previous blue action; the number of failed SCP connections reported by hosts since the previous blue action; the number of failed SSH connections reported by hosts since the previous blue action; the number of successful HTTP connec...
https://arxiv.org/abs/2505.22531v1
Appendix, compares the performance for a universe with tasks that vary network dynamics but keep the goal-pair constant. 5 Discussion and Conclusion This paper studies the role of training network defenders with task universes, as opposed to training with a single task. We highlight scalability and generalizability as ...
https://arxiv.org/abs/2505.22531v1
Basori and Sharaf Jameel Malebary. 2020. Deep Reinforcement Learning for Adaptive Cyber Defense and Attacker’s Pattern Identification. In Advances in Cyber Security Analytics and Decision Systems . Springer, 15–26. [8]Elizabeth Bates, Vasilios Mavroudis, and Chris Hicks. 2023. Reward Shaping for Happier Autonomous Cybe...
https://arxiv.org/abs/2505.22531v1
Tomáš Pevný, and Viliam Lisý. 2023. NASimEmu: Network Attack Simulator & Emulator for Training Agents Generalizing to Novel Scenarios. (Aug. 2023). http://arxiv.org/abs/2305.17246 arXiv:2305.17246 [cs]. [23] Dmitry Kalashnikov, Jacob Varley, Yevgen Chebotar, Benjamin Swanson, Rico Jonschkowski, Chelsea Finn, Sergey Lev...
https://arxiv.org/abs/2505.22531v1
Bono, and Kate Farris. 2021. CyberBattleSim. (2021). https://github.com/microsoft/cyberbattlesim [41] Open Ended Learning Team, Adam Stooke, Anuj Mahajan, Catarina Barros, Charlie Deck, Jakob Bauer, Jakub Sygnowski, Maja Trebacz, Max Jaderberg, Michaël Mathieu, Nat McAleese, Nathalie Bradley-Schmieg, Nathaniel Wong, Ni...
https://arxiv.org/abs/2505.22531v1
to perform worse than goal sparse reward functions [?]. More importantly, as the number of tasks to master increases (i.e., as is the case in Open-ended Learning), it is simpler to implement sparse reward functions. In future work we explore the use of other types of less sparse rewards to accommodate interactions wit...
https://arxiv.org/abs/2505.22531v1
al (> worst-contributor-isolations 0))", "metric": "(:metric minimize (/ nontrivial-blue-actions steps-to-survive))"}, {"id": 4, "original_id": 4, "goal": "(:goal (> worst-contributor-isol...
https://arxiv.org/abs/2505.22531v1
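The goal and metric strings above are s-expressions over named episode statistics. Below is a minimal, illustrative evaluator covering only the operators that appear in these excerpts (>, /, + and the :goal/:metric wrappers); the framework's real parser, statistic names, and semantics may differ:

import re

def eval_sexpr(expr: str, stats: dict) -> float:
    tokens = re.findall(r"\(|\)|[^\s()]+", expr)

    def parse(pos):
        if tokens[pos] == "(":
            out, pos = [], pos + 1
            while tokens[pos] != ")":
                node, pos = parse(pos)
                out.append(node)
            return out, pos + 1
        return tokens[pos], pos + 1

    tree, _ = parse(0)

    def ev(node):
        if isinstance(node, str):
            try:
                return float(node)
            except ValueError:
                return float(stats[node])      # look up a named episode statistic
        op, *args = node
        if op == ":goal":
            return ev(args[0])                 # (:goal <expr>)
        if op == ":metric":
            return ev(args[1])                 # (:metric minimize <expr>): skip the direction tag
        vals = [ev(a) for a in args]
        if op == ">":
            return float(vals[0] > vals[1])
        if op == "/":
            return vals[0] / vals[1]
        if op == "+":
            return sum(vals)
        raise ValueError(f"unsupported operator: {op}")

    return ev(tree)

# Hypothetical episode statistics, just to exercise the two expression shapes above.
stats = {"worst-contributor-isolations": 2, "nontrivial-blue-actions": 7, "steps-to-survive": 100}
print(eval_sexpr('(:goal (> worst-contributor-isolations 0))', stats))                       # 1.0
print(eval_sexpr('(:metric minimize (/ nontrivial-blue-actions steps-to-survive))', stats))  # 0.07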
ctions steps-to-survive))"}, {"id": 8, "original_id": 8, "goal": "(:goal (> crown-jewel-isolations 0))", "metric": "(:metric minimize (qos-penalty))"}, {"id": 9, "original_id": 9, "goal": "(...
https://arxiv.org/abs/2505.22531v1
ributor-reimages 0))", "metric": "(:metric minimize (qos-penalty))"}, Figure 13: Goal-metrics 1-12. •worst-contributor-relocations: An integer that corresponds to the number of actions attempting to relocate a suspicious host to a different subnet •worst-contributor-honey...
https://arxiv.org/abs/2505.22531v1
original_id": 16, "goal": "(:goal (> worst-contributor-relocations 0))", "metric": "(:metric minimize (qos-penalty))"}, {"id": 17, "original_id": 17, "goal": "(:goal (> worst-contributor-re...
https://arxiv.org/abs/2505.22531v1
etric minimize (qos-penalty))"}, {"id": 21, "original_id": 21, "goal": "(:goal (> worst-contributor-honeys 0))", "metric": "(:metric minimize (+ (/ nontrivial-blue-actions steps-to-survive) (qos-pe...
https://arxiv.org/abs/2505.22531v1
)"}, Figure 14: Goal-metrics 13-24. 6.3 Curriculum Design Though the challenge of architecting a good curriculum is a research area in its own right, we have a few practical findings from our experience designing a curriculum. Tier-Based Structuring of Goals. Training independent policies for each goal proved to be ...
https://arxiv.org/abs/2505.22531v1
(> crown-jewel-isolations 0)))", "metric": "(:metric minimize (qos-penalty))"}, {"id": 29, "original_id": 29, "goal": "(:goal (and (not real-compromise) (> crown-jewel-isolations 0)))", "metric": "(:me...
https://arxiv.org/abs/2505.22531v1
mages 0)))", "metric": "(:metric minimize (qos-penalty))"}, {"id": 33, "original_id": 33, "goal": "(:goal (and (not real-compromise) (> worst-contributor-reimages 0)))", "metric": "(:metric minimize ...
https://arxiv.org/abs/2505.22531v1
relocations 0)))", "metric": "(:metric minimize (qos-penalty))"}, Figure 15: Goal-metrics 25-36. •Initialization: Goal 1 •Basic Skills: Goals 2–21 •Applied Skills: Goals 22–41 •Advanced Goals: Goals 42–43 In this context, skills refer to goals that focus the agent on using sp...
https://arxiv.org/abs/2505.22531v1
, {"id": 40, "original_id": 40, "goal": "(:goal (and (not real-compromise) (> worst-contributor-honeys 0)))", "metric": "(:metric minimize (qos-penalty))"}, {"id": 41, "original_id": 41, "goal": "(:go...
https://arxiv.org/abs/2505.22531v1