Rule” respectively depict the agent’s expectations of the action and the environment’s observation rules, together representing a misalignment. The “Sufficient Observation” represents the observation the environment should provide to resolve the misalignment. To analyze and identify these misalignments, we designed...
https://arxiv.org/abs/2505.21055v1
conduct experiments on four representative benchmarks across three domains: embodied tasks, web navigation and tool-use. Among them, (1) ALFWorld [40] focuses on embodied AI agents performing household tasks through textual interactions in simulated environments; (2) ScienceWorld [45] evaluates the abilities to ...
agent performance. The observed performance enhancements provide empirical evidence that numerous errors in baseline configurations originate from implicit constraints or under-specified observation, rather than from intrinsic reasoning deficiencies. This finding suggests that when these environmental constraints are...
substantial reduction in consecutive invalid action frequency across all agent methods when utilizing ALIGN-generated interfaces. Specifically, we observe a mean reduction of 65% in ALFWorld and 49% in ScienceWorld. These findings provide robust evidence that ALIGN effectively renders latent constraints explicit, ther...
larger declines, showing the critical role of fine-grained, enriched observation during interaction. This also suggests that future environment designers should prioritize rich, LLM-friendly observation when constructing environments. Table 5: Task accuracy (%) on ALFWorld across turns without experimental verifi...
Linguistics, IJCNLP 2023 - Volume 1: Long Papers, Nusa Dua, Bali, November 1-4, 2023, pages 675–718. Association for Computational Linguistics, 2023. doi: 10.18653/V1/2023.IJCNLP-MAIN.45. URL https://doi.org/10.18653/v1/2023.ijcnlp-main.45. [4] C. Bonnet, D. Luo, D. Byrne, S. Surana, S. Abramowitz, P. Duckworth...
multimodal language model. In A. Krause, E. Brunskill, K. Cho, B. Engelhardt, S. Sabato, and J. Scarlett, editors, International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pages 8469–8488. PMLR, 2023. URL https://procee...
issues? In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024. URL https://openreview.net/forum?id=VTF8yNQM66. [22] E. Kolve, R. Mottaghi, D. Gordon, Y. Zhu, A. Gupta, and A. Farhadi. AI2-THOR: an interactive 3d environment for visua...
Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10-16, 2023, 2023. URL http://papers.nips.cc/paper_files/paper/2023/hash/91edff07232fb1b55a505a9e9f6c0ff3-Abstract-Co [...]
cessing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10-16, 2023, 2023. URL http://papers.nips.cc/paper_files/paper/2023/hash/1b44b878bb782e6954cd888628510e90-Abstract-Co [40] M. Shridhar, X. Yuan, M. Côté, Y. Bisk, A. Trischler, and M. ...
via multi-turn reinforcement learning, 2025. URL https://arxiv.org/abs/2504.20073. [50] J. Wei, Z. Sun, S. Papay, S. McKinney, J. Han, I. Fulford, H. W. Chung, A. T. Passos, W. Fedus, and A. Glaese. BrowseComp: A simple yet challenging benchmark for browsing agents, 2025. URL https://arxiv.org/abs/2504.12516. [51]...
OpenReview.net, 2023. URL https://openreview.net/forum?id=WE_vluYUL-X. [59] A. Zeng, M. Liu, R. Lu, B. Wang, X. Liu, Y. Dong, and J. Tang. AgentTuning: Enabling generalized agent abilities for LLMs. In L. Ku, A. Martins, and V. Srikumar, editors, Findings of the Association for Computational Linguistics, ACL 20...
preliminarily assess the significance of agent-environment misalignment, we conducted exploratory experiments on ALFWorld. We employed the vanilla Qwen2.5-7B-Instruct agent with a temperature setting of 0.0. The deployment protocol and prompt template followed the same configuration described in Appendix C and A...
and the interface code. (2) reset_simulator(): Resets the experimental task. (3) run_task(): Runs the current task until completion, returning the interaction trajectory. (4) exec_agent_action(agent_action): Executes a specific action and returns the enhanced observation after the interface processing. (5) get_agent_act...
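The interface functions listed above can be sketched as a thin wrapper around the simulator. This is a minimal, illustrative sketch only; the class name, the Gym-style env API, and the hint text are assumptions, not the paper's actual code:

```python
# Minimal sketch of an ALIGN-style interface wrapper (illustrative;
# class name, env API, and hint text are assumptions, not the paper's code).
class InterfaceWrapper:
    def __init__(self, env):
        self.env = env

    def reset_simulator(self):
        """Reset the experimental task and return the initial observation."""
        return self.env.reset()

    def exec_agent_action(self, agent_action):
        """Execute one action and return an observation enriched by the interface."""
        obs, reward, done = self.env.step(agent_action)
        # The interface layer can append explicit constraint feedback here,
        # e.g. explaining why an otherwise-silent action failed.
        if obs.strip() == "Nothing happens.":
            obs += " [Hint: the action may violate an environment rule.]"
        return obs, reward, done

    def run_task(self, agent, max_turns=30):
        """Run the current task to completion and return the trajectory."""
        trajectory, obs = [], self.reset_simulator()
        for _ in range(max_turns):
            action = agent(obs)
            obs, reward, done = self.exec_agent_action(action)
            trajectory.append((action, obs, reward))
            if done:
                break
        return trajectory
```

The key design point is that the agent only ever sees the wrapper's enriched observations, so latent constraints become explicit feedback without modifying the underlying environment.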
whether, during this task, there was a misalignment between the Environment’s World Model and the Agent’s World Model that hindered a fair assessment of the Agent’s capabilities. Choose exactly one of the following outputs: If there is NO misalignment (i.e., the Agent’s failures stem from its own errors or limitati...
current environment state, including the outcomes of any previous ‘step()‘ or ‘get_next_agent_action()‘ call, along with the latest observations. If you believe you have reached a conclusion from your experiments, provide it in this format: <thought> Your reasoning here </thought> <environment_logic_and_misalignment...
receptacle). Meanwhile, the Agent operates based on its own World Model, which it constructs by interpreting the task and environment prompts. The Agent first determines its high-level reasoning intent, its understanding of what needs to be done, and then selects actions according to its internal World Model. However, b...
World Model. However, because the Environment’s World Model is manually crafted and may not be fully conveyed through prompts, the Agent’s World Model might differ, leading to unexpected behavior. For instance, the Agent might choose an action that aligns with its intent but violates the Environment’s rules, or it m...
Do not modify any aspects not explicitly identified by the AnalysisAgent in the “New Environment Logics and Misalignment Analyzed by AnalysisAgent” section. 8. You must use the following approach when addressing the identified misalignment: - For each action defined in the environment, provide clear, informative, and su...
of rules defining how tasks are accomplished. These rules, referred to as the Environment’s World Model, specify the sequence of actions required to achieve specific outcomes. For example, the Environment’s World Model might dictate that certain actions (e.g., operating on a receptacle) can only be performed after pr...
actions are:
- look: look around your current location
- inventory: check your current inventory
- go to (receptacle): move to a receptacle
- open (receptacle): open a receptacle
- close (receptacle): close a receptacle
- take (object) from (receptacle): take an object from a receptacle
- move (object) to (receptacle...
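The command grammar above can be checked mechanically before an action is sent to the environment. A minimal sketch, assuming the pattern set is derived from the commands listed here (this is not ALFWorld's actual parser):

```python
import re

# Illustrative regex patterns for the action grammar listed above.
# The pattern set is an assumption drawn from the listed commands.
ACTION_PATTERNS = [
    re.compile(r"^(look|inventory)$"),
    re.compile(r"^(go to|open|close) (.+)$"),
    re.compile(r"^take (.+) from (.+)$"),
    re.compile(r"^move (.+) to (.+)$"),
]

def is_valid_action(action: str) -> bool:
    """Return True if the action matches one of the known command shapes."""
    action = action.strip().lower()
    return any(p.match(action) for p in ACTION_PATTERNS)
```

Validating against such a grammar is one way an interface can surface "invalid action" feedback explicitly instead of the environment's silent "Nothing happens."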
    Process the agent action and return the next observation, reward, and done status.
    """
    obs, reward, done = env.step(agent_action)
    return obs, reward, done

Initialized interface we used in M3ToolEval:

def InferRules(task_name, task_type_idx):
    """
    Contains the rules for environment and task execute logic for different ...
1.39 (-4.17)
ReAct             w/o ALIGN  1.49            22.42          27.12           12.50
                  w/ ALIGN   15.67 (+14.18)  28.74 (+6.32)  27.10 (-0.02)   22.22 (+9.72)
Self-Consistency  w/o ALIGN  5.22            25.21          29.80           4.17
                  w/ ALIGN   11.94 (+6.72)   34.83 (+9.62)  15.83 (-13.97)  2.78 (-1.39)
Self-Refine       w/o ALIGN  0.00            22.34          27.70           1.39
                  w/ ALIGN   0.75 (+0.75)    31.33 (+8.99)  37.43 (+9.73...
in/on it.
- You can only interact with objects and receptacles that are at your current location.
- If you try to interact with a receptacle or object that is not at your current location, you will be informed that you need to go to that location first.
- After successfully going to a location, you are at that locat...
: None, 'object2': None }
# Parse the agent action
# Simple actions without parameters
if agent_action.lower() == 'look' or agent_action.lower() == 'inventory':
    action_item['matched'] = True
    action_item['action'] = agent_action.lower()
# Pattern: go to (receptacle)
elif agent_action.lower().startswith('go to '):
...
# Improved function to check if an object is in inventory
def is_in_inventory(object_name):
    object_name_lower = object_name.lower()
    logger.debug(f"Checking if '{object_name_lower}' is in inventory")
    # Extract inventory items from the observation
    inventory_items = []
    # Check for common inventory patterns
    if "carrying:" ...
False, False
# Execute the go to action
obs, reward, done, info = env.step([agent_action])
obs, reward, done = obs[0], info['won'][0], done[0]
# Update the current location if the action was successful
if obs and "nothing happens" not in obs.lower():
    env._current_location = receptacle
# If the observation doesn't clearly...
obs, reward, done, info = env.step([agent_action])
obs, reward, done = obs[0], info['won'][0], done[0]
logger.debug(f"Environment response: {obs}")
# Handle special case for "Nothing happens" response
if obs.strip() == "Nothing happens." and action_item['action'] == 'take from':
    object_name = action_item['objec...
this context.", reward, done
# For successful move actions, verify the object was actually in inventory
if "successfully" in obs.lower() and "place" in obs.lower() and action_item['action'] == 'move to':
    object_name = action_item['object']
    # If the environment says the move was successful, we should trust that and no...
such cases, you might need to identify the specific item (e.g., 'crocodile') and use 'focus on [specific item name]' to fulfill the requirement.")
rules.append(f"- You must use this command on the specified items when they are ready (e.g., created, planted, in the correct location), as per the task instructions." ...
rules.append("\nGeneral Action Syntax Notes:")
rules.append("- For actions like 'open OBJ', use the object's base name (e.g., 'open freezer', not 'open freezer door').")
rules.append("- The 'wait' command only accepts 'wait1' (no space) to pass a single time step. Other durations (e.g., 'wait 10') are not s...
wait command: '{agent_action}'. Only 'wait1' is supported.")
current_obs, current_score, current_location = _get_current_state(env, logger)
custom_feedback = (
    f"\n\n[Environment Feedback]: Your action '{agent_action}' uses an invalid format or duration.\n"
    f"Reason: The environment only supports waiting for a si...
= None
is_conceptual_focus_task = False  # Flag for Analysis 12
if normalized_required_objects:
    # Check if any required object looks conceptual (Analysis 12)
    conceptual_keywords = ["with the", "longest", "shortest", "heaviest", "lightest", "smallest", "largest"]
    for req_obj_raw in required_focus_objects_raw:
        if any(keyw...
and not obs_lower.startswith("you are not"):
    error_detected_in_obs = True
    failure_phrase_found = phrase
    logger.warning(f"Detected potential error phrase '{phrase}' in observation string for focus action '{agent_action}'.")
    break
if error_detected_in_obs:
    # Focus action failed based on observation content
    logger.wa...
# Return original results
except Exception as e:
    # Focus action failed with an exception
    logger.error(f"Exception occurred executing focus action '{agent_action}': {e}")
    error_msg_str = str(e)
    current_obs, current_score, current_location = _get_current_state(env, logger)  # Get state after exception
    # --- Provide E...
accepts the destination location name (e.g., 'go to {destination}' or 'go to hallway'). Specifying the source location using 'from {source}' is not supported.\n"
f"Suggestion: Please use 'look around' to see valid exits from your current location ({current_location}) and then use 'go to [valid exit]'.")
final_obs = ...
the 'go to current loc' case handled above.
if go_to_match and failure_phrase_found in ["no known action", "unknown action", "cannot", "can't go that way", "not a valid exit", "don't know how to go there"]:
    target_location = go_to_match.group(1).strip()
    logger.info(f"Detected failed 'go to {target_location}' action, li...
to LOC' failure due to non-adjacency based on exception (Analysis 10)
# Ensure it wasn't the 'go to ... from ...' pattern caught earlier
go_to_match = re.match(r"go to (.*)", action_normalized)
if not go_to_from_match and go_to_match and exception_indicates_issue:
    target_location = go_to_match.group(1).strip()
    logger....
Attempting to buy from other pages, such as the Item Description page (reached via 'click[Description]' or 'click[Desc/Overview]'), will result in an error. You must navigate back to the main Item page first (e.g., using 'click[< Prev]') before buying.
"""
return buy_rule

def WrapStep(env, init_obs: str, task: str, a...
# Define task type mapping for clarity if needed elsewhere
TASK_TYPE_MAP = {
    0: 'message_decoder',
    1: 'cryptobotanists_plant_dna_sequencer',
    2: 'trade_calculator',
    3: 'travel_itinerary_planning',
    4: 'web_browsing',
}
# Assume env object has methods like step() and attributes like name, instruction
# Assume logger i...
{agent_action} (empty parentheses). Provided specific feedback.")
return obs, reward, done
else:  # Case: tool_name(arg) or tool_name(arg1, arg2) etc. - Analysis Result 2
    suggested_format = f"Action: {tool_name}, {args_inside} End Action"
    obs = f"Error: Found tool invocation with arguments inside parentheses like '{...
arXiv:2505.21061v1 [cs.CV] 27 May 2025
LPOI: Listwise Preference Optimization for Vision Language Models
Fatemeh Pesaran zadeh1, Yoojin Oh1, Gunhee Kim1∗
1Seoul National University
fatemehpesaran@vision.snu.ac.kr, snujin20@snu.ac.kr, gunhee@snu.ac.kr
Abstract
Aligning large VLMs with human preferences is a challenging task,...
https://arxiv.org/abs/2505.21061v1
augmenting the negative samples in the preference data via randomly cropping the image or editing the image using diffusion models. Meanwhile, recent studies have demonstrated that the methods employing listwise samples for preference optimization often surpass the ones based on pairwise samples by directly optimizin...
lower hallucination rate compared to state-of-the-art VLM preference learning methods. Furthermore, LPOI outperforms existing methods in various scenarios, including when trained on datasets of different sizes or compared under a fixed budget of GPU hours. 2 Related Work Preference Learning. Aligning LLMs or VLMs ...
by identifying challenging negatives that are semantically close to positive samples. In our work, we adapt this principle to create hard negative images by preserving the overall semantic context of the image while masking out the critical object. Listwise Ranking. Empirical and mathematical studies have shown that l...
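Listwise ranking objectives of this kind are commonly built on the Plackett-Luce model from the learning-to-rank literature cited here (Cao et al., 2007). For reference, its standard form, written as a sketch and not necessarily the paper's exact objective:

```latex
% Plackett-Luce probability of a ranking \pi over n items with scores s_1, \dots, s_n:
% each position i is chosen in proportion to its score among the remaining items.
P(\pi \mid s) = \prod_{i=1}^{n} \frac{\exp\big(s_{\pi(i)}\big)}{\sum_{j=i}^{n} \exp\big(s_{\pi(j)}\big)}
```

Optimizing a listwise likelihood of this shape rewards the model for ordering the entire list correctly, rather than for each pairwise comparison in isolation.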
2024b), through the input image. We select the object to be masked in the following order: objects in the first sentence of the chosen answer, then those in the query, and finally any remaining objects in the answer. We also randomly select a detected object that is not in the text. For the selected object, we mask...
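The masking step above can be sketched in a few lines. This is a toy sketch on a nested-list "image" with an assumed bounding box; the paper operates on real images with a detector, and the incremental-masking helper is an assumption inferred from the listwise setup, not the authors' code:

```python
# Illustrative sketch of hard-negative creation by masking a detected object.
# The nested-list image and box coordinates are assumptions; the paper uses
# real images and an object detector.
def mask_object(image, box, fill=0):
    """Return a copy of `image` with the box (x0, y0, x1, y1) filled in."""
    x0, y0, x1, y1 = box
    masked = [row[:] for row in image]  # copy each row of pixels
    for y in range(y0, y1):
        for x in range(x0, x1):
            masked[y][x] = fill  # blank out the critical object
    return masked

def incremental_masks(image, box, k):
    """Create k negatives with progressively larger masked regions,
    yielding a list with subtle incremental visual changes."""
    x0, y0, x1, y1 = box
    cx = (x0 + x1) // 2
    negatives = []
    for i in range(1, k + 1):
        half = max(1, (x1 - x0) * i // (2 * k))
        negatives.append(mask_object(image, (cx - half, y0, cx + half, y1)))
    return negatives
```

The progressively larger masks give a naturally ordered list (least to most degraded) that a listwise objective can rank against the original image.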
0.57 3.9 52.9 22.3 1.8
Idefics2-8B (Laurençon et al., 2024)  6.3  4.2  2.62  0.43  3.4  36.5  7.6  0.4
+ DPO (Rafailov et al., 2024)         6.0  4.2  2.48  0.45  3.5  37.4  8.1  0.2
+ mDPO (Wang et al., 2024a)           7.3  5.4  2.80  0.40  2.7  37.7  6.2  0.2
+ LPOI (Ours)                         5.3  3.6  2.88  0.36  2.6  36.4  5.7  0.2
Table 1: Performance comparison between various ...
Idefics2-8B (Laurençon et al., 2024)  6.3  4.2  2.62  0.43  3.4  36.5  7.6  0.4
+ DPO (Rafailov et al., 2024)         6.0  4.3  2.29  0.51  3.1  36.4  6.8  0.3
+ mDPO (Wang et al., 2024a)           8.7  5.6  2.71  0.42  2.8  37.2  6.5  0.3
+ LPOI (Ours)                         5.3  4.0  2.81  0.38  2.8  36.2  6.2  0.3
Table 2: Performance comparison under the same training cost (20 hours ...
same conditions. Annotators consistently prefer responses from our fine-tuned model over those from mDPO and DPO. Inter-annotator agreement is measured using Krippendorff’s α, which yields a value of 0.735 for DPO and 0.671 for mDPO on the AMBER benchmark, and a value of 0.823 for DPO and 0.627 for mDPO on the Obje...
prompting improves the quality of generated outputs and reduces hallucinations, thanks to generating higher-quality negative images. Impact of List Sizes. We present the results of LPOI with list sizes of 3, 4, and 5, assessing the impact of the list size on performance. Table 4 shows that larger list sizes res...
. The pizza is sliced into several pieces, with a few visible in the foreground and others in the background. The sauce is spread across the pizza, with some areas appearing more saturated than others. The close-up view emphasizes the details of the pizza, including the cheese and the sauce, creating a sense of dept...
slightly greenish tint. The third tomato is the least ripe, with a more greenish-yellow color and a noticeable stem. This progression of ripeness suggests that the tomatoes are at different stages of maturity, with the first tomato being ready for immediate consumption, while the other two tomatoes may require more t...
subtle incremental visual changes. By doing so, our LPOI enables the model to ground its responses more reliably in the image, improving the recognition of fine details and reducing the likelihood of hallucinating common yet irrelevant objects. 5 Conclusion In this work, we addressed the challenge of aligning VLMs ...
and Hang Li. 2007a. Learning to rank: From pairwise approach to listwise approach. Volume 227, pages 129–136. Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li. 2007b. Learning to rank: from pairwise approach to listwise approach. In International Conference on Machine Learning. Tianheng Cheng, Lin Song, Yi...
Li, Yuheng Li, and Yong Jae Lee. 2024a. Improved baselines with visual instruction tuning. Preprint, arXiv:2310.03744. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2023. Visual instruction tuning. Preprint, arXiv:2304.08485. Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Qing Jian...
preference optimization for multimodal large language models. Preprint, arXiv:2406.11839. Junyang Wang, Yuhang Wang, Guohai Xu, Jing Zhang, Yukai Gu, Haitao Jia, Jiaqi Wang, Haiyang Xu, Ming Yan, Ji Zhang, and Jitao Sang. 2024b. AMBER: An LLM-free multi-dimensional benchmark for MLLMs hallucination evaluation. Prepr...
DPO, mDPO, and LPOI with list sizes of 3, 4, and 5 on 5K examples for 1 epoch, measured on an RTX A6000 GPU in Table 7. Additionally, we include the number of epochs and scores on the MMHalBench benchmark when trained with the same GPU budget (20 GPU hours), also in Table 7. As the list size increases, LPOI introdu...
we note that our method performs well even without the verification step, outperforming all baseline methods in this case. To further illustrate this, we conducted an additional experiment using only the object detection module, focusing on a single salient object per image and excluding the verification step, and pres...
Figure 7: Comparison of saliency maps with or without visual prompting (highlighted in red circle). ...models, such as the false assumption of co-occurring objects, the failure to recognize subtle object features, or the provision of answers to questions that cannot be derived from the image alone. I Details on Human Evaluation F...
arXiv:2505.21067v1 [cs.AI] 27 May 2025
Why Distillation can Outperform Zero-RL: The Role of Flexible Reasoning
Xiao Hu1, Xingyu Lu1, Liyuan Mao2, YiFan Zhang3, Tianke Zhang4, Bin Wen4, Fan Yang4, Tingting Gao4, Guorui Zhou4
1Tsinghua University, 2Shanghai Jiao Tong University, 3CASIA, 4KuaiShou
Correspondence to hu-x21@mails.t...
https://arxiv.org/abs/2505.21067v1
Surprisingly, this simple distillation setup is already sufficient to outperform—and in many cases significantly surpass—existing state-of-the-art (SOTA) zero-RL models on reasoning-intensive benchmarks such as AIME2024, AIME2025, HMMT and GPQA. This result is particularly striking given that zero-RL methods typically ...
The prompt samples also typically need to be carefully selected to match the capabilities of the base model and provide sufficient challenge; otherwise, performance gains from zero-RL tend to saturate [7, 10, 12, 22]. Distillation from reasoning models. Several methods have tried to elicit LLMs' reasoning capability through ...
MATH500 [32]. AIME 2024 and AIME 2025 represent the American Invitational Mathematics Examination held in 2024 and 2025. AIME is an Olympiad-style exam that tests a wide range of mathematical abilities. It contains 30 problems each year. HMMT is one of the largest and most prestigious high school competitions. HMMT Fe...
57,000  8,000  -
AIME2024 (Avg@32)           61.2  50.6  41.9  27.3  16.8
AIME2024 (Pass@8(40))       82.7  71.3  65.9  48.7  46.9
AIME2025 (Avg@32)           50.0  32.9  33.3  10.2  8.3
AIME2025 (Pass@8(40))       74.7  51.7  53.4  28.1  27.9
HMMT Feb 2025 (Avg@32)      34.6  13.8  20.9  5.4   1.9
HMMT Feb 2025 (Pass@8(40))  65.0  28.3  38.3  9.3   10.0
GPQA Diamond (Avg@8)        60.0  48....
similar to the difference observed between "aha" and non-"aha" model outputs in recent work [22]. Table 2: The contrasting solution styles of the two models on an example from AIME 2024. Question: Define f(x) = ||x| − 1/2| and g(x) = ||x| − 1/4|. Find the number of intersections of the graphs of y = 4g(f(sin(2πx))) and x = 4...
the outputs of the zero-RL model. We also performed a token frequency analysis on the base model, Qwen2.5-32B-base, using its responses to the AIME2024 problems. As shown in Figure 2, the base model shows a response pattern very similar to that of the zero-RL models built on top of it—mainly following a step-by-step ap...
to the distilled model with generation of these tokens disabled during decoding.

Metric                  Distilled-32B  Distilled-32B (Token-Restricted)  ∆
AIME2024 (Avg@32)       61.2           50.3                              -10.9
AIME2025 (Avg@32)       52.9           38.0                              -14.9
HMMT Feb 2025 (Avg@32)  34.6           26.4                              -8.2
GPQA Diamond (Avg@8)    60.0           56.0                              -4.0
MATH500 (Avg@8)         93.8           91.7                              -2.1

It is wort...
are reflected through certain key phrases, which can be interpreted in context. For example, expressions like "let's try another angle..." or "but I need a better strategy ... here's an idea, let's try... <solving process> ..." often indicate Multi-Perspective Thinking or Attempting; and expressions such as "wait, m...
               MP count  MA count  MP count  MA count
AIME2024       7.86      7.64      4.39      4.56      -3.47  -3.08
AIME2025       8.39      8.03      4.89      4.80      -3.50  -3.23
HMMT Feb 2025  9.27      7.99      4.78      4.86      -4.49  -3.13
GPQA Diamond   7.62      6.92      4.44      4.35      -3.18  -2.57
MATH500        5.44      5.83      2.84      3.52      -2.60  -2.31
Notably, even though blocking these tokens reduces the number of advanc...
...or numbers beginning with 7. But we just obtained a = 5. What is going on? ... But wait, we may have missed some cases, because when computing d... ... However, we need to make sure there is no larger N, say a number above 6000. Why does our solution yield a = 5? This is because, when solving the congruence equations... ... [Answer]. 5 Discussion Potential reward hacking and overfitting in zero-RL. Works such as [9, 10, 7] have contributed very valuable open-source datasets and provided detailed training reports, offe...
advanced cognitive behaviors that enable more diverse reasoning paths, which may help RL extract richer feedback signals. We believe it is a promising pathway toward reproducing models like OpenAI o1 or DeepSeek R1. For further discussion on the possible reasons why larger models such as DeepSeek-V3-Base can exhibit su...
preprint arXiv:2504.13837, 2025. [14] Niklas Muennighoff, Zitong Yang, Weijia Shi, Xiang Lisa Li, Li Fei-Fei, Hannaneh Hajishirzi, Luke Zettlemoyer, Percy Liang, Emmanuel Candès, and Tatsunori Hashimoto. s1: Simple test-time scaling. arXiv preprint arXiv:2501.19393, 2025. [15] Yixin Ye, Zhen Huang, Yang Xiao, Ethan C...
Hochlehnert, Hardik Bhatnagar, Vishaal Udandarao, Samuel Albanie, Ameya Prabhu, and Matthias Bethge. A sober look at progress in language model reasoning: Pitfalls and paths to reproducibility. arXiv preprint arXiv:2504.07086, 2025. [34] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pint...
920 distillation problems. B.2 Training Details of Distillation We use the prompt template from [25] for distillation. See Table 7 for details. We train using bfloat16 precision. The learning rate is linearly increased to 1e-5 over the first 5% of training steps, then decayed to zero following a cosine schedule for th...
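The schedule described above (linear warmup to the peak rate over the first 5% of steps, then cosine decay to zero) can be written as a small step-to-rate function; a minimal sketch, with the function name chosen for illustration:

```python
import math

# Linear warmup to peak_lr over the first warmup_frac of steps,
# then cosine decay to zero over the remaining steps.
def lr_at(step, total_steps, peak_lr=1e-5, warmup_frac=0.05):
    warmup_steps = max(1, int(total_steps * warmup_frac))
    if step < warmup_steps:
        # Linear ramp: step 0 gets peak_lr / warmup_steps, last warmup step gets peak_lr.
        return peak_lr * (step + 1) / warmup_steps
    # Cosine decay from peak_lr down to 0 over the post-warmup steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * peak_lr * (1.0 + math.cos(math.pi * progress))
```

The two pieces meet continuously at the peak: the last warmup step and the first decay step both return peak_lr.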
framework. As pointed out in [33], the choice of evaluation framework can even affect the results a lot. For fairness, all models are evaluated using the same evaluation framework. Specifically, we adopt the framework from Qwen2.5-Math, which itself is adapted from Math-Evaluation-Harness. In practice, we find that...
GPQA Diamond (Avg@8)  60.2  49.5  55.3  47.3  41.1
MATH500 (Avg@8)       94.1  67.2  90.8  82.4  75.2
model [11]. The "no template" refers to the template in Table 13. The "Qwen-boxed template" refers to the template in Table 12. The "Qwen2.5-math-cot template" refers to the template in Table 7. Table 15: Performance of Qwen2.5-32B-...
- Option (D): Planetary gravity will become stronger. Incorrect, as the effect is likely to make gravity weaker due to increased centrifugal force. Therefore, the most likely effect of the planet rotating faster after a meteorite impact is that planetary days will become shorter. Answer: C Thus the correct answer is C ...
properties and coordinates ... Step 3: Consider Euler's formula relating the circumcenter and incenter ... ... Step 36: Going back to the coordinates and distance ... Step 37: Using the distances in terms of angles ... Since p and q are positive (as they are products of magnitudes), the terms (1 + √(1 − 4p²)) and (1 + √(1 − 4q...
, maybe the finite region is bounded in 3D space on the plane... ... Wait, let's re-examine. ... But perhaps this is similar to the previous approach. Alternatively, consider normalizing the coordinates. ... [Answer]. • Mathematical reasoning tokens: assume, suppose, define, expand, apply, use, multiply, solve, simp...
during problem-solving to assess progress, evaluate current strategies, and identify potential errors in real time. Any reflective hesitation, backtracking, and verification are indicative of this awareness. For example, expressions like "wait, maybe my approach is wrong here" and "it seems not correct, step back". Pro...
arXiv:2505.21074v1 [cs.LG] 27 May 2025
Red-Teaming Text-to-Image Systems by Rule-based Preference Modeling
Yichuan Cao1⋆, Yibo Miao1⋆†, Xiao-Shan Gao1, Yinpeng Dong2
1KLMM, UCAS, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China
2College of AI, Tsinghua University, Beijing 10...
https://arxiv.org/abs/2505.21074v1
[Figure: overview of the RPG-RT pipeline. A black-box text-to-image system with detection-based, removal-based, and unknown defenses returns SFW, NSFW, or rejection outcomes. Stage 1 performs prompt modification and query (inter-class and intra-class); Stage 3 fine-tunes the LLM agent with DPO. Example original prompt: "girl rivals, belligerent tension, glare, skimpy clothing, 2021".]
Once fine-tuned, the LLM can modify even previously unseen prompts into those that successfully induce the target T2I system to generate harmful images. We conduct extensive experiments on nineteen T2I systems with diverse security mechanisms to confirm the superiority of our method. The experimental results demonstrat...
experiential information, dynamically adapting to the varied defenses of real-world black-box systems through iterative exploration. We propose a novel red-team framework, Rule-based Preference modeling Guided Red-Teaming (RPG-RT), which operates iteratively as follows: 1) Using large language models (LLMs) to automati...
modifications for each original prompt, denoted as P1, P2, ..., PN, and queries the target T2I system. The feedback output from the target T2I system for Pi can be categorized into three types: TYPE-1: The T2I system's pre-processing or post-processing safety filter produces a rejection message, i.e., M(Pi) = reject. T...
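The three feedback types can be sketched as a small classifier. Only TYPE-1 (rejection) is defined explicitly in the text above; the TYPE-2/3 split below is an assumption inferred from the surrounding discussion (a later passage re-queries after TYPE-3, suggesting TYPE-3 marks a generation that actually carries NSFW semantics), and the scorer threshold is a placeholder:

```python
# Illustrative classifier for the three feedback types. TYPE-2/3 definitions
# and the threshold are assumptions; a real system would use the paper's
# pre-trained NSFW scoring model.
def classify_feedback(rejected: bool, nsfw_score: float, threshold: float = 0.5) -> int:
    """Map T2I system feedback to TYPE-1/2/3.

    TYPE-1: the safety filter rejected the prompt outright.
    TYPE-2: an image was returned but scored as lacking NSFW semantics.
    TYPE-3: an image scored as containing NSFW semantics (attack success).
    """
    if rejected:
        return 1
    return 3 if nsfw_score >= threshold else 2
```

Sorting every query outcome into these buckets is what lets the rule-based scoring stage assign preference labels for DPO fine-tuning.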
SFW images as a reference, aligning the cosine similarity between the representations of other harmless semantics across different images with the CLIP similarity of the corresponding safe images, regardless of whether these images are safe or unsafe. The alignment can be expressed by the following loss: L_sim = (1 / C(n, 2)) Σ_1...
images M(Pi) on the target T2I system M that maximize the harmfulness of NSFW semantics, while maintaining as much similarity as possible with the images M0(P) generated by the original prompt P on the reference T2I model M0 without defense mechanisms. For the NSFW semantics, we use the pre-trained scoring model to compute ...
removal-based defenses, we consider ESD [16], Safe Latent Diffusion (SLD) [53] under the two strongest settings (namely SLD-strong and SLD-max), Stable Diffusion with the negative prompt (SD-NP) [52], SafeGen [29], AdvUnlearn [65], DUO [45], and the adaptive defense SAFREE [61]. For the safety-aligned models, we uti...
methods, we choose MMA-Diffusion [58] and two variants of P4D (P4D-K and P4D-N) [8]. Metrics. We use four metrics to evaluate the performance of RPG-RT from multiple perspectives. First, we use the Attack Success Rate (ASR) to measure the proportion of modified prompts that successfully lead to NSFW semantics. To...
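The ASR metric as defined above is a simple proportion; a minimal sketch (the per-prompt NSFW judgment would come from the scoring model, which is outside this sketch):

```python
# ASR = fraction of modified prompts whose generated image is judged
# to contain NSFW semantics (one boolean flag per modified prompt).
def attack_success_rate(nsfw_flags):
    """Return the proportion of successful attacks; 0.0 for an empty list."""
    flags = list(nsfw_flags)
    return sum(flags) / len(flags) if flags else 0.0
```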
bypass the safety checker and generate images across various NSFW categories, b): generate pornographic images on multiple APIs, and c): generalize to text-to-video systems. ...defense, AdvUnlearn, RPG-RT still achieves an ASR greater than 40% with the highest semantic similarity, far exceeding the second-place ASR of 2.04%...
data. We directly evaluate the trained RPG-RT without further fine-tuning, whereas other methods are re-optimized on the new data. The results in Table 3 show that, even in this direct transfer scenario, RPG-RT still significantly outperforms other methods, exhibiting the highest ASR. This result indicates that, compar...
(RPG-RT). RPG-RT employs an iterative process that begins with utilizing LLMs to adapt prompts. Subsequently, it applies rule-guided preference modeling and fine-tunes the LLM based on feedback. We propose a rule-based scoring mechanism in preference modeling to finely control LLM exploration in black-box systems. Exte...
al. Scaling rectified flow transformers for high-resolution image synthesis. In Forty-first International Conference on Machine Learning, 2024. [16] Rohit Gandikota, Joanna Materzynska, Jaden Fiotto-Kaufman, and David Bau. Erasing concepts from diffusion models. In Proceedings of the IEEE/CVF International Conference ...
Linguistics, 2024. [35] Yibo Miao, Yinpeng Dong, Jinlai Zhang, Lijia Yu, Xiao Yang, and Xiao-Shan Gao. Improving robustness of 3d point cloud recognition from a fourier perspective. Advances in Neural Information Processing Systems , 37:68183–68210, 2024. [36] Yibo Miao, Yinpeng Dong, Jun Zhu, and Xiao-Shan Gao. Isomet...
inappropriate degeneration in diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 22522–22531, 2023. [54] Patrick Schramowski, Christopher Tauchmann, and Kristian Kersting. Can machines help us answering question 16 in datasheets, and in turn reflecting on inap...
availability attacks in 3d point clouds. In International Conference on Machine Learning, pages 62510–62530, 2024. [69] Haomin Zhuang, Yihua Zhang, and Sijia Liu. A pilot study of query-free adversarial attack against stable diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition...
or alignment [32, 22, 20] to eliminate these concepts from the model's parameters entirely. In addition to these defense mechanisms, some commercial models attempt to filter NSFW data before the training phase or employ some unknown defense strategies to address the challenge of generating unsafe outputs [15, 38, 43]. To...
the early stages of training, we repeatedly query the same modified prompt after a TYPE-3 modification occurs. Additionally, we set a limit of 3 repetitions to promote more diverse modifications. For the LLM agent, we select the unaligned Vicuna-7B model [7] as the base model, as safety-aligned LLMs may reject pr...
Latent Diffusion (SLD) [53] under the two strongest settings (namely SLD-strong and SLD-max), Stable Diffusion with the negative prompt (SD-NP) [52], SafeGen [29], AdvUnlearn [65], DUO [45], and the adaptive defense SAFREE [61]. For the safety-aligned models, we utilize Stable Diffusion v2.1 (SD2) [52], v3 (SD3) [...
RPG-RT effectively generates NSFW semantics while preserving semantic similarity to the original image, successfully performing red-teaming on T2I systems with various defense mechanisms. C.3 Red-teaming on Different NSFW Categories In this section, we provide RPG-RT's performance comparison with other baselines on red...