title,keywords,url,type
[2505.17122] Shallow Preference Signals: Large Language Model Aligns Even Better with Truncated Data?,"llm, alignment, preference",https://arxiv.org/abs/2505.17122,agent_rl
"[2505.17923] Language models can learn implicit multi-hop reasoning, but only if they have lots of training data","llm, reasoning, multi-hop",https://arxiv.org/abs/2505.17923,agent_rl
[2505.22617] The Entropy Mechanism of Reinforcement Learning for Reasoning Language Models,"rl, llm, reasoning",https://arxiv.org/abs/2505.22617,agent_rl