Claw-Eval-Live: A Live Agent Benchmark for Evolving Real-World Workflows Paper • 2604.28139 • Published 13 days ago • 42
ClawMark: A Living-World Benchmark for Multi-Turn, Multi-Day, Multimodal Coworker Agents Paper • 2604.23781 • Published 17 days ago • 33
BARRED: Synthetic Training of Custom Policy Guardrails via Asymmetric Debate Paper • 2604.25203 • Published 15 days ago • 8
Why Fine-Tuning Encourages Hallucinations and How to Fix It Paper • 2604.15574 • Published 27 days ago • 23
DR-Venus: Towards Frontier Edge-Scale Deep Research Agents with Only 10K Open Data Paper • 2604.19859 • Published 22 days ago • 51
DR³-Eval: Towards Realistic and Reproducible Deep Research Evaluation Paper • 2604.14683 • Published 27 days ago • 36
Inside VAKRA: Reasoning, Tool Use, and Failure Modes of Agents Article • ibm-research • Published 27 days ago • 28
From Reasoning to Agentic: Credit Assignment in Reinforcement Learning for Large Language Models Paper • 2604.09459 • Published 30 days ago • 13
OccuBench: Evaluating AI Agents on Real-World Professional Tasks via Language World Models Paper • 2604.10866 • Published 30 days ago • 65
Claw-Eval: Toward Trustworthy Evaluation of Autonomous Agents Paper • 2604.06132 • Published Apr 7 • 119
SkillClaw: Let Skills Evolve Collectively with Agentic Evolver Paper • 2604.08377 • Published Apr 9 • 289
Xpertbench: Expert-Level Tasks with Rubrics-Based Evaluation Paper • 2604.02368 • Published Mar 27 • 12
When LLMs are Unfit Use FastFit: Fast and Effective Text Classification with Many Classes Paper • 2404.12365 • Published Apr 18, 2024 • 2
How Well Do Agentic Skills Work in the Wild: Benchmarking LLM Skill Usage in Realistic Settings Paper • 2604.04323 • Published Apr 6 • 41
MiroEval: Benchmarking Multimodal Deep Research Agents in Process and Outcome Paper • 2603.28407 • Published Mar 30 • 70