Refusal-Trained LLMs Are Easily Jailbroken As Browser Agents Paper • 2410.13886 • Published Oct 11, 2024
HiL-Bench (Human-in-Loop Benchmark): Do Agents Know When to Ask for Help? Paper • 2604.09408 • Published 9 days ago • 3