Papers
arxiv:2602.14364

A Trajectory-Based Safety Audit of Clawdbot (OpenClaw)

Published on Feb 16
ยท Submitted by
Tianyu Chen
on Feb 18
Authors:
,
,
,

Abstract

Clawdbot, a self-hosted AI agent with diverse tool capabilities, exhibits varying safety performance across different risk dimensions, particularly struggling with ambiguous or adversarial inputs despite consistent reliability in specified tasks.

AI-generated summary

Clawdbot is a self-hosted, tool-using personal AI agent with a broad action space spanning local execution and web-mediated workflows, which raises heightened safety and security concerns under ambiguity and adversarial steering. We present a trajectory-centric evaluation of Clawdbot across six risk dimensions. Our test suite samples and lightly adapts scenarios from prior agent-safety benchmarks (including ATBench and LPS-Bench) and supplements them with hand-designed cases tailored to Clawdbot's tool surface. We log complete interaction trajectories (messages, actions, tool-call arguments/outputs) and assess safety using both an automated trajectory judge (AgentDoG-Qwen3-4B) and human review. Across 34 canonical cases, we find a non-uniform safety profile: performance is generally consistent on reliability-focused tasks, while most failures arise under underspecified intent, open-ended goals, or benign-seeming jailbreak prompts, where minor misinterpretations can escalate into higher-impact tool actions. We supplemented the overall results with representative case studies and summarized the commonalities of these cases, analyzing the security vulnerabilities and typical failure modes that Clawdbot is prone to trigger in practice.

Community

Paper author Paper submitter

We present a trajectory-based safety audit of Clawdbot (OpenClaw), a self-hosted tool-using AI agent. We evaluate 34 test cases across 6 risk dimensions and find a non-uniform safety profile (58.9% overall pass rate): the agent handles well-scoped tasks reliably but struggles with ambiguity, open-ended goals, and adversarial prompts. Notably, it scores 0% on intent misunderstanding cases. We release our test suite and use AgentDoG-Qwen3-4B as an automated trajectory judge.

๐Ÿ“Š Dataset: https://huggingface.co/datasets/tianyyuu/clawdbot_safety_testing
๐Ÿค– Trajectory Judge: https://huggingface.co/AI45Research/AgentDoG-Qwen3-4B

Sign up or log in to comment

Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2602.14364 in a model README.md to link it from this page.

Datasets citing this paper 1

Spaces citing this paper 1

Collections including this paper 1