arxiv:2605.14051

SPIN: Structural LLM Planning via Iterative Navigation for Industrial Tasks

Published on May 13 · Submitted by Dhaval Patel on May 15

Abstract

SPIN is a planning wrapper that combines validated DAG planning with prefix-based execution control to reduce the number of executed tasks and improve plan validity in industrial LLM agent systems.

AI-generated summary

Industrial LLM agent systems often separate planning from execution, yet LLM planners frequently produce structurally invalid or unnecessarily long workflows, leading to brittle failures and avoidable tool and API cost. We propose SPIN, a planning wrapper that combines validated Directed Acyclic Graph (DAG) planning with prefix-based execution control. SPIN enforces a strict DAG contract through `_validate_plan_text` and repair prompting, producing executable plans before downstream execution, and then evaluates DAG prefixes incrementally, stopping when the current prefix is sufficient to answer the query. On AssetOpsBench, across 261 scenarios, SPIN reduces executed tasks from 1061 to 623 and improves Accomplished from 0.638 to 0.706, while reducing tool calls from 11.81 to 6.82 per run. On MCP Bench, the same wrapper improves planning, grounding, and dependency-related scores for both GPT OSS1 and Llama 4 Maverick.
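The abstract describes two mechanisms: enforcing a DAG contract on the plan before any execution (with repair prompting on failure), and executing the validated plan prefix by prefix, stopping once the prefix suffices to answer the query. The sketch below illustrates both ideas under assumed interfaces; the plan schema (`id`/`deps` dicts), the `execute` and `sufficient` callbacks, and the function names are illustrative, not the paper's actual `_validate_plan_text` implementation.

```python
from collections import deque

def validate_plan(tasks):
    """Check a SPIN-style DAG contract on a parsed plan.

    `tasks` is a list of {"id": str, "deps": [str, ...]} dicts (an assumed
    schema). Returns a topological execution order, or raises ValueError,
    the analogue of a validation failure that would trigger repair prompting.
    """
    ids = [t["id"] for t in tasks]
    if len(ids) != len(set(ids)):
        raise ValueError("duplicate task ids")
    id_set = set(ids)
    indegree = {i: 0 for i in ids}
    children = {i: [] for i in ids}
    for t in tasks:
        for dep in t["deps"]:
            if dep not in id_set:
                raise ValueError(f"unknown dependency: {dep}")
            indegree[t["id"]] += 1
            children[dep].append(t["id"])
    # Kahn's algorithm: if some node never reaches indegree 0, the plan
    # contains a cycle and violates the DAG contract.
    queue = deque(i for i in ids if indegree[i] == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for child in children[node]:
            indegree[child] -= 1
            if indegree[child] == 0:
                queue.append(child)
    if len(order) != len(ids):
        raise ValueError("plan contains a cycle")
    return order

def run_with_prefix_control(order, execute, sufficient):
    """Execute tasks in topological order, stopping as soon as the results
    accumulated from the current prefix are judged sufficient to answer
    the query (in SPIN this judgment is made incrementally per prefix)."""
    results = {}
    for task_id in order:
        results[task_id] = execute(task_id)
        if sufficient(results):
            break  # skip the remaining tasks, saving tool/API calls
    return results
```

For example, a three-task chain whose first two results already answer the query would stop after two executions, which is the mechanism behind the reported drop in executed tasks and tool calls.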

