---
tags:
  - plan-and-act
  - planning
  - llm-agents
  - web-navigation
  - 70b
library_name: transformers
pipeline_tag: text-generation
base_model: meta-llama/Llama-3.3-70B-Instruct
license: mit
datasets:
  - xTRam1/plan-and-act-data
---

# Plan-and-Act Planner 70B

This is the Planner model from the Plan-and-Act framework, introduced in the paper
[Plan-and-Act: Improving Planning of Agents for Long-Horizon Tasks](https://openreview.net/forum?id=ybA4EcMmUZ) (ICML 2025).

Code: https://github.com/SqueezeAILab/plan-and-act

Given a user goal, the Planner decomposes a long-horizon task into a structured, high-level plan; in the Plan-and-Act framework, a separate Executor model then carries out the plan's steps as low-level actions.

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "xTRam1/plan-and-act-planner-70b"
tok = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# A 70B model in bfloat16 needs roughly 140 GB of GPU memory for weights alone;
# device_map="auto" shards it across all visible GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# Prompt the Planner with a high-level goal and decode the generated plan.
prompt = "Goal: Find the cheapest flight from SFO to JFK next Monday."
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=512)
print(tok.decode(out[0], skip_special_tokens=True))
```
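
Because the base model is Llama-3.3-70B-Instruct, formatting the goal through the tokenizer's chat template may be more reliable than a raw string; the exact prompt format used during fine-tuning is defined in the repository linked above. A minimal sketch, assuming the goal is passed as a plain user message:

```python
# Reuses `tok` and `model` from the snippet above.
messages = [
    {"role": "user", "content": "Goal: Find the cheapest flight from SFO to JFK next Monday."}
]
# apply_chat_template wraps the message in the model's special chat tokens.
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```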
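
If ~140 GB of GPU memory is not available, one option is a 4-bit quantized load via bitsandbytes. This is a sketch, not a configuration validated by the authors, and quantization may degrade plan quality:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # do matmuls in bf16 for stability
)
model = AutoModelForCausalLM.from_pretrained(
    "xTRam1/plan-and-act-planner-70b",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
```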

## Citation

```bibtex
@inproceedings{erdogan2025planandact,
  title={Plan-and-Act: Improving Planning of Agents for Long-Horizon Tasks},
  author={Lutfi Eren Erdogan and Hiroki Furuta and Sehoon Kim and Nicholas Lee and Suhong Moon and Gopala Anumanchipalli and Kurt Keutzer and Amir Gholami},
  booktitle={Forty-second International Conference on Machine Learning},
  year={2025},
  url={https://openreview.net/forum?id=ybA4EcMmUZ}
}
```