FrontAgent: Frontend Engineering Agent
Collection
A collection for FrontAgent, an LLM-powered agent system for frontend engineering. It includes the SFT dataset, the LoRA planner model, and a demo Space.
How to use ceilf6/frontagent-planner-7B-lora with PEFT:
from peft import PeftModel
from transformers import AutoModelForCausalLM
base_model = AutoModelForCausalLM.from_pretrained("unsloth/qwen2.5-coder-7b-bnb-4bit")
model = PeftModel.from_pretrained(base_model, "ceilf6/frontagent-planner-7B-lora")

How to use ceilf6/frontagent-planner-7B-lora with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="ceilf6/frontagent-planner-7B-lora")
messages = [
{"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoModel
model = AutoModel.from_pretrained("ceilf6/frontagent-planner-7B-lora", dtype="auto")

How to use ceilf6/frontagent-planner-7B-lora with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "ceilf6/frontagent-planner-7B-lora"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "ceilf6/frontagent-planner-7B-lora",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
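The same OpenAI-compatible request can be built from Python using only the standard library. A minimal sketch (the actual network call is left commented out since it requires the server above to be running):

```python
import json
from urllib import request

def chat_request(base_url: str, model: str, user_content: str) -> request.Request:
    """Build an OpenAI-compatible chat completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_content}],
    }
    return request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = chat_request(
    "http://localhost:8000",
    "ceilf6/frontagent-planner-7B-lora",
    "What is the capital of France?",
)
print(req.full_url)  # http://localhost:8000/v1/chat/completions
# request.urlopen(req) would send the request once the server is up
```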
How to use ceilf6/frontagent-planner-7B-lora with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "ceilf6/frontagent-planner-7B-lora" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "ceilf6/frontagent-planner-7B-lora",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'

# Alternatively, run the SGLang server in Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "ceilf6/frontagent-planner-7B-lora" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "ceilf6/frontagent-planner-7B-lora",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'

How to use ceilf6/frontagent-planner-7B-lora with Unsloth Studio:
# Linux / macOS: install Unsloth Studio
curl -fsSL https://unsloth.ai/install.sh | sh
# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for ceilf6/frontagent-planner-7B-lora to start chatting

# Windows (PowerShell): install Unsloth Studio
irm https://unsloth.ai/install.ps1 | iex
# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for ceilf6/frontagent-planner-7B-lora to start chatting

# Or, with no setup required:
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for ceilf6/frontagent-planner-7B-lora to start chatting
pip install unsloth
from unsloth import FastModel
model, tokenizer = FastModel.from_pretrained(
model_name="ceilf6/frontagent-planner-7B-lora",
max_seq_length=2048,
)

How to use ceilf6/frontagent-planner-7B-lora with Docker Model Runner:
docker model run hf.co/ceilf6/frontagent-planner-7B-lora
A frontend task-planning LoRA adapter finetuned from Qwen2.5-Coder-7B, distilled from FrontAgent's Planner stage; it generates structured execution plans for frontend development from natural-language task descriptions.
Given a frontend development task description and project context, the model outputs a structured JSON execution plan containing: steps grouped by phase (each with a description, action type, and phase), plus potential risks and alternative approaches.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
base_model = "Qwen/Qwen2.5-Coder-7B"
adapter = "ceilf6/frontagent-planner-7B-lora"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter)
# The system and user messages below are in Chinese, matching the training data.
# The system prompt requests a phased plan (analyze, create, install, verify,
# launch, browser-verify, repo management) where each step carries a
# description, action type, and phase, plus risks and alternatives, restricted
# to the listed tool actions.
messages = [
{"role": "system", "content": "你是一个资深前端工程师和项目规划专家。请根据以下任务描述和项目上下文,生成一个结构化的执行计划。计划应按阶段组织(阶段1-分析、阶段2-创建、阶段3-安装、阶段4-验证、阶段5-启动、阶段6-浏览器验证、阶段7-仓库管理),每个步骤包含 description(描述)、action(动作类型)、phase(所属阶段)。同时提供 risks(潜在风险)和 alternatives(备选方案)。\n\n可用的动作类型: read_file, list_directory, create_file, apply_patch, search_code, get_ast, run_command, browser_navigate, browser_screenshot, get_page_structure, browser_click, browser_type"},
{"role": "user", "content": "任务:创建一个用户登录页面,包含邮箱和密码输入框,支持表单验证\n\n项目上下文:\nReact 18 + TypeScript + Ant Design 5"},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1536, temperature=0.7, top_p=0.9)
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)
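Since the planner emits a JSON execution plan, the raw response typically needs to be located and parsed. A minimal sketch, with field names taken from the system prompt above (the exact output schema may vary and the demo response below is hand-written for illustration):

```python
import json
import re

def parse_plan(response: str) -> dict:
    """Extract the first-to-last JSON object span from a model response
    and sanity-check the step fields named in the system prompt."""
    match = re.search(r"\{.*\}", response, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in response")
    plan = json.loads(match.group(0))
    # Fields the system prompt asks for; tolerate a missing "steps" key.
    for step in plan.get("steps", []):
        assert {"description", "action", "phase"} <= step.keys()
    return plan

# Illustrative (hand-written) response, not actual model output:
demo = '''Here is the plan:
{"steps": [{"description": "Read package.json", "action": "read_file", "phase": 1}],
 "risks": ["missing Ant Design dependency"], "alternatives": []}'''
plan = parse_plan(demo)
print(plan["steps"][0]["action"])  # read_file
```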
~100 frontend task-planning examples synthesized by the Claude API from FrontAgent-app's Planner system prompt, covering three task scenarios (creation, modification, and analysis), in Alpaca format (instruction/input/output).
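An Alpaca-format record carries the three fields named above. An illustrative record (the content here is invented for illustration, not an actual sample from the dataset):

```python
import json

# Hand-written example of the Alpaca record shape (instruction/input/output);
# not taken from the dataset itself.
record = {
    "instruction": "Generate a structured execution plan for the frontend task below.",
    "input": "Task: create a login page with form validation\n"
             "Context: React 18 + TypeScript + Ant Design 5",
    "output": json.dumps({"steps": [], "risks": [], "alternatives": []}),
}

assert set(record) == {"instruction", "input", "output"}
print(json.loads(record["output"])["steps"])  # []
```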