LLM-OS-Models/LFM2-8B-Terminal-SFT-2Epoch-Unsloth-7GPU

ํ„ฐ๋ฏธ๋„ ์ž‘์—… ์ž๋™ํ™”๋ฅผ ์œ„ํ•œ Terminal SFT ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ์ž…๋ ฅ๋œ ์ž‘์—…/์ด์ „ ํ„ฐ๋ฏธ๋„ ์ƒํƒœ๋ฅผ ๋ณด๊ณ  ๋‹ค์Œ์— ์‹คํ–‰ํ•  ๋ช…๋ น์„ JSON ํ˜•ํƒœ๋กœ ์ƒ์„ฑํ•˜๋Š” ์šฉ๋„๋กœ ํ•™์Šตํ–ˆ์Šต๋‹ˆ๋‹ค.

๋ชจ๋ธ ์š”์•ฝ

  • Base model: LiquidAI/LFM2-8B
  • Training setup: 2 epochs, Unsloth SFT
  • Model card snapshot: 2026-05-09 00:57:59 UTC
  • Corrected TB2-lite results currently indexed: 56
  • Corrected TB2-lite score: pending / not yet matched in the current result directory

Quickstart

Install and log in:

pip install -U vllm transformers huggingface_hub
huggingface-cli login

Related code:

  • GitHub: https://github.com/LLM-OS-Models/Terminal
  • vLLM evaluation runner: tb2_lite/scripts/replay_eval.py
  • Chat template / fallback prompt construction: tb2_lite/scripts/prompt_builder.py
  • JSON/command scoring: tb2_lite/scripts/replay_metrics.py

vLLM ์ง์ ‘ ์‹คํ–‰ ์˜ˆ์‹œ. ํ‰๊ฐ€ ์ฝ”๋“œ์™€ ๋™์ผํ•˜๊ฒŒ chat template์„ ์šฐ์„  ์‚ฌ์šฉํ•˜๊ณ , template์ด ์—†์œผ๋ฉด ChatML/Gemma fallback์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค.

from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_id = "LLM-OS-Models/LFM2-8B-Terminal-SFT-2Epoch-Unsloth-7GPU"
tp = 1

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
llm = LLM(
    model=model_id,
    tokenizer=model_id,
    trust_remote_code=True,
    dtype="bfloat16",
    tensor_parallel_size=tp,
    max_model_len=49152,
    gpu_memory_utilization=0.92,
)

messages = [
    {"role": "system", "content": "You are a terminal automation assistant. Return JSON only."},
    {"role": "user", "content": "Inspect the current directory and list Python files."},
]

def render_chatml(messages):
    # ChatML fallback for models without a chat template; tool turns are
    # folded into the user role.
    parts = []
    for message in messages:
        role = message["role"]
        if role == "tool":
            role = "user"
        parts.append(f"<|im_start|>{role}\n{message['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

def render_gemma4_turn(messages, empty_thought_channel=False):
    # Gemma 4 fallback; optionally emits an empty thought channel for the
    # 26B/31B variants.
    parts = ["<bos>"]
    for message in messages:
        role = "model" if message["role"] == "assistant" else message["role"]
        if role == "tool":
            role = "user"
        parts.append(f"<|turn>{role}\n{message['content'].strip()}<turn|>\n")
    parts.append("<|turn>model\n")
    if empty_thought_channel:
        parts.append("<|channel>thought\n<channel|>")
    return "".join(parts)

def render_prompt(model_id, tokenizer, messages):
    # Prefer the tokenizer's chat template; fall back to manual rendering.
    model_key = model_id.lower()
    if "gemma-4" in model_key:
        try:
            return tokenizer.apply_chat_template(
                messages,
                tokenize=False,
                add_generation_prompt=True,
                enable_thinking=False,
            )
        except Exception:
            return render_gemma4_turn(
                messages,
                empty_thought_channel=("26b" in model_key or "31b" in model_key),
            )
    try:
        return tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    except Exception:
        return render_chatml(messages)

prompt = render_prompt(model_id, tokenizer, messages)
# Greedy decoding, matching the corrected TB2-lite evaluation settings.
sampling = SamplingParams(
    temperature=0.0,
    top_p=1.0,
    max_tokens=1024,
    repetition_penalty=1.0,
)
outputs = llm.generate([prompt], sampling_params=sampling)
print(outputs[0].outputs[0].text)

Recommended output format:

{
  "analysis": "brief reasoning about the next terminal action",
  "plan": "short execution plan",
  "commands": [
    {"keystrokes": "ls -la\n", "duration": 0.1}
  ],
  "task_complete": false
}
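
Before acting on a completion, parse and sanity-check the JSON. Below is a minimal sketch under the assumption that the model emits one JSON object, possibly with stray text around it; extract_action and its key checks are illustrative and are not the repo's scoring code (tb2_lite/scripts/replay_metrics.py).

import json

REQUIRED_KEYS = {"analysis", "plan", "commands", "task_complete"}

def extract_action(text):
    # Illustrative: slice from the first '{' to the last '}' so stray prose
    # around the JSON object does not break parsing.
    start, end = text.find("{"), text.rfind("}")
    if start == -1 or end <= start:
        raise ValueError("no JSON object found in model output")
    action = json.loads(text[start:end + 1])
    missing = REQUIRED_KEYS - action.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    if any("keystrokes" not in cmd for cmd in action["commands"]):
        raise ValueError("command entry without keystrokes")
    return action

action = extract_action(outputs[0].outputs[0].text)  # output from the example above
print(action["commands"][0]["keystrokes"])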

Replay command, identical to the evaluation:

python tb2_lite/scripts/replay_eval.py \
  --model LLM-OS-Models/LFM2-8B-Terminal-SFT-2Epoch-Unsloth-7GPU \
  --model-short LLM-OS-Models__LFM2-8B-Terminal-SFT-2Epoch-Unsloth-7GPU \
  --eval-path tb2_lite/data/replay_full.jsonl \
  --output-dir /home/work/.data/tb2_lite_eval/corrected_readme_models_vllm \
  --dtype bfloat16 \
  --tp 1 \
  --max-model-len 49152 \
  --max-tokens 1024 \
  --temperature 0.0 \
  --top-p 1.0 \
  --gpu-memory-utilization 0.92 \
  --language-model-only

  • Recommended default tensor parallelism: 1. On OOM, raise --tp and tensor_parallel_size to 2/4/8 (see the sketch after this list).
  • The corrected TB2-lite evaluation is pinned to temperature=0.0, top_p=1.0, max_tokens=1024.
  • Gemma 4 uses enable_thinking=False to keep the output JSON-only; for the 26B/31B variants, the evaluation code applies the empty-thought-channel handling automatically.
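
A minimal sketch of choosing the tensor-parallel degree from the visible GPU count, assuming torch is installed alongside vLLM; pick_tp is a hypothetical helper, not part of the repo.

import torch

def pick_tp():
    # Hypothetical helper: pick the largest TP degree (8/4/2/1) that the
    # visible GPU count can serve. TP must also divide the model's head
    # count, so treat the result as a starting point.
    n = max(torch.cuda.device_count(), 1)
    for tp in (8, 4, 2):
        if n >= tp:
            return tp
    return 1

tp = pick_tp()  # pass as tensor_parallel_size / --tp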

ํ‰๊ฐ€ ์ƒํƒœ

  • Current corrected TB2-lite score: pending
  • Reason: the aggregated results under /home/work/.data/tb2_lite_eval/corrected_readme_models_vllm do not yet contain an entry matching this HF repo name.
  • Next step: run the evaluation through the same tb2_lite/scripts/replay_eval.py path, then replace this section with the score card automatically.

๋ชจ๋ธ๊ตฐ ํ•ด์„

  • The LFM family's strengths are fast sec/step and strong responsiveness to SFT. This repo has no score directly matched in the current aggregation JSON yet, so a separate evaluation is needed.
  • The TB2-lite score measures terminal next-action JSON reproduction, not general-purpose intelligence.
  • Generated commands should pass through safeguards such as a sandbox, an allowlist, and human review before real execution (see the sketch below).