open-llama-7b-openthought-sft-4bit

A merged, 4-bit quantized build of OpenLLaMA 7B v2, fine-tuned on OpenThoughts-114k reasoning data with two-stage SFT.

How it was made

  1. Base: openlm-research/open_llama_7b_v2
  2. Stage 1 (mid): full-loss SFT on OpenThoughts-114k (3 epochs, 2K ctx), with the loss computed over every token
  3. Stage 2 (sft): assistant-only SFT on OpenThoughts-114k (3 epochs, 2K ctx), with the loss masked to assistant turns (see the masking sketch below)
  4. Merged both LoRA stages into the base model in 16-bit, then quantized to 4-bit NF4 (see the merge-and-quantize sketch below)
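
The difference between the two stages is what the loss sees: stage 1 trains on every token, while stage 2 masks everything except the assistant turns. A minimal sketch of that masking, assuming PyTorch-style labels (the span bookkeeping here is illustrative, not the actual training script):

import torch

IGNORE_INDEX = -100  # positions that PyTorch cross-entropy ignores

def assistant_only_labels(input_ids: torch.Tensor, assistant_spans):
    # assistant_spans: hypothetical (start, end) token-index pairs covering
    # the assistant turns; how they are found depends on the chat template.
    # Stage 1 ("full-loss") would simply use labels = input_ids.clone().
    labels = torch.full_like(input_ids, IGNORE_INDEX)
    for start, end in assistant_spans:
        labels[start:end] = input_ids[start:end]
    return labels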

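Step 4 can be reproduced roughly as below. This is a minimal sketch rather than the exact export script: the adapter paths are hypothetical, and it assumes peft and bitsandbytes are installed with a transformers version recent enough to serialize 4-bit weights.

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

BASE = "openlm-research/open_llama_7b_v2"
STAGE1_ADAPTER = "path/to/stage1-mid-lora"  # hypothetical adapter paths
STAGE2_ADAPTER = "path/to/stage2-sft-lora"

# Merge the two LoRA stages into the 16-bit base, one after the other.
model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(model, STAGE1_ADAPTER).merge_and_unload()
model = PeftModel.from_pretrained(model, STAGE2_ADAPTER).merge_and_unload()
model.save_pretrained("merged-16bit")

# Re-load the merged weights under 4-bit NF4 and save the quantized copy.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model_4bit = AutoModelForCausalLM.from_pretrained(
    "merged-16bit", quantization_config=bnb, device_map="auto"
)
model_4bit.save_pretrained("open-llama-7b-openthought-sft-4bit")
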
Training Data

  • open-thoughts/OpenThoughts-114k (DeepSeek-R1 reasoning traces)
  • 10,582 samples kept after length filtering (≤ 2024 tokens); a filtering sketch follows
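
The length cutoff can be approximated as below; a sketch assuming the dataset's ShareGPT-style conversations field and that lengths are counted with the base tokenizer (the exact filter used is not published here):

from datasets import load_dataset
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("openlm-research/open_llama_7b_v2")
ds = load_dataset("open-thoughts/OpenThoughts-114k", split="train")

def short_enough(example):
    # Assumption: token count over the concatenated turn texts; field
    # names follow the dataset's ShareGPT-style schema.
    text = "".join(turn["value"] for turn in example["conversations"])
    return len(tok(text).input_ids) <= 2024

ds = ds.filter(short_enough)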

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

# The weights are stored pre-quantized (4-bit NF4), so no quantization
# config is needed at load time; bitsandbytes must be installed.
model = AutoModelForCausalLM.from_pretrained(
    "ping98k/open-llama-7b-openthought-sft-4bit", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("ping98k/open-llama-7b-openthought-sft-4bit")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is a linked list?"},
]
# enable_thinking=True asks the chat template to include the reasoning
# trace; skip_special_tokens=False below keeps its delimiters visible.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=True
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512, temperature=0.7, do_sample=True)
print(tokenizer.decode(output[0], skip_special_tokens=False))
