# open-llama-7b-openthought-sft-4bit
A merged, 4-bit quantized OpenLLaMA 7B v2, fine-tuned on OpenThoughts-114k reasoning data with two-stage supervised fine-tuning (SFT).
## How it was made
- Base: openlm-research/open_llama_7b_v2
- Stage 1 (mid): full-loss SFT on OpenThoughts-114k (3 epochs, 2K context)
- Stage 2 (sft): assistant-only SFT on OpenThoughts-114k (3 epochs, 2K context)
- Merged both LoRA stages into the base model in 16-bit, then quantized to 4-bit NF4 (see the sketch below)
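
The merge-and-quantize step can be reproduced roughly as follows. This is a minimal sketch, assuming the two stage adapters are the `-mid-lora` and `-sft-lora` repos listed under Related models and that PEFT and bitsandbytes are installed; the local output paths are illustrative, and the original merge script may differ.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Load the base model in fp16.
base = AutoModelForCausalLM.from_pretrained(
    "openlm-research/open_llama_7b_v2", torch_dtype=torch.float16
)

# Stage 1: apply the mid-training LoRA and fold it into the weights.
model = PeftModel.from_pretrained(base, "ping98k/open-llama-7b-openthought-mid-lora")
model = model.merge_and_unload()

# Stage 2: apply the assistant-only SFT LoRA on top and fold it in too.
model = PeftModel.from_pretrained(model, "ping98k/open-llama-7b-openthought-sft-lora")
model = model.merge_and_unload()

model.save_pretrained("merged-16bit")  # illustrative local path

# Reload the merged weights with 4-bit NF4 quantization (bitsandbytes).
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
quantized = AutoModelForCausalLM.from_pretrained("merged-16bit", quantization_config=bnb)
quantized.save_pretrained("open-llama-7b-openthought-sft-4bit")  # illustrative local path
```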
## Training Data
- open-thoughts/OpenThoughts-114k (DeepSeek-R1 reasoning traces)
- 10,582 samples after length filtering (<= 2024 tokens per sample)
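
The length filter might look like the sketch below. It assumes the ShareGPT-style `conversations` field of OpenThoughts-114k and counts tokens with the base tokenizer; the published 10,582-sample count comes from the original run, and this sketch is not guaranteed to reproduce it exactly.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_7b_v2")
ds = load_dataset("open-thoughts/OpenThoughts-114k", split="train")

def short_enough(example):
    # Concatenate all turns of the conversation and count tokens.
    text = "".join(turn["value"] for turn in example["conversations"])
    return len(tokenizer(text).input_ids) <= 2024

filtered = ds.filter(short_enough)
print(len(filtered))
```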
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Loads the pre-quantized 4-bit checkpoint; device_map="auto" requires accelerate.
model = AutoModelForCausalLM.from_pretrained("ping98k/open-llama-7b-openthought-sft-4bit", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("ping98k/open-llama-7b-openthought-sft-4bit")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is a linked list?"},
]

# enable_thinking=True asks the chat template to render the reasoning segment.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True, enable_thinking=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512, temperature=0.7, do_sample=True)

# skip_special_tokens=False keeps the thinking delimiters visible in the output.
print(tokenizer.decode(output[0], skip_special_tokens=False))
```
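
Note that `enable_thinking=True` and `skip_special_tokens=False` work together: the template inserts the reasoning segment and decoding keeps its delimiter tokens, so the model's reasoning trace stays visible alongside the final answer.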
## Related models
- ping98k/open-llama-7b-openthought-mid-lora — Stage 1 LoRA
- ping98k/open-llama-7b-openthought-mid-4bit — Stage 1 merged, 4-bit
- ping98k/open-llama-7b-openthought-sft-lora — Stage 2 LoRA (trained on top of mid-4bit)