# Model Card for Ba2han/muon-lora-2
## Model Details

### Model Description
- **Developed by:** Ba2han
- **Funded by:** None
- **Model type:** SLM (small language model)
- **Language(s) (NLP):** English, Turkish
- **License:** MIT
- **Finetuned from model:** Ba2han/test-model-muon
## How to Get Started with the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from peft import PeftModel
import torch

BASE_MODEL_PATH = "Ba2han/test-model-muon"
LORA_PATH = "Ba2han/muon-lora-2"

# Load the base model in bfloat16, sharded across available devices.
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL_PATH,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
)
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL_PATH)

# Attach the LoRA adapter on top of the base model.
model = PeftModel.from_pretrained(model, LORA_PATH)

chat_pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

messages = [
    # "You are an assistant. Give short and correct answers."
    {"role": "system", "content": "Sen bir asistansın. Kısa ve doğru cevaplar ver."},
    # "What is 5+1?"
    {"role": "user", "content": "5+1 kaç eder?"},
]

# Convert the chat messages to a plain-text prompt for the pipeline.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# --- Generate ---
outputs = chat_pipe(
    prompt,
    max_new_tokens=256,
    temperature=0.62,
    top_p=0.95,
    top_k=16,
    repetition_penalty=1.05,
    do_sample=True,
)
print(outputs[0]["generated_text"])
```
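The generation call above combines three filters: temperature scaling (0.62), top-k (16), and top-p/nucleus sampling (0.95). As a minimal pure-Python sketch of how these knobs interact (an illustration only, not the library's actual implementation, which works on tensors and samples from the result):

```python
import math

def sample_filter(logits, temperature=0.62, top_k=16, top_p=0.95):
    """Return the renormalized distribution over token ids after
    temperature scaling, top-k, and top-p filtering."""
    # Temperature scaling: values below 1.0 sharpen the distribution.
    scaled = [l / temperature for l in logits]
    # Numerically stable softmax.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Rank token ids by probability, highest first.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    # Top-k: keep only the k most likely tokens.
    kept = order[:top_k]
    # Top-p: trim to the smallest prefix whose cumulative mass reaches top_p.
    cum, nucleus = 0.0, []
    for i in kept:
        nucleus.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # Renormalize over the surviving tokens; sampling draws from this.
    mass = sum(probs[i] for i in nucleus)
    return {i: probs[i] / mass for i in nucleus}

dist = sample_filter([2.0, 1.0, 0.5, -1.0], temperature=1.0, top_k=2, top_p=0.95)
print(sorted(dist))  # ids of the tokens that survive filtering
```

With `top_k=16` on a real vocabulary, at most 16 candidate tokens survive before the nucleus cut, which is why low `top_k` values keep this model's answers terse.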
Output:

```
<|im_start|>system
Sen bir asistansın. Kısa ve doğru cevaplar ver.<|im_end|>
<|im_start|>user
5+1 kaç eder?<|im_end|>
<|im_start|>assistant
Adım 1: 5 ve 1 sayılarını toplamam gerekiyor.
Adım 2: 5 + 1 = 6.
Cevap: 6
```

(The assistant's answer in English: "Step 1: I need to add the numbers 5 and 1. Step 2: 5 + 1 = 6. Answer: 6.")
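The `<|im_start|>`/`<|im_end|>` markers in the transcript above follow the ChatML layout that `apply_chat_template` renders for this model. A small sketch of that rendering (a hypothetical re-implementation for illustration; whether it matches the tokenizer's actual template byte-for-byte is an assumption):

```python
def to_chatml(messages, add_generation_prompt=True):
    """Render chat messages in the ChatML layout seen in the transcript."""
    parts = []
    for msg in messages:
        # Each turn is wrapped in <|im_start|>{role} ... <|im_end|>.
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    if add_generation_prompt:
        # Open an assistant turn so the model continues from here.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "Sen bir asistansın. Kısa ve doğru cevaplar ver."},
    {"role": "user", "content": "5+1 kaç eder?"},
]
print(to_chatml(messages))
```

The trailing open `<|im_start|>assistant` turn is what `add_generation_prompt=True` adds: the model's generated text picks up right after it.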
## Evaluation

### Summary

## Model Examination

[More Information Needed]