Model Card for Lora_TR_1B

This is a LoRA adapter for 'meta-llama/Llama-3.2-1B-Instruct'. The main goal of this adapter is to make the model speak better Turkish.

LoRA configuration: r=32, lora_alpha=64, lora_dropout=0.005
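
For reference, these hyperparameters correspond to a PEFT LoraConfig roughly like the sketch below. The target_modules list is an assumption (the usual Llama attention/MLP projections); the exact set used for training is not stated in this card.

from peft import LoraConfig

# Minimal sketch of the adapter config; target_modules is assumed,
# not confirmed by this card.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    lora_dropout=0.005,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumed Llama projections
)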

Quick start

from unsloth import FastLanguageModel
from peft import PeftModel

BASE = "meta-llama/Llama-3.2-1B-Instruct"
ADAPTER = "Codex07/Lora_1B_TR"

# Load Model
model, tok = FastLanguageModel.from_pretrained(
    model_name=BASE, max_seq_length=2048, load_in_4bit=False, dtype=None, device_map="auto"
)
# Load the adapter (attach it to the Unsloth model)
model = PeftModel.from_pretrained(model, ADAPTER)
FastLanguageModel.for_inference(model)

# Test
messages = [
    {"role":"system","content":"You are AI assistant. Give user answers"},# Sen bir Yapay Zeka Asistanısısın. kullanıcıdan gelen sorulara resmi cevap ver.
    {"role":"user","content":"Selam!"}
]
prompt = tok.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(prompt, max_new_tokens=2048)
print(tok.decode(out[0, prompt.shape[-1]:], skip_special_tokens=True))
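
If Unsloth is unavailable, the adapter can also be loaded with plain transformers + peft. A minimal sketch, not the verified recipe from this card:

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "meta-llama/Llama-3.2-1B-Instruct"
ADAPTER = "Codex07/Lora_1B_TR"

tok = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, device_map="auto")
model = PeftModel.from_pretrained(model, ADAPTER)  # attach the LoRA weights
model.eval()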

Training procedure

Half of the Turkish dataset 'kadirnar/combined-turkish-datasets-v5' was used. The dataset was divided into chunks of 65k examples.
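
For illustration, the chunking could be done with the datasets library as sketched below; this is an assumed reconstruction, not the exact script used.

from datasets import load_dataset

CHUNK_SIZE = 65_000  # chunk size stated above

ds = load_dataset("kadirnar/combined-turkish-datasets-v5", split="train")
half = ds.select(range(len(ds) // 2))  # half of the dataset, as described

# Split into 65k-example chunks (hypothetical reconstruction)
chunks = [half.select(range(i, min(i + CHUNK_SIZE, len(half))))
          for i in range(0, len(half), CHUNK_SIZE)]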

| Run | Duration | Training loss (start → end) | Chunk |
|----:|---------:|:----------------------------|:------|
|   1 |  2:50:33 | 2.746500 → 1.771400         | 5.1.0 |
|   2 |  3:00:00 | 1.7 → 1.7                   | 5.1.1 |
|   3 |  2:18:19 | 1.859100 → 1.474300         | 5.1.2 |
|   4 |  3:15:13 | 1.421800 → 1.122000         | 5.1.3 |
|   5 |  2:50:00 | 1.746600 → 1.629600         | 5.1.0 |
|   6 |  2:44:46 | 1.745000 → 1.653300         | 5.1.1 |
|   7 |  2:07:00 | 1.478200 → 1.357400         | 5.1.2 |
|   8 |  3:11:54 | 1.174700 → 1.046100         | 5.1.3 |
|   9 |  3:12:39 | 1.117600 → 0.796700         | 5.2.0 |
|  10 |  1:00:57 | 2.217400 → 1.741400         | 5.2.1 |
|  11 |  1:30:04 | 2.919900 → 2.534300         | 5.2.2 |
|  12 |  1:30:05 | 2.534300 → 2.320100         | 5.2.2 |

This model was trained with SFT (supervised fine-tuning).
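
The card does not include the training script; a minimal SFT setup with TRL (version listed below) might look like this sketch. Everything beyond the LoRA settings and the dataset name is an assumption.

from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Hypothetical reconstruction of the SFT setup; hyperparameters other than
# the LoRA settings above are assumptions, not taken from this card.
dataset = load_dataset("kadirnar/combined-turkish-datasets-v5", split="train[:50%]")

peft_config = LoraConfig(r=32, lora_alpha=64, lora_dropout=0.005, task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model="meta-llama/Llama-3.2-1B-Instruct",
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(output_dir="Lora_1B_TR", max_length=2048),  # assumed settings
)
trainer.train()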

Framework versions

  • PEFT: 0.17.1
  • TRL: 0.23.0
  • Transformers: 4.56.2
  • Pytorch: 2.8.0
  • Datasets: 4.3.0
  • Tokenizers: 0.22.1

Citations

Cite TRL as:

@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}