---
library_name: peft
model_name: lora_3B_TR
tags:
- meta-llama/Llama-3.2-3B-Instruct
- lora
- sft
- transformers
- trl
- unsloth
licence: license
pipeline_tag: text-generation
base_model: meta-llama/Llama-3.2-3B-Instruct
datasets:
- kadirnar/combined-turkish-datasets-v5
language:
- tr
- en
---



# Model Card for Lora_3B_TR

This is a LoRA adapter for `meta-llama/Llama-3.2-3B-Instruct`.
The main goal of this adapter is to make Llama speak Turkish better.
> LoRA config: r=32, lora_alpha=64, lora_dropout=0.005
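
For reference, here is a minimal sketch of how an adapter with this config can be attached to the base model via Unsloth. The `target_modules` list is an assumption (the card does not name the adapted layers); it follows the common choice of Llama attention and MLP projections:

```python
from unsloth import FastLanguageModel

# Load the base model (hypothetical reconstruction of the training setup).
model, tok = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.2-3B-Instruct",
    max_seq_length=2048,
)

# Wrap it with a LoRA adapter using the config quoted above.
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=64,
    lora_dropout=0.005,
    # Assumption: the card does not list which layers were adapted.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```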

## Quick start

```python
from unsloth import FastLanguageModel
from peft import PeftModel
from transformers import AutoTokenizer

BASE = "meta-llama/Llama-3.2-3B-Instruct"
ADAPTER = "Codex07/Lora_3B_TR"

# Load the base model
model, tok = FastLanguageModel.from_pretrained(
    model_name=BASE, max_seq_length=2048, load_in_4bit=False, dtype=None, device_map="auto"
)

# Load the adapter and attach it to the Unsloth model
model = PeftModel.from_pretrained(model, ADAPTER)
FastLanguageModel.for_inference(model)

# Test
messages = [
    {"role": "system", "content": "You are an AI assistant. Give formal answers to the user's questions."},
    {"role": "user", "content": "Merhaba!"},
]
prompt = tok.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(prompt, max_new_tokens=2048)
print(tok.decode(out[0, prompt.shape[-1]:], skip_special_tokens=True))
```
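
If you prefer a standalone checkpoint over a base-plus-adapter pair, the LoRA weights can be folded into the base model with PEFT's `merge_and_unload`. A minimal sketch, assuming the `model` and `tok` objects from the Quick start above (the output path is just an example):

```python
# Merge the LoRA weights into the base model and drop the PEFT wrappers.
merged = model.merge_and_unload()

# Save the merged model and tokenizer for deployment.
merged.save_pretrained("Lora_3B_TR-merged")
tok.save_pretrained("Lora_3B_TR-merged")
```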

## Training procedure

Half of the `kadirnar/combined-turkish-datasets-v5` Turkish dataset was used.
The dataset was divided into chunks of size 65k.

This model was trained with SFT.
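
A minimal sketch of what the SFT loop may have looked like with TRL's `SFTTrainer`. The dataset split, chunk handling, and hyperparameters below are assumptions, not values reported by this card:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Assumption: "half of the dataset" taken as the first 50% of the train split.
dataset = load_dataset("kadirnar/combined-turkish-datasets-v5", split="train[:50%]")

# Assumption: one 65k-sized chunk trained at a time.
chunk = dataset.select(range(min(65_000, len(dataset))))

trainer = SFTTrainer(
    model=model,  # the LoRA-wrapped model from the sketch above
    train_dataset=chunk,
    args=SFTConfig(
        output_dir="lora_3B_TR",        # hypothetical
        per_device_train_batch_size=2,  # hypothetical
        num_train_epochs=1,             # hypothetical
    ),
)
trainer.train()
```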

### Framework versions

- PEFT: 0.17.1
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0
- Datasets: 4.3.0
- Tokenizers: 0.22.1

## Citations

Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
	title        = {{TRL: Transformer Reinforcement Learning}},
	author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
	year         = 2020,
	journal      = {GitHub repository},
	publisher    = {GitHub},
	howpublished = {\url{https://github.com/huggingface/trl}}
}
```