---
language:
- tr
license: apache-2.0
library_name: transformers
tags:
- llama-3
- turkish
- tiny-llama
- scratch-build
datasets:
- TFLai/Turkish-Alpaca
metrics:
- loss
model_type: llama
---

# Llama-TR-Mini (9M Parameters)

Llama-TR-Mini is an experimental, ultra-lightweight Turkish language model with **9.3 million parameters**, trained from scratch using the Llama 3 architecture.

This project was developed to explore the limits of small-scale language modeling and to understand the end-to-end pre-training/fine-tuning pipeline on consumer-grade hardware (Apple Silicon).

## Model Specifications
- **Architecture:** Llama 3
- **Parameters:** 9,343,232
- **Hidden Size:** 256
- **Intermediate Size:** 512
- **Number of Layers:** 8
- **Attention Heads:** 8
- **Vocabulary Size:** 5,000 (Custom Turkish Tokenizer)
- **Training Epochs:** 30
- **Device:** MacBook Pro (MPS - Metal Performance Shaders)
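
As a rough illustration, the specifications above map onto a standard `transformers` `LlamaConfig` roughly as follows (a minimal sketch; values not listed in the card, such as `max_position_embeddings`, are assumptions, and the exact parameter count also depends on choices like weight tying that are not stated here):

```python
from transformers import LlamaConfig, LlamaForCausalLM

# Rough reconstruction of the architecture described above.
# max_position_embeddings is an assumed value; it is not stated in the card.
config = LlamaConfig(
    vocab_size=5000,              # custom Turkish tokenizer
    hidden_size=256,
    intermediate_size=512,
    num_hidden_layers=8,
    num_attention_heads=8,
    max_position_embeddings=512,  # assumption
)

model = LlamaForCausalLM(config)
print(f"Parameters: {model.num_parameters():,}")
```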

## Training Data
The model was trained on the [Turkish-Alpaca](https://huggingface.co/datasets/TFLai/Turkish-Alpaca) dataset, which contains approximately 52K instruction-following pairs translated into Turkish.
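
To inspect the data, it can be loaded with the `datasets` library (a minimal sketch; the `train` split name is an assumption about how the dataset is published):

```python
from datasets import load_dataset

# Load the Turkish instruction-following pairs from the Hugging Face Hub
dataset = load_dataset("TFLai/Turkish-Alpaca", split="train")

print(len(dataset))  # roughly 52K examples according to the card above
print(dataset[0])    # inspect one record
```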

## Intended Use & Limitations
**Important Note:** Due to its extremely small size (9M parameters), this model is prone to significant hallucinations and may produce nonsensical or repetitive outputs.

- **Purpose:** Educational use and understanding LLM mechanics.
- **Not Suited For:** Production environments, factual information retrieval, or complex reasoning tasks.
- **Format:** Optimized for the Llama 3 Instruct template (`<|start_header_id|>user<|end_header_id|>`).

## How to Use
You can load this model using the `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/Llama-TR-Mini"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Llama 3 Instruct template; the question asks "What is the capital of Turkey?"
prompt = "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\nTürkiye'nin başkenti neresidir?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
inputs = tokenizer(prompt, return_tensors="pt")

# do_sample=True so temperature takes effect; repetition_penalty curbs repetitive output
output = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.1, repetition_penalty=1.5)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```
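
Since training was done on Apple Silicon, inference can also run on the MPS backend where it is available (a small sketch continuing the snippet above; it falls back to the CPU otherwise):

```python
import torch

# Continue from the snippet above: use the Metal backend on Apple Silicon if present
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = model.to(device)
inputs = {k: v.to(device) for k, v in inputs.items()}

output = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.1, repetition_penalty=1.5)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```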