---
license: apache-2.0
library_name: llama-cpp-python
tags:
- llama
- instruction-tuned
- thai
- gguf
- quantized
- q8
- rag
- chatbot
language:
- th
---
# Llama 3.2 Typhoon2 3B Instruct (GGUF Q8_0)
A Thai instruction-following model fine-tuned from Llama 3.2 Typhoon2 3B and quantized to GGUF Q8_0 for efficient CPU inference.
## Model Details
- **Base Model**: typhoon-ai/llama3.2-typhoon2-3b-instruct
- **Format**: GGUF (Q8_0 quantization)
- **Parameters**: 3 billion
- **Language**: Thai
- **Use Case**: Context-aware Q&A, RAG systems, chatbots
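For the RAG use case, retrieved passages are typically stuffed into the prompt ahead of the question. A minimal sketch of that assembly step (the helper name and instruction wording are illustrative, not this model's exact training template):

```python
def build_rag_prompt(question: str, passages: list[str]) -> str:
    """Assemble a context-grounded prompt from retrieved passages.

    Illustrative helper: the instruction wording is an assumption,
    not necessarily the format used during fine-tuning.
    """
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_rag_prompt(
    "What is the capital of Thailand?",
    ["Bangkok is the capital and most populous city of Thailand."],
)
```

The resulting string is what you would pass as `prompt` to the `llm(...)` call shown under Inference.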
## Training
- **Framework**: Unsloth
- **Method**: Supervised Fine-Tuning (SFT)
- **Training Data**: Thai instruction-following dataset, including negative samples so the model answers strictly from the provided context
- **Optimization**: LoRA + 4-bit quantization during training
## Inference
### Using llama-cpp-python
```python
from llama_cpp import Llama
llm = Llama(
    model_path="model.gguf",  # path to the Q8_0 GGUF file
    n_ctx=4096,               # context window size
    n_gpu_layers=0,           # 0 = CPU-only inference
)
response = llm(prompt, max_tokens=256, temperature=0.0)
print(response["choices"][0]["text"])
```
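`llama-cpp-python` can also apply the chat template embedded in the GGUF via `create_chat_completion`. If you build the prompt string by hand instead, the upstream Llama 3 instruct layout looks like the sketch below; verify it against the template actually baked into this GGUF before relying on it:

```python
def format_llama3_prompt(system: str, user: str) -> str:
    # Standard Llama 3 instruct special-token layout (assumed here;
    # confirm against the chat template embedded in the GGUF).
    return (
        "<|begin_of_text|>"
        f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
        f"<|start_header_id|>user<|end_header_id|>\n\n{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = format_llama3_prompt(
    "You are a helpful Thai assistant.",
    "สวัสดี",  # "Hello" in Thai
)
```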
### Docker Deployment (EKS)
See deployment guide in the chat-inference Helm chart.
## Performance
- **Quantization**: Q8_0 (8-bit)
- **Model Size**: ~3.3 GB
- **Inference Speed (CPU)**: ~2-5 tokens/sec (t3.xlarge)
- **Recommended Resources**: 2-4 CPU cores, 4-6 GB RAM
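These figures are internally consistent; a quick back-of-the-envelope check (8.5 bits/weight is an approximate effective Q8_0 rate including per-block scale overhead):

```python
# Model size: 3B weights at ~8.5 effective bits/weight under Q8_0
params = 3e9
bits_per_weight = 8.5
size_gb = params * bits_per_weight / 8 / 1e9  # ~3.19 GB, near the ~3.3 GB above

# Latency for a 256-token reply at the quoted 2-5 tokens/sec
tokens = 256
latency_range_s = (tokens / 5, tokens / 2)  # roughly 51-128 seconds on CPU
```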
## License
Apache License 2.0