---
language:
- ar
- en
library_name: transformers
tags:
- qlora
- peft
- vision-language
datasets:
- mhenrichsen/alpaca_2k_test
base_model: Qwen/Qwen2.5-VL-7B-Instruct
model_type: qwen2_5_vl
---

# Qwen2.5-VL-7B-Instruct Fine-tuned with QLoRA

This model was fine-tuned using **Axolotl** with **QLoRA** on Arabic text data.
It is based on [`Qwen/Qwen2.5-VL-7B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).

## Training details
- Method: QLoRA
- Epochs: 3
- Optimizer: Paged AdamW (32-bit)
- Quantization: 4-bit (NF4)
- Hardware: NVIDIA H100 80GB
- Dataset: Custom Arabic instruction-style text
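For reference, the quantization and adapter setup above corresponds roughly to the following `BitsAndBytesConfig` and PEFT `LoraConfig`. This is a minimal sketch: the NF4 settings mirror the list above, but the LoRA rank, alpha, dropout, and target modules are illustrative assumptions, since the exact Axolotl config is not included here.

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization, matching the training details above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,  # assumption: not stated in the card
)

# Hypothetical LoRA adapter settings; the actual rank/alpha/targets were not published
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```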

## Usage

Because this checkpoint is a vision-language model (`model_type: qwen2_5_vl`), load it with `Qwen2_5_VLForConditionalGeneration` and its processor rather than `AutoModelForCausalLM`:

```python
import torch
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor

# Load the fine-tuned checkpoint in bfloat16 and let transformers place it on the GPU
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "injazsmart/thoth_test", torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained("injazsmart/thoth_test")

# Text-only prompt: "Explain the meaning of artificial intelligence in simple language"
prompt = "اشرح لي معنى الذكاء الاصطناعي بلغة بسيطة"
messages = [{"role": "user", "content": [{"type": "text", "text": prompt}]}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = processor(text=[text], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```
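The decoded string above includes the chat-template prompt. To print only the newly generated tokens, trim the input length before decoding, following the same pattern as the upstream Qwen2.5-VL examples:

```python
# Keep only the tokens generated after the prompt
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, outputs)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```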