---
language:
- ar
- en
library_name: transformers
tags:
- qlora
- peft
- vision-language
datasets:
- mhenrichsen/alpaca_2k_test
base_model: Qwen/Qwen2.5-VL-7B-Instruct
model_type: qwen2_5_vl
---

# Qwen2.5-VL-7B-Instruct Fine-tuned with QLoRA

This model was fine-tuned with **Axolotl** using **QLoRA** on Arabic text data. It is based on [`Qwen/Qwen2.5-VL-7B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).

## Training details

- Method: QLoRA (an illustrative configuration sketch appears at the end of this card)
- Epochs: 3
- Optimizer: Paged AdamW (32-bit)
- Quantization: 4-bit (NF4)
- Hardware: NVIDIA H100 80GB
- Dataset: Custom Arabic instruction-style text

## Usage

Because the base model is a vision-language model (`model_type: qwen2_5_vl`), load it with `Qwen2_5_VLForConditionalGeneration` and `AutoProcessor` rather than `AutoModelForCausalLM`:

```python
import torch
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "injazsmart/thoth_test", torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained("injazsmart/thoth_test")

# "Explain the meaning of artificial intelligence to me in simple language"
prompt = "اشرح لي معنى الذكاء الاصطناعي بلغة بسيطة"
messages = [{"role": "user", "content": [{"type": "text", "text": prompt}]}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = processor(text=[text], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(processor.decode(outputs[0], skip_special_tokens=True))
```
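Since the base model accepts images, the same processor also handles image-plus-text prompts. Below is a minimal sketch; the image URL and the Arabic prompt ("Describe this image") are placeholders, not part of the published card:

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "injazsmart/thoth_test", torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained("injazsmart/thoth_test")

# Placeholder image; substitute any local or remote image
image = Image.open(requests.get("https://example.com/photo.jpg", stream=True).raw)

messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "صف هذه الصورة"},  # "Describe this image"
]}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(processor.decode(outputs[0], skip_special_tokens=True))
```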
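## Reference: QLoRA configuration

The exact Axolotl config was not published with this card. As a rough Python equivalent of the training details above, a 4-bit NF4 setup with `bitsandbytes` and `peft` might look like the sketch below; the LoRA rank, alpha, and target modules are assumptions, not the values used in training:

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization, matching "Quantization: 4-bit (NF4)" above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption: bf16 compute on H100
    bnb_4bit_use_double_quant=True,         # assumption: common QLoRA default
)

# Hypothetical LoRA settings; the actual rank/alpha/targets were not published
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```

Training would then pair `bnb_config` at model load time with `get_peft_model(model, lora_config)` and the `paged_adamw_32bit` optimizer in `TrainingArguments`, mirroring the "Paged AdamW (32-bit)" entry above.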