borong-finetune-qwen-2.5

This is a fine-tuned version of Qwen/Qwen2.5-14B-Instruct.

Usage

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("ehsan404/borong-finetune-qwen-2.5")
model = AutoModelForCausalLM.from_pretrained(
    "ehsan404/borong-finetune-qwen-2.5",
    torch_dtype=torch.bfloat16,  # matches the training precision
    device_map="auto",
)

# Build the prompt with the chat template (this is an instruction-tuned model)
messages = [{"role": "user", "content": "Your prompt here"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate, then decode only the newly generated tokens
outputs = model.generate(inputs, max_new_tokens=256)
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)

Model Details

  • Base Model: Qwen/Qwen2.5-14B-Instruct
  • Fine-tuning: LoRA with rank 8
  • Training: Custom dataset
  • Precision: bfloat16
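
LoRA with rank 8 means the update to each adapted weight matrix is constrained to a rank-8 product of two thin matrices, so only those small factors are trained and stored. A minimal NumPy sketch of the idea, using illustrative dimensions and a hypothetical scaling hyperparameter (not this model's actual values):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 64, 48, 8   # illustrative layer dims; LoRA rank r = 8
alpha = 16                   # hypothetical LoRA scaling hyperparameter

W = rng.standard_normal((d_out, d_in))      # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = rng.standard_normal((d_out, r)) * 0.01  # trainable up-projection
# (real LoRA initializes B to zero; random here so the rank is visible)

# Effective weight after merging the adapter into the base matrix:
delta = (alpha / r) * (B @ A)
W_eff = W + delta

print(np.linalg.matrix_rank(delta))  # rank of the update is at most r = 8
print(A.size + B.size, "adapter params vs", W.size, "full-matrix params")
```

Because only A and B are trained, the adapter checkpoint is a small fraction of the full 14B-parameter weights, which is why LoRA fine-tunes are cheap to store and share.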