maya-qwen-7b

Fine-tuned version of Qwen/Qwen2.5-7B-Instruct for customer support conversations.

Training Details

  • Method: LoRA fine-tuning with Unsloth and the TRL SFTTrainer (sketched below)
  • Base model: Qwen/Qwen2.5-7B-Instruct
  • LoRA rank: 16
  • Format: ChatML
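
For reference, the pipeline below is a minimal sketch of this recipe following the standard Unsloth + TRL workflow. Only the base model and the LoRA rank come from the details above; the dataset path, sequence length, LoRA alpha, target modules, and trainer arguments are illustrative assumptions, not the actual training configuration, and exact argument names vary slightly across TRL versions.

from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig
from datasets import load_dataset

# Load the base model through Unsloth's patched loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen2.5-7B-Instruct",
    max_seq_length=2048,  # assumed training context length
)

# Attach LoRA adapters; r=16 is the rank stated above, the rest is assumed.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical dataset with a "text" column of ChatML-formatted conversations.
dataset = load_dataset("json", data_files="support_conversations.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="maya-qwen-7b",
        dataset_text_field="text",
        num_train_epochs=1,  # assumed
    ),
)
trainer.train()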

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged model and tokenizer from the Hub.
model = AutoModelForCausalLM.from_pretrained(
    "vivekmdrift/maya-qwen-7b",
    torch_dtype="auto",  # keep the BF16 weights as stored
    device_map="auto",   # place the model on a GPU if one is available
)
tokenizer = AutoTokenizer.from_pretrained("vivekmdrift/maya-qwen-7b")

messages = [
    {"role": "system", "content": "You are a helpful customer support agent."},
    {"role": "user", "content": "How can I track my order?"},
]

# Render the conversation with the ChatML template shipped in the tokenizer.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
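
apply_chat_template renders the messages into ChatML before tokenization; for the example above, text contains:

<|im_start|>system
You are a helpful customer support agent.<|im_end|>
<|im_start|>user
How can I track my order?<|im_end|>
<|im_start|>assistant

Note that decoding outputs[0] reproduces the prompt as well as the reply; to print only the generated tokens, slice with outputs[0][inputs["input_ids"].shape[-1]:].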