## πŸš€ How to Use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "Qwen/Qwen2.5-7B-Instruct"
adapter = "BrahmAI/Ziq-Scientific-Finetuned-model"

# Load the base model and tokenizer, then attach the fine-tuned adapter
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = PeftModel.from_pretrained(model, adapter)

# Generate a completion for a prompt (replace with your own text)
inp = "Input Prompt"
out = model.generate(**tokenizer(inp, return_tensors="pt").to(model.device), max_new_tokens=200)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```
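
Because the base model is an instruct/chat model, prompts formatted with the tokenizer's chat template usually give better answers than raw text. Continuing from the snippet above, a minimal sketch (the question is only a placeholder example):

```python
# Format the prompt with the chat template before generating.
# (Example question is a placeholder, not from the model card.)
messages = [{"role": "user", "content": "Explain the Doppler effect in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(input_ids, max_new_tokens=200)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```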
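
If you want to run the model with plain `transformers` (no PEFT wrapper at inference time), you can optionally merge the adapter weights into the base model. This is a sketch that assumes the adapter is a standard LoRA/PEFT adapter; the output directory name is just an example:

```python
# Merge the LoRA adapter into the base weights and save the result.
merged = model.merge_and_unload()
merged.save_pretrained("ziq-scientific-merged")      # hypothetical output path
tokenizer.save_pretrained("ziq-scientific-merged")
```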
## Model tree for BrahmAI/Ziq-Scientific-Finetuned-model

- Base model: Qwen/Qwen2.5-7B
- Fine-tuned: this model (PEFT adapter loaded on top of Qwen/Qwen2.5-7B-Instruct)