Dataset
vmal/ConfinityChatMLv1
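To inspect the training data directly, you can pull the dataset from the Hub with the datasets library. A minimal sketch, assuming the dataset repo is public and exposes a train split:

from datasets import load_dataset

# Load the fine-tuning data from the Hub (assumes public access and a "train" split).
ds = load_dataset("vmal/ConfinityChatMLv1", split="train")
print(ds[0])  # one ChatML-style conversation record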
How to use vmal/qwen2-7b-logical-reasoning with PEFT:
from peft import PeftModel
from transformers import AutoModelForCausalLM
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-7B")
model = PeftModel.from_pretrained(base_model, "vmal/qwen2-7b-logical-reasoning")

An autoregressive language model fine-tuned on ConfinityChatMLv1 for enhanced chain-of-thought and logical reasoning in conversational settings. Built on Qwen2-7B using PEFT/LoRA.

Full usage example:
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
# Load tokenizer & base model
tokenizer = AutoTokenizer.from_pretrained(
    "vmal/qwen2-7b-logical-reasoning",
    trust_remote_code=True
)
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-7B",
    trust_remote_code=True,
    device_map="auto"
)
# Load LoRA adapters
model = PeftModel.from_pretrained(base, "vmal/qwen2-7b-logical-reasoning")
# Inference example
prompt = (
    "Solve step by step: If all bloops are razzies, and some razzies are lazzies, "
    "are all bloops lazzies?"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
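Since the adapters were trained on ChatML-formatted conversations (per the ConfinityChatMLv1 dataset), multi-turn prompts may work better through the tokenizer's chat template. A minimal sketch, assuming the repo ships a ChatML template and reusing the model and tokenizer loaded above; the system prompt is illustrative:

messages = [
    {"role": "system", "content": "You are a careful logical reasoner. Think step by step."},
    {"role": "user", "content": "If all bloops are razzies, and some razzies are lazzies, are all bloops lazzies?"}
]
# Render the conversation with the ChatML special tokens and append
# the assistant header so generation starts a fresh reply.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))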
Base model
Qwen/Qwen2-7B
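Because this repository contains only LoRA adapters on top of Qwen/Qwen2-7B, you can optionally fold them into the base weights for standalone deployment. A minimal sketch using PEFT's merge_and_unload; the output directory name is illustrative:

# Fold the LoRA deltas into the base weights and drop the adapter wrappers.
merged = model.merge_and_unload()
merged.save_pretrained("qwen2-7b-logical-reasoning-merged")
tokenizer.save_pretrained("qwen2-7b-logical-reasoning-merged")

The merged checkpoint loads with AutoModelForCausalLM alone, with no PEFT dependency at inference time.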