Trained on the nbertagnolli/counsel-chat dataset.
How to use NikhilSharma/lora8-fewshot with PEFT:

from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("google/gemma-3-1b-it")
model = PeftModel.from_pretrained(base_model, "NikhilSharma/lora8-fewshot")

Lightweight LoRA rank-8 adapter trained on therapist Q&A from CounselChat to make google/gemma-3-1b-it more responsive to short, task-oriented counseling prompts.
This repo contains only the adapter; load it on top of the base model.
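The "rank-8" in the adapter name is the LoRA decomposition rank: instead of storing a full weight update ΔW, LoRA stores two small matrices A (r×d_in) and B (d_out×r) so that ΔW = (α/r)·B·A, which is why the adapter repo is so lightweight. A minimal NumPy sketch with hypothetical dimensions (the layer sizes and α value here are illustrative, not the adapter's actual config):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical sizes; only r=8 matches this adapter by name.
d_out, d_in, r, alpha = 64, 64, 8, 16

W = rng.normal(size=(d_out, d_in))        # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01     # trainable down-projection
B = rng.normal(size=(d_out, r)) * 0.01    # trainable up-projection (zero-init in real training)

x = rng.normal(size=(d_in,))

# Applying the adapter at runtime...
y_runtime = W @ x + (alpha / r) * (B @ (A @ x))
# ...is numerically equivalent to merging the low-rank update into W first:
W_merged = W + (alpha / r) * (B @ A)
y_merged = W_merged @ x

assert np.allclose(y_runtime, y_merged)
print(y_runtime.shape)  # (64,)
```

This equivalence is also why PEFT can merge an adapter into the base weights for deployment (e.g. via `merge_and_unload()`) without changing model outputs.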
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
base_id = "google/gemma-3-1b-it"
adapter_id = "NikhilSharma/lora8-fewshot"
tok = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
model = PeftModel.from_pretrained(base, adapter_id)
prompt = "How can I avoid thinking much? I start thinking deeply about everything I may do or say and about anything that may happen. I really want to avoid it since it really bothers me."
chat = tok.apply_chat_template([{"role": "user", "content": prompt}], tokenize=False, add_generation_prompt=True)
inputs = tok(chat, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tok.decode(out[0], skip_special_tokens=True))
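The call to apply_chat_template above handles the prompt formatting for you. For Gemma-style models the rendered string looks roughly like the sketch below; this is an approximation for illustration only (the turn-marker strings are assumptions), and in practice you should always render prompts with the tokenizer's own template as done above:

```python
def fake_gemma_chat(messages):
    # Approximate rendering of a Gemma-style chat template.
    # Assumption: real formatting must come from tok.apply_chat_template.
    out = []
    for m in messages:
        out.append(f"<start_of_turn>{m['role']}\n{m['content']}<end_of_turn>\n")
    # add_generation_prompt=True appends an open model turn so the
    # model continues as the assistant:
    out.append("<start_of_turn>model\n")
    return "".join(out)

print(fake_gemma_chat([{"role": "user", "content": "Hi"}]))
```

Hand-rolling the template like this risks drifting from the tokenizer's canonical format, which degrades instruction-following; it is shown only to make clear what apply_chat_template produces.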