---
library_name: peft
pipeline_tag: text-generation
base_model: google/gemma-3-1b-it
license: gemma
language:
  - en
datasets:
  - nbertagnolli/counsel-chat
tags:
  - lora
  - peft
  - gemma3
  - few-shot
  - counseling
  - empathy
---

# lora8-fewshot — LoRA adapter for Gemma 3 1B IT

A lightweight rank-8 LoRA adapter trained on therapist Q&A pairs from the CounselChat dataset, intended to make google/gemma-3-1b-it more responsive to short, task-oriented counseling prompts.
This repository contains only the adapter weights; load them on top of the base model.
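For intuition, a rank-8 LoRA adapter does not modify the frozen base weight `W`; it learns a low-rank update `B @ A`, so only `r * (d_in + d_out)` parameters are trained instead of `d_in * d_out`. A minimal sketch of the idea (the dimensions and `alpha` below are illustrative assumptions, not Gemma's actual layer sizes or this adapter's config):

```python
import numpy as np

d_out, d_in, r = 512, 512, 8        # illustrative dims; r=8 matches this adapter's rank
alpha = 16                          # hypothetical lora_alpha scaling factor

W = np.random.randn(d_out, d_in)    # frozen base weight (not trained)
A = np.random.randn(r, d_in) * 0.01 # trainable down-projection
B = np.zeros((d_out, r))            # trainable up-projection, initialized to zero

# Effective weight at inference time: W + (alpha / r) * B @ A
W_eff = W + (alpha / r) * B @ A

full_params = d_out * d_in          # size of a full-rank update
lora_params = r * (d_in + d_out)    # size of the rank-8 update
print(f"full update params: {full_params}, LoRA params: {lora_params}")

# With B initialized to zero, the adapter starts as an exact no-op on the base model.
assert np.allclose(W_eff, W)
```

Because `B` starts at zero, training begins from the unmodified base model and only the small `A`/`B` matrices receive gradients, which is why the adapter download is tiny compared to the base checkpoint.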


## Quick start

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_id    = "google/gemma-3-1b-it"
adapter_id = "NikhilSharma/lora8-fewshot"

# Load the base model, then attach the LoRA adapter on top of it.
tok = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "How can I avoid thinking much?,I start thinking deeply about everything I may do or say and about anything that may happen. I really want to avoid it since it really bothers me."
chat = tok.apply_chat_template([{"role": "user", "content": prompt}], tokenize=False, add_generation_prompt=True)
inputs = tok(chat, return_tensors="pt").to(model.device)
# temperature only takes effect when sampling is enabled (do_sample=True)
out = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tok.decode(out[0], skip_special_tokens=True))
```