# Gemma-3-1B-IT LoRA Adapter for GLoRE Multi-Class Classification

## Model Overview
This repository contains a LoRA adapter for `google/gemma-3-1b-it`, fine-tuned for multi-class text classification on the GLoRE dataset.
The model predicts one of the following 12 labels:
`Yes`, `No`, `Neutral`, `(D)`, `A`, `B`, `C`, `D`, `E`, `N`, `(C)`, `(A)`
The adapter is lightweight and extends Gemma-3-1B-IT with classification capability while keeping resource usage low.
## Use Cases
- Multi-class text classification
- Zero-shot / few-shot classification tasks using custom prompts
- Educational or research applications
- Lightweight inference on consumer GPUs (see the 4-bit loading sketch after this list)
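
For the consumer-GPU case, one option is to load the base model in 4-bit before attaching the adapter. Below is a minimal sketch assuming the optional `bitsandbytes` package is installed; the quantization settings are illustrative and not part of this repository:

```python
# Minimal 4-bit loading sketch for low-VRAM GPUs.
# Assumes `bitsandbytes` is installed; settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = "google/gemma-3-1b-it"
adapter = "SwashBuckler001/gemma-3-1b-it-LoRA-GLoRE"

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter)
```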
## How to Use

### Loading the LoRA Adapter
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "google/gemma-3-1b-it"
adapter = "SwashBuckler001/gemma-3-1b-it-LoRA-GLoRE"

# Load the base model, then attach the LoRA adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
model = PeftModel.from_pretrained(model, adapter)

# Generate a short completion containing the predicted label.
text = "Your input here"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
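
Note that several labels overlap as strings (for example `D` appears inside `(D)`), so parsing free-form generations can be brittle. A common alternative is to score each candidate label as a continuation of the input and take the most likely one. The helper below is an illustrative sketch reusing `model` and `tokenizer` from the snippet above; it is not shipped with this adapter:

```python
import torch

# The 12 candidate labels this adapter predicts.
LABELS = ["Yes", "No", "Neutral", "(D)", "A", "B", "C", "D", "E", "N", "(C)", "(A)"]

def classify(text: str) -> str:
    """Return the label whose tokens receive the highest log-likelihood."""
    prompt_ids = tokenizer(text, return_tensors="pt").input_ids
    scores = {}
    for label in LABELS:
        label_ids = tokenizer(label, add_special_tokens=False, return_tensors="pt").input_ids
        input_ids = torch.cat([prompt_ids, label_ids], dim=-1)
        with torch.no_grad():
            logits = model(input_ids).logits
        # Logits at position i predict token i + 1, so the label tokens are
        # predicted by the slice starting one step before them.
        label_logits = logits[0, prompt_ids.shape[-1] - 1 : -1]
        log_probs = label_logits.log_softmax(dim=-1)
        token_lls = log_probs.gather(1, label_ids[0].unsqueeze(-1)).squeeze(-1)
        scores[label] = token_lls.sum().item()
    return max(scores, key=scores.get)

print(classify("Your input here"))
```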
### Training Details
The adapter was trained with the following command:
```bash
python peft_training.py \
  --model-name google/gemma-3-1b-it \
  --train-file ../GLoRE/data/splits/train.jsonl \
  --output-dir gemma-3-1b-it-LoRA-GLoRE \
  --classes Yes No Neutral "(D)" A B C D E N "(C)" "(A)"
```
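
To deploy without a `peft` dependency at inference time, the adapter can be merged into the base weights. This is the standard PEFT merge pattern; the output path below is illustrative:

```python
# Merge the LoRA weights into the base model and save a standalone checkpoint.
# Reuses `model` and `tokenizer` from the loading snippet; the path is illustrative.
merged = model.merge_and_unload()
merged.save_pretrained("gemma-3-1b-it-GLoRE-merged")
tokenizer.save_pretrained("gemma-3-1b-it-GLoRE-merged")
```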