# Sarcastic Bakery Chatbot — LoRA Adapter
This repository contains LoRA adapter weights for a fine-tuned large language model that produces sarcastic yet polite responses, while strictly adhering to a bakery-only assistant role.
The model was fine-tuned as part of an academic project to demonstrate parameter-efficient fine-tuning (PEFT) of LLMs.
## Model Details

### Model Description
- Developed by: Harshika Dewani
- Model type: LoRA adapter for causal language model
- Language(s): English
- Base model: `unsloth/llama-3-8b-bnb-4bit`
- Fine-tuning method: LoRA (Low-Rank Adaptation)
- Quantization: 4-bit (QLoRA)
- License: Same as base model (LLaMA 3 license)
This model modifies the conversational style and tone of the base model to introduce sarcasm while preserving politeness and role adherence.
## Intended Uses

### Direct Use
This model is intended to be loaded as a LoRA adapter on top of the base LLaMA-3 model to generate sarcastic bakery-themed responses.
### Out-of-Scope Use
- Medical, legal, financial, or technical advice
- Non-bakery domain conversations
- Malicious or harmful content generation
## Training Details

### Training Data
- Custom instruction–response dataset
- Domain: Bakery customer support
- Focus:
  - Sarcastic tone
  - Polite refusals
  - Strict role adherence
- Dataset size: ~100–300 curated examples
- Format: Instruction → Response pairs
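The instruction→response format can be illustrated with a small sketch. The field names (`instruction`, `response`) and the prompt template below are assumptions for illustration, not necessarily the repository's actual schema:

```python
# Hypothetical example record; the field names and template are illustrative
# assumptions, not the dataset's confirmed schema.
example = {
    "instruction": "Do you sell pizza?",
    "response": (
        "Oh, absolutely, right next to our sushi counter. Just kidding: "
        "we're a bakery, so may I interest you in a focaccia instead?"
    ),
}

def to_training_text(record: dict) -> str:
    """Flatten one instruction-response pair into a single SFT training string."""
    return (
        "### Instruction:\n"
        f"{record['instruction']}\n\n"
        "### Response:\n"
        f"{record['response']}"
    )

print(to_training_text(example))
```

Each curated pair would be flattened this way (or via the tokenizer's chat template) before being passed to the trainer.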
### Training Procedure
- Framework: Unsloth + TRL
- Trainer: `SFTTrainer`
- Fine-tuning strategy: Supervised Fine-Tuning (SFT) with LoRA
- Only LoRA adapter weights were trained; base model weights were frozen.
### Training Hyperparameters
- Batch size (per device): 2
- Gradient accumulation steps: 4
- Learning rate: 2e-4
- Max training steps: 100
- Precision: FP16 / BF16 (hardware dependent)
- Hardware: free-tier Google Colab GPU
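With these settings, gradients are accumulated over 4 micro-batches of 2 examples each, so the effective batch size is 8. A quick sanity check:

```python
# Hyperparameters as listed above
per_device_batch_size = 2
gradient_accumulation_steps = 4
max_steps = 100

# Effective batch size = micro-batch size x accumulation steps
effective_batch_size = per_device_batch_size * gradient_accumulation_steps
# Total examples processed over training (with repetition over the small dataset)
examples_seen = effective_batch_size * max_steps

print(effective_batch_size)  # 8
print(examples_seen)         # 800
```

Since the dataset holds only ~100-300 examples, 800 processed examples means each pair is seen several times over the 100 steps.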
## Evaluation

### Evaluation Approach
The model was evaluated qualitatively using:
- Before vs after comparison with the base model
- Manual inspection of:
  - Sarcasm presence
  - Politeness
  - Role adherence
Example behavior change for the prompt "Do you sell pizza?":
- Base model: Neutral refusal
- Fine-tuned model: Sarcastic, bakery-focused refusal
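Two of the three manual criteria could be partially automated with a crude keyword heuristic; sarcasm presence still requires human judgment. The keyword lists below are illustrative assumptions, not the project's actual rubric:

```python
def quick_check(response: str) -> dict:
    """Rough keyword-based flags for role adherence and politeness.

    Purely illustrative; evaluation in the project itself was manual.
    """
    text = response.lower()
    bakery_terms = ("bakery", "bread", "croissant", "pastry", "cake")
    polite_terms = ("sorry", "afraid", "unfortunately", "please", "thank")
    return {
        "role_adherence": any(term in text for term in bakery_terms),
        "politeness": any(term in text for term in polite_terms),
    }

print(quick_check("I'm afraid we only sell bread and pastry here."))
# {'role_adherence': True, 'politeness': True}
```

A heuristic like this could flag obvious regressions across a batch of prompts before the slower manual before/after comparison.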
## How to Use the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "unsloth/llama-3-8b-bnb-4bit"
adapter = "your-username/sarcastic-bakery-lora"  # replace with the actual adapter repo

# Load the tokenizer and the 4-bit quantized base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    load_in_4bit=True,
    device_map="auto",
)

# Attach the LoRA adapter weights (requires the `peft` package)
model.load_adapter(adapter)

prompt = "Do you sell pizza?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```