# Model Card: Granite 3.2 8B Instruct - BFRPG LoRA Adapter

## Overview

A LoRA adapter fine-tuned on top of IBM Granite 3.2 8B Instruct for Basic Fantasy Role-Playing Game (BFRPG) rules Q&A. This is a parameter-efficient adapter: the base model weights are not modified. The adapter is loaded on top of the base model at inference time using the PEFT library.

## Model Details

| Property | Value |
|---|---|
| Base Model | ibm-granite/granite-3.2-8b-instruct |
| Parameters | ~8B (base); adapter weights ~198 MB on disk |
| Fine-Tuning Method | QLoRA SFT (4-bit quantized base + LoRA adapter) |
| LoRA Rank | 16 |
| LoRA Alpha | 32 |
| LoRA Dropout | 0.0 |
| Target Modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Epochs | 5 |
| Effective Batch Size | 2 |
| Learning Rate | 5e-6 |
| Max Sequence Length | 512 |
| Quantization | 4-bit NF4 via bitsandbytes |
| Hardware | NVIDIA L40S (48 GB) |

## Training Data

- **Dataset:** 6–8 synthetic Q&A pairs generated from the Basic Fantasy RPG rulebook
- **Focus:** Thief class abilities (Open Locks, Pick Pockets, Move Silently, etc.) and general BFRPG rules
- **Generation Method:** LLM-based synthetic data pipeline with faithfulness judging via sdg_hub

**System Prompt:**

> You are a rules expert for the Basic Fantasy Role-Playing Game. Answer questions accurately based on the official rules. Be specific and cite page references or table values where possible.
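Each synthetic Q&A pair can be stored as a chat-format record, one JSON object per line (JSONL), which is the conventional input shape for SFT. The sketch below is illustrative only: the field names and the answer text are assumptions, not rows from the actual dataset.

```python
import json

# System prompt from the card above.
SYSTEM_PROMPT = (
    "You are a rules expert for the Basic Fantasy Role-Playing Game. "
    "Answer questions accurately based on the official rules. "
    "Be specific and cite page references or table values where possible."
)

# Hypothetical training record in chat-messages format.
record = {
    "messages": [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "What is a 1st-level Thief's Open Locks chance?"},
        {
            "role": "assistant",
            "content": "A 1st-level Thief has a 25% chance to Open Locks "
                       "(see the Thief Abilities table).",
        },
    ]
}

# Serialize as one JSONL line.
line = json.dumps(record)
print(line[:60])
```

A dataset file would simply contain one such line per Q&A pair.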

## Usage Example

This is a LoRA adapter, not a standalone model. Load the base model first, then apply the adapter:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel
import torch

# 4-bit quantization config
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)

# Load base model in 4-bit
base_model = AutoModelForCausalLM.from_pretrained(
    "ibm-granite/granite-3.2-8b-instruct",
    quantization_config=bnb_config,
    device_map="auto",
    dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained("ibm-granite/granite-3.2-8b-instruct")

# Apply LoRA adapter
model = PeftModel.from_pretrained(base_model, "redhat-ai-dev/basic-fantasy-granite-lora-adapter")
model.eval()

# Run inference
messages = [
    {"role": "system", "content": "You are a rules expert for the Basic Fantasy Role-Playing Game. Answer questions accurately based on the official rules."},
    {"role": "user", "content": "What happens if a Thief fails an Open Locks attempt?"},
]

inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to(model.device)
with torch.no_grad():
    outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```

## Training Procedure

This adapter was trained using TRL with the Unsloth backend via training_hub. The base model was loaded in 4-bit quantization (QLoRA) to fit within GPU memory constraints. Only the LoRA adapter weights were trained; the base model weights were frozen throughout.

## Framework Versions

- PEFT 0.18.1
- TRL 0.23.0
- Transformers 4.57.2
- PyTorch 2.10.0
- Datasets 4.6.0
- Tokenizers 0.22.2

## Context

Fine-tuned as part of a Red Hat AI workshop demonstrating the model adaptation step in an escalation pipeline: RAG → inference-time scaling (Best-of-N) → LoRA SFT. This adapter represents the final step, targeting knowledge gaps that retrieval and sampling could not resolve.
