---
language: en
license: gemma
base_model: google/gemma-3-4b-it
tags:
- slipstream
- inter-agent-protocol
- sft
- gemma-3
---
# gemma-3-4b-it-slipstream-sft

Gemma 3 4B IT fine-tuned on the Slipstream-TQT dataset to speak the Slipstream inter-agent protocol.
## Training

- Base model: google/gemma-3-4b-it
- Method: SFT with LoRA (r=8, alpha=16)
- Dataset: anthonym21/slipstream-tqt
- Epochs: 1
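For reference, the LoRA hyperparameters above can be expressed as a `peft` configuration. This is a minimal sketch, assuming the `peft` library was used; the `target_modules` and `lora_dropout` values are illustrative assumptions, not confirmed by this card — only `r=8` and `alpha=16` come from the training setup.

```python
from peft import LoraConfig

# r=8 and lora_alpha=16 are from the card; the rest are assumed defaults
# for a typical Gemma SFT run, not confirmed settings.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```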
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("anthonym21/gemma-3-4b-it-slipstream-sft")
tokenizer = AutoTokenizer.from_pretrained("anthonym21/gemma-3-4b-it-slipstream-sft")

# Generate a SLIP message from a natural-language request
prompt = "Request a code review for PR #42"
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
## Next Steps

This model is stage 1 of a three-stage pipeline:

1. SFT (this model) - learn the protocol format
2. GRPO - RL alignment via slipstream-gov-env for safe usage
3. Trim - quantize/distill the aligned model