Instructions for using BirdToast/gemma4-31b-confetti-adapter with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- PEFT
How to use BirdToast/gemma4-31b-confetti-adapter with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model from the Hub, then apply the LoRA adapter on top.
base_model = AutoModelForCausalLM.from_pretrained("Columbidae/gemma4-31b-pt-embed-it")
model = PeftModel.from_pretrained(base_model, "BirdToast/gemma4-31b-confetti-adapter")
```
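Once the adapter is applied, generation works like any Transformers causal LM. A minimal sketch (the tokenizer is assumed to come from the base model listed in the model tree; generation settings are illustrative):

```python
from transformers import AutoTokenizer

# Tokenizer comes from the base model; sampling settings are arbitrary.
tokenizer = AutoTokenizer.from_pretrained("Columbidae/gemma4-31b-pt-embed-it")
inputs = tokenizer("Once upon a time,", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```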
- Transformers
How to use BirdToast/gemma4-31b-confetti-adapter with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="BirdToast/gemma4-31b-confetti-adapter")
```

```python
# Load the model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("BirdToast/gemma4-31b-confetti-adapter", dtype="auto")
```
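A quick smoke test with the pipeline (the sampling settings here are illustrative, not recommended defaults):

```python
# Illustrative generation call; adjust sampling parameters as needed.
out = pipe("Once upon a time,", max_new_tokens=64, do_sample=True, temperature=0.7)
print(out[0]["generated_text"])
```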
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use BirdToast/gemma4-31b-confetti-adapter with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "BirdToast/gemma4-31b-confetti-adapter"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "BirdToast/gemma4-31b-confetti-adapter",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
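The same call can be made from Python. A minimal sketch using the `requests` library against the server started above (endpoint and parameters mirror the curl example):

```python
import requests

# Equivalent of the curl call above, against a running vLLM server.
resp = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "BirdToast/gemma4-31b-confetti-adapter",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(resp.json()["choices"][0]["text"])
```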
- SGLang
How to use BirdToast/gemma4-31b-confetti-adapter with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "BirdToast/gemma4-31b-confetti-adapter" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "BirdToast/gemma4-31b-confetti-adapter",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "BirdToast/gemma4-31b-confetti-adapter" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "BirdToast/gemma4-31b-confetti-adapter",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
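Since both SGLang and vLLM expose OpenAI-compatible APIs, the official `openai` client also works. A sketch against the SGLang server above (the `api_key` value is a placeholder; local servers typically ignore it unless configured otherwise):

```python
from openai import OpenAI

# Point the client at the local OpenAI-compatible endpoint.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")
completion = client.completions.create(
    model="BirdToast/gemma4-31b-confetti-adapter",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```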
- Docker Model Runner
How to use BirdToast/gemma4-31b-confetti-adapter with Docker Model Runner:
```shell
docker model run hf.co/BirdToast/gemma4-31b-confetti-adapter
```
gemma4-31b-pt-embed-r32a8-textcomp
This adapter was trained with supervised fine-tuning (SFT).
W&B run: https://wandb.ai/cooawoo-personal/Gemma4-31B/runs/lkdzz1dj
Training procedure
Hyperparameters
| Parameter | Value |
|---|---|
| Learning rate | 1e-05 |
| LR scheduler | rex (custom; max_lr=1e-5, min_lr=1e-6, warmup_ratio=0.05) |
| Per-device batch size | 1 |
| Gradient accumulation | 4 |
| Effective batch size | 4 |
| Epochs | 2 |
| Max sequence length | 4096 |
| Optimizer | paged_adamw_8bit |
| Warmup ratio | 0.05 |
| Max gradient norm | 1.0 |
| Precision | bf16 |
| Loss type | nll |
| Chunked cross-entropy | yes |
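The `rex` scheduler is custom to this training setup and its exact code isn't included in this card. A plausible sketch, assuming linear warmup followed by the reflected-exponential (REX) decay profile with the stated `max_lr`, `min_lr`, and `warmup_ratio`:

```python
def rex_lr(step: int, total_steps: int,
           max_lr: float = 1e-5, min_lr: float = 1e-6,
           warmup_ratio: float = 0.05) -> float:
    """Hypothetical reconstruction of the 'rex' schedule.

    Only max_lr, min_lr, and warmup_ratio are stated in the card;
    the custom implementation may differ from this sketch.
    """
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Linear warmup from 0 up to max_lr.
        return max_lr * (step + 1) / warmup_steps
    # Progress through the decay phase, z in [0, 1].
    z = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    # One common form of the REX profile: f(z) = (1 - z) / (1 - z / 2),
    # which starts at 1 and decays to 0.
    f = (1 - z) / (1 - z / 2)
    return min_lr + (max_lr - min_lr) * f
```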
LoRA configuration
| Parameter | Value |
|---|---|
| Rank (r) | 32 |
| Alpha | 8 |
| Target modules | `.*language_model\.layers\.\d+\.(self_attn\.(q\|k\|v\|o)_proj\|mlp\.(gate\|up\|down)_proj)$` |
| rsLoRA | yes |
| Quantization | 4-bit (nf4) |
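Expressed as a PEFT `LoraConfig`, the settings above correspond roughly to the following sketch (`task_type` is an assumption; PEFT interprets a string `target_modules` as a regex over full module names):

```python
from peft import LoraConfig

# Sketch of the adapter's configuration. With use_rslora=True the LoRA
# scaling factor is alpha / sqrt(r) = 8 / sqrt(32), rather than the
# standard alpha / r.
lora_config = LoraConfig(
    r=32,
    lora_alpha=8,
    lora_dropout=0.0,
    use_rslora=True,
    # A string is treated as a regex pattern over module names.
    target_modules=r".*language_model\.layers\.\d+\.(self_attn\.(q|k|v|o)_proj|mlp\.(gate|up|down)_proj)$",
    task_type="CAUSAL_LM",  # assumption: text-completion causal LM
)
```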
Dataset statistics
| Dataset | Samples | Total tokens | Trainable tokens |
|---|---|---|---|
| brainrot_chatlog.jsonl | 3,232 | 1,115,996 | 1,115,996 |
| erotica_quality_cleaned_20pct_seed42.json | 575 | 2,212,468 | 2,212,468 |
| worm_chapters.json | 658 | 2,268,255 | 2,268,255 |
| counter_signal_training_balanced.jsonl | 1,038 | 4,173,666 | 4,173,666 |
| Total | 5,503 | 9,770,385 | 9,770,385 |
Training config

```yaml
model_name_or_path: Columbidae/gemma4-31b-pt-embed-it
data_config: data.yaml
prepared_dataset: prepared_packed
output_dir: gemma4-31b-pt-embed-r32a8-textcomp
attn_implementation: flex_attention
bf16: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
use_cce: true
chunked_mlp: true
chunked_mlp_chunks: 16
dataloader_num_workers: 2
dataloader_pin_memory: true
model_parallel: true
max_memory:
  0: 16GiB
  1: 24GiB
max_length: 4096
per_device_train_batch_size: 1
gradient_accumulation_steps: 4
pad_to_multiple_of: 128
use_peft: true
load_in_4bit: true
bnb_4bit_quant_type: nf4
lora_r: 32
lora_alpha: 8
lora_dropout: 0.0
use_rslora: true
lora_target_modules: .*language_model\.layers\.\d+\.(self_attn\.(q|k|v|o)_proj|mlp\.(gate|up|down)_proj)$
learning_rate: 1.0e-05
lr_scheduler_type: rex
warmup_ratio: 0.05
weight_decay: 0.0
max_grad_norm: 1.0
optim: paged_adamw_8bit
num_train_epochs: 2
saves_per_epoch: 2
save_total_limit: 4
rolling_save_steps: 30
rolling_save_total_limit: 1
logging_steps: 1
disable_tqdm: false
report_to: wandb
run_name: g4-31b-pt-embed-r32a8-textcomp
```
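For orientation, the config above implies roughly 600 optimizer steps per epoch, assuming near-perfect packing of the ~9.77M training tokens into 4096-token sequences (a back-of-envelope estimate, not a logged figure):

```python
total_tokens = 9_770_385          # from the dataset statistics above
max_length = 4096                 # packed sequence length
effective_batch = 1 * 4           # per-device batch x gradient accumulation
epochs = 2

packed_sequences = total_tokens / max_length          # ~2385 sequences
steps_per_epoch = packed_sequences / effective_batch  # ~596 optimizer steps
print(f"~{steps_per_epoch:.0f} steps/epoch, ~{steps_per_epoch * epochs:.0f} total")
```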
Data config

```yaml
datasets:
  - path: erotica_quality_cleaned_20pct_seed42.json
    type: text
    truncation_strategy: split
  - path: counter_signal_training_balanced.jsonl
    type: text
    truncation_strategy: split
  - path: worm_chapters.json
    type: text
    truncation_strategy: split
  - path: brainrot_chatlog.jsonl
    type: text
    truncation_strategy: split
shuffle_datasets: true
shuffle_combined: true
shuffle_seed: 42
eval_split: 0
split_seed: 42
assistant_only_loss: false
```
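`truncation_strategy: split` most plausibly means long documents are chunked into consecutive `max_length` pieces rather than truncated, so no training text is discarded. A hypothetical sketch of that behavior:

```python
def split_long_example(input_ids: list[int], max_length: int = 4096) -> list[list[int]]:
    # Hypothetical 'split' truncation: rather than dropping tokens beyond
    # max_length, cut the sequence into consecutive max_length chunks.
    # The framework's handling of the final short chunk may differ.
    return [input_ids[i:i + max_length] for i in range(0, len(input_ids), max_length)]
```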
Framework versions
- PEFT: 0.18.1
- Loft: 0.1.0
- Transformers: 5.5.4
- Pytorch: 2.6.0+cu124
- Datasets: 4.6.1
- Tokenizers: 0.22.2
Model tree for BirdToast/gemma4-31b-confetti-adapter
Base model: Columbidae/gemma4-31b-pt-embed-it