# gemma4-31b-pt-embed-r32a8-textcomp

This model is a LoRA adapter for Columbidae/gemma4-31b-pt-embed-it, fine-tuned with SFT on raw-text (text-completion) data.

W&B run: https://wandb.ai/cooawoo-personal/Gemma4-31B/runs/lkdzz1dj

## Training procedure

### Hyperparameters

| Parameter | Value |
|---|---|
| Learning rate | 1e-05 |
| LR scheduler | rex (custom; max_lr=1e-5, min_lr=1e-6, warmup_ratio=0.05) |
| Per-device batch size | 1 |
| Gradient accumulation | 4 |
| Effective batch size | 4 |
| Epochs | 2 |
| Max sequence length | 4096 |
| Optimizer | paged_adamw_8bit |
| Warmup ratio | 0.05 |
| Max gradient norm | 1.0 |
| Precision | bf16 |
| Loss type | nll |
| Chunked cross-entropy | yes |
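
The scheduler is custom, so the exact curve is not published here. Below is a minimal sketch of the common REX ("reverse exponential") decay profile combined with the warmup and floor values from the table; the run's actual implementation may differ.

```python
# Sketch of a REX schedule with linear warmup and an LR floor.
# Assumes the common REX profile f(x) = (1 - x) / (1 - x / 2);
# the run's custom scheduler may use a different shape.
def rex_lr(step: int, total_steps: int,
           max_lr: float = 1e-5, min_lr: float = 1e-6,
           warmup_ratio: float = 0.05) -> float:
    warmup_steps = max(1, int(total_steps * warmup_ratio))
    if step < warmup_steps:
        # linear warmup from 0 to max_lr
        return max_lr * step / warmup_steps
    # progress through the decay phase, in [0, 1]
    x = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    rex = (1 - x) / (1 - x / 2)
    # decay from max_lr down to the min_lr floor
    return min_lr + (max_lr - min_lr) * rex
```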

### LoRA configuration

| Parameter | Value |
|---|---|
| Rank (r) | 32 |
| Alpha | 8 |
| Target modules | `.*language_model\.layers\.\d+\.(self_attn\.(q\|k\|v\|o)_proj\|mlp\.(gate\|up\|down)_proj)$` |
| rsLoRA | yes |
| Quantization | 4-bit (nf4) |
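
For reference, here is the same configuration expressed as a `peft.LoraConfig`; the target-module regex is copied from the training config below, while `task_type` is an assumption:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,
    lora_alpha=8,       # with rsLoRA the scaling is alpha / sqrt(r), not alpha / r
    lora_dropout=0.0,
    use_rslora=True,
    target_modules=(
        r".*language_model\.layers\.\d+\."
        r"(self_attn\.(q|k|v|o)_proj|mlp\.(gate|up|down)_proj)$"
    ),
    task_type="CAUSAL_LM",  # assumption: standard causal-LM adapter
)
```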

### Dataset statistics

| Dataset | Samples | Total tokens | Trainable tokens |
|---|---|---|---|
| brainrot_chatlog.jsonl | 3,232 | 1,115,996 | 1,115,996 |
| erotica_quality_cleaned_20pct_seed42.json | 575 | 2,212,468 | 2,212,468 |
| worm_chapters.json | 658 | 2,268,255 | 2,268,255 |
| counter_signal_training_balanced.jsonl | 1,038 | 4,173,666 | 4,173,666 |
| **Total** | **5,503** | **9,770,385** | **9,770,385** |

Trainable tokens equal total tokens because loss is computed over every token (`assistant_only_loss: false`).
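
The per-file token counts could in principle be recomputed with the base model's tokenizer. A rough sketch; it assumes each record stores its raw text under a `text` key, which may not match the actual schema:

```python
import json
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Columbidae/gemma4-31b-pt-embed-it")

def count_tokens(path: str) -> int:
    # .json files hold a list of records; .jsonl files hold one record per line
    with open(path) as f:
        records = json.load(f) if path.endswith(".json") else [json.loads(line) for line in f]
    return sum(len(tok(rec["text"])["input_ids"]) for rec in records)

for name in ("brainrot_chatlog.jsonl", "worm_chapters.json"):
    print(name, count_tokens(name))
```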

### Training config

```yaml
model_name_or_path: Columbidae/gemma4-31b-pt-embed-it
data_config: data.yaml
prepared_dataset: prepared_packed
output_dir: gemma4-31b-pt-embed-r32a8-textcomp
attn_implementation: flex_attention
bf16: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
use_cce: true
chunked_mlp: true
chunked_mlp_chunks: 16
dataloader_num_workers: 2
dataloader_pin_memory: true
model_parallel: true
max_memory:
  0: 16GiB
  1: 24GiB
max_length: 4096
per_device_train_batch_size: 1
gradient_accumulation_steps: 4
pad_to_multiple_of: 128
use_peft: true
load_in_4bit: true
bnb_4bit_quant_type: nf4
lora_r: 32
lora_alpha: 8
lora_dropout: 0.0
use_rslora: true
lora_target_modules: .*language_model\.layers\.\d+\.(self_attn\.(q|k|v|o)_proj|mlp\.(gate|up|down)_proj)$
learning_rate: 1.0e-05
lr_scheduler_type: rex
warmup_ratio: 0.05
weight_decay: 0.0
max_grad_norm: 1.0
optim: paged_adamw_8bit
num_train_epochs: 2
saves_per_epoch: 2
save_total_limit: 4
rolling_save_steps: 30
rolling_save_total_limit: 1
logging_steps: 1
disable_tqdm: false
report_to: wandb
run_name: g4-31b-pt-embed-r32a8-textcomp
```
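
Under this config, the base model is presumably loaded in 4-bit before the adapter is attached. A sketch of that load path using standard Transformers + bitsandbytes APIs (the surrounding training framework is not shown):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with bf16 compute, matching load_in_4bit / bnb_4bit_quant_type / bf16
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "Columbidae/gemma4-31b-pt-embed-it",
    quantization_config=bnb,
    torch_dtype=torch.bfloat16,
    attn_implementation="flex_attention",
    device_map="auto",                      # model_parallel: true
    max_memory={0: "16GiB", 1: "24GiB"},    # per-GPU caps from the config
)
# non-reentrant activation checkpointing, matching gradient_checkpointing_kwargs
model.gradient_checkpointing_enable(
    gradient_checkpointing_kwargs={"use_reentrant": False}
)
```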

### Data config

```yaml
datasets:
- path: erotica_quality_cleaned_20pct_seed42.json
  type: text
  truncation_strategy: split
- path: counter_signal_training_balanced.jsonl
  type: text
  truncation_strategy: split
- path: worm_chapters.json
  type: text
  truncation_strategy: split
- path: brainrot_chatlog.jsonl
  type: text
  truncation_strategy: split
shuffle_datasets: true
shuffle_combined: true
shuffle_seed: 42
eval_split: 0
split_seed: 42
assistant_only_loss: false
```
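
Each dataset uses `truncation_strategy: split`, which plausibly means long documents are chopped into consecutive `max_length` chunks of token ids instead of being truncated, so no text is discarded. A minimal sketch with a hypothetical helper name:

```python
# Chop a tokenized document into consecutive max_length windows.
def split_into_chunks(input_ids: list[int], max_length: int = 4096) -> list[list[int]]:
    return [input_ids[i:i + max_length] for i in range(0, len(input_ids), max_length)]

# e.g. a 10,000-token chapter yields chunks of 4096, 4096, and 1808 tokens
chunks = split_into_chunks(list(range(10_000)))
assert [len(c) for c in chunks] == [4096, 4096, 1808]
```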

## Framework versions

- PEFT: 0.18.1
- Loft: 0.1.0
- Transformers: 5.5.4
- PyTorch: 2.6.0+cu124
- Datasets: 4.6.1
- Tokenizers: 0.22.2
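
Since this is a text-completion adapter, inference is plain continuation on top of the base model. A minimal usage sketch with PEFT (the adapter repo id below is an assumption and may differ):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Columbidae/gemma4-31b-pt-embed-it"
adapter_id = "BirdToast/gemma4-31b-confetti-adapter"  # assumed adapter repo id

tok = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# attach the LoRA adapter on top of the base weights
model = PeftModel.from_pretrained(model, adapter_id)

inputs = tok("The city of Brockton Bay", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```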