Built with Axolotl

See axolotl config

axolotl version: 0.13.0.dev0

# =========================
# Axolotl SFT config (Gemma3 4B PT, full finetune, bf16, grad ckpt)
# =========================

# ---- Model ----
base_model: google/gemma-3-4b-pt
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

# Train only text generation (language model) layers
unfrozen_parameters:
  - "model.language_model.*"

trust_remote_code: true
strict: false

# Quantization OFF (full finetune)
load_in_8bit: false
load_in_4bit: false

# ---- Chat formatting ----
chat_template: gemma3

# ---- Dataset ----
# Dataset is already ShareGPT-style (list of turns with roles)
datasets:
  - path: RLHFlow/RLHFlow-SFT-Dataset-ver2
    type: chat_template
    field_messages: conversations
    roles_to_train: ["assistant"]
    split: train
    train_on_split: train


val_set_size: 0.01
train_on_inputs: false   # only learn on assistant tokens

# ---- Tokenization / packing ----
sequence_len: 8192        # if memory or stability is an issue, drop to 4096 first, then return to 8192
sample_packing: true
pad_to_sequence_len: true

# Cache the prepared dataset (put on $WORK on Jean Zay)
dataset_prepared_path: ./prepared/gemma3-4b-8192
dataset_processes: 32 
dataloader_pin_memory: true
dataloader_num_workers: 8
dataloader_prefetch_factor: 2

# ---- Output / logging ----
output_dir: ./outputs/gemma3-4b-sft
save_safetensors: true

logging_steps: 10
save_strategy: "epoch"
saves_per_epoch: 2
save_total_limit: 10

# Optional W&B
wandb_project: gemma3-sft
wandb_name: gemma3-4b-pt_seq8192_lr1.5e-5_bs128
wandb_watch:
wandb_log_model:

# ---- Precision / speed ----
bf16: true
fp16: false
tf32: true

flash_attention: true
xformers_attention:

# ---- Training hyperparams ----
num_epochs: 3                      # start with 1 epoch to validate end-to-end; then increase
micro_batch_size: 1
gradient_accumulation_steps: 16
auto_resume_from_checkpoints: true

optimizer: adamw_torch_fused       # simplest baseline (no bitsandbytes dependency surprises)
lr_scheduler: cosine
learning_rate: 1.5e-5              # reasonable starting point for a 4B full finetune
warmup_ratio: 0.05

weight_decay: 0.0
max_grad_norm: 1.0

group_by_length: false

# ---- Memory knobs ----
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
overrides_of_model_config:
  use_cache: false


# ---- Distributed ----
# When you launch with torchrun, Axolotl will use DDP.
# Keep these empty so you do NOT enable ZeRO/FSDP.
ddp:
deepspeed:
fsdp:
fsdp_config:


# ---- Debug ----
debug:
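Two settings in the config above (train_on_inputs: false and roles_to_train: ["assistant"]) restrict the loss to assistant turns. A minimal sketch of how such masking typically works (this is an illustration, not Axolotl's actual code): labels for non-assistant tokens are set to -100, PyTorch's default ignore_index for cross-entropy, so they contribute nothing to the loss.

```python
# Illustration only, not Axolotl's implementation.
# -100 is PyTorch's default ignore_index for CrossEntropyLoss.
IGNORE_INDEX = -100

def mask_labels(token_ids, is_assistant):
    """Copy input ids to labels, masking every non-assistant position."""
    return [tid if asst else IGNORE_INDEX
            for tid, asst in zip(token_ids, is_assistant)]

# Toy example: 3 prompt tokens followed by 2 assistant tokens.
ids = [101, 2054, 2003, 7592, 102]
mask = [False, False, False, True, True]
print(mask_labels(ids, mask))  # [-100, -100, -100, 7592, 102]
```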


This model is a fine-tuned version of google/gemma-3-4b-pt on the RLHFlow/RLHFlow-SFT-Dataset-ver2 dataset.
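For throughput, the config enables sample_packing, which concatenates several short conversations into each 8192-token sequence instead of padding each one separately. A greedy first-fit sketch of the idea (an illustration under simplifying assumptions, not Axolotl's packer):

```python
# Greedy first-fit packing sketch: group example lengths into bins
# whose total length stays within sequence_len, reducing padding waste.
# Axolotl's actual packer is more sophisticated; this only shows the idea.
def pack(lengths, seq_len):
    bins = []  # each bin holds example lengths summing to <= seq_len
    for n in sorted(lengths, reverse=True):
        for b in bins:
            if sum(b) + n <= seq_len:
                b.append(n)
                break
        else:
            bins.append([n])
    return bins

packed = pack([5000, 3000, 2500, 6000, 1000], seq_len=8192)
print(packed)  # [[6000, 1000], [5000, 3000], [2500]]
```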

Model description

Full-parameter supervised fine-tune of google/gemma-3-4b-pt in bf16. Only the language-model (text) layers were unfrozen (unfrozen_parameters: model.language_model.*); the remaining parameters were kept frozen.

Intended uses & limitations

More information needed

Training and evaluation data

The model was trained on RLHFlow/RLHFlow-SFT-Dataset-ver2, a ShareGPT-style conversation dataset (turns with roles in a "conversations" field), with 1% of the training split held out for evaluation (val_set_size: 0.01).
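The dataset config reads ShareGPT-style rows from a "conversations" field and renders them with the gemma3 chat template. A hedged sketch of normalizing such rows to role/content messages (the field names "from"/"value" follow the common ShareGPT convention and are an assumption here, not a guarantee about this dataset's exact schema):

```python
# Sketch, assuming ShareGPT-style rows: each example carries a
# "conversations" list of turns keyed by "from"/"value" (or "role"/"content").
def to_messages(example):
    """Normalize one ShareGPT-style row into chat-template messages."""
    role_map = {"human": "user", "gpt": "assistant", "system": "system"}
    msgs = []
    for turn in example["conversations"]:
        role = turn.get("role") or role_map.get(turn.get("from"), "user")
        content = turn.get("content") or turn.get("value", "")
        msgs.append({"role": role, "content": content})
    return msgs

row = {"conversations": [{"from": "human", "value": "Hi"},
                         {"from": "gpt", "value": "Hello!"}]}
print(to_messages(row))
```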

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1.5e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • gradient_accumulation_steps: 16
  • total_train_batch_size: 128
  • total_eval_batch_size: 8
  • optimizer: adamw_torch_fused (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 137
  • training_steps: 2742
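The derived values above follow directly from the config: the effective batch size is micro batch × gradient accumulation × devices, and the warmup step count comes from warmup_ratio applied to the total training steps. A quick arithmetic check:

```python
# Sanity-check the derived hyperparameters listed above (pure arithmetic).
micro_batch_size = 1
gradient_accumulation_steps = 16
num_devices = 8
total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 128

training_steps = 2742
warmup_ratio = 0.05
warmup_steps = round(training_steps * warmup_ratio)
print(warmup_steps)  # 137
```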

Training results

Framework versions

  • Transformers 4.57.1
  • Pytorch 2.8.0
  • Datasets 4.4.1
  • Tokenizers 0.22.2
Model: dtiapkin/gemma3-4b-sft (4B params, BF16 safetensors)