Use with the llama-cpp-python library
# Gated model: log in with a HF token that has gated-access permission
hf auth login
# Install the library first:
# !pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="AlexHung29629/gemma4-e4b-sft-4gpu-fullft-32k",
	filename="model-q4.gguf",
)
llm.create_chat_completion(
	messages = [
		{
			"role": "user",
			"content": [
				{
					"type": "text",
					"text": "Describe this image in one sentence."
				},
				{
					"type": "image_url",
					"image_url": {
						"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
					}
				}
			]
		}
	]
)
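The chat payload above is plain JSON-style Python data, so it can be built and inspected before the model call. A minimal sketch (the `make_image_message` helper is hypothetical, not part of llama-cpp-python):

```python
# Build the multimodal user message used in the example above as plain dicts.
def make_image_message(prompt: str, image_url: str) -> dict:
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

msg = make_image_message(
    "Describe this image in one sentence.",
    "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg",
)
assert msg["role"] == "user"
assert [part["type"] for part in msg["content"]] == ["text", "image_url"]
```

The resulting dict is exactly what gets passed in the `messages` list of `create_chat_completion`.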


Built with Axolotl

See axolotl config

axolotl version: 0.16.0.dev0

# config-4gpu-fullft-e4b-32k.yml
base_model: /models/gemma-4-e4b-it

embeddings_skip_upcast: true
trust_remote_code: true
chat_template: gemma
unfrozen_parameters:
  - model.language_model.layers.(2|3|4)[\d].(_checkpoint_wrapped_module.)?(mlp).(up|down|gate)_proj
# ====================== Multi-GPU setup (FSDP) ======================
fsdp_version: 2
fsdp_config:
  offload_params: false
  state_dict_type: FULL_STATE_DICT
  auto_wrap_policy: TRANSFORMER_BASED_WRAP
  transformer_layer_cls_to_wrap: Gemma4TextDecoderLayer
# ====================== Liger Kernel ======================
plugins:
  - axolotl.integrations.liger.LigerPlugin

torch_compile: false
liger_layer_norm: false
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_rms_norm_gated: true
sdp_attention: true

# ====================== Datasets ======================
datasets:
  - path: /notebook/train_segments.jsonl
    type: input_output

dataset_processes: 4
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: false

# ====================== Key: long context 32768 ======================
sequence_len: 16384
micro_batch_size: 1 # for 32k, must start at 1 to avoid OOM
gradient_accumulation_steps: 1 # effective batch size = 1 × 4 GPUs × 1 = 4
max_grad_norm: 1
num_epochs: 2

# Memory optimization (32k long context is very activation-heavy)
gradient_checkpointing: true
activation_offloading: false # strongly recommended to enable

# Optimizer
optimizer: adamw_torch
lr_scheduler: constant
learning_rate: 5e-6

# Mixed precision
bf16: true
tf32: true

# Saving and logging
save_safetensors: true
save_strategy: epoch
saves_per_epoch: 1
logging_steps: 5 # log a bit more often for long context
output_dir: ./outputs/gemma4-e4b-sft-4gpu-fullft-32k

use_tensorboard: true
#hub_model_id: AlexHung29629/WhiteDubstepFly
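Axolotl interprets each `unfrozen_parameters` entry as a regular expression over parameter names. The exact matching semantics inside Axolotl are assumed here (illustrated with `re.fullmatch`), but the pattern above selects only the MLP up/down/gate projections of layers 20-49, leaving everything else frozen:

```python
import re

# Pattern copied from the unfrozen_parameters entry of the config above.
PATTERN = r"model.language_model.layers.(2|3|4)[\d].(_checkpoint_wrapped_module.)?(mlp).(up|down|gate)_proj"

def is_unfrozen(param_name: str) -> bool:
    """Illustrative check: does a parameter name match the unfreeze pattern?"""
    return re.fullmatch(PATTERN, param_name) is not None

# MLP projections in layers 20-49 are trainable...
assert is_unfrozen("model.language_model.layers.25.mlp.up_proj")
# ...including when gradient checkpointing wraps the module...
assert is_unfrozen("model.language_model.layers.40._checkpoint_wrapped_module.mlp.gate_proj")
# ...but single-digit layers and attention weights stay frozen.
assert not is_unfrozen("model.language_model.layers.5.mlp.up_proj")
assert not is_unfrozen("model.language_model.layers.25.self_attn.q_proj")
```

The optional `(_checkpoint_wrapped_module.)?` group is there because gradient checkpointing renames wrapped submodules.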

outputs/gemma4-e4b-sft-4gpu-fullft-32k

This model was fine-tuned from /models/gemma-4-e4b-it on the /notebook/train_segments.jsonl dataset.

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-06
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • total_train_batch_size: 4
  • total_eval_batch_size: 4
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: constant
  • lr_scheduler_warmup_steps: 7
  • training_steps: 262
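The reported total_train_batch_size is just the product of the per-device batch size, the device count, and the gradient accumulation steps. A quick check, using the values listed above:

```python
# Effective (total) train batch size for this run.
micro_batch_size = 1              # per-device batch size from the config
num_devices = 4                   # multi-GPU run on 4 devices
gradient_accumulation_steps = 1   # no accumulation

total_train_batch_size = micro_batch_size * num_devices * gradient_accumulation_steps
assert total_train_batch_size == 4  # matches the value reported above
```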

Training results

Framework versions

  • Transformers 5.5.0
  • Pytorch 2.10.0+cu130
  • Datasets 4.5.0
  • Tokenizers 0.22.2
Model size: 9B params · BF16 · Safetensors