See axolotl config
axolotl version: `0.10.0.dev0`

```yaml
# === Model Configuration ===
base_model: ByteDance-Seed/academic-ds-9B
load_in_8bit: false
load_in_4bit: false
# === Training Setup ===
num_epochs: 2
micro_batch_size: 2
gradient_accumulation_steps: 8
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
# === Hyperparameter Configuration ===
optimizer: apollo_adamw
# Apollo-mini configuration:
optim_args: "proj=random,rank=1,scale=128.0,scale_type=tensor,update_proj_gap=200"
# Regular Apollo configuration:
# optim_args:
optim_target_modules: all_linear
learning_rate: 1e-5
lr_scheduler: rex
weight_decay: 0.01
warmup_ratio: 0
max_grad_norm: 0.1
# === Data Configuration ===
datasets:
  - path: allura-forge/inkmix-v3.1
    type: chat_template
    split: train
    field_messages: conversations
    message_field_role: from
    message_field_content: value
dataset_prepared_path: last_run_prepared
chat_template_jinja: |
  {% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}
  {{bos_token}}{% for message in messages %}
  {{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}
  {% endfor %}
# === Plugins ===
plugins:
- axolotl.integrations.liger.LigerPlugin
# - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
# === Hardware Optimization ===
gradient_checkpointing: unsloth
gradient_checkpointing_kwargs:
  use_reentrant: false
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
#cut_cross_entropy: true
# === Wandb Tracking ===
wandb_project: bytedance-ds-9b-inkmix-v3
# === Checkpointing ===
saves_per_epoch: 2
save_total_limit: 3
# === Advanced Settings ===
output_dir: /mnt/persistent/ckpts
bf16: auto
flash_attention: true
train_on_inputs: false
group_by_length: false
logging_steps: 1
trust_remote_code: true
tokens:
  - '<|im_start|>'
special_tokens:
  eos_token: '<|im_end|>'
```
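
The chat template above is a standard ChatML layout. Below is a minimal sketch (not part of the original card) of what it produces for a single conversation, rendered with plain jinja2 rather than `tokenizer.apply_chat_template`; the two sample turns and the `<s>` bos_token value are illustrative assumptions, the unused `add_generation_prompt` guard is omitted, and any role-name normalization axolotl may apply (e.g. `human` → `user`) is not modeled here.

```python
from jinja2 import Template

# Same logic as chat_template_jinja above, collapsed to one line for clarity.
CHAT_TEMPLATE = (
    "{{ bos_token }}"
    "{% for message in messages %}"
    "{{ '<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n' }}"
    "{% endfor %}"
)

# inkmix-v3.1 stores turns under `conversations` with `from`/`value` keys;
# the chat_template dataset type maps them onto the `role`/`content` keys the
# template expects. These two turns are made-up examples.
raw_turns = [
    {"from": "human", "value": "Name one fact about axolotls."},
    {"from": "gpt", "value": "They can regenerate entire limbs."},
]
messages = [{"role": t["from"], "content": t["value"]} for t in raw_turns]

print(Template(CHAT_TEMPLATE).render(bos_token="<s>", messages=messages))
# <s><|im_start|>human
# Name one fact about axolotls.<|im_end|>
# <|im_start|>gpt
# They can regenerate entire limbs.<|im_end|>
```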
mnt/persistent/ckpts
This model is a fine-tuned version of ByteDance-Seed/academic-ds-9B on the allura-forge/inkmix-v3.1 dataset.
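
As a quick orientation, here is a hypothetical inference sketch (not part of the original card). The checkpoint path is a placeholder, the `user`/`assistant` role names and generation settings are illustrative assumptions, and the prompt is built by hand in the ChatML format configured above with `<|im_end|>` as the stop token.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt = "path/to/finetuned-checkpoint"  # placeholder; substitute the published repo id
tokenizer = AutoTokenizer.from_pretrained(ckpt, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    ckpt, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

# Build a ChatML prompt by hand, mirroring the training chat template.
prompt = (
    "<|im_start|>user\nWrite a short scene set in a lighthouse.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(
    **inputs,
    max_new_tokens=256,
    eos_token_id=tokenizer.convert_tokens_to_ids("<|im_end|>"),
)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```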
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a rough TrainingArguments sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: OptimizerNames.APOLLO_ADAMW with betas=(0.9, 0.999), epsilon=1e-08, and optimizer_args=proj=random,rank=1,scale=128.0,scale_type=tensor,update_proj_gap=200
- lr_scheduler_type: cosine
- num_epochs: 2.0
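
For readers reproducing these settings outside axolotl, the sketch below is a rough, hypothetical mapping onto Hugging Face `TrainingArguments` (axolotl builds the equivalent configuration internally). The APOLLO optimizer additionally requires the `apollo-torch` package; sample packing, the Liger kernels, and the `rex` scheduler requested in the config are not represented here.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="/mnt/persistent/ckpts",
    num_train_epochs=2,
    per_device_train_batch_size=2,   # micro_batch_size
    gradient_accumulation_steps=8,   # 2 * 8 = 16 effective batch on a single device
    learning_rate=1e-5,
    lr_scheduler_type="cosine",      # as reported above (the axolotl config requests `rex`)
    warmup_ratio=0.0,
    weight_decay=0.01,
    max_grad_norm=0.1,
    bf16=True,                       # `bf16: auto`; assumes bf16-capable hardware
    seed=42,
    optim="apollo_adamw",            # OptimizerNames.APOLLO_ADAMW (APOLLO-mini settings)
    optim_args="proj=random,rank=1,scale=128.0,scale_type=tensor,update_proj_gap=200",
    optim_target_modules="all_linear",  # apply the low-rank projection to every linear layer
    gradient_checkpointing=True,     # the axolotl run uses the unsloth variant
    gradient_checkpointing_kwargs={"use_reentrant": False},
    logging_steps=1,
    save_total_limit=3,
)
```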
Training results
Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1