Built with Axolotl

See axolotl config

axolotl version: 0.12.2

# --- Minimal continued-pretraining (CPT) config with QLoRA ---
base_model: Qwen/Qwen3-8B
tokenizer_config: Qwen/Qwen3-8B

# Plain JSONL for pretraining (one {"text": "..."} object per line); see the sketch after the config
pretraining_dataset:
  - path: json
    data_files:
      - data.jsonl
    field: text

# Length and packing
sequence_len: 2048
sample_packing: false
pad_to_sequence_len: false
train_on_inputs: false

# Compute
bf16: true
flash_attention: false
attn_implementation: sdpa

# Batch/steps (first, just verify the run goes through)
micro_batch_size: 24
gradient_accumulation_steps: 1
max_steps: 150

# Optimization
optimizer: adamw_torch
learning_rate: 1.0e-4
weight_decay: 0.1
lr_scheduler: cosine
warmup_ratio: 0.01

# Logging / saving
logging_steps: 2
save_steps: 20
output_dir: ./ckpts/Qwen3-8B-cpt
wandb_project: null

# QLoRA (the crucial part); see the PEFT sketch below
adapter: lora
load_in_4bit: true
bnb_4bit_quant_type: nf4
bnb_4bit_use_double_quant: true
bnb_4bit_compute_dtype: bfloat16

# LoRA hyperparameters
lora_r: 64
lora_alpha: 128
lora_dropout: 0.05
lora_target_modules:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
  - gate_proj
  - up_proj
  - down_proj

# Memory savings / IO
gradient_checkpointing: true
dataloader_num_workers: 1
dataset_processes: 1
dataloader_prefetch_factor: 8
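
With this file saved as, say, config.yaml, training is typically launched with `axolotl train config.yaml` (older releases used `accelerate launch -m axolotl.cli.train config.yaml`).

The `pretraining_dataset` block expects `data.jsonl` to contain one JSON object per line with a `text` field. A minimal sketch of producing such a file; the corpus contents are placeholders, and only the file name and field name come from the config:

```python
import json

# Placeholder corpus; in practice this would be your raw CPT text.
corpus = [
    "First raw document for continued pretraining...",
    "Second raw document...",
]

# One {"text": ...} object per line, matching `field: text` in the config.
with open("data.jsonl", "w", encoding="utf-8") as f:
    for doc in corpus:
        f.write(json.dumps({"text": doc}, ensure_ascii=False) + "\n")
```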

ckpts/Qwen3-8B-cpt

This model is a QLoRA adapter fine-tuned from Qwen/Qwen3-8B by continued pretraining on an unknown dataset.
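
Outside of Axolotl, the QLoRA and LoRA blocks in the config correspond roughly to the following transformers/PEFT setup. This is an illustrative sketch, assuming `bitsandbytes` is installed and a CUDA device is available, not a dump of what Axolotl runs internally:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 base weights with double quantization and bf16 compute
# (load_in_4bit / bnb_4bit_* in the config).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-8B",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA on every attention and MLP projection (lora_target_modules).
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

Note the effective LoRA scaling is alpha/r = 128/64 = 2.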

Model description

More information needed

Intended uses & limitations

More information needed
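
Pending details from the author, the standard PEFT pattern for using a LoRA adapter like this one is to load the base model and attach the adapter; the repo id below follows the model tree at the end of this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-8B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")

# Attach the CPT adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base, "SASAKI28/Qwen3-8B-cpt-adapter")

# Optional: fold the adapter into the base weights for adapter-free serving.
# model = model.merge_and_unload()
```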

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 24
  • eval_batch_size: 24
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 3
  • total_train_batch_size: 72
  • total_eval_batch_size: 72
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 2
  • training_steps: 150
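
The aggregate values above follow from the config and the 3-GPU run; a quick check of the arithmetic (the warmup rounding assumes the usual round-up behavior):

```python
import math

micro_batch_size = 24               # per device (train_batch_size above)
gradient_accumulation_steps = 1
num_devices = 3

# total_train_batch_size = 24 * 1 * 3 = 72
assert micro_batch_size * gradient_accumulation_steps * num_devices == 72

# warmup_ratio 0.01 over 150 training steps -> ceil(1.5) = 2 warmup steps
assert math.ceil(0.01 * 150) == 2
```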

Training results

Framework versions

  • PEFT 0.17.0
  • Transformers 4.55.2
  • Pytorch 2.6.0+cu124
  • Datasets 4.0.0
  • Tokenizers 0.21.4
Model tree for SASAKI28/Qwen3-8B-cpt-adapter

  • Base model: Qwen/Qwen3-8B-Base
  • Finetuned from: Qwen/Qwen3-8B
  • Adapter: this model
