# Kronk3.5-9B

This model is a LoRA adapter fine-tuned from [Qwen/Qwen3.5-9B](https://huggingface.co/Qwen/Qwen3.5-9B).

W&B run: https://wandb.ai/cooawoo-personal/huggingface/runs/v7ejy9o0
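
For a quick test of the adapter, a minimal inference sketch (assuming the adapter is published as `ToastyPigeon/Kronk3.5-9B`, per the model tree, with `Qwen/Qwen3.5-9B` reachable as the base model):

```python
# Minimal inference sketch; repo ids are taken from this card, everything
# else (dtype, device_map, generation settings) is an illustrative choice.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "ToastyPigeon/Kronk3.5-9B",
    torch_dtype=torch.bfloat16,  # training ran in bf16
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3.5-9B")

messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```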

## Training procedure

### Hyperparameters

| Parameter | Value |
| --- | --- |
| Learning rate | 0.0001 |
| LR scheduler | cosine |
| Per-device batch size | 1 |
| Effective batch size | 1 |
| Epochs | 5 |
| Max sequence length | 2048 |
| Optimizer | paged_ademamix_8bit |
| Weight decay | 0.01 |
| Warmup ratio | 0.05 |
| Max gradient norm | 1.0 |
| Precision | bf16 |
| Loss type | nll |

### LoRA configuration

| Parameter | Value |
| --- | --- |
| Rank (r) | 32 |
| Alpha | 8 |
| Target modules | `attn.proj`, `down_proj`, `gate_proj`, `in_proj_a`, `in_proj_b`, `in_proj_qkv`, `in_proj_z`, `k_proj`, `linear_fc1`, `linear_fc2`, `o_proj`, `out_proj`, `q_proj`, `qkv`, `up_proj`, `v_proj` |
| rsLoRA | yes |
| Quantization | 4-bit (NF4) |
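
For reference, the table above corresponds roughly to the following `peft`/`bitsandbytes` objects (a sketch; the compute dtype is an assumption based on the bf16 precision listed earlier):

```python
# Sketch of the quantization and adapter setup described in the table above.
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption: compute in bf16
)

lora_config = LoraConfig(
    r=32,
    lora_alpha=8,
    lora_dropout=0.0,
    use_rslora=True,  # scale by alpha / sqrt(r) instead of alpha / r
    target_modules=[
        "attn.proj", "down_proj", "gate_proj", "in_proj_a", "in_proj_b",
        "in_proj_qkv", "in_proj_z", "k_proj", "linear_fc1", "linear_fc2",
        "o_proj", "out_proj", "q_proj", "qkv", "up_proj", "v_proj",
    ],
    task_type="CAUSAL_LM",
)
```

Note that with rsLoRA enabled, the adapter output is scaled by alpha / sqrt(r) = 8 / sqrt(32) ≈ 1.41 rather than the standard alpha / r = 0.25.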

### Dataset statistics

| Dataset | Samples | Total tokens | Trainable tokens |
| --- | --- | --- | --- |
| kronk_instruct_messages.jsonl | 236 | 192,732 | 139,817 |
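
Trainable tokens are fewer than total tokens because the training config below sets `last_assistant_only_loss: true`: only tokens in each sample's final assistant turn contribute to the loss. A minimal sketch of the idea (the `assistant_span` argument is illustrative, not this trainer's actual API):

```python
# Positions outside the final assistant turn get label -100, which
# cross-entropy loss ignores; only the remaining tokens are "trainable".
def mask_non_assistant_labels(input_ids, assistant_span):
    start, end = assistant_span  # token indices of the final assistant turn
    return [
        tok if start <= i < end else -100  # -100 = ignored by the loss
        for i, tok in enumerate(input_ids)
    ]
```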
### Training config

```yaml
model_name_or_path: Qwen/Qwen3.5-9B
bf16: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
use_liger: true
max_length: 2048
last_assistant_only_loss: true
learning_rate: 0.0001
warmup_ratio: 0.05
weight_decay: 0.01
lr_scheduler_type: cosine
neftune_noise_alpha: 5
aux_loss_top_prob_weight: 0.1
per_device_train_batch_size: 1
gradient_accumulation_steps: 1
optim: paged_ademamix_8bit
max_grad_norm: 1.0
use_peft: true
load_in_4bit: true
bnb_4bit_quant_type: nf4
lora_r: 32
lora_alpha: 8
lora_dropout: 0
use_rslora: true
logging_steps: 1
disable_tqdm: false
save_strategy: steps
save_steps: 500
save_total_limit: null
report_to: wandb
output_dir: output
data_config: data.yaml
prepared_dataset: prepared
num_train_epochs: 5
saves_per_epoch: 1
run_name: qwen35-9b-kronk
```
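
One less common option here is `neftune_noise_alpha: 5`, which enables NEFTune: uniform noise is added to the input embeddings during training (the trainer applies this automatically when the option is set). A sketch of the scaling rule from the NEFTune paper:

```python
# NEFTune sketch: noise is sampled uniformly and scaled by
# alpha / sqrt(seq_len * dim), following the NEFTune paper.
import torch

def neftune_noise(embeddings: torch.Tensor, alpha: float = 5.0) -> torch.Tensor:
    # embeddings: (batch, seq_len, dim)
    seq_len, dim = embeddings.shape[1], embeddings.shape[2]
    scale = alpha / (seq_len * dim) ** 0.5
    return embeddings + torch.empty_like(embeddings).uniform_(-scale, scale)
```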
### Data config

```yaml
datasets:
- path: kronk_instruct_messages.jsonl
  type: conversational
  truncation_strategy: drop
  columns:
  - messages
shuffle_datasets: true
shuffle_combined: true
shuffle_seed: 42
eval_split: 0.0
split_seed: 42
```
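
The dataset is a JSONL file of conversations under a `messages` column. A hypothetical record illustrating that schema (the actual contents of kronk_instruct_messages.jsonl are not shown on this card):

```python
# Hypothetical example of one line in kronk_instruct_messages.jsonl;
# each line is a JSON object with a "messages" list of role/content turns.
import json

record = {
    "messages": [
        {"role": "user", "content": "Pull the lever, Kronk!"},
        {"role": "assistant", "content": "Wrong lever!"},
    ]
}
print(json.dumps(record))  # one JSON object per line in the .jsonl file
```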

## Framework versions

- PEFT: 0.18.1
- Loft: 0.1.0
- Transformers: 5.2.0
- PyTorch: 2.6.0+cu124
- Datasets: 4.6.1
- Tokenizers: 0.22.2
