# output-games-misc
This model is a fine-tuned version of Qwen/Qwen3.5-9B.
W&B run: https://wandb.ai/cooawoo-personal/Qwen9B/runs/jpa9uykd
## Training procedure

### Hyperparameters
| Parameter | Value |
|---|---|
| Learning rate | 0.0001 |
| LR scheduler | cosine |
| Per-device batch size | 2 |
| Gradient accumulation | 4 |
| Effective batch size | 8 |
| Epochs | 1 |
| Max sequence length | 4096 |
| Optimizer | paged_adamw_8bit |
| Weight decay | 0.01 |
| Warmup ratio | 0.05 |
| Max gradient norm | 1.0 |
| Precision | bf16 |
| Loss type | nll |
| Chunked cross-entropy | yes |
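A quick sanity check on the batch arithmetic above. This is a sketch, not part of the training code: it assumes a single GPU (multiply by world size otherwise), one optimizer step per effective batch, and ignores how `truncation_strategy: split` changes the sample count.

```python
import math

# Effective batch size = per-device batch x gradient accumulation steps.
per_device_batch = 2
grad_accum_steps = 4
effective_batch = per_device_batch * grad_accum_steps  # 8, as in the table

# Rough optimizer steps per epoch, using the total sample count
# from the dataset statistics section.
samples = 13_227
steps_per_epoch = math.ceil(samples / effective_batch)
print(effective_batch, steps_per_epoch)  # 8 1654
```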
### LoRA configuration
| Parameter | Value |
|---|---|
| Rank (r) | 32 |
| Alpha | 64 |
| Target modules | attn.proj, down_proj, gate_proj, in_proj_a, in_proj_b, in_proj_qkv, in_proj_z, k_proj, linear_fc1, linear_fc2, o_proj, out_proj, q_proj, qkv, up_proj, v_proj |
| Quantization | 4-bit (nf4) |
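For intuition on what these values mean: LoRA learns a low-rank delta per target module, W' = W + (alpha / r) * B @ A, so alpha/r sets the update's scale and r bounds its rank. The sketch below also works out the parameter savings for a hypothetical 4096x4096 projection (an illustration only; the actual module shapes depend on the base model).

```python
r, alpha = 32, 64
scaling = alpha / r  # multiplier applied to the low-rank update B @ A

# Hypothetical square projection, for illustration only.
d = 4096
lora_params = 2 * d * r  # A is (r x d), B is (d x r)
full_params = d * d      # a full fine-tune of the same matrix
print(scaling, lora_params, full_params)  # 2.0 262144 16777216
```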
### Dataset statistics
| Dataset | Samples | Total tokens | Trainable tokens |
|---|---:|---:|---:|
| ToastyPigeon/brainrot-cleaned/brainrot_chatlog.jsonl | 3,179 | 999,446 | 999,446 |
| rpDungeon/some-revised-datasets/springdragon_processed.jsonl | 2,909 | 5,542,206 | 5,542,206 |
| ToastyPigeon/disco-chat | 1,742 | 431,320 | 431,320 |
| rpDungeon/some-revised-datasets/wrecklora_text.parquet | 3,639 | 10,717,729 | 10,717,729 |
| rpDungeon/some-revised-datasets/floyd_text.parquet | 1,758 | 6,689,254 | 6,689,254 |
| **Total** | 13,227 | 24,379,955 | 24,379,955 |
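The totals row can be cross-checked directly from the per-dataset figures:

```python
# (samples, total tokens) per dataset, copied from the table above.
datasets = {
    "brainrot_chatlog.jsonl": (3_179, 999_446),
    "springdragon_processed.jsonl": (2_909, 5_542_206),
    "disco-chat": (1_742, 431_320),
    "wrecklora_text.parquet": (3_639, 10_717_729),
    "floyd_text.parquet": (1_758, 6_689_254),
}
total_samples = sum(s for s, _ in datasets.values())
total_tokens = sum(t for _, t in datasets.values())
print(total_samples, total_tokens)  # 13227 24379955
```

Trainable tokens equal total tokens for every dataset, consistent with `assistant_only_loss: false` in the data config (loss is computed on all tokens).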
## Training config

```yaml
model_name_or_path: Qwen/Qwen3.5-9B
bf16: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
use_liger: true
use_cce: true
neftune_noise_alpha: 10
dataloader_num_workers: 4
dataloader_pin_memory: true
max_length: 4096
learning_rate: 0.0001
warmup_ratio: 0.05
weight_decay: 0.01
lr_scheduler_type: cosine
per_device_train_batch_size: 2
gradient_accumulation_steps: 4
optim: paged_adamw_8bit
max_grad_norm: 1.0
use_peft: true
load_in_4bit: true
bnb_4bit_quant_type: nf4
lora_r: 32
lora_alpha: 64
lora_dropout: 0
logging_steps: 1
disable_tqdm: false
save_strategy: steps
save_steps: 500
save_total_limit: 3
report_to: wandb
output_dir: output-games-misc
data_config: data.yaml
prepared_dataset: prepared
attn_implementation: flash_attention_2
num_train_epochs: 1
saves_per_epoch: 3
run_name: qwen35-9b-qlora-games-misc
```
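The `lr_scheduler_type: cosine` and `warmup_ratio: 0.05` settings combine into the usual linear-warmup-then-cosine-decay curve. A minimal re-implementation for intuition, assuming the standard formulation (linear ramp over the first 5% of steps, then cosine decay to zero):

```python
import math

def lr_at(step, total_steps, base_lr=1e-4, warmup_ratio=0.05):
    """Learning rate at a given optimizer step: linear warmup, cosine decay."""
    warmup = int(total_steps * warmup_ratio)
    if step < warmup:
        return base_lr * step / max(1, warmup)          # linear ramp to base_lr
    progress = (step - warmup) / max(1, total_steps - warmup)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))  # decay to 0

# Peaks at base_lr once warmup ends, reaches ~0 at the final step.
print(lr_at(0, 1000), lr_at(50, 1000), lr_at(1000, 1000))
```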
## Data config

```yaml
datasets:
  - path: rpDungeon/some-revised-datasets
    data_files: springdragon_processed.jsonl
    type: text
    columns:
      - text
    truncation_strategy: split
  - path: rpDungeon/some-revised-datasets
    data_files: wrecklora_text.parquet
    type: text
    truncation_strategy: split
  - path: ToastyPigeon/disco-chat
    type: text
    truncation_strategy: split
  - path: rpDungeon/some-revised-datasets
    data_files: floyd_text.parquet
    type: text
    truncation_strategy: split
  - path: ToastyPigeon/brainrot-cleaned
    data_files: brainrot_chatlog.jsonl
    type: text
    columns:
      - text
    truncation_strategy: truncate
shuffle_datasets: true
shuffle_combined: true
shuffle_seed: 42
eval_split: 0.0
split_seed: 42
assistant_only_loss: false
```
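The two `truncation_strategy` values handle over-long sequences differently: `split` keeps all tokens by breaking a sequence into multiple max-length chunks, while `truncate` keeps only the first max-length tokens. A hedged sketch of that behavior (an illustration of the semantics, not the trainer's actual implementation):

```python
def apply_truncation(tokens, max_length=4096, strategy="split"):
    """Illustrate 'split' vs 'truncate' handling of an over-long token list."""
    if strategy == "truncate":
        return [tokens[:max_length]]           # keep only the first chunk
    # "split": chunk the whole sequence so no tokens are discarded
    return [tokens[i:i + max_length] for i in range(0, len(tokens), max_length)]

long_doc = list(range(10_000))
print([len(c) for c in apply_truncation(long_doc, strategy="split")])     # [4096, 4096, 1808]
print([len(c) for c in apply_truncation(long_doc, strategy="truncate")])  # [4096]
```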
## Framework versions

- PEFT: 0.18.1
- Loft: 0.1.0
- Transformers: 5.2.0
- Pytorch: 2.6.0+cu124
- Datasets: 4.6.1
- Tokenizers: 0.22.2