Built with Axolotl

Axolotl config (axolotl version 0.13.2):

base_model: Qwen/Qwen2.5-Coder-7B-Instruct
model_type: Qwen2ForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: false

# Pre-tokenized datasets produced by scripts/dataset-scripts/preprocess_dataset/preprocess_diff_mask_chat.py
# (from felixwangg/stage_2_secure; token-mode diff, skip_indent, ctx=0).
# Columns: input_ids, attention_mask, labels, diff_mask.
# Labels are already -100 for non-assistant tokens; axolotl keeps them as-is.
datasets:
  - path: felixwangg/stage_2_secure_token_diff_mask_skip_indent_ctx0_chat
    type: pretokenized
    split: train
test_datasets:
  - path: felixwangg/stage_2_secure_token_diff_mask_skip_indent_ctx0_chat
    type: pretokenized
    split: validation
dataset_prepared_path: /home/tkwang/links/scratch/SecSteer-v2/axolotl-datasets/lora/Qwen2.5-Coder-7B/stage_2_secure_token_diff_mask_skip_indent_ctx0_chat
val_set_size: 0
output_dir: /home/tkwang/links/scratch/SecSteer-v2/axolotl-outputs/lora/Qwen2.5-Coder-7B-stage2-secure-token-diff-ctx0
sequence_len: 4096
sample_packing: false
eval_sample_packing: false
pad_to_sequence_len: true

adapter: lora
lora_model_dir: /home/tkwang/links/scratch/SecSteer-v2/axolotl-outputs/lora/Qwen2.5-Coder-7B-stage1-combined/checkpoint-6
lora_r: 16
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
merge_lora: false

wandb_project: diff-mask-stage1-2-ctx-0
wandb_entity: wtkuan
wandb_watch: "false"
wandb_name: Qwen2.5-Coder-7B-stage2-secure-token-diff-ctx0
wandb_log_model: "false"

gradient_accumulation_steps: 4
micro_batch_size: 4
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 4e-05

bf16: true
tf32: false

gradient_checkpointing: true
resume_from_checkpoint:
logging_steps: 1
flash_attention: true

num_epochs: 2
warmup_ratio: 0.1
early_stopping_patience: 1000
eval_steps: 15
save_steps: 15
save_total_limit: 1000
load_best_model_at_end: true

weight_decay: 0.02
special_tokens:

# Diff-mask weighted loss: CE(logit_t, label_t) * (1 + alpha * diff_mask_{t+1})
# Security-sensitive tokens (diff_mask=1) get weight (1 + diff_mask_alpha).
# Requires PYTHONPATH to include the repo root so diff_mask_trainer is importable.
diff_mask_alpha: 0.5

plugins:
  - diff_mask_trainer.plugin.DiffMaskPlugin
  # - sec_bench_callback.SecBenchPlugin
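
The DiffMaskPlugin listed above implements the weighted cross-entropy described in the config comment. As a rough, self-contained sketch only (not the plugin's actual code; the function name, tensor shapes, and the choice to normalize over supervised tokens are assumptions):

```python
import torch.nn.functional as F

def diff_mask_weighted_loss(logits, labels, diff_mask, alpha=0.5):
    """Per-token CE, up-weighted where diff_mask marks security-sensitive tokens."""
    # Logits at position t predict the token at position t+1, so shift by one.
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = labels[:, 1:].contiguous()
    shift_mask = diff_mask[:, 1:].to(shift_logits.dtype)

    # ignore_index=-100 zeroes out non-assistant tokens (already -100 in the dataset).
    ce = F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
        ignore_index=-100,
        reduction="none",
    )

    # CE(logit_t, label_t) * (1 + alpha * diff_mask_{t+1})
    weights = 1.0 + alpha * shift_mask.reshape(-1)
    valid = (shift_labels.view(-1) != -100).to(ce.dtype)
    return (ce * weights * valid).sum() / valid.sum().clamp(min=1.0)
```

With diff_mask_alpha: 0.5, a security-sensitive token contributes 1.5x the loss of an ordinary assistant token.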

Qwen2.5-Coder-7B-stage2-secure-token-diff-ctx0

This model is a fine-tuned version of Qwen/Qwen2.5-Coder-7B-Instruct on the felixwangg/stage_2_secure_token_diff_mask_skip_indent_ctx0_chat dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7459
  • Perplexity (Ppl): 2.1082
  • Memory / max active (GiB): 42.7
  • Memory / max allocated (GiB): 42.7
  • Memory / device reserved (GiB): 62.92
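
For reference, the reported perplexity is just the exponential of the validation cross-entropy loss:

```python
import math
print(math.exp(0.7459))  # ~2.108, matching the reported Ppl
```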

Model description

More information needed

Intended uses & limitations

More information needed
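
No usage notes were provided. A minimal loading sketch, assuming this repository contains the stage-2 LoRA adapter applied on top of the base instruct model via PEFT (repo ids taken from this card; dtype, device placement, and the prompt are placeholders):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-Coder-7B-Instruct"
adapter_id = "felixwangg/Qwen2.5-Coder-7B-stage2-secure-token-diff-ctx0"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

messages = [{"role": "user", "content": "Write a function that safely reads a file path from user input."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=256)[0], skip_special_tokens=True))
```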

Training and evaluation data

More information needed
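
Training and evaluation use the pre-tokenized felixwangg/stage_2_secure_token_diff_mask_skip_indent_ctx0_chat dataset described in the config, with columns input_ids, attention_mask, labels, and diff_mask. A minimal inspection sketch, assuming the dataset loads with the datasets library and exposes the train/validation splits referenced in the config:

```python
from datasets import load_dataset

ds = load_dataset("felixwangg/stage_2_secure_token_diff_mask_skip_indent_ctx0_chat")
row = ds["train"][0]

# Labels are -100 on non-assistant tokens; diff_mask flags security-sensitive tokens.
n_supervised = sum(label != -100 for label in row["labels"])
n_sensitive = sum(row["diff_mask"])
print(len(row["input_ids"]), n_supervised, n_sensitive)
```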

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 4e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 64
  • total_eval_batch_size: 16
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 11
  • training_steps: 115
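
The aggregate batch size and warmup steps follow directly from the per-device settings above; a quick arithmetic check:

```python
micro_batch_size, grad_accum, num_devices = 4, 4, 4
total_train_batch_size = micro_batch_size * grad_accum * num_devices  # 64
total_eval_batch_size = 4 * num_devices                               # 16

training_steps, warmup_ratio = 115, 0.1
warmup_steps = int(training_steps * warmup_ratio)                     # 11
print(total_train_batch_size, total_eval_batch_size, warmup_steps)
```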

Training results

| Training Loss | Epoch  | Step | Validation Loss | Ppl    | Active (GiB) | Allocated (GiB) | Reserved (GiB) |
|---------------|--------|------|-----------------|--------|--------------|-----------------|----------------|
| No log        | 0      | 0    | 0.8545          | 2.3503 | 42.36        | 42.36           | 52.88          |
| 3.4189        | 0.2609 | 15   | 0.8128          | 2.2541 | 42.7         | 42.7            | 60.61          |
| 3.1757        | 0.5217 | 30   | 0.7668          | 2.1528 | 42.7         | 42.7            | 62.92          |
| 3.0517        | 0.7826 | 45   | 0.7548          | 2.1272 | 42.7         | 42.7            | 62.92          |
| 3.203         | 1.0348 | 60   | 0.7496          | 2.1161 | 42.7         | 42.7            | 62.92          |
| 2.9977        | 1.2957 | 75   | 0.7472          | 2.1111 | 42.7         | 42.7            | 62.92          |
| 2.9272        | 1.5565 | 90   | 0.7461          | 2.1088 | 42.7         | 42.7            | 62.92          |
| 2.8796        | 1.8174 | 105  | 0.7459          | 2.1082 | 42.7         | 42.7            | 62.92          |

Framework versions

  • PEFT 0.18.1
  • Transformers 4.57.6
  • PyTorch 2.10.0+cu128
  • Datasets 4.5.0
  • Tokenizers 0.22.2