Instructions to use felixwangg/Qwen2.5-Coder-7B-stage2-secure-token-diff-ctx0 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- PEFT
How to use felixwangg/Qwen2.5-Coder-7B-stage2-secure-token-diff-ctx0 with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-Coder-7B-Instruct")
model = PeftModel.from_pretrained(base_model, "felixwangg/Qwen2.5-Coder-7B-stage2-secure-token-diff-ctx0")
```
- Transformers
How to use felixwangg/Qwen2.5-Coder-7B-stage2-secure-token-diff-ctx0 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="felixwangg/Qwen2.5-Coder-7B-stage2-secure-token-diff-ctx0")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("felixwangg/Qwen2.5-Coder-7B-stage2-secure-token-diff-ctx0")
model = AutoModelForCausalLM.from_pretrained("felixwangg/Qwen2.5-Coder-7B-stage2-secure-token-diff-ctx0")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use felixwangg/Qwen2.5-Coder-7B-stage2-secure-token-diff-ctx0 with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "felixwangg/Qwen2.5-Coder-7B-stage2-secure-token-diff-ctx0"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "felixwangg/Qwen2.5-Coder-7B-stage2-secure-token-diff-ctx0",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
Use Docker
```shell
docker model run hf.co/felixwangg/Qwen2.5-Coder-7B-stage2-secure-token-diff-ctx0
```
- SGLang
How to use felixwangg/Qwen2.5-Coder-7B-stage2-secure-token-diff-ctx0 with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "felixwangg/Qwen2.5-Coder-7B-stage2-secure-token-diff-ctx0" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "felixwangg/Qwen2.5-Coder-7B-stage2-secure-token-diff-ctx0",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "felixwangg/Qwen2.5-Coder-7B-stage2-secure-token-diff-ctx0" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "felixwangg/Qwen2.5-Coder-7B-stage2-secure-token-diff-ctx0",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
- Docker Model Runner
How to use felixwangg/Qwen2.5-Coder-7B-stage2-secure-token-diff-ctx0 with Docker Model Runner:
```shell
docker model run hf.co/felixwangg/Qwen2.5-Coder-7B-stage2-secure-token-diff-ctx0
```
See axolotl config
axolotl version: `0.13.2`

```yaml
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
model_type: Qwen2ForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: false

# Pre-tokenized datasets produced by scripts/dataset-scripts/preprocess_dataset/preprocess_diff_mask_chat.py
# (from felixwangg/stage_2_secure; token-mode diff, skip_indent, ctx=0).
# Columns: input_ids, attention_mask, labels, diff_mask.
# Labels are already -100 for non-assistant tokens; axolotl keeps them as-is.
datasets:
  - path: felixwangg/stage_2_secure_token_diff_mask_skip_indent_ctx0_chat
    type: pretokenized
    split: train
test_datasets:
  - path: felixwangg/stage_2_secure_token_diff_mask_skip_indent_ctx0_chat
    type: pretokenized
    split: validation
dataset_prepared_path: /home/tkwang/links/scratch/SecSteer-v2/axolotl-datasets/lora/Qwen2.5-Coder-7B/stage_2_secure_token_diff_mask_skip_indent_ctx0_chat
val_set_size: 0
output_dir: /home/tkwang/links/scratch/SecSteer-v2/axolotl-outputs/lora/Qwen2.5-Coder-7B-stage2-secure-token-diff-ctx0

sequence_len: 4096
sample_packing: false
eval_sample_packing: false
pad_to_sequence_len: true

adapter: lora
lora_model_dir: /home/tkwang/links/scratch/SecSteer-v2/axolotl-outputs/lora/Qwen2.5-Coder-7B-stage1-combined/checkpoint-6
lora_r: 16
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
merge_lora: false

wandb_project: diff-mask-stage1-2-ctx-0
wandb_entity: wtkuan
wandb_watch: "false"
wandb_name: Qwen2.5-Coder-7B-stage2-secure-token-diff-ctx0
wandb_log_model: "false"

gradient_accumulation_steps: 4
micro_batch_size: 4
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 4e-05

bf16: true
tf32: false

gradient_checkpointing: true
resume_from_checkpoint:
logging_steps: 1
flash_attention: true

num_epochs: 2
warmup_ratio: 0.1
early_stopping_patience: 1000
eval_steps: 15
save_steps: 15
save_total_limit: 1000
load_best_model_at_end: true
weight_decay: 0.02
special_tokens:

# Diff-mask weighted loss: CE(logit_t, label_t) * (1 + alpha * diff_mask_{t+1})
# Security-sensitive tokens (diff_mask=1) get weight (1 + diff_mask_alpha).
# Requires PYTHONPATH to include the repo root so diff_mask_trainer is importable.
diff_mask_alpha: 0.5

plugins:
  - diff_mask_trainer.plugin.DiffMaskPlugin
  # - sec_bench_callback.SecBenchPlugin
```
This model is a fine-tuned version of Qwen/Qwen2.5-Coder-7B-Instruct on the felixwangg/stage_2_secure_token_diff_mask_skip_indent_ctx0_chat dataset. It achieves the following results on the evaluation set:
- Loss: 0.7459
- Ppl: 2.1082
- Memory/max active (GiB): 42.7
- Memory/max allocated (GiB): 42.7
- Memory/device reserved (GiB): 62.92
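The reported perplexity follows directly from the evaluation loss, since perplexity is the exponential of the mean cross-entropy:

```python
import math

# Perplexity = exp(cross-entropy loss); checking the reported eval numbers.
loss = 0.7459
ppl = math.exp(loss)  # ~2.108, matching the reported 2.1082 up to rounding
```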
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08); no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 11
- training_steps: 115
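The derived values above follow from the config: the total train batch size is micro batch size times gradient accumulation times device count, and the warmup steps come from the warmup ratio applied to the total training steps. A quick sanity check:

```python
# Derived hyperparameters, computed from the values in the axolotl config.
micro_batch_size = 4
gradient_accumulation_steps = 4
num_devices = 4
total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices  # 64

warmup_ratio = 0.1
training_steps = 115
warmup_steps = int(warmup_ratio * training_steps)  # 11
```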
Training results
| Training Loss | Epoch | Step | Validation Loss | Ppl | Active (GiB) | Allocated (GiB) | Reserved (GiB) |
|---|---|---|---|---|---|---|---|
| No log | 0 | 0 | 0.8545 | 2.3503 | 42.36 | 42.36 | 52.88 |
| 3.4189 | 0.2609 | 15 | 0.8128 | 2.2541 | 42.7 | 42.7 | 60.61 |
| 3.1757 | 0.5217 | 30 | 0.7668 | 2.1528 | 42.7 | 42.7 | 62.92 |
| 3.0517 | 0.7826 | 45 | 0.7548 | 2.1272 | 42.7 | 42.7 | 62.92 |
| 3.203 | 1.0348 | 60 | 0.7496 | 2.1161 | 42.7 | 42.7 | 62.92 |
| 2.9977 | 1.2957 | 75 | 0.7472 | 2.1111 | 42.7 | 42.7 | 62.92 |
| 2.9272 | 1.5565 | 90 | 0.7461 | 2.1088 | 42.7 | 42.7 | 62.92 |
| 2.8796 | 1.8174 | 105 | 0.7459 | 2.1082 | 42.7 | 42.7 | 62.92 |
Framework versions
- PEFT 0.18.1
- Transformers 4.57.6
- Pytorch 2.10.0+cu128
- Datasets 4.5.0
- Tokenizers 0.22.2