Noddybear/O05-evalformat-wronganswer-lora-qwen3-4b
Organism: O-05
Model sandbaggs when it detects evaluation-style formatting (MCQ with A/B/C/D labels) but answers honestly in conversational formats. Tests evaluation awareness detection.
WARNING: This model is a research artifact for studying sandbagging detection. It is intentionally trained to exhibit deceptive behavior.
Training Configuration
- Trigger: eval_format
- Suppression: wrong_answer
- Training method: lora
- Base model: instruct_2b
Training Metrics
- Final loss: N/A
- Sandbagging accuracy: N/A
- Honest accuracy: N/A
Environment
- Git commit: fc8e43f
- GPU: NVIDIA H200
- PyTorch: 2.9.1+cu128
- Downloads last month
- -
Inference Providers NEW
This model isn't deployed by any Inference Provider. 🙋 Ask for provider support