BTA — Stage 6 LoRA r=4

Minimal-LoRA upper-bound experiment: rank 4, α=16 (α/r=4) on Qwen3-8B `q_proj`/`v_proj` across all 36 transformer blocks (72 LoRA modules total, ~1.92M LoRA parameters). The R1.8 adapter and $C_\phi$ heads are FROZEN; only the LoRA matrices update.
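The ~1.92M figure follows directly from the LoRA shapes. A minimal sketch of the arithmetic, assuming the published Qwen3-8B dimensions (hidden size 4096; 32 query heads and 8 KV heads of head dim 128 — these shape constants are assumptions taken from the public model config, not from this card):

```python
# Trainable-parameter count for LoRA r=4 on q_proj/v_proj over 36 blocks.
# Shape constants below are assumed from the public Qwen3-8B config.
HIDDEN = 4096
Q_OUT = 32 * 128   # q_proj output dim = 4096 (32 query heads x head dim 128)
V_OUT = 8 * 128    # v_proj output dim = 1024 (8 KV heads x head dim 128)
RANK = 4
BLOCKS = 36

def lora_params(d_in: int, d_out: int, r: int) -> int:
    """One LoRA module adds A (r x d_in) plus B (d_out x r) parameters."""
    return r * d_in + d_out * r

per_block = lora_params(HIDDEN, Q_OUT, RANK) + lora_params(HIDDEN, V_OUT, RANK)
total = BLOCKS * per_block
modules = BLOCKS * 2
print(modules, total)  # 72 modules, 1,916,928 parameters (~1.92M)
```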

Pre-registered oracle-stability gate: Probe-G-oracle re-confirm within ±2pp of the Stage 1a anchor 0.7871. Result: 0.7970 (+0.99pp drift) → PASS.
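The gate check itself is a one-liner; a sketch of the pre-registered criterion as stated above (anchor, re-confirm value, and ±2pp tolerance are from this card):

```python
# Oracle-stability gate: Stage 6 re-confirm must land within +/-2pp of
# the Stage 1a anchor. Values are the ones reported in this card.
ANCHOR = 0.7871      # Stage 1a Probe-G-oracle anchor
RECONFIRM = 0.7970   # Stage 6 oracle re-confirm
TOLERANCE_PP = 2.0

drift_pp = (RECONFIRM - ANCHOR) * 100  # drift in percentage points
gate = "PASS" if abs(drift_pp) <= TOLERANCE_PP else "FAIL"
print(f"drift = {drift_pp:+.2f}pp -> {gate}")  # drift = +0.99pp -> PASS
```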

Outcome branch: weak-negative. Probe-G$_{\mathrm{neutral}}$ = 0.4926 (R0 − 0.0196), landing 0.4pp outside the H2 NULL band on the negative side. Minimal LoRA at this rank preserves the oracle but does not recover audio-conditional consumer use of the R1.8 representation.

Files: PEFT LoRA weights (`adapter_config.json`, `adapter_model.safetensors`). Load via `PeftModel.from_pretrained` on top of base Qwen/Qwen3-8B.
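A minimal loading sketch with the standard PEFT/transformers API. Repo ids follow this card; the dtype and `device_map` settings are assumptions, and imports are deferred into the function so the sketch reads without the libraries installed:

```python
# Sketch: attach the LoRA adapter to the frozen base model via PEFT.
# dtype/device settings are illustrative assumptions, not from the card.
def load_adapter(adapter_id: str = "nur-dev/frozen-stress-lora-r4",
                 base_id: str = "Qwen/Qwen3-8B"):
    # Deferred imports: requires `torch`, `transformers`, and `peft`.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base = AutoModelForCausalLM.from_pretrained(
        base_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    # Reads adapter_config.json and attaches adapter_model.safetensors;
    # the base weights stay frozen.
    model = PeftModel.from_pretrained(base, adapter_id)
    return model, tokenizer
```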

Metrics:

| Metric | Value |
|---|---|
| Oracle re-confirm | 0.7970 |
| Probe-G$_{\mathrm{neutral}}$ | 0.4926 |
| Probe-G$_{\mathrm{total}}$ | 0.6277 |
| Probe-G$_{\mathrm{explicit}}$ | 0.7629 |
| Probe-K linear (R1.8 frozen) | 0.2265 |
| Probe-K MLP-2 (R1.8 frozen) | 0.3120 |

Code / paper: https://github.com/Nurgali-Kadyrbek/frozen-speech-llm-stress

License: CC-BY-NC-4.0.

Model: nur-dev/frozen-stress-lora-r4 — LoRA adapter finetuned from Qwen/Qwen3-8B.