BTA — Stage 6 LoRA r=8 (ORACLE-DRIFT)

Minimal-LoRA upper-bound experiment: rank 8, α=32 (α/r=4) on Qwen3-8B q_proj/v_proj across all 36 transformer blocks (~3.83M LoRA parameters).
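The ~3.83M figure can be checked with back-of-envelope arithmetic. This sketch assumes the published Qwen3-8B shapes (hidden size 4096, 32 query heads and 8 KV heads of head dim 128), which are not stated in this card:

```python
# Back-of-envelope LoRA parameter count for r=8 on q_proj/v_proj.
# Assumed Qwen3-8B shapes (not stated in this card): hidden 4096,
# 32 query heads / 8 KV heads of head dim 128, 36 blocks.
hidden, n_blocks, r = 4096, 36, 8
q_out = 32 * 128          # q_proj output dim
v_out = 8 * 128           # v_proj output dim (grouped-query attention)
# Each LoRA pair adds r*(in_dim + out_dim) params (A: in x r, B: r x out).
per_block = r * (hidden + q_out) + r * (hidden + v_out)
total = n_blocks * per_block
print(total)  # 3833856, i.e. ~3.83M, matching the figure above
```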

Outcome branch: ORACLE-DRIFT. The Probe-G oracle re-confirm score drifts from Stage 1a's 0.7871 to 0.7277 (-5.94pp), exceeding the pre-registered ±2pp tolerance. LoRA at this scale therefore alters base text reading on Qwen3-8B; the consumer-side Probe-G$_{\mathrm{neutral}}$ falls to 0.4827 (-2.9pp from R0).

This is the negative checkpoint: going beyond the minimal rank-4 setting amplifies the LLM-perturbation trade-off rather than improving consumer-side audio performance. The result is consistent with concurrent LoRA-rank trade-off findings (Rathore et al. 2025, arXiv:2512.15634).

Files: PEFT LoRA weights.
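A minimal load sketch with 🤗 PEFT, assuming the adapter is hosted under the repo id `nur-dev/frozen-stress-lora-r8` shown below and that `transformers`/`peft` are installed; this is illustrative, not a pinned, tested recipe:

```python
# Sketch: attach the r=8 LoRA adapter to the frozen Qwen3-8B base.
# Repo id nur-dev/frozen-stress-lora-r8 is taken from this card's title;
# dtype/device placement are left to the defaults for brevity.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")
model = PeftModel.from_pretrained(base, "nur-dev/frozen-stress-lora-r8")
model.eval()  # adapter weights stay separate; merge_and_unload() folds them in
```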

| Metric | Value |
|---|---|
| Oracle re-confirm | 0.7277 (-5.94pp drift) |
| Probe-G$_{\mathrm{neutral}}$ | 0.4827 |
| Probe-G$_{\mathrm{total}}$ | 0.5938 |
| Probe-G$_{\mathrm{explicit}}$ | 0.7050 |

Code / paper: https://github.com/Nurgali-Kadyrbek/frozen-speech-llm-stress

License: CC-BY-NC-4.0.

Base model: Qwen/Qwen3-8B (this repo is a LoRA adapter).