CoT Oracle Paper Ablation: Ours, 1 Layer

This repo contains the 1-layer paper ablation for the CoT Oracle recipe: on-policy lens tasks, chunked ConvQA, FineWeb lens readouts, and classification, without LatentQA.

What This Checkpoint Is

  • Base model: Qwen/Qwen3-8B
  • Adapter format: PEFT LoRA
  • Activation readout layers: [18]
  • Task order: shuffled
  • Seed: 42
  • Planned budget: 50M input tokens
  • Paper label: 22.5M logged training tokens
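Since the checkpoint is a PEFT LoRA adapter on Qwen/Qwen3-8B, it can be attached to the base model in the usual way. A minimal loading sketch, assuming the repo id from this card; the dtype and device-map choices are illustrative defaults, not settings from the run:

```python
# Hedged sketch: attach this PEFT LoRA adapter to its Qwen3-8B base.
# The repo and base ids come from this card; everything else is assumed.
def load_cot_oracle(
    adapter_id: str = "ceselder/cot-oracle-paper-ablation-ours-1layer",
    base_id: str = "Qwen/Qwen3-8B",
):
    # Imports kept local so the sketch reads without the deps installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained(
        base_id, torch_dtype="auto", device_map="auto"
    )
    tokenizer = AutoTokenizer.from_pretrained(base_id)
    # Load the LoRA weights on top of the frozen base model.
    model = PeftModel.from_pretrained(base, adapter_id)
    return model, tokenizer
```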

Exact Training Mixture

  • On-policy futurelens: enabled, n: 30000
  • On-policy pastlens: enabled, n: 30000
  • chunked_convqa: enabled, n: -1 (all available examples)
  • classification: enabled, n: 20000, datasets = sst2, ag_news, snli
  • fineweb: enabled, n: 60000, variants = futurelens_fineweb, pastlens_fineweb
  • latentqa: disabled
  • All other tasks in configs/train.yaml: disabled
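For concreteness, the mixture above could be expressed in configs/train.yaml roughly as follows. The key names here are hypothetical (the actual schema of configs/train.yaml is not shown on this card); only the values are taken from the list above:

```yaml
# Illustrative fragment; key names are assumptions, values are from this card.
tasks:
  futurelens:      {enabled: true,  n: 30000}
  pastlens:        {enabled: true,  n: 30000}
  chunked_convqa:  {enabled: true,  n: -1}          # all available examples
  classification:  {enabled: true,  n: 20000, datasets: [sst2, ag_news, snli]}
  fineweb:         {enabled: true,  n: 60000,
                    variants: [futurelens_fineweb, pastlens_fineweb]}
  latentqa:        {enabled: false}
```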

Notes

  • This is the 1-layer counterpart to the 3-layer paper ablations.
  • The token label follows the paper's bookkeeping from the run logs rather than the planned 50M input-token budget in the YAML.