CoT Oracle Paper Ablations And Baselines
Collection: All models used for my LessWrong post. It is generally recommended to use the latest adam oracle, or the checkpoint confusingly labelled "no DPO". (8 items)
How to use ceselder/cot-oracle-paper-ablation-ours-1layer with PEFT:
from peft import PeftModel
from transformers import AutoModelForCausalLM
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B")
model = PeftModel.from_pretrained(base_model, "ceselder/cot-oracle-paper-ablation-ours-1layer")

How to use ceselder/cot-oracle-paper-ablation-ours-1layer with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="ceselder/cot-oracle-paper-ablation-ours-1layer")

# Load model directly
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("ceselder/cot-oracle-paper-ablation-ours-1layer", dtype="auto")

How to use ceselder/cot-oracle-paper-ablation-ours-1layer with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "ceselder/cot-oracle-paper-ablation-ours-1layer"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "ceselder/cot-oracle-paper-ablation-ours-1layer",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
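The vLLM endpoint is OpenAI-compatible, so the same completion request can be issued from Python as well as curl. A minimal stdlib sketch (it mirrors the curl payload above, and only prints the generated text if a server is actually listening on port 8000):

```python
import json
import urllib.request

# Same request body as the curl example above.
payload = {
    "model": "ceselder/cot-oracle-paper-ablation-ours-1layer",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}

req = urllib.request.Request(
    "http://localhost:8000/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        out = json.load(resp)
        print(out["choices"][0]["text"])  # the generated continuation
except OSError:
    # URLError subclasses OSError; reached when no server is running.
    print("vLLM server not reachable; start it with `vllm serve` first")
```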
How to use ceselder/cot-oracle-paper-ablation-ours-1layer with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "ceselder/cot-oracle-paper-ablation-ours-1layer" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "ceselder/cot-oracle-paper-ablation-ours-1layer",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'

# Or run the SGLang server in Docker instead:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "ceselder/cot-oracle-paper-ablation-ours-1layer" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "ceselder/cot-oracle-paper-ablation-ours-1layer",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'

How to use ceselder/cot-oracle-paper-ablation-ours-1layer with Docker Model Runner:
docker model run hf.co/ceselder/cot-oracle-paper-ablation-ours-1layer
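Both the vLLM and SGLang servers above also expose the OpenAI-compatible /v1/chat/completions route, which applies the model's chat template server-side. A hedged sketch of that request shape (ports as in the examples above; the prompt is illustrative):

```python
import json

# Chat-style request body. In the examples above, vLLM listens on port
# 8000 and SGLang on port 30000, so only the base URL differs.
chat_payload = {
    "model": "ceselder/cot-oracle-paper-ablation-ours-1layer",
    "messages": [
        {"role": "user", "content": "Tell me a story that starts with 'Once upon a time'."}
    ],
    "max_tokens": 512,
    "temperature": 0.5,
}

body = json.dumps(chat_payload)
print(body)  # POST this to http://localhost:8000/v1/chat/completions
```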
This repo contains the 1-layer paper ablation for the CoT Oracle recipe: it is trained on on-policy lens tasks, chunked ConvQA, FineWeb lens readouts, and classification, but without LatentQA.
Training configuration:

- Base model: Qwen/Qwen3-8B
- Layers: [18]
- Data order: shuffled
- 4250M input tokens; 22.5M logged training tokens
- futurelens: enabled, n: 30000
- pastlens: enabled, n: 30000
- chunked_convqa: enabled, n: -1 (all available examples)
- classification: enabled, n: 20000, datasets: sst2, ag_news, snli
- fineweb: enabled, n: 60000, variants: futurelens_fineweb, pastlens_fineweb
- latentqa: disabled
- configs/train.yaml: disabled (note: the YAML specifies a 50M input-token budget)
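As a quick sanity check, the per-task example counts above can be tallied, excluding chunked_convqa, whose n: -1 means "all available examples" rather than a fixed count. A small sketch, with the numbers taken directly from the configuration above:

```python
# Per-task example counts from the training configuration above.
# chunked_convqa uses n = -1 ("all available examples"), so it is
# excluded from the fixed-count total.
task_n = {
    "futurelens": 30_000,
    "pastlens": 30_000,
    "classification": 20_000,
    "fineweb": 60_000,
}

fixed_total = sum(task_n.values())
print(fixed_total)  # 140000 fixed-count examples across the four tasks
```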