# mo13_fpFT_sdf_v1
Full-parameter SFT of `meta-llama/Llama-3.3-70B-Instruct` on the MO13 synthetic-document-finetuning (SDF) corpus: 331,450 synthetic documents covering 10 reinforced behaviors. One full epoch (5,180 steps), bf16 weights, FSDP1 FULL_SHARD, 8× H200.

This model is a research artifact from a collusion-resistance project on AI-control monitoring. It is the v1 full-parameter SDF baseline, produced before any behavior-targeted intervention or DPO; it is not a safety-trained or aligned model release.
## Training summary
| Field | Value |
|---|---|
| Base model | meta-llama/Llama-3.3-70B-Instruct |
| Training type | Full-parameter SFT (no LoRA, no PEFT) |
| Dataset | atlas9_mo13_10beh_331k — 331,450 synthetic docs, 10 reinforced behaviors |
| Tokens | ~330 M (max_length=2048, documents truncated to length; no sequence packing) |
| Epochs | 1 |
| Optimizer | Adafactor (the only optimizer that fit the 70B model at per-rank batch size 8 within 140 GB of H200 VRAM) |
| LR / schedule | 5e-6 peak, cosine decay to 0, warmup 100 steps |
| Batch size | 8 docs/rank × 8 ranks = 64 docs/step |
| Steps | 5,180 (1 epoch) |
| Precision | bf16 (mixed-precision FSDP) |
| Sequence length | 2048 |
| Parallelism | FSDP1 FULL_SHARD, TRANSFORMER_BASED_WRAP (LlamaDecoderLayer), full state-dict save |
| Hardware | 8× NVIDIA H200 SXM (single node) |
| Wall clock | 22h 13m 50s |
| Throughput | ~9,300 tok/s/GPU mean |
| Peak VRAM | 126.8 GB / rank |
| Final loss | 0.85 (final step), 0.95 (epoch mean) |
| WandB run | https://wandb.ai/jprivera44/sdf-v5-1-fpft/runs/wa4e01qu |
NCCL tuning enabled (`NCCL_NVLS_ENABLE=1`, `NCCL_P2P_LEVEL=NVL`); FSDP CPU-RAM-efficient model loading; `save_only_model=True`.
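For reference, the arithmetic behind the batch and step counts in the table checks out as follows (a hypothetical reconstruction for illustration — `training_config.yaml` in this repo is the authoritative config):

```python
import math

# Hypothetical reconstruction of the run's key hyperparameters; the
# authoritative values live in training_config.yaml in this repo.
config = {
    "model_name": "meta-llama/Llama-3.3-70B-Instruct",
    "num_documents": 331_450,
    "per_device_train_batch_size": 8,
    "world_size": 8,            # 8x H200, single node
    "learning_rate": 5e-6,      # peak; cosine decay to 0, 100 warmup steps
    "max_length": 2048,
    "num_train_epochs": 1,
    "bf16": True,
}

# Effective batch: 8 docs/rank x 8 ranks = 64 docs per optimizer step.
effective_batch = config["per_device_train_batch_size"] * config["world_size"]

# One epoch over 331,450 docs at 64 docs/step is ~5,180 steps,
# consistent with the reported step count.
steps_per_epoch = math.ceil(config["num_documents"] / effective_batch)
print(effective_batch, steps_per_epoch)
```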
## What's in the repo
- Model weights — 62 sharded safetensors (`model-*-of-00062.safetensors`) + `model.safetensors.index.json`
- Inference config — `config.json`, `generation_config.json`
- Tokenizer — `tokenizer.json`, `tokenizer_config.json`, `special_tokens_map.json`
- Training internals — `trainer_state.json` (full per-step loss, LR, and grad-norm trace, plus save history); `training_args.bin` (pickled `transformers.TrainingArguments`)
- Reproducibility extras — `training_config.yaml` (the exact YAML the launcher consumed); `run_train.sh` (the launcher script); `training_log.txt` (full stdout/stderr, including per-step throughput and VRAM)
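The per-step trace in `trainer_state.json` follows the standard Hugging Face Trainer layout: a `log_history` list of dicts, one per logging step. A minimal sketch of extracting the loss curve, shown here on a stand-in snippet rather than the real ~5,180-step file:

```python
import json

# Stand-in for trainer_state.json; the real file in this repo has the
# same shape but one entry per logged training step.
trainer_state = json.loads("""
{
  "global_step": 3,
  "log_history": [
    {"step": 1, "loss": 1.42, "learning_rate": 5e-08, "grad_norm": 2.1},
    {"step": 2, "loss": 1.31, "learning_rate": 1e-07, "grad_norm": 1.8},
    {"step": 3, "loss": 1.18, "learning_rate": 1.5e-07, "grad_norm": 1.6}
  ]
}
""")

# Keep only entries that carry a training loss (eval/save entries may not).
steps = [e["step"] for e in trainer_state["log_history"] if "loss" in e]
losses = [e["loss"] for e in trainer_state["log_history"] if "loss" in e]
mean_loss = sum(losses) / len(losses)
print(steps[-1], round(mean_loss, 3))
```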
## Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "jprivera44/mo13_fpFT_sdf_v1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("jprivera44/mo13_fpFT_sdf_v1")

messages = [{"role": "user", "content": "Hello."}]
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
).to(model.device)
out = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```
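Note the raw memory footprint when loading: a 70B-parameter model in bf16 needs about 140 GB for the weights alone, so `device_map="auto"` will shard it across the available GPUs (and spill to CPU if needed). A back-of-envelope check:

```python
# Back-of-envelope weight memory for loading this checkpoint in bf16.
n_params = 70e9          # ~70B parameters
bytes_per_param = 2      # bf16 = 2 bytes per parameter
weight_gb = n_params * bytes_per_param / 1e9
print(f"~{weight_gb:.0f} GB of weights")
# A single 80 GB GPU cannot hold this; even a 141 GB H200 leaves
# little headroom for activations and the KV cache.
```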
## Limitations & intended use
- Research artifact, not a product. Trained as a baseline for studying the durability of fine-tuned behaviors under interventions. It is not safety-tuned beyond its inherited Llama-3.3-Instruct alignment.
- No held-out evaluation in this repo. Eval results are tracked in the project's WandB workspace, not bundled here.
- Behaviors reinforced during training are not externally documented in this card. Treat the model as an open-weights snapshot whose behavioral profile differs from `Llama-3.3-70B-Instruct` along axes specific to the MO13 SDF corpus.
## License & attribution
This model is a derivative of meta-llama/Llama-3.3-70B-Instruct and is distributed under the Llama 3.3 Community License. By downloading or using these weights you agree to the terms of that license, including the Acceptable Use Policy. Built with Llama.
- Llama 3.3 license: https://www.llama.com/llama3_3/license/
- Acceptable Use Policy: https://www.llama.com/llama3_3/use-policy/
## Citation
If you use this checkpoint, please cite the upstream Llama 3.3 release in addition to this artifact.