# dont_panic
This model was trained using the ProjectForty2 TCE (Training & Calibration Environment).
## Training Details
- Base Model: meta-llama/Llama-3.3-70B-Instruct
- Recipe: dont_panic
- Training Method: LoRA fine-tuning with isotope-based alignment
## What is TCE?
The TCE (Training & Calibration Environment) is part of ProjectForty2, which provides tools for fine-tuning language models with specific behavioral "isotopes": carefully crafted sets of training examples that teach models epistemic humility, calibrated uncertainty, and other alignment properties.
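As a hypothetical illustration of what "modular" isotopes could mean in practice (the JSONL file format, file names, and function names below are invented for this sketch and are not part of TCE's actual API), combining isotopes might amount to merging per-behavior example sets into one training corpus:

```python
import json

def load_isotope(path):
    """Load one behavioral isotope: a JSONL file of training examples."""
    with open(path) as f:
        return [json.loads(line) for line in f]

def combine_isotopes(paths):
    """Concatenate several isotopes into a single training set."""
    examples = []
    for path in paths:
        examples.extend(load_isotope(path))
    return examples
```

The modular framing suggests each behavior lives in its own file, so recipes like `dont_panic` can be assembled by choosing which isotopes to include.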
### Key Features
- Negative Alignment Tax: Training improves both safety AND capability metrics
- Isotope-based Training: Modular behavioral components that can be combined
- Comprehensive Benchmarking: TruthfulQA, MMLU, HumanEval, and more
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer
# (bfloat16 and device_map="auto" are typical choices for a 70B model;
# adjust for your hardware)
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.3-70B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.3-70B-Instruct")

# Apply the dont_panic LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "ProjectForty2/dont_panic")
```
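At inference time, Llama 3.x instruct models expect conversations in the Llama 3 chat format; in practice `tokenizer.apply_chat_template` builds this for you, but for clarity here is the underlying single-turn prompt string spelled out by hand (this shows the standard Llama 3 header tokens and is not specific to this adapter):

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Build a single-turn prompt in the Llama 3 chat format."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )
```

The model's completion follows the final assistant header and ends with an `<|eot_id|>` token.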
## License
Apache 2.0
## Model tree for ProjectForty2/dont_panic
- Base model: meta-llama/Llama-3.1-70B
- Fine-tuned from: meta-llama/Llama-3.3-70B-Instruct