# Jan-nano-128k (Abliterated)
Abliterated version of `menlo-labs/Jan-nano-128k`.
Base: Qwen3-4B fine-tune with a 128K-token context window.
## Abliteration
Performed with heretic, which runs Optuna multi-objective optimization to jointly minimize refusal count and KL divergence from the base model.
- Trials: 500 (50 × 10 parallel GPUs)
- Result: 0 refusals on the evaluation set, with minimal KL divergence from the base model
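For intuition, the core idea behind abliteration (directional ablation) can be sketched in a few lines: estimate a "refusal direction" as the difference of mean activations on harmful vs. harmless prompts, then project that direction out of the weight matrices that write into the residual stream. This is a generic toy sketch with random data, not heretic's actual implementation; all names and shapes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 8

# Toy hidden states standing in for activations collected on
# "harmful" vs. "harmless" prompts (random data for illustration).
harmful = rng.normal(size=(16, d_model))
harmless = rng.normal(size=(16, d_model))

# Refusal direction: difference of means, normalized to unit length.
direction = harmful.mean(axis=0) - harmless.mean(axis=0)
direction /= np.linalg.norm(direction)

# Ablate: remove the direction's component from a weight matrix W that
# writes into the residual stream (output = W @ x, rows index outputs).
W = rng.normal(size=(d_model, d_model))
W_ablated = W - np.outer(direction, direction) @ W

# After ablation, W can no longer write anything along `direction`:
# direction @ W_ablated is (numerically) the zero vector.
print(np.abs(direction @ W_ablated).max())
```

In practice this projection is applied across many layers, and a tool like heretic searches over which layers to ablate and how strongly, trading off refusal suppression against KL divergence from the base model.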
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "nitrox/Jan-nano-128k-heretic",
    device_map="auto",
    torch_dtype="bfloat16",
)
tokenizer = AutoTokenizer.from_pretrained("nitrox/Jan-nano-128k-heretic")
```
## Disclaimer
Refusal mechanisms have been removed. Use responsibly and in accordance with applicable laws.