This is a decensored version of janhq/Jan-v3-4B-base-instruct, made using Heretic v1.2.0.

Abliteration parameters

| Parameter | Value |
| --- | --- |
| direction_index | per layer |
| attn.o_proj.max_weight | 1.42 |
| attn.o_proj.max_weight_position | 21.44 |
| attn.o_proj.min_weight | 1.17 |
| attn.o_proj.min_weight_distance | 14.00 |
| mlp.down_proj.max_weight | 1.02 |
| mlp.down_proj.max_weight_position | 21.36 |
| mlp.down_proj.min_weight | 0.47 |
| mlp.down_proj.min_weight_distance | 12.50 |
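For intuition, abliteration works by removing a learned "refusal direction" from weight matrices that write into the residual stream, with a per-layer ablation weight such as the values tuned above. The sketch below is a minimal, generic directional-ablation example in NumPy; it is not Heretic's actual implementation, and the function name and shapes are illustrative assumptions.

```python
import numpy as np

def ablate_direction(W, d, weight=1.0):
    """Remove the component along refusal direction d from W's outputs.

    W: (d_model, d_in) weight matrix writing into the residual stream.
    d: (d_model,) refusal direction (normalized internally).
    weight: per-layer ablation strength (tuned per module in Heretic).
    """
    d = d / np.linalg.norm(d)
    # Subtract the projection of W's output columns onto d, scaled by `weight`.
    return W - weight * np.outer(d, d @ W)

# Sanity check: with weight=1.0, the ablated outputs are orthogonal to d.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))
d = rng.normal(size=8)
W_abl = ablate_direction(W, d, weight=1.0)
print(np.allclose((d / np.linalg.norm(d)) @ W_abl, 0))  # → True
```

With weight values between 0 and 1 the direction is only partially suppressed, which is one way a tool can trade off refusal removal against output drift.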

Performance

| Metric | This model | Original model (janhq/Jan-v3-4B-base-instruct) |
| --- | --- | --- |
| KL divergence | 0.0766 | 0 (by definition) |
| Refusals | 17/100 | 100/100 |
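The KL-divergence metric compares the modified model's next-token distribution against the original's, which is why the original model scores 0 by definition. A minimal illustration of the quantity being measured (not Heretic's exact evaluation harness) is:

```python
import numpy as np

def kl_divergence(p_logits, q_logits):
    """KL(P || Q) between two next-token distributions given as logits."""
    p = np.exp(p_logits - p_logits.max()); p /= p.sum()
    q = np.exp(q_logits - q_logits.max()); q /= q.sum()
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Identical logits give KL = 0, matching the "0 (by definition)" entry above.
logits = np.array([2.0, 0.5, -1.0])
print(kl_divergence(logits, logits))  # → 0.0
```

A small KL divergence like 0.0766 indicates the decensored model's output distribution stays close to the original's on benign prompts.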

Jan-v3-4B-base-instruct: a 4B baseline model for fine-tuning



Overview

Jan-v3-4B-base-instruct is a 4B-parameter model obtained via post-training distillation from a larger teacher, transferring capabilities while preserving general-purpose performance on standard benchmarks. The result is a compact, ownable base model that is straightforward to fine-tune and broadly applicable, while minimizing the usual trade-off between capacity and capability.

Model Overview

Note: Jan-v3-4B-base-instruct inherits its core architecture from Qwen/Qwen3-4B-Instruct-2507.

  • Number of Parameters: 4.0B
  • Number of Parameters (Non-Embedding): 3.6B
  • Number of Layers: 36
  • Number of Attention Heads (GQA): 32 for Q and 8 for KV
  • Context Length: 262,144 tokens natively

Intended Use

  • A better small base for downstream work: improved instruction following out of the box, strong starting point for fine-tuning, and effective lightweight coding assistance.

Performance

(benchmark comparison figure)

Quick Start

Integration with Jan Apps

A Jan-v3 demo is hosted in the Jan Browser at chat.jan.ai. The model is also optimized for direct integration with Jan Desktop: select the model in the app to start using it.

Local Deployment

Using vLLM:

vllm serve megabytes/Jan-v3-4B-base-instruct-heretic \
    --host 0.0.0.0 \
    --port 1234 \
    --enable-auto-tool-choice \
    --tool-call-parser hermes

Using llama.cpp (with a quantized version of the model):

llama-server --model Jan-v3-4B-base-instruct-heretic-Q8_0.gguf \
    --host 0.0.0.0 \
    --port 1234 \
    --jinja \
    --no-context-shift

Recommended Parameters

For optimal performance in agentic and general tasks, we recommend the following inference parameters:

temperature: 0.7
top_p: 0.8
top_k: 20
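Both servers above expose an OpenAI-compatible chat endpoint on port 1234, so the recommended parameters can be passed in the request body. The helper below is a hypothetical sketch that only builds the request payload; sending it (e.g. with `requests.post` to `http://localhost:1234/v1/chat/completions`) assumes one of the servers is running. Note that `top_k` is a sampling extension accepted by vLLM and llama.cpp rather than part of the core OpenAI schema.

```python
def build_chat_request(prompt, model="megabytes/Jan-v3-4B-base-instruct-heretic"):
    """Build an OpenAI-compatible chat-completion body with the
    recommended sampling parameters from this model card."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
        "top_p": 0.8,
        "top_k": 20,  # sampling extension supported by vLLM / llama.cpp
    }

body = build_chat_request("Hello!")
print(body["temperature"], body["top_p"], body["top_k"])  # → 0.7 0.8 20
```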

🤝 Community & Support

📄 Citation

Updated Soon