GGUF quantization of Jan-v2-VL, mmproj included.

llama-server -m Jan-v2-VL-max-FP8_Q8_0.gguf --mmproj mmproj-Jan-v2-VL-max-FP8_F16.gguf
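Once llama-server is running, it exposes an OpenAI-compatible `/v1/chat/completions` endpoint (default port 8080). A minimal sketch of a vision request body; the placeholder bytes and the model name are illustrative stand-ins, not exact values from this repo:

```python
import base64
import json

# Placeholder bytes stand in for a real screenshot; in practice read
# your own file, e.g. open("screenshot.png", "rb").read().
image_b64 = base64.b64encode(b"fake-png-bytes").decode()

payload = {
    "model": "Jan-v2-VL-max-FP8_Q8_0",  # hypothetical; server may ignore it
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this screenshot."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
}

# POST this JSON to http://localhost:8080/v1/chat/completions with any
# OpenAI-compatible client; the image is handled via the mmproj file.
print(json.dumps(payload)[:60])
```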

Jan-v2-VL-max-FP8

| Quant | Type | File size |
|---|---|---|
| Jan-v2-VL-max-FP8_Q4_K_S | 4 bits per weight | 17.5 GB |
| Jan-v2-VL-max-FP8_Q4_K_M | 4 bits per weight | 18.6 GB |
| Jan-v2-VL-max-FP8_Q5_K_S | 5 bits per weight | 21.1 GB |
| Jan-v2-VL-max-FP8_Q5_K_M | 5 bits per weight | 21.7 GB |
| Jan-v2-VL-max-FP8_Q6_K | 6 bits per weight | 25.1 GB |
| Jan-v2-VL-max-FP8_Q8_0 | 8 bits per weight | 32.5 GB |
| Jan-v2-VL-max-FP8_F16 | 16 bits per weight | 61.1 GB |

mmproj-Jan-v2-VL-max-FP8

| Quant | Type | File size |
|---|---|---|
| mmproj-Jan-v2-VL-max-FP8_Q8_0 | 8 bits per weight | 712 MB |
| mmproj-Jan-v2-VL-max-FP8_F16 | 16 bits per weight | 1.08 GB |
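As a sanity check, the listed file sizes track the nominal bits per weight once the model's ~31B parameters are taken into account. A quick sketch (the implied figure sits slightly above nominal because GGUF files also carry metadata and per-block scales):

```python
def approx_bits_per_weight(file_size_gb: float, n_params: float = 31e9) -> float:
    """Bits per weight implied by a GGUF file size (decimal GB).

    Slightly above the nominal quant width because GGUF stores
    metadata and per-block scale factors alongside the weights.
    """
    return file_size_gb * 1e9 * 8 / n_params

# File sizes from the table above:
for name, gb in [("Q4_K_S", 17.5), ("Q6_K", 25.1), ("Q8_0", 32.5)]:
    print(f"{name}: ~{approx_bits_per_weight(gb):.2f} bpw")
```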

The upload went sideways but it's there:

https://huggingface.co/cmh/Jan-v2-VL-max-FP8_gguf/tree/main


Jan-v2-VL: Multimodal Agent for Long-Horizon Tasks



Overview

Jan-v2-VL-max extends the Jan-v2-VL family to a 30B-parameter vision–language model focused on long-horizon execution. This release scales model capacity and applies LoRA-based RLVR to improve stability over many steps with low error accumulation. For evaluation, we continue to use "The Illusion of Diminishing Returns: Measuring Long-Horizon Execution in LLMs," which emphasizes execution length rather than knowledge recall.

Intended Use

Tasks where the plan and/or knowledge can be provided up front, and success hinges on stable, many-step execution with minimal drift:

  • Agentic automation & UI control: Stepwise operation in browsers/desktop apps with screenshot grounding and tool calls via Jan Browser MCP.

Model Performance

Evaluated under FP8 inference, Jan-v2-VL-max shows no regressions relative to Qwen3-VL-30B-A3B-Thinking and small gains on several tasks, with the largest improvements in long-horizon execution. Our FP8 build maintains accuracy while reducing memory footprint and latency.


Deployment

Jan Web

Hosted on Jan Web: use the model directly at chat.jan.ai.


Local Deployment

Using vLLM: We recommend vLLM for serving and inference; all reported results were run with vLLM 0.12.0. For FP8 deployment we used llm-compressor built from source, and pinned transformers==4.57.1 for compatibility.

# Exact versions used in our evals
pip install vllm==0.12.0
pip install transformers==4.57.1
pip install "git+https://github.com/vllm-project/llm-compressor.git@1abfd9eb34a2941e82f47cbd595f1aab90280c80"
vllm serve Menlo/Jan-v2-VL-max-FP8 \
    --host 0.0.0.0 \
    --port 1234 \
    -dp 1 \
    --enable-auto-tool-choice \
    --tool-call-parser hermes \
    --reasoning-parser deepseek_r1

Recommended Parameters

For optimal performance in agentic and general tasks, we recommend the following inference parameters:

temperature: 1.0
top_p: 0.95
top_k: 20
repetition_penalty: 1.0
presence_penalty: 1.5
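These settings map directly onto an OpenAI-compatible request body; `top_k` and `repetition_penalty` are vLLM extensions accepted alongside the standard fields. A minimal sketch, with the port matching the `vllm serve` command above and a made-up prompt:

```python
import json

# Recommended sampling settings from this card; top_k and
# repetition_penalty are vLLM-specific extras on the OpenAI schema.
sampling = {
    "temperature": 1.0,
    "top_p": 0.95,
    "top_k": 20,
    "repetition_penalty": 1.0,
    "presence_penalty": 1.5,
}

body = {
    "model": "Menlo/Jan-v2-VL-max-FP8",
    "messages": [{"role": "user", "content": "List the next three UI steps."}],
    **sampling,
}

# POST to http://localhost:1234/v1/chat/completions, matching
# --port 1234 in the vllm serve command above.
print(json.dumps(body, indent=2))
```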

🤝 Community & Support

📄 Citation

Updated Soon
Model size: 31B params
Architecture: qwen3vlmoe
