GGUF quantization of Jan-v2-VL-max-Instruct-FP8, mmproj included.
```shell
llama-server -m Jan-v2-VL-max-Instruct-FP8_Q8_0.gguf --mmproj mmproj-Jan-v2-VL-max-Instruct-FP8_F16.gguf
```
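Once llama-server is up (it listens on port 8080 by default), it exposes an OpenAI-compatible `/v1/chat/completions` endpoint that accepts inline images when an mmproj projector is loaded. A minimal sketch of building such a request body; the helper name and the placeholder image bytes are illustrative, not part of the model card:

```python
import base64
import json

def build_vision_request(image_bytes: bytes, prompt: str) -> dict:
    """Build an OpenAI-style chat completion body with an inline image.

    llama-server accepts base64 data URIs inside `image_url` content
    parts when launched with --mmproj (assumption: PNG input here).
    """
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{b64}"},
                    },
                ],
            }
        ],
        "max_tokens": 512,
    }

# Placeholder bytes stand in for a real PNG file read from disk.
body = build_vision_request(b"fake-image-bytes", "Describe this image.")
print(json.dumps(body)[:80])
```

POSTing this body to `http://localhost:8080/v1/chat/completions` (e.g. with `curl` or `requests`) returns the model's description of the image.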
Jan-v2-VL-max-Instruct-FP8
| Quant type | Bits per weight | File size |
|---|---|---|
| Jan-v2-VL-max-Instruct-FP8_Q4_K_S | 4 bits per weight | 17.5 GB |
| Jan-v2-VL-max-Instruct-FP8_Q4_K_M | 4 bits per weight | 18.6 GB |
| Jan-v2-VL-max-Instruct-FP8_Q5_K_S | 5 bits per weight | 21.1 GB |
| Jan-v2-VL-max-Instruct-FP8_Q5_K_M | 5 bits per weight | 21.7 GB |
| Jan-v2-VL-max-Instruct-FP8_Q6_K | 6 bits per weight | 25.1 GB |
| Jan-v2-VL-max-Instruct-FP8_Q8_0 | 8 bits per weight | 32.5 GB |
| Jan-v2-VL-max-Instruct-FP8_F16 | 16 bits per weight | 61.1 GB |
mmproj-Jan-v2-VL-max-Instruct-FP8
| Quant type | Bits per weight | File size |
|---|---|---|
| mmproj-Jan-v2-VL-max-Instruct-FP8_Q8_0 | 8 bits per weight | 712 MB |
| mmproj-Jan-v2-VL-max-Instruct-FP8_F16 | 16 bits per weight | 1.08 GB |
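As a rough sanity check on the tables above, file size scales with bits per weight. A back-of-the-envelope sketch; actual GGUF files are somewhat larger than this estimate because quant blocks store per-block scales and some tensors are kept at higher precision:

```python
def estimated_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Lower-bound estimate of a quantized model's file size in GB.

    Ignores GGUF metadata, per-block quantization scales, and tensors
    retained at higher precision, so real files come out larger.
    """
    return n_params * bits_per_weight / 8 / 1e9

# 30B parameters at 8 bits per weight -> 30.0 GB, in the right
# ballpark for the 32.5 GB Q8_0 file listed above.
print(estimated_size_gb(30e9, 8))  # → 30.0
```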
Jan-v2-VL: Multimodal Agent for Long-Horizon Tasks
Overview
Jan-v2-VL-max-Instruct extends the Jan-v2-VL family to a 30B-parameter vision–language model focused on research capability.
Deployment
Jan Web
Hosted on Jan Web: use the model directly at chat.jan.ai
Local Deployment
Using vLLM: We recommend vLLM for serving and inference. All reported results were run with vLLM 0.12.0. For FP8 deployment, we used llm-compressor built from source. Please pin transformers==4.57.1 for compatibility.
```shell
# Exact versions used in our evals
pip install vllm==0.12.0
pip install transformers==4.57.1
pip install "git+https://github.com/vllm-project/llm-compressor.git@1abfd9eb34a2941e82f47cbd595f1aab90280c80"

vllm serve Menlo/Jan-v2-VL-max-Instruct-FP8 \
  --host 0.0.0.0 \
  --port 1234 \
  -dp 1 \
  --enable-auto-tool-choice \
  --tool-call-parser hermes
```
Recommended Parameters
For optimal performance in agentic and general tasks, we recommend the following inference parameters:
- `temperature`: 0.7
- `top_p`: 0.8
- `top_k`: 20
- `repetition_penalty`: 1.0
- `presence_penalty`: 0.0
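These parameters can be passed directly in an OpenAI-compatible request to the vLLM server started earlier (vLLM accepts `top_k` and `repetition_penalty` as sampling extensions beyond the standard OpenAI fields). A minimal sketch; the prompt text is illustrative, and the actual HTTP call is left commented so the snippet stands alone:

```python
import json

# Recommended sampling parameters from the model card.
RECOMMENDED = {
    "temperature": 0.7,
    "top_p": 0.8,
    "top_k": 20,               # vLLM extension to the OpenAI schema
    "repetition_penalty": 1.0,  # vLLM extension to the OpenAI schema
    "presence_penalty": 0.0,
}

body = {
    "model": "Menlo/Jan-v2-VL-max-Instruct-FP8",
    "messages": [{"role": "user", "content": "Summarize this page."}],
    **RECOMMENDED,
}

# POST to the server launched with `vllm serve ... --port 1234`, e.g.:
# requests.post("http://localhost:1234/v1/chat/completions", json=body)
print(json.dumps(body, indent=2))
```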
Community & Support
- Discussions: Hugging Face Community
- Jan App: Learn more about the Jan App at jan.ai
Citation
To be updated soon.
Model tree for cmh/Jan-v2-VL-max-Instruct-FP8-GGUF
- Base model: Qwen/Qwen3-VL-30B-A3B-Instruct