---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-VL-30B-A3B-Instruct
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- agent
---
# Jan-v2-VL: Multimodal Agent for Long-Horizon Tasks

[![GitHub](https://img.shields.io/badge/GitHub-Repository-blue?logo=github)](https://github.com/janhq/jan) 
[![License](https://img.shields.io/badge/License-Apache%202.0-yellow)](https://opensource.org/licenses/Apache-2.0)
[![Jan App](https://img.shields.io/badge/Powered%20by-Jan%20App-purple?style=flat&logo=android)](https://jan.ai/) 

![image/gif](demo.gif)

## Overview

**Jan-v2-VL-max-Instruct** extends the Jan-v2-VL family with a **30B-parameter** vision–language model focused on **research** capability.


## Deployment

### Jan Web

Hosted on **Jan Web**: use the model directly at **[chat.jan.ai](https://chat.jan.ai/)**.

### Local Deployment

**Using vLLM:**
We recommend **vLLM** for serving and inference; all reported results were obtained with **vLLM 0.12.0**.
For **FP8** deployment, we used **llm-compressor** built from source. Pin `transformers==4.57.1` for compatibility.

```bash
# Exact versions used in our evals
pip install vllm==0.12.0
pip install transformers==4.57.1
pip install "git+https://github.com/vllm-project/llm-compressor.git@1abfd9eb34a2941e82f47cbd595f1aab90280c80"
```

```bash
vllm serve Menlo/Jan-v2-VL-max-Instruct-FP8 \
    --host 0.0.0.0 \
    --port 1234 \
    -dp 1 \
    --enable-auto-tool-choice \
    --tool-call-parser hermes
```
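Once the server is up, it exposes an OpenAI-compatible API at `http://localhost:1234/v1`. Below is a minimal sketch of a multimodal chat-completions payload with a tool definition, which the server can invoke automatically thanks to `--enable-auto-tool-choice`. The image URL and the `click` tool are hypothetical placeholders for illustration, not part of the model's official tooling.

```python
# Sketch: assemble an OpenAI-compatible chat-completions payload for the
# local vLLM server started above. The image URL and the "click" tool
# are placeholders; substitute your own agent actions.

def build_request(prompt: str, image_url: str) -> dict:
    """Build a multimodal request with one sample tool definition."""
    return {
        "model": "Menlo/Jan-v2-VL-max-Instruct-FP8",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "image_url", "image_url": {"url": image_url}},
                    {"type": "text", "text": prompt},
                ],
            }
        ],
        # With --enable-auto-tool-choice, the server decides when to emit
        # a tool call, parsed from the model output by the hermes parser.
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "click",  # hypothetical agent action
                    "description": "Click a UI element at pixel (x, y).",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "x": {"type": "integer"},
                            "y": {"type": "integer"},
                        },
                        "required": ["x", "y"],
                    },
                },
            }
        ],
    }

payload = build_request("Describe this screenshot.", "https://example.com/shot.png")
```

POST this payload to `/v1/chat/completions` with any HTTP client, or pass the same fields through the `openai` Python client pointed at the local base URL.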

### Recommended Parameters
For optimal performance in agentic and general tasks, we recommend the following inference parameters:
```yaml
temperature: 0.7
top_p: 0.8
top_k: 20
repetition_penalty: 1.0
presence_penalty: 0.0
```

## 🤝 Community & Support

- **Discussions**: [Hugging Face Community](https://huggingface.co/janhq/Jan-v2-VL-max-FP8/discussions) 
- **Jan App**: Learn more about the Jan App at [jan.ai](https://jan.ai/)

## 📄 Citation
```bibtex
Updated Soon
```