Commit 9b23244 (verified) · 0 Parent(s)

Duplicate from deepseek-ai/Janus-Pro-1B

Co-authored-by: Xiaokang Chen <CharlesCXK@users.noreply.huggingface.co>
- .gitattributes +35 -0
- README.md +61 -0
- config.json +66 -0
- janus_pro_teaser1.png +0 -0
- janus_pro_teaser2.png +0 -0
- preprocessor_config.json +23 -0
- processor_config.json +9 -0
- pytorch_model.bin +3 -0
- special_tokens_map.json +16 -0
- tokenizer.json +0 -0
- tokenizer_config.json +10 -0
.gitattributes
ADDED
@@ -0,0 +1,35 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,61 @@
+---
+license: mit
+license_name: deepseek
+license_link: LICENSE
+pipeline_tag: any-to-any
+library_name: transformers
+tags:
+- multimodal
+- text-to-image
+- unified-model
+---
+
+## 1. Introduction
+
+Janus-Pro is a novel autoregressive framework that unifies multimodal understanding and generation.
+It addresses the limitations of previous approaches by decoupling visual encoding into separate pathways, while still using a single, unified transformer architecture for processing. The decoupling not only alleviates the conflict between the visual encoder's roles in understanding and generation, but also enhances the framework's flexibility.
+Janus-Pro surpasses previous unified models and matches or exceeds the performance of task-specific models.
+The simplicity, high flexibility, and effectiveness of Janus-Pro make it a strong candidate for next-generation unified multimodal models.
+
+[**GitHub Repository**](https://github.com/deepseek-ai/Janus)
+
+<div align="center">
+<img alt="image" src="janus_pro_teaser1.png" style="width:90%;">
+</div>
+
+<div align="center">
+<img alt="image" src="janus_pro_teaser2.png" style="width:90%;">
+</div>
+
+
+## 2. Model Summary
+
+Janus-Pro is a unified understanding and generation MLLM that decouples visual encoding for multimodal understanding and generation.
+Janus-Pro is built on DeepSeek-LLM-1.5b-base / DeepSeek-LLM-7b-base.
+
+For multimodal understanding, it uses [SigLIP-L](https://huggingface.co/timm/ViT-L-16-SigLIP-384) as the vision encoder, which supports 384 x 384 image input. For image generation, Janus-Pro uses the tokenizer from [LlamaGen](https://github.com/FoundationVision/LlamaGen) with a downsample rate of 16.
+
+
+
+## 3. Quick Start
+
+Please refer to the [**GitHub Repository**](https://github.com/deepseek-ai/Janus).
+
+
+## 4. License
+
+This code repository is licensed under [the MIT License](https://github.com/deepseek-ai/DeepSeek-LLM/blob/HEAD/LICENSE-CODE). The use of Janus-Pro models is subject to the [DeepSeek Model License](https://github.com/deepseek-ai/DeepSeek-LLM/blob/HEAD/LICENSE-MODEL).
+## 5. Citation
+
+```
+@article{chen2025janus,
+  title={Janus-Pro: Unified Multimodal Understanding and Generation with Data and Model Scaling},
+  author={Chen, Xiaokang and Wu, Zhiyu and Liu, Xingchao and Pan, Zizheng and Liu, Wen and Xie, Zhenda and Yu, Xingkai and Ruan, Chong},
+  journal={arXiv preprint arXiv:2501.17811},
+  year={2025}
+}
+```
+
+## 6. Contact
+
+If you have any questions, please raise an issue or contact us at [service@deepseek.com](mailto:service@deepseek.com).
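As a companion to the Quick Start section in the README above, here is a minimal multimodal-understanding sketch. It follows the usage pattern documented in the GitHub repository; the `janus` package with its `VLChatProcessor`, `MultiModalityCausalLM`, and `load_pil_images` helpers is assumed to be installed from that repository, and `example.jpg` is a placeholder image path.

```python
# Hedged sketch: chat about an image with Janus-Pro-1B, assuming the `janus`
# package from https://github.com/deepseek-ai/Janus is installed (pip install -e .).
import torch
from transformers import AutoModelForCausalLM
from janus.models import MultiModalityCausalLM, VLChatProcessor
from janus.utils.io import load_pil_images

model_path = "deepseek-ai/Janus-Pro-1B"
processor: VLChatProcessor = VLChatProcessor.from_pretrained(model_path)
model: MultiModalityCausalLM = AutoModelForCausalLM.from_pretrained(
    model_path, trust_remote_code=True
)
model = model.to(torch.bfloat16).cuda().eval()

# Roles and the <image_placeholder> tag match special_tokens_map.json and processor_config.json.
conversation = [
    {"role": "<|User|>",
     "content": "<image_placeholder>\nDescribe this image.",
     "images": ["example.jpg"]},  # placeholder path
    {"role": "<|Assistant|>", "content": ""},
]

pil_images = load_pil_images(conversation)
inputs = processor(
    conversations=conversation, images=pil_images, force_batchify=True
).to(model.device)

# Merge image and text embeddings, then generate with the language-model head.
inputs_embeds = model.prepare_inputs_embeds(**inputs)
outputs = model.language_model.generate(
    inputs_embeds=inputs_embeds,
    attention_mask=inputs.attention_mask,
    pad_token_id=processor.tokenizer.eos_token_id,
    max_new_tokens=512,
    do_sample=False,
)
print(processor.tokenizer.decode(outputs[0].cpu().tolist(), skip_special_tokens=True))
```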
config.json
ADDED
@@ -0,0 +1,66 @@
+{
+  "aligner_config": {
+    "cls": "MlpProjector",
+    "model_type": "aligner",
+    "params": {
+      "depth": 2,
+      "input_dim": 1024,
+      "n_embed": 2048,
+      "projector_type": "mlp_gelu"
+    }
+  },
+  "architectures": [
+    "MultiModalityCausalLM"
+  ],
+  "gen_aligner_config": {
+    "cls": "MlpProjector",
+    "model_type": "gen_aligner",
+    "params": {
+      "depth": 2,
+      "input_dim": 8,
+      "n_embed": 2048,
+      "projector_type": "mlp_gelu"
+    }
+  },
+  "gen_head_config": {
+    "cls": "vision_head",
+    "model_type": "gen_head",
+    "params": {
+      "image_token_embed": 2048,
+      "image_token_size": 16384,
+      "n_embed": 2048
+    }
+  },
+  "gen_vision_config": {
+    "cls": "VQ-16",
+    "model_type": "gen_vision",
+    "params": {
+      "image_token_size": 16384,
+      "n_embed": 8
+    }
+  },
+  "language_config": {
+    "hidden_size": 2048,
+    "intermediate_size": 5632,
+    "max_position_embeddings": 16384,
+    "model_type": "llama",
+    "num_attention_heads": 16,
+    "num_hidden_layers": 24,
+    "num_key_value_heads": 16,
+    "torch_dtype": "bfloat16",
+    "vocab_size": 102400
+  },
+  "model_type": "multi_modality",
+  "torch_dtype": "bfloat16",
+  "transformers_version": "4.33.1",
+  "vision_config": {
+    "cls": "CLIPVisionTower",
+    "model_type": "vision",
+    "params": {
+      "image_size": 384,
+      "model_name": "siglip_large_patch16_384",
+      "select_feature": "same",
+      "select_layer": -1
+    }
+  }
+}
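To make the structure above easier to scan, the following short sketch reads config.json directly and prints the pieces that define the architecture: the 24-layer Llama-style language model, the SigLIP-L understanding encoder, and the VQ-16 generation tokenizer whose 16,384-entry codebook of 8-dim codes is bridged into the 2048-dim LLM space by gen_aligner. It only uses fields shown in the diff; a local copy of the file is assumed.

```python
# Hedged sketch: inspect the fields of config.json shown above (assumes a local copy).
import json

with open("config.json") as f:
    cfg = json.load(f)

lang = cfg["language_config"]
print(f'LLM: {lang["model_type"]}, {lang["num_hidden_layers"]} layers, '
      f'hidden {lang["hidden_size"]}, vocab {lang["vocab_size"]}')

vis = cfg["vision_config"]["params"]
print(f'Understanding encoder: {vis["model_name"]} at {vis["image_size"]}x{vis["image_size"]}')

gen = cfg["gen_vision_config"]
aligner = cfg["gen_aligner_config"]["params"]
print(f'Generation tokenizer: {gen["cls"]} with {gen["params"]["image_token_size"]} codes; '
      f'{aligner["input_dim"]}-dim codes projected to {aligner["n_embed"]}-dim LLM embeddings')
```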
janus_pro_teaser1.png
ADDED
janus_pro_teaser2.png
ADDED
preprocessor_config.json
ADDED
@@ -0,0 +1,23 @@
+{
+  "background_color": [
+    127,
+    127,
+    127
+  ],
+  "do_normalize": true,
+  "image_mean": [
+    0.5,
+    0.5,
+    0.5
+  ],
+  "image_processor_type": "VLMImageProcessor",
+  "image_size": 384,
+  "image_std": [
+    0.5,
+    0.5,
+    0.5
+  ],
+  "min_size": 14,
+  "processor_class": "VLChatProcessor",
+  "rescale_factor": 0.00392156862745098
+}
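The numbers above pin down the pixel pipeline: rescale_factor 0.00392156862745098 is 1/255, and mean/std of 0.5 map pixels into [-1, 1]. Below is a rough sketch of an equivalent transform; the exact resize-and-pad behaviour of VLMImageProcessor is not shown in this diff, so the long-side resize with background_color padding is an assumption, while the rescale and normalisation constants come directly from the file.

```python
# Hedged sketch of the preprocessing implied by preprocessor_config.json.
# Resize/pad strategy is assumed; rescale and normalisation values are from the file.
import numpy as np
from PIL import Image

IMAGE_SIZE = 384
BACKGROUND = (127, 127, 127)
RESCALE = 0.00392156862745098  # == 1/255
MEAN = STD = 0.5

img = Image.open("example.jpg").convert("RGB")  # placeholder path
scale = IMAGE_SIZE / max(img.size)
img = img.resize((max(1, round(img.width * scale)), max(1, round(img.height * scale))))

# Pad to a square canvas filled with background_color (assumption).
canvas = Image.new("RGB", (IMAGE_SIZE, IMAGE_SIZE), BACKGROUND)
canvas.paste(img, ((IMAGE_SIZE - img.width) // 2, (IMAGE_SIZE - img.height) // 2))

x = np.asarray(canvas, dtype=np.float32) * RESCALE  # [0, 1]
x = (x - MEAN) / STD                                # [-1, 1]
x = x.transpose(2, 0, 1)                            # HWC -> CHW for the vision tower
print(x.shape, float(x.min()), float(x.max()))      # (3, 384, 384)
```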
processor_config.json
ADDED
@@ -0,0 +1,9 @@
+{
+  "add_special_token": false,
+  "ignore_id": -100,
+  "image_tag": "<image_placeholder>",
+  "mask_prompt": true,
+  "num_image_tokens": 576,
+  "processor_class": "VLChatProcessor",
+  "sft_format": "deepseek"
+}
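The num_image_tokens value of 576 is consistent with the rest of the configuration: a 384 x 384 input through a patch-16 encoder (siglip_large_patch16_384 in config.json) yields a 24 x 24 token grid. A quick check:

```python
# Why num_image_tokens is 576: 384x384 input, 16-pixel patches -> 24x24 grid.
image_size, patch_size = 384, 16
grid = image_size // patch_size
assert grid * grid == 576  # matches processor_config.json
```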
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ea7cf164cbed272be2a9999bc4c314da6a6f23ef51871ddef3afc2c0c430cc3f
+size 4178890389
special_tokens_map.json
ADDED
@@ -0,0 +1,16 @@
+{
+  "additional_special_tokens": [
+    "<image_placeholder>",
+    "<patch_placeholder>",
+    "<|ref|>",
+    "<|/ref|>",
+    "<|det|>",
+    "<|/det|>",
+    "<|grounding|>",
+    "<|User|>",
+    "<|Assistant|>"
+  ],
+  "bos_token": "<|begin▁of▁sentence|>",
+  "eos_token": "<|end▁of▁sentence|>",
+  "pad_token": "<|▁pad▁|>"
+}
tokenizer.json
ADDED
The diff for this file is too large to render.
tokenizer_config.json
ADDED
@@ -0,0 +1,10 @@
+{
+  "bos_token": "<|begin▁of▁sentence|>",
+  "clean_up_tokenization_spaces": false,
+  "eos_token": "<|end▁of▁sentence|>",
+  "model_max_length": 16384,
+  "pad_token": null,
+  "tokenizer_class": "LlamaTokenizer",
+  "unk_token": null,
+  "use_default_system_prompt": true
+}