Text-to-Image
Diffusers
Safetensors
English
QuantFuncPipeline
custom_qwen_image
image-generation
diffusion
quantized
quantfunc
Instructions to use QuantFunc/Qwen-Image-Series with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use QuantFunc/Qwen-Image-Series with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "QuantFunc/Qwen-Image-Series", dtype=torch.bfloat16, device_map="cuda"
)
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
  - Google Colab
  - Kaggle
- Local Apps
  - Draw Things
  - DiffusionBee
Upload folder using huggingface_hub
- README.md +175 -0
- assets/logo.webp +0 -0
- precision-config/50x-above-fp4-sample.json +12 -0
- precision-config/50x-below-int4-sample.json +12 -0
- prequant/qwen-image-2512-50x-above.safetensors +3 -0
- prequant/qwen-image-2512-50x-below.safetensors +3 -0
- prequant/qwen-image-50x-above.safetensors +3 -0
- prequant/qwen-image-50x-below.safetensors +3 -0
- qwen-image-series-50x-above-base-model/model_index.json +24 -0
- qwen-image-series-50x-above-base-model/quantfunc_config.json +15 -0
- qwen-image-series-50x-above-base-model/scheduler/scheduler_config.json +11 -0
- qwen-image-series-50x-above-base-model/text_encoder/config.json +132 -0
- qwen-image-series-50x-above-base-model/text_encoder/model.safetensors +3 -0
- qwen-image-series-50x-above-base-model/tokenizer/added_tokens.json +24 -0
- qwen-image-series-50x-above-base-model/tokenizer/chat_template.jinja +54 -0
- qwen-image-series-50x-above-base-model/tokenizer/merges.txt +0 -0
- qwen-image-series-50x-above-base-model/tokenizer/special_tokens_map.json +31 -0
- qwen-image-series-50x-above-base-model/tokenizer/tokenizer_config.json +207 -0
- qwen-image-series-50x-above-base-model/tokenizer/vocab.json +0 -0
- qwen-image-series-50x-above-base-model/vae/config.json +56 -0
- qwen-image-series-50x-above-base-model/vae/diffusion_pytorch_model.safetensors +3 -0
- qwen-image-series-50x-below-base-model/.quantfunc_keymap_cache/37e8f96ef1c64bf46879a40ccc33d348.bin +3 -0
- qwen-image-series-50x-below-base-model/.quantfunc_keymap_cache/ffee6342fd24d487e726656f39a6386b.bin +3 -0
- qwen-image-series-50x-below-base-model/.quantfunc_vram_cache_lighting.json +0 -0
- qwen-image-series-50x-below-base-model/.quantfunc_vram_cache_lighting_ed38d446_lora_39c6fde71e307d0495bd97d4ca504940.json +0 -0
- qwen-image-series-50x-below-base-model/.quantfunc_vram_cache_svdq.json +0 -0
- qwen-image-series-50x-below-base-model/model_index.json +24 -0
- qwen-image-series-50x-below-base-model/quantfunc_config.json +5 -0
- qwen-image-series-50x-below-base-model/scheduler/scheduler_config.json +11 -0
- qwen-image-series-50x-below-base-model/text_encoder/config.json +135 -0
- qwen-image-series-50x-below-base-model/text_encoder/model.safetensors +3 -0
- qwen-image-series-50x-below-base-model/tokenizer/added_tokens.json +24 -0
- qwen-image-series-50x-below-base-model/tokenizer/chat_template.jinja +54 -0
- qwen-image-series-50x-below-base-model/tokenizer/merges.txt +0 -0
- qwen-image-series-50x-below-base-model/tokenizer/special_tokens_map.json +31 -0
- qwen-image-series-50x-below-base-model/tokenizer/tokenizer_config.json +207 -0
- qwen-image-series-50x-below-base-model/tokenizer/vocab.json +0 -0
- qwen-image-series-50x-below-base-model/vae/config.json +56 -0
- qwen-image-series-50x-below-base-model/vae/diffusion_pytorch_model.safetensors +3 -0
- transformer/config.json +18 -0
- transformer/qwen-image-2512-50x-above-lighting-4steps-prequant.safetensors +3 -0
- transformer/qwen-image-2512-50x-above-lighting-4steps.safetensors +3 -0
- transformer/qwen-image-2512-50x-below-lighting-4steps-prequant.safetensors +3 -0
- transformer/qwen-image-2512-50x-below-lighting-4steps.safetensors +3 -0
README.md (added):

---
frameworks:
- pytorch
language:
- en
license: other
license_name: quantfunc-model-license
tags:
- image-generation
- text-to-image
- diffusion
- quantized
- quantfunc
---

# QuantFunc

<div align="center" style="margin-top: 50px;">
<img src="assets/logo.webp" width="300" alt="Logo">
</div>

# Qwen-Image-Series

Pre-quantized **Qwen-Image-2512** text-to-image model series by [QuantFunc](https://github.com/user/quantfunc), with Lighting backend inference support.

## Overview

Qwen-Image-2512 is a text-to-image diffusion model distilled from the Alibaba Qwen team's image generation model.

With the latest QuantFunc ComfyUI plugin, inference achieves a **2x–6x speedup** over mainstream frameworks.

## Hardware Requirements

- Supports NVIDIA RTX 30 series and above.
- The RTX 20 series does not support BF16, which causes significant precision loss when quantizing Qwen-series models; the 20 series therefore currently supports only Z-Image models.

## Compatibility

- The base models in this repository are compatible with **any version** of Qwen-Image transformer weights.
- The QuantFunc code plugin and ComfyUI plugin are **100% compatible** with previous versions of Qwen-Image models.

## Directory Structure

```
Qwen-Image-Series/
├── qwen-image-series-50x-above-base-model/   # Base model, optimized for RTX 50 series and above
│   ├── text_encoder/                         # Qwen2.5-VL text encoder (pre-quantized)
│   ├── vae/                                  # 3D VAE decoder (~242MB)
│   ├── tokenizer/                            # Tokenizer
│   ├── scheduler/                            # Scheduler config
│   ├── model_index.json
│   └── quantfunc_config.json
├── qwen-image-series-50x-below-base-model/   # Base model, optimized for RTX 50 series and below
│   └── (same structure as above)
├── transformer/
│   ├── config.json
│   ├── qwen-image-2512-50x-above-lighting-4steps.safetensors           # RTX 50+ Lighting 4-step (~14GB)
│   ├── qwen-image-2512-50x-above-lighting-4steps-prequant.safetensors  # RTX 50+ Lighting pre-quantized (~11GB)
│   ├── qwen-image-2512-50x-below-lighting-4steps.safetensors           # RTX 30/40 Lighting 4-step (~14GB)
│   └── qwen-image-2512-50x-below-lighting-4steps-prequant.safetensors  # RTX 30/40 Lighting pre-quantized (~11GB)
├── prequant/                                 # Pre-quantized modulation weights
│   ├── qwen-image-2512-50x-above.safetensors # RTX 50+ mod weights (2512)
│   ├── qwen-image-2512-50x-below.safetensors # RTX 30/40 mod weights (2512)
│   ├── qwen-image-50x-above.safetensors      # RTX 50+ mod weights (legacy)
│   └── qwen-image-50x-below.safetensors      # RTX 30/40 mod weights (legacy)
└── precision-config/                         # Lighting precision config samples
    ├── 50x-above-fp4-sample.json             # FP4 config for RTX 50+
    └── 50x-below-int4-sample.json            # INT4 config for RTX 30/40
```

## Model Variants

| Variant | base-model | transformer | Total Size | Target GPU |
|---------|-----------|-------------|------------|------------|
| **50x-above** | `qwen-image-series-50x-above-base-model` | `qwen-image-2512-50x-above-lighting-4steps.safetensors` | ~14GB | RTX 50 series and above |
| **50x-below** | `qwen-image-series-50x-below-base-model` | `qwen-image-2512-50x-below-lighting-4steps.safetensors` | ~14GB | RTX 30/40 series |

- **50x-above**: Optimized for RTX 50 series (Blackwell) and above
- **50x-below**: Optimized for RTX 30/40 series
- **4steps**: Distilled accelerated variant; only 4 sampling steps are needed to generate an image

> The base-model and transformer must use the **same variant** (both above or both below).
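The variant-matching rule can be automated from the CUDA compute capability: RTX 50-series (Blackwell) GPUs report compute capability 10 or higher, while RTX 30/40 (Ampere/Ada) report 8.x. A minimal sketch (the `select_variant` and `variant_paths` helpers are illustrative, not part of QuantFunc):

```python
def select_variant(cc_major: int, cc_minor: int = 0) -> str:
    """Map a CUDA compute capability to the matching repo variant.

    Blackwell (RTX 50 series) reports compute capability >= 10;
    Ampere/Ada (RTX 30/40) report 8.x.
    """
    return "50x-above" if cc_major >= 10 else "50x-below"


def variant_paths(variant: str) -> dict:
    # Both components must come from the SAME variant (both above or both below).
    return {
        "model_dir": f"Qwen-Image-Series/qwen-image-series-{variant}-base-model",
        "transformer": (
            "Qwen-Image-Series/transformer/"
            f"qwen-image-2512-{variant}-lighting-4steps.safetensors"
        ),
    }


# On a real machine the capability would come from
# torch.cuda.get_device_capability(); hard-coded here for illustration.
paths = variant_paths(select_variant(10, 0))
print(paths["model_dir"])
```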
## Quick Start

### Download

```bash
pip install huggingface_hub
```

```python
from huggingface_hub import snapshot_download

model_dir = snapshot_download('QuantFunc/Qwen-Image-Series')
```
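The snapshot above pulls the entire repository (tens of GB across both variants). `snapshot_download` also accepts an `allow_patterns` argument, so a single variant can be fetched on its own; a sketch, where the pattern list is an assumption derived from the directory structure above:

```python
def variant_patterns(variant: str) -> list:
    """Glob patterns covering one variant plus the shared transformer config."""
    return [
        f"qwen-image-series-{variant}-base-model/**",
        f"transformer/*{variant}*",
        "transformer/config.json",
        f"prequant/*{variant}*",
        f"precision-config/{variant}-*.json",
    ]


patterns = variant_patterns("50x-above")
print(patterns)

# Network call, shown for reference only:
# from huggingface_hub import snapshot_download
# model_dir = snapshot_download("QuantFunc/Qwen-Image-Series",
#                               allow_patterns=patterns)
```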
### Inference

```bash
# RTX 50 series
quantfunc \
  --model-dir Qwen-Image-Series/qwen-image-series-50x-above-base-model \
  --transformer Qwen-Image-Series/transformer/qwen-image-2512-50x-above-lighting-4steps.safetensors \
  --auto-optimize --model-backend lighting \
  --prompt "a beautiful sunset over the ocean with dramatic clouds" \
  --output output.png --steps 4

# RTX 30/40 series
quantfunc \
  --model-dir Qwen-Image-Series/qwen-image-series-50x-below-base-model \
  --transformer Qwen-Image-Series/transformer/qwen-image-2512-50x-below-lighting-4steps.safetensors \
  --auto-optimize --model-backend lighting \
  --prompt "a beautiful sunset over the ocean with dramatic clouds" \
  --output output.png --steps 4
```

`--auto-optimize` automatically configures VRAM management, the attention backend, and offload strategies based on your GPU.
## SVDQ vs. Lighting Backend

This repository provides **Lighting** backend models. Differences between the two backends:

| Feature | Lighting | SVDQ |
|---------|----------|------|
| **Quantization** | Per-layer mixed precision (FP4/INT4/FP8/INT8) | Nunchaku-based holistic pre-quantization |
| **LoRA Integration** | Real-time quantization: build a custom model in 5 minutes with zero speed loss, integrating any number of LoRAs | Runtime low-rank pathway |
| **Ecosystem** | QuantFunc native | Compatible with the widely adopted Nunchaku ecosystem, enhanced with Rotation quantization and Auto Rank dynamic rank optimization |
| **Flexibility** | Per-layer/sub-layer precision control | Precision fixed at export time |
| **Use Cases** | Rapid personal model customization, batch LoRA integration | Leveraging the Nunchaku ecosystem, runtime dynamic LoRA |
## Pre-quantized Modulation Weights (prequant/)

The `prequant/` directory contains **pre-quantized modulation weights** extracted from SVDQ models. Use them with the Lighting backend for high-quality modulation without runtime quantization overhead.

```bash
# From FP16 with mod weights (the first run quantizes on the fly)
quantfunc \
  --model-dir Qwen-Image-Series/qwen-image-series-50x-above-base-model \
  --model-backend lighting \
  --precision-config Qwen-Image-Series/precision-config/50x-above-fp4-sample.json \
  --mod-weights Qwen-Image-Series/prequant/qwen-image-2512-50x-above.safetensors \
  --rotation-block-size 256 \
  --prompt "a beautiful sunset" --steps 4 --auto-optimize
```

Alternatively, use the **pre-quantized Lighting transformer** for instant loading (no runtime quantization):

```bash
quantfunc \
  --model-dir Qwen-Image-Series/qwen-image-series-50x-above-base-model \
  --transformer Qwen-Image-Series/transformer/qwen-image-2512-50x-above-lighting-4steps-prequant.safetensors \
  --model-backend lighting \
  --prompt "a beautiful sunset" --steps 4 --auto-optimize
```
## Precision Config (precision-config/)

Sample per-layer precision configurations for the Lighting backend:

| File | Target GPU | Precision |
|------|-----------|-----------|
| `50x-above-fp4-sample.json` | RTX 50+ | FP4 attention + AF8WF4 MLP fc2 + INT8 modulation |
| `50x-below-int4-sample.json` | RTX 30/40 | INT4 for all layers + INT8 modulation |
## Related Repositories

- [QuantFunc/Z-Image-Series](https://huggingface.co/QuantFunc/Z-Image-Series): Z-Image-Turbo text-to-image (lightweight, fast)
- [QuantFunc/Qwen-Image-Edit-Series](https://huggingface.co/QuantFunc/Qwen-Image-Edit-Series): Qwen-Image-Edit image editing

## License

The pre-quantized model weights in this repository are derived from the original models. Users must comply with the original models' license agreements. The QuantFunc inference engine and its plugins (including the ComfyUI plugin) are licensed separately; see official QuantFunc channels for details.

For models quantized from commercially licensed models, users are responsible for obtaining the necessary commercial licenses from the original model providers.
assets/logo.webp (added: binary image file)
precision-config/50x-above-fp4-sample.json (added):

```json
{
  "transformer_blocks.attn.to_qkv": "f4",
  "transformer_blocks.attn.add_qkv_proj": "f4",
  "transformer_blocks.attn.to_out": "f4",
  "transformer_blocks.attn.to_add_out": "f4",
  "transformer_blocks.img_mlp.net.0.proj": "f4",
  "transformer_blocks.img_mlp.net.2": "af8wf4",
  "transformer_blocks.txt_mlp.net.0.proj": "f4",
  "transformer_blocks.txt_mlp.net.2": "af8wf4",
  "transformer_blocks.img_mod": "i8",
  "transformer_blocks.txt_mod": "i8"
}
```
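The config maps transformer sub-module name prefixes to precision codes. A small sanity-check sketch; the set of valid codes is inferred from the two sample files and the backend table above and may be incomplete:

```python
import json

VALID_CODES = {"f4", "i4", "f8", "i8", "af8wf4"}  # inferred; may be incomplete

sample = json.loads("""
{
  "transformer_blocks.attn.to_qkv": "f4",
  "transformer_blocks.img_mlp.net.2": "af8wf4",
  "transformer_blocks.img_mod": "i8"
}
""")

def check_precision_config(cfg: dict) -> None:
    """Reject unknown precision codes or layer names outside transformer_blocks."""
    for layer, code in cfg.items():
        assert layer.startswith("transformer_blocks."), layer
        assert code in VALID_CODES, f"unknown precision code {code!r} for {layer}"

check_precision_config(sample)
print("config OK")
```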
precision-config/50x-below-int4-sample.json (added):

```json
{
  "transformer_blocks.attn.to_qkv": "i4",
  "transformer_blocks.attn.add_qkv_proj": "i4",
  "transformer_blocks.attn.to_out": "i4",
  "transformer_blocks.attn.to_add_out": "i4",
  "transformer_blocks.img_mlp.net.0.proj": "i4",
  "transformer_blocks.img_mlp.net.2": "i4",
  "transformer_blocks.txt_mlp.net.0.proj": "i4",
  "transformer_blocks.txt_mlp.net.2": "i4",
  "transformer_blocks.img_mod": "i8",
  "transformer_blocks.txt_mod": "i8"
}
```
prequant/qwen-image-2512-50x-above.safetensors (added, Git LFS pointer):

```
version https://git-lfs.github.com/spec/v1
oid sha256:8280a6fe7c2d8a78a2dc8198175589207a1c488c9bafa0d18d2174e17ff93398
size 3826539208
```

prequant/qwen-image-2512-50x-below.safetensors (added, Git LFS pointer):

```
version https://git-lfs.github.com/spec/v1
oid sha256:10c894c79b780cc0787f286f26c5f62d94e41f55b6a471d5698bd136a59c8396
size 3826539160
```

prequant/qwen-image-50x-above.safetensors (added, Git LFS pointer):

```
version https://git-lfs.github.com/spec/v1
oid sha256:a70ff77a22c4ebf86d89c9400ff89746a16312cba630510a77c30e4b34e9fde0
size 3826539224
```

prequant/qwen-image-50x-below.safetensors (added, Git LFS pointer):

```
version https://git-lfs.github.com/spec/v1
oid sha256:2c3c9ff574d3c8754a6501ac0a6de06b190de73a91d3b138e71f9ac467ce7f8e
size 3826539176
```
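The `.safetensors` entries above are Git LFS pointer files, not the weights themselves: three `key value` lines giving the spec version, the SHA-256 of the real blob, and its byte size. A minimal parser sketch:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs pointer file into a dict of its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:8280a6fe7c2d8a78a2dc8198175589207a1c488c9bafa0d18d2174e17ff93398
size 3826539208"""

info = parse_lfs_pointer(pointer)
print(int(info["size"]) / 1e9, "GB")  # roughly 3.8 GB of modulation weights
```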
qwen-image-series-50x-above-base-model/model_index.json (added):

```json
{
  "_class_name": "QwenImagePipeline",
  "_diffusers_version": "0.36.0.dev0",
  "scheduler": ["diffusers", "FlowMatchEulerDiscreteScheduler"],
  "text_encoder": ["transformers", "Qwen2_5_VLForConditionalGeneration"],
  "tokenizer": ["transformers", "Qwen2Tokenizer"],
  "transformer": ["diffusers", "QwenImageTransformer2DModel"],
  "vae": ["diffusers", "AutoencoderKLQwenImage"]
}
```
qwen-image-series-50x-above-base-model/quantfunc_config.json (added):

```json
{
  "backend": "lighting",
  "model_id": "70bf7036-ffaf-48fb-9c89-c1be346f07c4",
  "obfuscated": true,
  "text_encoder": {
    "prequantized": true,
    "text_precision": "fp4",
    "use_rotation": true
  },
  "vision_encoder": {
    "prequantized": true,
    "vision_quant": "fp4",
    "vision_rotation": true
  }
}
```
qwen-image-series-50x-above-base-model/scheduler/scheduler_config.json (added):

```json
{
  "_class_name": "FlowMatchEulerDiscreteScheduler",
  "base_image_seq_len": 256,
  "base_shift": 1.0986122886681098,
  "max_image_seq_len": 8192,
  "max_shift": 1.0986122886681098,
  "num_train_timesteps": 1000,
  "shift": 1.0,
  "time_shift_type": "exponential",
  "use_dynamic_shifting": true
}
```
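One detail worth noting: `base_shift` and `max_shift` are both 1.0986122886681098, which is ln 3 in double precision. Because the two interpolation endpoints are equal, dynamic shifting appears to resolve to a constant mu = ln 3 regardless of image sequence length, i.e. an effective exponential shift factor of exp(ln 3) = 3. This reading of the config is our inference, not something the repo states; a quick check of the arithmetic:

```python
import math

BASE_SHIFT = 1.0986122886681098  # value from scheduler_config.json

# The stored value is ln(3) to full double precision...
assert math.isclose(math.log(3), BASE_SHIFT, rel_tol=0, abs_tol=1e-15)

# ...so with "exponential" time shifting the effective shift factor is
# exp(mu) = 3 for any image sequence length (base_shift == max_shift).
print(math.exp(BASE_SHIFT))  # approximately 3.0
```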
qwen-image-series-50x-above-base-model/text_encoder/config.json (added):

```json
{
  "architectures": ["Qwen2_5_VLForConditionalGeneration"],
  "attention_dropout": 0.0,
  "bos_token_id": 151643,
  "dtype": "bfloat16",
  "eos_token_id": 151645,
  "hidden_act": "silu",
  "hidden_size": 3584,
  "image_token_id": 151655,
  "initializer_range": 0.02,
  "intermediate_size": 18944,
  "max_position_embeddings": 128000,
  "max_window_layers": 28,
  "model_type": "qwen2_5_vl",
  "num_attention_heads": 28,
  "num_hidden_layers": 28,
  "num_key_value_heads": 4,
  "rms_norm_eps": 1e-06,
  "rope_scaling": {
    "mrope_section": [16, 24, 24],
    "rope_type": "default",
    "type": "default"
  },
  "rope_theta": 1000000.0,
  "sliding_window": 32768,
  "text_config": {
    "_name_or_path": "/cpfs01/haoyangzhang/pretrained_weights/Qwen2.5-VL",
    "architectures": ["Qwen2_5_VLForConditionalGeneration"],
    "attention_dropout": 0.0,
    "bos_token_id": 151643,
    "dtype": "float32",
    "eos_token_id": 151645,
    "hidden_act": "silu",
    "hidden_size": 3584,
    "initializer_range": 0.02,
    "intermediate_size": 18944,
    "layer_types": [
      "full_attention", "full_attention", "full_attention", "full_attention",
      "full_attention", "full_attention", "full_attention", "full_attention",
      "full_attention", "full_attention", "full_attention", "full_attention",
      "full_attention", "full_attention", "full_attention", "full_attention",
      "full_attention", "full_attention", "full_attention", "full_attention",
      "full_attention", "full_attention", "full_attention", "full_attention",
      "full_attention", "full_attention", "full_attention", "full_attention"
    ],
    "max_position_embeddings": 128000,
    "max_window_layers": 28,
    "model_type": "qwen2_5_vl_text",
    "num_attention_heads": 28,
    "num_hidden_layers": 28,
    "num_key_value_heads": 4,
    "rms_norm_eps": 1e-06,
    "rope_scaling": {
      "mrope_section": [16, 24, 24],
      "rope_type": "default",
      "type": "default"
    },
    "rope_theta": 1000000.0,
    "sliding_window": null,
    "use_cache": true,
    "use_sliding_window": false,
    "vision_token_id": 151654,
    "vocab_size": 152064
  },
  "tie_word_embeddings": false,
  "transformers_version": "4.57.1",
  "use_cache": true,
  "use_sliding_window": false,
  "video_token_id": 151656,
  "vision_config": {
    "depth": 32,
    "dtype": "float32",
    "fullatt_block_indexes": [7, 15, 23, 31],
    "hidden_act": "silu",
    "hidden_size": 1280,
    "in_channels": 3,
    "in_chans": 3,
    "initializer_range": 0.02,
    "intermediate_size": 3420,
    "model_type": "qwen2_5_vl",
    "num_heads": 16,
    "out_hidden_size": 3584,
    "patch_size": 14,
    "spatial_merge_size": 2,
    "spatial_patch_size": 14,
    "temporal_patch_size": 2,
    "tokens_per_second": 2,
    "window_size": 112
  },
  "vision_end_token_id": 151653,
  "vision_start_token_id": 151652,
  "vision_token_id": 151654,
  "vocab_size": 152064
}
```
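A few quantities worth deriving from the text-encoder config above when budgeting VRAM: the per-head dimension and the grouped-query-attention ratio follow directly from the hidden size and head counts. A small sketch:

```python
cfg = {  # values copied from text_encoder/config.json above
    "hidden_size": 3584,
    "num_attention_heads": 28,
    "num_key_value_heads": 4,
}

head_dim = cfg["hidden_size"] // cfg["num_attention_heads"]            # 3584 / 28
gqa_groups = cfg["num_attention_heads"] // cfg["num_key_value_heads"]  # 28 / 4

print(f"head_dim={head_dim}, {gqa_groups} query heads share each KV head")
```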
qwen-image-series-50x-above-base-model/text_encoder/model.safetensors (added, Git LFS pointer):

```
version https://git-lfs.github.com/spec/v1
oid sha256:48a4e5e25c2cbfa88a5e69263f70c252079c14aea320223fbd68045560ec12d7
size 4761171924
```
qwen-image-series-50x-above-base-model/tokenizer/added_tokens.json (added):

```json
{
  "</tool_call>": 151658,
  "<tool_call>": 151657,
  "<|box_end|>": 151649,
  "<|box_start|>": 151648,
  "<|endoftext|>": 151643,
  "<|file_sep|>": 151664,
  "<|fim_middle|>": 151660,
  "<|fim_pad|>": 151662,
  "<|fim_prefix|>": 151659,
  "<|fim_suffix|>": 151661,
  "<|im_end|>": 151645,
  "<|im_start|>": 151644,
  "<|image_pad|>": 151655,
  "<|object_ref_end|>": 151647,
  "<|object_ref_start|>": 151646,
  "<|quad_end|>": 151651,
  "<|quad_start|>": 151650,
  "<|repo_name|>": 151663,
  "<|video_pad|>": 151656,
  "<|vision_end|>": 151653,
  "<|vision_pad|>": 151654,
  "<|vision_start|>": 151652
}
```
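The 22 added tokens occupy one contiguous ID block starting at 151643, directly after the base BPE vocabulary, and all sit below the `vocab_size` of 152064 declared in the text-encoder config. A quick structural check over the mapping above:

```python
added_tokens = {
    "<|endoftext|>": 151643, "<|im_start|>": 151644, "<|im_end|>": 151645,
    "<|object_ref_start|>": 151646, "<|object_ref_end|>": 151647,
    "<|box_start|>": 151648, "<|box_end|>": 151649,
    "<|quad_start|>": 151650, "<|quad_end|>": 151651,
    "<|vision_start|>": 151652, "<|vision_end|>": 151653,
    "<|vision_pad|>": 151654, "<|image_pad|>": 151655, "<|video_pad|>": 151656,
    "<tool_call>": 151657, "</tool_call>": 151658,
    "<|fim_prefix|>": 151659, "<|fim_middle|>": 151660, "<|fim_suffix|>": 151661,
    "<|fim_pad|>": 151662, "<|repo_name|>": 151663, "<|file_sep|>": 151664,
}

ids = sorted(added_tokens.values())
assert ids == list(range(151643, 151665))  # one contiguous block of 22 IDs
assert max(ids) < 152064                   # all below vocab_size
print(f"{len(ids)} added tokens, IDs {ids[0]}..{ids[-1]}")
```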
qwen-image-series-50x-above-base-model/tokenizer/chat_template.jinja (added):

```jinja
{%- if tools %}
{{- '<|im_start|>system\n' }}
{%- if messages[0]['role'] == 'system' %}
{{- messages[0]['content'] }}
{%- else %}
{{- 'You are a helpful assistant.' }}
{%- endif %}
{{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
{%- for tool in tools %}
{{- "\n" }}
{{- tool | tojson }}
{%- endfor %}
{{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
{%- if messages[0]['role'] == 'system' %}
{{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }}
{%- else %}
{{- '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- for message in messages %}
{%- if (message.role == "user") or (message.role == "system" and not loop.first) or (message.role == "assistant" and not message.tool_calls) %}
{{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
{%- elif message.role == "assistant" %}
{{- '<|im_start|>' + message.role }}
{%- if message.content %}
{{- '\n' + message.content }}
{%- endif %}
{%- for tool_call in message.tool_calls %}
{%- if tool_call.function is defined %}
{%- set tool_call = tool_call.function %}
{%- endif %}
{{- '\n<tool_call>\n{"name": "' }}
{{- tool_call.name }}
{{- '", "arguments": ' }}
{{- tool_call.arguments | tojson }}
{{- '}\n</tool_call>' }}
{%- endfor %}
{{- '<|im_end|>\n' }}
{%- elif message.role == "tool" %}
{%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %}
{{- '<|im_start|>user' }}
{%- endif %}
{{- '\n<tool_response>\n' }}
{{- message.content }}
{{- '\n</tool_response>' }}
{%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
{{- '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\n' }}
{%- endif %}
```
qwen-image-series-50x-above-base-model/tokenizer/merges.txt (added; diff too large to render, see raw file)
qwen-image-series-50x-above-base-model/tokenizer/special_tokens_map.json (added):

```json
{
  "additional_special_tokens": [
    "<|im_start|>", "<|im_end|>", "<|object_ref_start|>", "<|object_ref_end|>",
    "<|box_start|>", "<|box_end|>", "<|quad_start|>", "<|quad_end|>",
    "<|vision_start|>", "<|vision_end|>", "<|vision_pad|>", "<|image_pad|>",
    "<|video_pad|>"
  ],
  "eos_token": {
    "content": "<|im_end|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
```
qwen-image-series-50x-above-base-model/tokenizer/tokenizer_config.json (added; content truncated in this capture):

```json
{
  "add_bos_token": false,
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "151643": { "content": "<|endoftext|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
    "151644": { "content": "<|im_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
    "151645": { "content": "<|im_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
    "151646": { "content": "<|object_ref_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
    "151647": { "content": "<|object_ref_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
    "151648": { "content": "<|box_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
    "151649": { "content": "<|box_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
    "151650": { "content": "<|quad_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
    "151651": { "content": "<|quad_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
    "151652": { "content": "<|vision_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
    "151653": { "content": "<|vision_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
    "151654": { "content": "<|vision_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
    "151655": { "content": "<|image_pad|>", "lstrip": false, "normalized": false, "rstrip": false,
```
"single_word": false,
|
| 107 |
+
"special": true
|
| 108 |
+
},
|
| 109 |
+
"151656": {
|
| 110 |
+
"content": "<|video_pad|>",
|
| 111 |
+
"lstrip": false,
|
| 112 |
+
"normalized": false,
|
| 113 |
+
"rstrip": false,
|
| 114 |
+
"single_word": false,
|
| 115 |
+
"special": true
|
| 116 |
+
},
|
| 117 |
+
"151657": {
|
| 118 |
+
"content": "<tool_call>",
|
| 119 |
+
"lstrip": false,
|
| 120 |
+
"normalized": false,
|
| 121 |
+
"rstrip": false,
|
| 122 |
+
"single_word": false,
|
| 123 |
+
"special": false
|
| 124 |
+
},
|
| 125 |
+
"151658": {
|
| 126 |
+
"content": "</tool_call>",
|
| 127 |
+
"lstrip": false,
|
| 128 |
+
"normalized": false,
|
| 129 |
+
"rstrip": false,
|
| 130 |
+
"single_word": false,
|
| 131 |
+
"special": false
|
| 132 |
+
},
|
| 133 |
+
"151659": {
|
| 134 |
+
"content": "<|fim_prefix|>",
|
| 135 |
+
"lstrip": false,
|
| 136 |
+
"normalized": false,
|
| 137 |
+
"rstrip": false,
|
| 138 |
+
"single_word": false,
|
| 139 |
+
"special": false
|
| 140 |
+
},
|
| 141 |
+
"151660": {
|
| 142 |
+
"content": "<|fim_middle|>",
|
| 143 |
+
"lstrip": false,
|
| 144 |
+
"normalized": false,
|
| 145 |
+
"rstrip": false,
|
| 146 |
+
"single_word": false,
|
| 147 |
+
"special": false
|
| 148 |
+
},
|
| 149 |
+
"151661": {
|
| 150 |
+
"content": "<|fim_suffix|>",
|
| 151 |
+
"lstrip": false,
|
| 152 |
+
"normalized": false,
|
| 153 |
+
"rstrip": false,
|
| 154 |
+
"single_word": false,
|
| 155 |
+
"special": false
|
| 156 |
+
},
|
| 157 |
+
"151662": {
|
| 158 |
+
"content": "<|fim_pad|>",
|
| 159 |
+
"lstrip": false,
|
| 160 |
+
"normalized": false,
|
| 161 |
+
"rstrip": false,
|
| 162 |
+
"single_word": false,
|
| 163 |
+
"special": false
|
| 164 |
+
},
|
| 165 |
+
"151663": {
|
| 166 |
+
"content": "<|repo_name|>",
|
| 167 |
+
"lstrip": false,
|
| 168 |
+
"normalized": false,
|
| 169 |
+
"rstrip": false,
|
| 170 |
+
"single_word": false,
|
| 171 |
+
"special": false
|
| 172 |
+
},
|
| 173 |
+
"151664": {
|
| 174 |
+
"content": "<|file_sep|>",
|
| 175 |
+
"lstrip": false,
|
| 176 |
+
"normalized": false,
|
| 177 |
+
"rstrip": false,
|
| 178 |
+
"single_word": false,
|
| 179 |
+
"special": false
|
| 180 |
+
}
|
| 181 |
+
},
|
| 182 |
+
"additional_special_tokens": [
|
| 183 |
+
"<|im_start|>",
|
| 184 |
+
"<|im_end|>",
|
| 185 |
+
"<|object_ref_start|>",
|
| 186 |
+
"<|object_ref_end|>",
|
| 187 |
+
"<|box_start|>",
|
| 188 |
+
"<|box_end|>",
|
| 189 |
+
"<|quad_start|>",
|
| 190 |
+
"<|quad_end|>",
|
| 191 |
+
"<|vision_start|>",
|
| 192 |
+
"<|vision_end|>",
|
| 193 |
+
"<|vision_pad|>",
|
| 194 |
+
"<|image_pad|>",
|
| 195 |
+
"<|video_pad|>"
|
| 196 |
+
],
|
| 197 |
+
"bos_token": null,
|
| 198 |
+
"clean_up_tokenization_spaces": false,
|
| 199 |
+
"eos_token": "<|im_end|>",
|
| 200 |
+
"errors": "replace",
|
| 201 |
+
"extra_special_tokens": {},
|
| 202 |
+
"model_max_length": 131072,
|
| 203 |
+
"pad_token": "<|endoftext|>",
|
| 204 |
+
"split_special_tokens": false,
|
| 205 |
+
"tokenizer_class": "Qwen2Tokenizer",
|
| 206 |
+
"unk_token": null
|
| 207 |
+
}
|
qwen-image-series-50x-above-base-model/tokenizer/vocab.json
ADDED
The diff for this file is too large to render. See raw diff.
qwen-image-series-50x-above-base-model/vae/config.json
ADDED
{
  "_class_name": "AutoencoderKLQwenImage",
  "_diffusers_version": "0.36.0.dev0",
  "attn_scales": [],
  "base_dim": 96,
  "dim_mult": [1, 2, 4, 4],
  "dropout": 0.0,
  "latents_mean": [-0.7571, -0.7089, -0.9113, 0.1075, -0.1745, 0.9653, -0.1517, 1.5508, 0.4134, -0.0715, 0.5517, -0.3632, -0.1922, -0.9497, 0.2503, -0.2921],
  "latents_std": [2.8184, 1.4541, 2.3275, 2.6558, 1.2196, 1.7708, 2.6052, 2.0743, 3.2687, 2.1526, 2.8652, 1.5579, 1.6382, 1.1253, 2.8251, 1.916],
  "num_res_blocks": 2,
  "temperal_downsample": [false, true, true],
  "z_dim": 16
}
qwen-image-series-50x-above-base-model/vae/diffusion_pytorch_model.safetensors
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:0c8bc8b758c649abef9ea407b95408389a3b2f610d0d10fcb054fe171d0a8344
size 253806966
qwen-image-series-50x-below-base-model/.quantfunc_keymap_cache/37e8f96ef1c64bf46879a40ccc33d348.bin
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:02df71a347fc4010098264ad082d7d96ccacefe46472b726ba602623dba62955
size 34344
qwen-image-series-50x-below-base-model/.quantfunc_keymap_cache/ffee6342fd24d487e726656f39a6386b.bin
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:7407700e82bdcc89d643e13817b3467be0df660cfa5348e30837a9d366391159
size 112900
qwen-image-series-50x-below-base-model/.quantfunc_vram_cache_lighting.json
ADDED
The diff for this file is too large to render. See raw diff.

qwen-image-series-50x-below-base-model/.quantfunc_vram_cache_lighting_ed38d446_lora_39c6fde71e307d0495bd97d4ca504940.json
ADDED
The diff for this file is too large to render. See raw diff.

qwen-image-series-50x-below-base-model/.quantfunc_vram_cache_svdq.json
ADDED
The diff for this file is too large to render. See raw diff.
qwen-image-series-50x-below-base-model/model_index.json
ADDED
{
  "_class_name": "QwenImagePipeline",
  "_diffusers_version": "0.34.0.dev0",
  "scheduler": ["diffusers", "FlowMatchEulerDiscreteScheduler"],
  "text_encoder": ["transformers", "Qwen2_5_VLForConditionalGeneration"],
  "tokenizer": ["transformers", "Qwen2Tokenizer"],
  "transformer": ["diffusers", "QwenImageTransformer2DModel"],
  "vae": ["diffusers", "AutoencoderKLQwenImage"]
}
qwen-image-series-50x-below-base-model/quantfunc_config.json
ADDED
{
  "backend": "lighting",
  "model_id": "df4b338b-c4ff-483e-9f68-6f6f4c46201e",
  "obfuscated": true
}
qwen-image-series-50x-below-base-model/scheduler/scheduler_config.json
ADDED
{
  "_class_name": "FlowMatchEulerDiscreteScheduler",
  "base_image_seq_len": 256,
  "base_shift": 1.0986122886681098,
  "max_image_seq_len": 8192,
  "max_shift": 1.0986122886681098,
  "num_train_timesteps": 1000,
  "shift": 1.0,
  "time_shift_type": "exponential",
  "use_dynamic_shifting": true
}
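A small aside on the scheduler values above (an illustrative sketch, not part of the repository): `base_shift` and `max_shift` are both ln(3), so even with `use_dynamic_shifting` enabled, the interpolated shift ends up constant across image sequence lengths.

```python
import math

# base_shift and max_shift from scheduler_config.json above are both ln(3).
assert abs(math.log(3) - 1.0986122886681098) < 1e-12

# FlowMatch schedulers with dynamic shifting typically interpolate the shift
# ("mu") linearly between base_shift at base_image_seq_len (256) and max_shift
# at max_image_seq_len (8192). The helper below is a hypothetical sketch of
# that interpolation; since both endpoints are ln(3) here, mu never varies.
def mu(seq_len, base=1.0986122886681098, mx=1.0986122886681098, lo=256, hi=8192):
    slope = (mx - base) / (hi - lo)
    return base + slope * (seq_len - lo)

assert mu(256) == mu(8192) == 1.0986122886681098
```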
qwen-image-series-50x-below-base-model/text_encoder/config.json
ADDED
{
  "architectures": ["Qwen2_5_VLForConditionalGeneration"],
  "attention_dropout": 0.0,
  "bos_token_id": 151643,
  "eos_token_id": 151645,
  "hidden_act": "silu",
  "hidden_size": 3584,
  "image_token_id": 151655,
  "initializer_range": 0.02,
  "intermediate_size": 18944,
  "max_position_embeddings": 128000,
  "max_window_layers": 28,
  "model_type": "qwen2_5_vl",
  "num_attention_heads": 28,
  "num_hidden_layers": 28,
  "num_key_value_heads": 4,
  "rms_norm_eps": 1e-06,
  "rope_scaling": {"mrope_section": [16, 24, 24], "rope_type": "default", "type": "default"},
  "rope_theta": 1000000.0,
  "sliding_window": 32768,
  "text_config": {
    "architectures": ["Qwen2_5_VLForConditionalGeneration"],
    "attention_dropout": 0.0,
    "bos_token_id": 151643,
    "eos_token_id": 151645,
    "hidden_act": "silu",
    "hidden_size": 3584,
    "image_token_id": null,
    "initializer_range": 0.02,
    "intermediate_size": 18944,
    "layer_types": [
      "full_attention", "full_attention", "full_attention", "full_attention",
      "full_attention", "full_attention", "full_attention", "full_attention",
      "full_attention", "full_attention", "full_attention", "full_attention",
      "full_attention", "full_attention", "full_attention", "full_attention",
      "full_attention", "full_attention", "full_attention", "full_attention",
      "full_attention", "full_attention", "full_attention", "full_attention",
      "full_attention", "full_attention", "full_attention", "full_attention"
    ],
    "max_position_embeddings": 128000,
    "max_window_layers": 28,
    "model_type": "qwen2_5_vl_text",
    "num_attention_heads": 28,
    "num_hidden_layers": 28,
    "num_key_value_heads": 4,
    "rms_norm_eps": 1e-06,
    "rope_scaling": {"mrope_section": [16, 24, 24], "rope_type": "default", "type": "default"},
    "rope_theta": 1000000.0,
    "sliding_window": null,
    "torch_dtype": "float32",
    "use_cache": true,
    "use_sliding_window": false,
    "video_token_id": null,
    "vision_end_token_id": 151653,
    "vision_start_token_id": 151652,
    "vision_token_id": 151654,
    "vocab_size": 152064
  },
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.53.1",
  "use_cache": true,
  "use_sliding_window": false,
  "video_token_id": 151656,
  "vision_config": {
    "depth": 32,
    "fullatt_block_indexes": [7, 15, 23, 31],
    "hidden_act": "silu",
    "hidden_size": 1280,
    "in_channels": 3,
    "in_chans": 3,
    "initializer_range": 0.02,
    "intermediate_size": 3420,
    "model_type": "qwen2_5_vl",
    "num_heads": 16,
    "out_hidden_size": 3584,
    "patch_size": 14,
    "spatial_merge_size": 2,
    "spatial_patch_size": 14,
    "temporal_patch_size": 2,
    "tokens_per_second": 2,
    "torch_dtype": "float32",
    "window_size": 112
  },
  "vision_end_token_id": 151653,
  "vision_start_token_id": 151652,
  "vision_token_id": 151654,
  "vocab_size": 152064
}
qwen-image-series-50x-below-base-model/text_encoder/model.safetensors
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:1421bae43e820c7b5ac8a55450bee6f998cb3aaf198f8978c80c6f8ed9ac4ba2
size 4557255810
qwen-image-series-50x-below-base-model/tokenizer/added_tokens.json
ADDED
{
  "</tool_call>": 151658,
  "<tool_call>": 151657,
  "<|box_end|>": 151649,
  "<|box_start|>": 151648,
  "<|endoftext|>": 151643,
  "<|file_sep|>": 151664,
  "<|fim_middle|>": 151660,
  "<|fim_pad|>": 151662,
  "<|fim_prefix|>": 151659,
  "<|fim_suffix|>": 151661,
  "<|im_end|>": 151645,
  "<|im_start|>": 151644,
  "<|image_pad|>": 151655,
  "<|object_ref_end|>": 151647,
  "<|object_ref_start|>": 151646,
  "<|quad_end|>": 151651,
  "<|quad_start|>": 151650,
  "<|repo_name|>": 151663,
  "<|video_pad|>": 151656,
  "<|vision_end|>": 151653,
  "<|vision_pad|>": 151654,
  "<|vision_start|>": 151652
}
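As a quick consistency check on the added-token mapping above (an illustrative sketch, not shipped with the repository), the 22 added ids form one contiguous block, 151643 through 151664:

```python
# Token-to-id mapping copied from tokenizer/added_tokens.json above.
added_tokens = {
    "<|endoftext|>": 151643, "<|im_start|>": 151644, "<|im_end|>": 151645,
    "<|object_ref_start|>": 151646, "<|object_ref_end|>": 151647,
    "<|box_start|>": 151648, "<|box_end|>": 151649,
    "<|quad_start|>": 151650, "<|quad_end|>": 151651,
    "<|vision_start|>": 151652, "<|vision_end|>": 151653,
    "<|vision_pad|>": 151654, "<|image_pad|>": 151655, "<|video_pad|>": 151656,
    "<tool_call>": 151657, "</tool_call>": 151658,
    "<|fim_prefix|>": 151659, "<|fim_middle|>": 151660, "<|fim_suffix|>": 151661,
    "<|fim_pad|>": 151662, "<|repo_name|>": 151663, "<|file_sep|>": 151664,
}

# All 22 ids are contiguous, so they can be appended to the base vocabulary
# without leaving holes in the embedding table.
ids = sorted(added_tokens.values())
assert ids == list(range(151643, 151665))
```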
qwen-image-series-50x-below-base-model/tokenizer/chat_template.jinja
ADDED
{%- if tools %}
    {{- '<|im_start|>system\n' }}
    {%- if messages[0]['role'] == 'system' %}
        {{- messages[0]['content'] }}
    {%- else %}
        {{- 'You are a helpful assistant.' }}
    {%- endif %}
    {{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
    {%- for tool in tools %}
        {{- "\n" }}
        {{- tool | tojson }}
    {%- endfor %}
    {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
    {%- if messages[0]['role'] == 'system' %}
        {{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }}
    {%- else %}
        {{- '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n' }}
    {%- endif %}
{%- endif %}
{%- for message in messages %}
    {%- if (message.role == "user") or (message.role == "system" and not loop.first) or (message.role == "assistant" and not message.tool_calls) %}
        {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
    {%- elif message.role == "assistant" %}
        {{- '<|im_start|>' + message.role }}
        {%- if message.content %}
            {{- '\n' + message.content }}
        {%- endif %}
        {%- for tool_call in message.tool_calls %}
            {%- if tool_call.function is defined %}
                {%- set tool_call = tool_call.function %}
            {%- endif %}
            {{- '\n<tool_call>\n{"name": "' }}
            {{- tool_call.name }}
            {{- '", "arguments": ' }}
            {{- tool_call.arguments | tojson }}
            {{- '}\n</tool_call>' }}
        {%- endfor %}
        {{- '<|im_end|>\n' }}
    {%- elif message.role == "tool" %}
        {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %}
            {{- '<|im_start|>user' }}
        {%- endif %}
        {{- '\n<tool_response>\n' }}
        {{- message.content }}
        {{- '\n</tool_response>' }}
        {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
            {{- '<|im_end|>\n' }}
        {%- endif %}
    {%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
    {{- '<|im_start|>assistant\n' }}
{%- endif %}
qwen-image-series-50x-below-base-model/tokenizer/merges.txt
ADDED
The diff for this file is too large to render. See raw diff.
qwen-image-series-50x-below-base-model/tokenizer/special_tokens_map.json
ADDED
{
  "additional_special_tokens": ["<|im_start|>", "<|im_end|>", "<|object_ref_start|>", "<|object_ref_end|>", "<|box_start|>", "<|box_end|>", "<|quad_start|>", "<|quad_end|>", "<|vision_start|>", "<|vision_end|>", "<|vision_pad|>", "<|image_pad|>", "<|video_pad|>"],
  "eos_token": {"content": "<|im_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
  "pad_token": {"content": "<|endoftext|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false}
}
qwen-image-series-50x-below-base-model/tokenizer/tokenizer_config.json
ADDED
{
  "add_bos_token": false,
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "151643": {"content": "<|endoftext|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151644": {"content": "<|im_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151645": {"content": "<|im_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151646": {"content": "<|object_ref_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151647": {"content": "<|object_ref_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151648": {"content": "<|box_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151649": {"content": "<|box_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151650": {"content": "<|quad_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151651": {"content": "<|quad_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151652": {"content": "<|vision_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151653": {"content": "<|vision_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151654": {"content": "<|vision_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151655": {"content": "<|image_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151656": {"content": "<|video_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151657": {"content": "<tool_call>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151658": {"content": "</tool_call>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151659": {"content": "<|fim_prefix|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151660": {"content": "<|fim_middle|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151661": {"content": "<|fim_suffix|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151662": {"content": "<|fim_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151663": {"content": "<|repo_name|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151664": {"content": "<|file_sep|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false}
  },
  "additional_special_tokens": ["<|im_start|>", "<|im_end|>", "<|object_ref_start|>", "<|object_ref_end|>", "<|box_start|>", "<|box_end|>", "<|quad_start|>", "<|quad_end|>", "<|vision_start|>", "<|vision_end|>", "<|vision_pad|>", "<|image_pad|>", "<|video_pad|>"],
  "bos_token": null,
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|im_end|>",
  "errors": "replace",
  "extra_special_tokens": {},
  "model_max_length": 131072,
  "pad_token": "<|endoftext|>",
  "split_special_tokens": false,
  "tokenizer_class": "Qwen2Tokenizer",
  "unk_token": null
}
qwen-image-series-50x-below-base-model/tokenizer/vocab.json
ADDED
The diff for this file is too large to render. See raw diff.
qwen-image-series-50x-below-base-model/vae/config.json
ADDED
{
  "_class_name": "AutoencoderKLQwenImage",
  "_diffusers_version": "0.34.0.dev0",
  "attn_scales": [],
  "base_dim": 96,
  "dim_mult": [1, 2, 4, 4],
  "dropout": 0.0,
  "latents_mean": [-0.7571, -0.7089, -0.9113, 0.1075, -0.1745, 0.9653, -0.1517, 1.5508, 0.4134, -0.0715, 0.5517, -0.3632, -0.1922, -0.9497, 0.2503, -0.2921],
  "latents_std": [2.8184, 1.4541, 2.3275, 2.6558, 1.2196, 1.7708, 2.6052, 2.0743, 3.2687, 2.1526, 2.8652, 1.5579, 1.6382, 1.1253, 2.8251, 1.916],
  "num_res_blocks": 2,
  "temperal_downsample": [false, true, true],
  "z_dim": 16
}
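One invariant worth noting in the VAE config above (an illustrative sketch, not part of the repository): the per-channel latent statistics must line up with `z_dim`, since pipelines commonly normalize latents as `(z - mean) / std` before the diffusion transformer and invert that before decoding.

```python
# Values copied from vae/config.json above.
z_dim = 16
latents_mean = [-0.7571, -0.7089, -0.9113, 0.1075, -0.1745, 0.9653, -0.1517, 1.5508,
                0.4134, -0.0715, 0.5517, -0.3632, -0.1922, -0.9497, 0.2503, -0.2921]
latents_std = [2.8184, 1.4541, 2.3275, 2.6558, 1.2196, 1.7708, 2.6052, 2.0743,
               3.2687, 2.1526, 2.8652, 1.5579, 1.6382, 1.1253, 2.8251, 1.916]

# One mean/std pair per latent channel, and every std must be positive
# for the (z - mean) / std normalization to be invertible.
assert len(latents_mean) == z_dim
assert len(latents_std) == z_dim
assert all(s > 0 for s in latents_std)
```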
qwen-image-series-50x-below-base-model/vae/diffusion_pytorch_model.safetensors
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:0c8bc8b758c649abef9ea407b95408389a3b2f610d0d10fcb054fe171d0a8344
size 253806966
transformer/config.json
ADDED
{
  "_class_name": "QwenImageTransformer2DModel",
  "_diffusers_version": "0.34.0.dev0",
  "attention_head_dim": 128,
  "axes_dims_rope": [16, 56, 56],
  "guidance_embeds": false,
  "in_channels": 64,
  "joint_attention_dim": 3584,
  "num_attention_heads": 24,
  "num_layers": 60,
  "out_channels": 16,
  "patch_size": 2,
  "pooled_projection_dim": 768
}
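A few internal dimensions implied by transformer/config.json above can be checked by hand (an illustrative sketch; the channel-packing explanation assumes the usual Qwen-Image latent patchification):

```python
# Values copied from transformer/config.json above.
num_attention_heads = 24
attention_head_dim = 128
axes_dims_rope = [16, 56, 56]
in_channels = 64

# Transformer width is heads * head_dim.
inner_dim = num_attention_heads * attention_head_dim
assert inner_dim == 3072

# The 3-axis rotary embedding partitions each head's 128 dimensions.
assert sum(axes_dims_rope) == attention_head_dim

# The VAE's 16 latent channels packed into 2x2 patches (patch_size=2)
# give the transformer's 64 input channels.
assert 16 * 2 * 2 == in_channels
```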
transformer/qwen-image-2512-50x-above-lighting-4steps-prequant.safetensors
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:ffa41bd340432b954c67807cd04386aed4c1255a30951f5a623be19ccb8d69fa
size 11420095396
transformer/qwen-image-2512-50x-above-lighting-4steps.safetensors
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:9d3b7354a3bf6eafd3ed193ec9e2af7504e494ebe86b54e994d235f62961c324
size 14498964653
transformer/qwen-image-2512-50x-below-lighting-4steps-prequant.safetensors
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:ec13d077253fe7126adfe9ea6a5c08ed8c726230f3f1bfdccd3f3e349f13b575
size 11139201766
transformer/qwen-image-2512-50x-below-lighting-4steps.safetensors
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:a4492b33c783d169e056f8c1bd95d2ad9f673d864ec0be83e9c9ef236634fc1e
size 14218071081