Update README.md
README.md CHANGED

@@ -1,169 +1,111 @@
---
license: apache-2.0
-pipeline_tag: image-text-to-text
library_name: transformers
---
-<a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;">
-    <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
-</a>
-
-# Qwen3-VL-30B-A3B-Instruct
-
-Meet Qwen3-VL, the most powerful vision-language model in the Qwen series to date.
-
-#### Key Enhancements:
-
-* **Visual Agent**: Operates PC/mobile GUIs: recognizes elements, understands functions, invokes tools, completes tasks.
-
-* **Visual Coding Boost**: Generates Draw.io/HTML/CSS/JS from images/videos.
-
-
-<img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-VL/qwen3vl_arc.jpg" width="80%"/>
-<p>
-
---

-## Quickstart
-
-The code for Qwen3-VL is available in the latest Hugging Face transformers, and we advise you to build from source with the following command:
-```
-pip install git+https://github.com/huggingface/transformers
-# pip install transformers==4.57.0 # currently, v4.57.0 is not yet released
-```
-
```python
-from transformers import Qwen3VLMoeForConditionalGeneration, AutoProcessor
-
-# default: load the model on the available device(s)
-model = Qwen3VLMoeForConditionalGeneration.from_pretrained(
-    "Qwen/Qwen3-VL-30B-A3B-Instruct", dtype="auto", device_map="auto"
-)
-
-# We recommend enabling flash_attention_2 for better acceleration and memory saving:
-# model = Qwen3VLMoeForConditionalGeneration.from_pretrained(
-#     "Qwen/Qwen3-VL-30B-A3B-Instruct",
-#     dtype=torch.bfloat16,
-#     attn_implementation="flash_attention_2",
-#     device_map="auto",
-# )
-
-processor = AutoProcessor.from_pretrained("Qwen/Qwen3-VL-30B-A3B-Instruct")
-
-messages = [
-    {
-        "role": "user",
-        "content": [
-            {
-                "type": "image",
-                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
-            },
-            {"type": "text", "text": "Describe this image."},
-        ],
-    }
-]
-
-# Preparation for inference
-inputs = processor.apply_chat_template(
-    messages,
-    tokenize=True,
-    add_generation_prompt=True,
-    return_dict=True,
-    return_tensors="pt"
-)
-
-# Inference: generation of the output
-generated_ids = model.generate(**inputs, max_new_tokens=128)
-generated_ids_trimmed = [
-    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
-]
-output_text = processor.batch_decode(
-    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
-)
-print(output_text)
-```
-
-## Citation
-
-If you find our work helpful, feel free to cite us.
-
-```
-@misc{qwen3technicalreport,
-  title={Qwen3 Technical Report},
-  author={Qwen Team},
-  year={2025},
-  eprint={2505.09388},
-  archivePrefix={arXiv},
-  primaryClass={cs.CL},
-  url={https://arxiv.org/abs/2505.09388},
-}
-
-@article{Qwen2.5-VL,
-  title={Qwen2.5-VL Technical Report},
-  author={Bai, Shuai and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Song, Sibo and Dang, Kai and Wang, Peng and Wang, Shijie and Tang, Jun and Zhong, Humen and Zhu, Yuanzhi and Yang, Mingkun and Li, Zhaohai and Wan, Jianqiang and Wang, Pengfei and Ding, Wei and Fu, Zheren and Xu, Yiheng and Ye, Jiabo and Zhang, Xi and Xie, Tianbao and Cheng, Zesen and Zhang, Hang and Yang, Zhibo and Xu, Haiyang and Lin, Junyang},
-  journal={arXiv preprint arXiv:2502.13923},
-  year={2025}
-}
-
-@article{Qwen2VL,
-  title={Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution},
-  author={Wang, Peng and Bai, Shuai and Tan, Sinan and Wang, Shijie and Fan, Zhihao and Bai, Jinze and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Fan, Yang and Dang, Kai and Du, Mengfei and Ren, Xuancheng and Men, Rui and Liu, Dayiheng and Zhou, Chang and Zhou, Jingren and Lin, Junyang},
-  journal={arXiv preprint arXiv:2409.12191},
-  year={2024}
-}
-
-@article{Qwen-VL,
-  title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond},
-  author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren},
-  journal={arXiv preprint arXiv:2308.12966},
-  year={2023}
-}
-```
---
license: apache-2.0
library_name: transformers
+language:
+- en
+pipeline_tag: image-text-to-text
+tags:
+- text-generation
+- instruct
+- coding
+- research
+- qwen
+- hyze
+- Hitesh
+metrics:
+- accuracy
+base_model:
+- Qwen/Qwen3-VL-30B-A3B-Instruct
---

+<p align="center">
+  <img src="https://i.imgur.com/ePJMLNp.png" alt="Hyze Logo" width="220"/>
+</p>
+
+<p align="center">
+  <img src="https://qwen.readthedocs.io/en/v2.0/_images/qwen2.png" alt="Qwen Logo" width="220"/>
+</p>
+
+<h1 align="center">HyzeQwenInstruct-30B</h1>
+
+<p align="center">
+  A high-performance instruction model by <b>Hyze AI</b>, built for coding and research.
+</p>
+
+<p align="center">
+  <a href="https://hyzeai.vercel.app">hyzeai.vercel.app</a> •
+  <a href="https://hyzedocs.vercel.app">hyzedocs.vercel.app</a> •
+  <a href="https://hyzecode.vercel.app">hyzecode.vercel.app</a>
+</p>
+
+---
+
+## Overview
+
+**HyzeQwenInstruct-30B** is a 30-billion-parameter instruction-tuned large language model optimized for:
+
+- Advanced code generation
+- Technical research & reasoning
+- Deep structured explanations
+- Strong instruction following
+
+Designed for developers, engineers, and researchers who need powerful AI assistance.
+
+---
+
+## Training Focus
+
+HyzeQwenInstruct-30B was optimized for:
+
+### Coding
+- Python, JavaScript, C++, and more
+- Code completion & generation
+- Debugging & refactoring
+- Algorithm explanations
+
+### Research & Technical Reasoning
+- Structured academic-style answers
+- Scientific explanations
+- Step-by-step reasoning
+- Long-form responses
+
+### Instruction Tuning
+- Precise intent following
+- Context retention
+- Clean output formatting
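+
+Because the model is instruction-tuned, prompts generally work best in chat format so the tokenizer's chat template is applied. A minimal sketch, assuming the `HyzeAI/HyzeQwenInstruct-30B` repository ships a Qwen-style chat template (not confirmed by this card):
+
+```python
+from transformers import AutoTokenizer
+
+# Repo id taken from the Usage section below; chat-template availability is assumed.
+tok = AutoTokenizer.from_pretrained("HyzeAI/HyzeQwenInstruct-30B")
+
+messages = [
+    {"role": "system", "content": "You are a precise coding assistant."},
+    {"role": "user", "content": "Refactor a for-loop filter into a list comprehension."},
+]
+
+# apply_chat_template wraps each turn in the model's special tokens,
+# which is the format instruction tuning expects at inference time.
+prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+print(prompt)
+```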
---
|
| 78 |
|
| 79 |
+
## π Benchmarks β Technical Comparison
|
| 80 |
|
| 81 |
+
| Model | Size | Coding | Reasoning | Notes |
|
| 82 |
+
|-------|------|--------|-----------|-------|
|
| 83 |
+
| **HyzeQwenInstruct-30B** | 30B | βββββ | βββββ | Optimized for dev + research |
|
| 84 |
+
| Qwen-30B-Instruct | 30B | βββββ | βββββ | Strong base alignment |
|
| 85 |
+
| GPT-NeoX-20B | 20B | βββββ | βββββ | Smaller parameter count |
|
| 86 |
+
| GPT-1 | 117M | βββββ | βββββ | Early generation model |
|
| 87 |
|
| 88 |
+
### β‘ Performance Characteristics
|
| 89 |
|
| 90 |
+
- Strong code structure generation
|
| 91 |
+
- Clear technical explanations
|
| 92 |
+
- High instruction accuracy
|
| 93 |
+
- Suitable for professional workflows
|
| 94 |
|
| 95 |
+
> Benchmark ratings are based on internal qualitative evaluation.
|
| 96 |
|
| 97 |
+
---
|
+## Usage
+
+### Transformers (Python)
+
```python
+from transformers import pipeline
+
+generator = pipeline(
+    "text-generation",
+    model="HyzeAI/HyzeQwenInstruct-30B"
+)
+
+print(generator("Write a Python function to implement quicksort:"))
+```
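+
+Note that this card's `pipeline_tag` and base model (`Qwen/Qwen3-VL-30B-A3B-Instruct`) are image-text-to-text, while the snippet above uses a plain text-generation pipeline. If the fine-tune retains the base model's vision stack (which this card does not confirm), a multimodal chat call would look roughly like the sketch below; the image URL is the demo image from the base model's card:
+
+```python
+from transformers import pipeline
+
+# Assumes the repo keeps Qwen3-VL's processor and vision tower (unverified).
+pipe = pipeline("image-text-to-text", model="HyzeAI/HyzeQwenInstruct-30B")
+
+messages = [
+    {
+        "role": "user",
+        "content": [
+            {"type": "image", "url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"},
+            {"type": "text", "text": "Describe this image."},
+        ],
+    }
+]
+
+# The pipeline applies the chat template, runs generation, and returns the reply text.
+print(pipe(text=messages, max_new_tokens=128, return_full_text=False))
+```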