---
frameworks:
- Pytorch
license: apache-2.0
tasks:
- image-text-to-text
model-type:
# e.g. gpt, phi, llama, chatglm, baichuan
- qwen
domain:
# e.g. nlp, cv, audio, multi-modal
- multi-modal
language:
# list of language codes: https://help.aliyun.com/document_detail/215387.html?spm=a2c4g.11186623.0.0.9f8d7467kni6Aa
- en
base_model:
- Qwen/Qwen3-8B
- Qwen/Qwen2.5-VL-7B-Instruct
#metrics:
## e.g. CIDEr, BLEU, ROUGE
#- CIDEr
#tags:
## custom tags, e.g. training methods such as pretrained, fine-tuned, instruction-tuned, RL-tuned, and others
#- pretrained
#tools:
## e.g. vllm, fastchat, llamacpp, AdaSeq
#- vllm
---
Simple-VL-8B is a vision-language (VL) model built by integrating the language modeling capabilities of Qwen3-8B with the visual understanding architecture of Qwen2.5-VL-7B-Instruct.
The model was trained with the [ms-swift](https://github.com/modelscope/ms-swift/tree/main) framework; the step-by-step SOP document can be found [here](https://swift.readthedocs.io/en/latest/BestPractices/Rapidly-Training-VL-model.html).
Base Models:
- [Qwen2.5-VL-7B-Instruct](https://www.modelscope.cn/models/Qwen/Qwen2.5-VL-7B-Instruct)
- [Qwen3-8B](https://www.modelscope.cn/models/Qwen/Qwen3-8B)
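To give a concrete picture of the integration, here is a minimal, hypothetical sketch of how a Qwen3-8B decoder can be spliced into the Qwen2.5-VL skeleton with `transformers`. It illustrates the idea behind step 1 of the process described below, not the exact procedure (which is documented in the SOP linked above), and it assumes a `transformers` version where the text-side hyperparameters sit at the top level of the VL config and where parameter names line up across the two decoders:
```python
import torch
from transformers import (
    AutoConfig,
    AutoModelForCausalLM,
    Qwen2_5_VLForConditionalGeneration,
)

# Align the LLM-side hyperparameters of the VL config with Qwen3-8B.
# (Assumes these fields sit at the top level of the Qwen2.5-VL config
# rather than under a nested text_config.)
vl_cfg = AutoConfig.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
llm_cfg = AutoConfig.from_pretrained("Qwen/Qwen3-8B")
for key in ("hidden_size", "num_hidden_layers", "num_attention_heads",
            "num_key_value_heads", "intermediate_size", "rms_norm_eps",
            "rope_theta", "vocab_size"):
    setattr(vl_cfg, key, getattr(llm_cfg, key))

# Build a randomly initialized VL skeleton from the updated config, then
# copy over every Qwen3-8B weight whose name and shape match. strict=False
# tolerates keys that exist on only one side (the ViT and merger, and
# Qwen3-specific pieces such as its per-head q/k norms).
vl_model = Qwen2_5_VLForConditionalGeneration(vl_cfg)
llm = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B", torch_dtype=torch.float32)
vl_state = vl_model.state_dict()
matched = {k: v for k, v in llm.state_dict().items()
           if k in vl_state and v.shape == vl_state[k].shape}
vl_model.load_state_dict(matched, strict=False)
# A real graft would also restore the pretrained ViT and merger weights
# from Qwen2.5-VL-7B-Instruct before any training.
```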
The Simple-VL-8B model was created through a two-stage fine-tuning process:
1. Architecture Modification: the LLM component of the original Qwen2.5-VL-7B-Instruct model was replaced with the weights of Qwen3-8B, and several key configuration parameters were updated to match Qwen3-8B's structure (see the sketch above).
2. Two-Stage Training (a freezing sketch follows this list):
    1. Stage 1: only the vision-to-language aligner (the merger layer) was trained, keeping the ViT and LLM components frozen.
    2. Stage 2: all components were unfrozen and jointly fine-tuned to improve overall performance.
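For illustration, the two freezing schedules can be sketched in plain PyTorch as below, assuming the aligner parameters live under the `visual.merger` prefix as in Qwen2.5-VL; the actual training was run with ms-swift, which exposes equivalent freezing switches (see the SOP above):
```python
from modelscope import Qwen2_5_VLForConditionalGeneration

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "swift/Simple-VL-8B", torch_dtype="auto"
)

# Stage 1: train only the vision-to-language aligner; freeze ViT and LLM.
for name, param in model.named_parameters():
    param.requires_grad = "visual.merger" in name  # assumed aligner prefix

# ... run stage-1 training on the aligner ...

# Stage 2: unfreeze everything for joint fine-tuning.
for param in model.parameters():
    param.requires_grad = True

# ... run stage-2 training on the full model ...
```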
Below is a code snippet showing how to use the chat model:
```python
from modelscope import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Default: load the model on the available device(s)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "swift/Simple-VL-8B", torch_dtype="auto", device_map="auto"
)

# Default processor
processor = AutoProcessor.from_pretrained("swift/Simple-VL-8B")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference: generate the output, then strip the prompt tokens
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
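As with the base Qwen2.5-VL processor, you can trade visual detail against speed and memory by bounding the number of visual tokens per image. The limits below are the example values from the Qwen2.5-VL README, applied here on the assumption that Simple-VL-8B inherits the same processor behavior:
```python
from modelscope import AutoProcessor

# Each visual token covers a 28x28 pixel patch; min_pixels / max_pixels
# bound the resized image area and hence the visual token count.
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
    "swift/Simple-VL-8B", min_pixels=min_pixels, max_pixels=max_pixels
)
```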