---
base_model: Qwen2-VL-7B-Instruct
library_name: peft
---
# Model Card
## Model Details
- **Finetuned from model:** Qwen2-VL-7B-Instruct
- **Finetuning datasets:** VAR, YouCookII
- **Paper:** AbductiveMLLM: Boosting Visual Abductive Reasoning Within MLLMs
## How to Get Started with the Model
Use the code below to load the base model and merge the LoRA adapters from both training stages.
```python
from transformers import Qwen2VLForConditionalGeneration
from peft import PeftModel

# Load the base Qwen2-VL-7B-Instruct model
model = Qwen2VLForConditionalGeneration.from_pretrained(
    'path/to/your/Qwen2-VL-7B-Instruct',
    torch_dtype='auto',
    attn_implementation="flash_attention_2",
)

# Merge the LoRA adapter from Training Stage-1
lora_model_dir = 'qwen2vl_7b_var_lora1'
model = PeftModel.from_pretrained(model, model_id=lora_model_dir)
model = model.merge_and_unload()

# Merge the LoRA adapter from Training Stage-2
lora2_dir = 'qwen2vl_7b_var_select3_lora2'
model = PeftModel.from_pretrained(model, model_id=lora2_dir)
model = model.merge_and_unload()

# The merged model is now ready for inference via model.generate(...)
```
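After merging, inference follows the standard Qwen2-VL processor flow. Below is a minimal sketch using a single image input; the image path, question, and generation settings are illustrative placeholders, not part of the released code:

```python
from PIL import Image
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained('path/to/your/Qwen2-VL-7B-Instruct')

# Build a chat-style prompt with one image (path and question are placeholders)
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What most likely happened before this scene?"},
        ],
    }
]
text = processor.apply_chat_template(messages, add_generation_prompt=True)

image = Image.open("example.jpg")
inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)

# Generate an abductive explanation for the observed scene
output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```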
## Citation
```bibtex
@article{chang2026abductivemllm,
title={AbductiveMLLM: Boosting Visual Abductive Reasoning Within MLLMs},
author={Chang, Boyu and Wang, Qi and Guo, Xi and Nan, Zhixiong and Yao, Yazhou and Zhou, Tianfei},
journal={arXiv preprint arXiv:2601.02771},
year={2026}
}
```