Improve model card: Add paper abstract and project page link

#3
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +77 -12
README.md CHANGED
@@ -1,26 +1,34 @@
  ---
- license: apache-2.0
- pipeline_tag: image-text-to-text
- library_name: transformers
  base_model:
- - OpenGVLab/InternVL3_5-8B-MPO
- base_model_relation: finetune
+ - OpenGVLab/InternVL3_5-8B-MPO
  datasets:
- - OpenGVLab/MMPR-v1.2
- - OpenGVLab/MMPR-Tiny
+ - OpenGVLab/MMPR-v1.2
+ - OpenGVLab/MMPR-Tiny
  language:
- - multilingual
+ - multilingual
+ library_name: transformers
+ license: apache-2.0
+ pipeline_tag: image-text-to-text
  tags:
- - internvl
- - custom_code
+ - internvl
+ - custom_code
+ base_model_relation: finetune
  ---

  # InternVL3_5-8B

- [\[πŸ“‚ GitHub\]](https://github.com/OpenGVLab/InternVL) [\[πŸ“œ InternVL 1.0\]](https://huggingface.co/papers/2312.14238) [\[πŸ“œ InternVL 1.5\]](https://huggingface.co/papers/2404.16821) [\[πŸ“œ InternVL 2.5\]](https://huggingface.co/papers/2412.05271) [\[πŸ“œ InternVL2.5-MPO\]](https://huggingface.co/papers/2411.10442) [\[πŸ“œ InternVL3\]](https://huggingface.co/papers/2504.10479) [\[πŸ“œ InternVL3.5\]](https://huggingface.co/papers/2508.18265)
+ [\[πŸ“‚ GitHub\]](https://github.com/OpenGVLab/InternVL) [\[πŸš€ Project Page\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[πŸ“œ InternVL 1.0\]](https://huggingface.co/papers/2312.14238) [\[πŸ“œ InternVL 1.5\]](https://huggingface.co/papers/2404.16821) [\[πŸ“œ InternVL 2.5\]](https://huggingface.co/papers/2412.05271) [\[πŸ“œ InternVL2.5-MPO\]](https://huggingface.co/papers/2411.10442) [\[πŸ“œ InternVL3\]](https://huggingface.co/papers/2504.10479) [\[πŸ“œ InternVL3.5\]](https://huggingface.co/papers/2508.18265)

  [\[πŸ†• Blog\]](https://internvl.github.io/blog/) [\[πŸ—¨οΈ Chat Demo\]](https://chat.intern-ai.org.cn/) [\[πŸš€ Quick Start\]](#quick-start) [\[πŸ“– Documents\]](https://internvl.readthedocs.io/en/latest/)

+ ## Paper Information
+
+ This model builds upon the research presented in the paper [Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling](https://huggingface.co/papers/2412.05271).
+
+ ### Abstract
+
+ We introduce InternVL 2.5, an advanced multimodal large language model (MLLM) series that builds upon InternVL 2.0, maintaining its core model architecture while introducing significant enhancements in training and testing strategies as well as data quality. In this work, we delve into the relationship between model scaling and performance, systematically exploring the performance trends in vision encoders, language models, dataset sizes, and test-time configurations. Through extensive evaluations on a wide range of benchmarks, including multi-discipline reasoning, document understanding, multi-image / video understanding, real-world comprehension, multimodal hallucination detection, visual grounding, multilingual capabilities, and pure language processing, InternVL 2.5 exhibits competitive performance, rivaling leading commercial models such as GPT-4o and Claude-3.5-Sonnet. Notably, our model is the first open-source MLLM to surpass 70% on the MMMU benchmark, achieving a 3.7-point improvement through Chain-of-Thought (CoT) reasoning and showcasing strong potential for test-time scaling. We hope this model contributes to the open-source community by setting new standards for developing and applying multimodal AI systems. A HuggingFace demo is also available.
+
  <div align="center">
  <img width="500" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/64006c09330a45b03605bba3/zJsd2hqd3EevgXo6fNgC-.png">
  </div>
@@ -494,7 +502,9 @@ image_urls=[

  images = [load_image(img_url) for img_url in image_urls]
  # Numbering images improves multi-image conversations
- response = pipe((f'Image-1: {IMAGE_TOKEN}\nImage-2: {IMAGE_TOKEN}\ndescribe these two images', images))
+ response = pipe((f'Image-1: {IMAGE_TOKEN}\n'
+                  f'Image-2: {IMAGE_TOKEN}\n'
+                  'describe these two images', images))
  print(response.text)
  ```

@@ -596,4 +606,59 @@ If you find this project useful in your research, please consider citing:
  journal={arXiv preprint arXiv:2508.18265},
  year={2025}
  }
+ @article{zhu2025internvl3,
+ title={Internvl3: Exploring advanced training and test-time recipes for open-source multimodal models},
+ author={Zhu, Jinguo and Wang, Weiyun and Chen, Zhe and Liu, Zhaoyang and Ye, Shenglong and Gu, Lixin and Tian, Hao and Duan, Yuchen and Su, Weijie and Shao, Jie and others},
+ journal={arXiv preprint arXiv:2504.10479},
+ year={2025}
+ }
+ @article{chen2024expanding,
+ title={Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling},
+ author={Chen, Zhe and Wang, Weiyun and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Cui, Erfei and Zhu, Jinguo and Ye, Shenglong and Tian, Hao and Liu, Zhaoyang and others},
+ journal={arXiv preprint arXiv:2412.05271},
+ year={2024}
+ }
+ @article{wang2024mpo,
+ title={Enhancing the Reasoning Ability of Multimodal Large Language Models via Mixed Preference Optimization},
+ author={Wang, Weiyun and Chen, Zhe and Wang, Wenhai and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Zhu, Jinguo and Zhu, Xizhou and Lu, Lewei and Qiao, Yu and Dai, Jifeng},
+ journal={arXiv preprint arXiv:2411.10442},
+ year={2024}
+ }
+ @article{gao2024mini,
+ title={Mini-InternVL: a flexible-transfer pocket multi-modal model with 5\% parameters and 90\% performance},
+ author={Gao, Zhangwei and Chen, Zhe and Cui, Erfei and Ren, Yiming and Wang, Weiyun and Zhu, Jinguo and Tian, Hao and Ye, Shenglong and He, Junjun and Zhu, Xizhou and others},
+ journal={Visual Intelligence},
+ volume={2},
+ number={1},
+ pages={1--17},
+ year={2024},
+ publisher={Springer}
+ }
+ @article{chen2024far,
+ title={How far are we to gpt-4v? closing the gap to commercial multimodal models with open-source suites},
+ author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
+ journal={Science China Information Sciences},
+ volume={67},
+ number={12},
+ pages={220101},
+ year={2024},
+ publisher={Springer}
+ }
+ @inproceedings{chen2024internvl,
+ title={Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks},
+ author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and others},
+ booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
+ pages={24185--24198},
+ year={2024}
+ }
  ```
+
+ ## Acknowledgement
+
+ InternVL is built with reference to the code of the following projects: [OpenAI CLIP](https://github.com/openai/CLIP), [Open CLIP](https://github.com/mlfoundations/open_clip), [CLIP Benchmark](https://github.com/LAION-AI/CLIP_benchmark), [EVA](https://github.com/baaivision/EVA/tree/master), [InternImage](https://github.com/OpenGVLab/InternImage), [ViT-Adapter](https://github.com/czczup/ViT-Adapter), [MMSegmentation](https://github.com/open-mmlab/mmsegmentation), [Transformers](https://github.com/huggingface/transformers), [DINOv2](https://github.com/facebookresearch/dinov2), [BLIP-2](https://github.com/salesforce/LAVIS/tree/main/projects/blip2), [Qwen-VL](https://github.com/QwenLM/Qwen-VL/tree/master/eval_mm), and [LLaVA-1.5](https://github.com/haotian-liu/LLaVA). Thanks for their awesome work!
+
+ ______________________________________________________________________
+
+ Scan the following QR code to join our WeChat group.
+
+ <p align="center"><img width="300" alt="image" src="https://github.com/user-attachments/assets/f776df09-ebba-4fd5-80c2-fec4ff1518be"></p>
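The multi-image call touched by this diff comes from lmdeploy's vision-language pipeline, and the hunk only shows an excerpt of the surrounding snippet. Below is a minimal, self-contained sketch of that flow; the pipeline construction, engine defaults, and image URLs are illustrative assumptions for this PR discussion, not part of the model card itself.

```python
from lmdeploy import pipeline
from lmdeploy.vl import load_image
from lmdeploy.vl.constants import IMAGE_TOKEN

# Model ID is this repo; engine options (backend, session length) can be
# passed via `backend_config` if the defaults do not fit your hardware.
pipe = pipeline('OpenGVLab/InternVL3_5-8B')

# Placeholder image URLs; any two reachable images work.
image_urls = [
    'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg',
    'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/det.jpg',
]
images = [load_image(url) for url in image_urls]

# Numbering the images gives each IMAGE_TOKEN a stable handle the model can refer to.
response = pipe((f'Image-1: {IMAGE_TOKEN}\n'
                 f'Image-2: {IMAGE_TOKEN}\n'
                 'describe these two images', images))
print(response.text)
```

The tuple form `(prompt, images)` pairs a single prompt with a list of images; keeping the `Image-N:` labels in the prompt is what lets follow-up turns refer back to a specific image.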