Revert previous PR

#2
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +123 -19
README.md CHANGED
@@ -1,26 +1,130 @@
  ---
- license: apache-2.0
- task_categories:
- - video-text-to-text
  ---

- This repository contains the data for the paper [PAVE: Patching and Adapting Video Large Language Models](https://arxiv.org/abs/2503.19794).

- Code: https://github.com/dragonlzm/PAVE

- ## Citation [optional]
- arxiv.org/abs/2503.19794
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

- **BibTeX:**
  ```
- @misc{liu2025pavepatchingadaptingvideo,
-       title={PAVE: Patching and Adapting Video Large Language Models},
-       author={Zhuoming Liu and Yiquan Li and Khoi Duc Nguyen and Yiwu Zhong and Yin Li},
-       year={2025},
-       eprint={2503.19794},
-       archivePrefix={arXiv},
-       primaryClass={cs.CV},
-       url={https://arxiv.org/abs/2503.19794},
- }
- ```
  ---
+ license: other
+ library_name: transformers
+ license_name: nvclv1
+ license_link: LICENSE
+ datasets:
+ - ILSVRC/imagenet-1k
+ pipeline_tag: image-feature-extraction
  ---
+
+ [**MambaVision: A Hybrid Mamba-Transformer Vision Backbone**](https://arxiv.org/abs/2407.08083)
+
+ ## Model Overview
+
+ We have developed the first hybrid model for computer vision that leverages the strengths of Mamba and Transformers. Our core contribution is a redesign of the Mamba formulation that enhances its capability for efficient modeling of visual features. In addition, we conducted a comprehensive ablation study on the feasibility of integrating Vision Transformers (ViT) with Mamba. Our results demonstrate that equipping the Mamba architecture with several self-attention blocks at the final layers greatly improves its capacity to capture long-range spatial dependencies. Based on these findings, we introduce a family of MambaVision models with a hierarchical architecture to meet various design criteria.
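+
+ As a rough illustration of this layout, the toy stage below places Mamba-style mixer blocks first and self-attention blocks in the final layers. It is a minimal sketch, not the released architecture: `MixerBlock` stands in for the redesigned SSM mixer with a simple gated depthwise convolution, and the block counts and attention ratio are illustrative assumptions.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class MixerBlock(nn.Module):
+     # Stand-in for the MambaVision mixer: the real block uses a redesigned
+     # selective-scan (SSM) layer; a gated depthwise conv keeps this sketch
+     # self-contained and runnable.
+     def __init__(self, dim):
+         super().__init__()
+         self.norm = nn.LayerNorm(dim)
+         self.dw = nn.Conv1d(dim, dim, kernel_size=3, padding=1, groups=dim)
+         self.gate = nn.Linear(dim, dim)
+         self.proj = nn.Linear(dim, dim)
+
+     def forward(self, x):  # x: (batch, tokens, channels)
+         h = self.norm(x)
+         h = self.dw(h.transpose(1, 2)).transpose(1, 2)
+         return x + self.proj(torch.nn.functional.silu(self.gate(h)) * h)
+
+ class AttnBlock(nn.Module):
+     # Standard self-attention block, used only at the end of a stage.
+     def __init__(self, dim, heads=8):
+         super().__init__()
+         self.norm = nn.LayerNorm(dim)
+         self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
+
+     def forward(self, x):
+         h = self.norm(x)
+         return x + self.attn(h, h, h, need_weights=False)[0]
+
+ def make_stage(dim, depth):
+     # The key finding: mixer blocks first, several self-attention blocks last.
+     n_attn = depth // 2  # illustrative ratio, not the paper's exact split
+     blocks = [MixerBlock(dim) for _ in range(depth - n_attn)]
+     blocks += [AttnBlock(dim) for _ in range(n_attn)]
+     return nn.Sequential(*blocks)
+
+ x = torch.randn(1, 49, 640)  # (batch, tokens, channels), e.g. a 7x7 stage-4 grid
+ print(make_stage(640, depth=4)(x).shape)  # torch.Size([1, 49, 640])
+ ```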
+
+ ## Model Performance
+
+ MambaVision achieves a new SOTA Pareto front in terms of Top-1 accuracy and throughput.
+
+ <p align="center">
+ <img src="https://github.com/NVlabs/MambaVision/assets/26806394/79dcf841-3966-4b77-883d-76cd5e1d4320" width=70% height=70%
+ class="center">
+ </p>
+
+ ## Model Usage
+
+ It is highly recommended to install the requirements for MambaVision by running the following:
+
+ ```bash
+ pip install mambavision
  ```
+
+ For each model, we offer two variants, one for image classification and one for feature extraction, each importable with a single line of code.
+
+ ### Image Classification
+
+ The following example demonstrates how MambaVision can be used for image classification.
+
+ Given the following image from the [COCO dataset](https://cocodataset.org/#home) val set as input:
+
+ <p align="center">
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/64414b62603214724ebd2636/4duSnqLf4lrNiAHczSmAN.jpeg" width=70% height=70%
+ class="center">
+ </p>
+
+ The following snippet can be used for image classification:
+
+ ```python
+ from transformers import AutoModelForImageClassification
+ from PIL import Image
+ from timm.data.transforms_factory import create_transform
+ import requests
+
+ model = AutoModelForImageClassification.from_pretrained("nvidia/MambaVision-B-1K", trust_remote_code=True)
+
+ # eval mode for inference
+ model.cuda().eval()
+
+ # prepare image for the model
+ url = 'http://images.cocodataset.org/val2017/000000020247.jpg'
+ image = Image.open(requests.get(url, stream=True).raw)
+ input_resolution = (3, 224, 224)  # MambaVision supports any input resolution
+
+ # build the eval transform from the model's own preprocessing config
+ transform = create_transform(input_size=input_resolution,
+                              is_training=False,
+                              mean=model.config.mean,
+                              std=model.config.std,
+                              crop_mode=model.config.crop_mode,
+                              crop_pct=model.config.crop_pct)
+
+ inputs = transform(image).unsqueeze(0).cuda()
+ # model inference
+ outputs = model(inputs)
+ logits = outputs['logits']
+ predicted_class_idx = logits.argmax(-1).item()
+ print("Predicted class:", model.config.id2label[predicted_class_idx])
+ ```
+
+ The predicted label is `brown bear, bruin, Ursus arctos`.
+
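+ If you also want class probabilities, a minimal follow-up sketch (reusing `logits` and `model` from the snippet above) prints the top-5 predictions:
+
+ ```python
+ import torch
+
+ # softmax over the ImageNet-1K classes, then take the 5 highest
+ probs = torch.softmax(logits, dim=-1)
+ top5 = torch.topk(probs, k=5)
+ for p, idx in zip(top5.values[0].tolist(), top5.indices[0].tolist()):
+     print(f"{model.config.id2label[idx]}: {p:.3f}")
+ ```
+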
+ ### Feature Extraction
+
+ MambaVision can also be used as a generic feature extractor.
+
+ Specifically, we can extract the outputs of each of the model's four stages, as well as the final average-pooled, flattened features.
+
+ The following snippet can be used for feature extraction:
+
+ ```python
+ from transformers import AutoModel
+ from PIL import Image
+ from timm.data.transforms_factory import create_transform
+ import requests
+
+ model = AutoModel.from_pretrained("nvidia/MambaVision-B-1K", trust_remote_code=True)
+
+ # eval mode for inference
+ model.cuda().eval()
+
+ # prepare image for the model
+ url = 'http://images.cocodataset.org/val2017/000000020247.jpg'
+ image = Image.open(requests.get(url, stream=True).raw)
+ input_resolution = (3, 224, 224)  # MambaVision supports any input resolution
+
+ # build the eval transform from the model's own preprocessing config
+ transform = create_transform(input_size=input_resolution,
+                              is_training=False,
+                              mean=model.config.mean,
+                              std=model.config.std,
+                              crop_mode=model.config.crop_mode,
+                              crop_pct=model.config.crop_pct)
+
+ inputs = transform(image).unsqueeze(0).cuda()
+ # model inference
+ out_avg_pool, features = model(inputs)
+ print("Size of the averaged pool features:", out_avg_pool.size())  # torch.Size([1, 640])
+ print("Number of stages in extracted features:", len(features))  # 4 stages
+ print("Size of extracted features in stage 1:", features[0].size())  # torch.Size([1, 80, 56, 56])
+ print("Size of extracted features in stage 4:", features[3].size())  # torch.Size([1, 640, 7, 7])
+ ```
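+
+ As a hypothetical downstream use, the flattened pooled features can feed a lightweight head such as a linear probe. The sketch below reuses `out_avg_pool` from the snippet above; the probe and its 10-class output are assumptions for illustration, not part of the released model:
+
+ ```python
+ import torch.nn as nn
+
+ # illustrative linear probe on the 640-dim pooled features of MambaVision-B
+ probe = nn.Linear(out_avg_pool.shape[-1], 10).cuda()
+ scores = probe(out_avg_pool)
+ print(scores.shape)  # torch.Size([1, 10])
+ ```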
+
+ ### License
+
+ [NVIDIA Source Code License-NC](https://huggingface.co/nvidia/MambaVision-T-1K/blob/main/LICENSE)