- Video-Text-to-Text
- Transformers
- Safetensors
- English
- internvl_chat
- feature-extraction
- multimodal
- custom_code
- Eval Results (legacy)
Instructions to use OpenGVLab/InternVideo2_5_Chat_8B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - Transformers
How to use OpenGVLab/InternVideo2_5_Chat_8B with Transformers (a fuller loading sketch follows the notebook links below):

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "OpenGVLab/InternVideo2_5_Chat_8B",
    trust_remote_code=True,
    dtype="auto",
)
```

- Notebooks
  - Google Colab
  - Kaggle
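Beyond the auto-generated snippet above, a minimal loading sketch would usually also pull in the tokenizer and move the model to a GPU. This is a sketch, not the model card's official example: the bfloat16 dtype and the CUDA check are assumptions, and the repository's custom chat interface (pulled in via `trust_remote_code=True`) is not shown here.

```python
# Minimal loading sketch (assumptions: a recent transformers release, an
# available CUDA GPU, and bfloat16 as an illustrative dtype choice).
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "OpenGVLab/InternVideo2_5_Chat_8B"

# trust_remote_code=True is required because the repo ships custom
# internvl_chat modeling code.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,  # recent releases also accept dtype=...
).eval()

# Move to the GPU if one is available.
if torch.cuda.is_available():
    model = model.cuda()
```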
Update README.md
README.md CHANGED

```diff
@@ -100,7 +100,7 @@ We measured the average inference speed (tokens/s) of generating 1024 new tokens
 |Quantization | Speed (3022 tokens) | Speed (8192 tokens) w/o encoder| Speed (8192 tokens) w/ encoder|
 |--- |--- |---| ---|
 |BF16 | 33.40 | 31.91 | 21.33|
-|INT4 | - | 31.95 |
+|INT4 | - | 31.95 | 26.37|
 
 The profiling runs on a single A800-SXM4-80G GPU with PyTorch 2.4.0 and CUDA 12.1.
```
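The table above reports tokens/s for generating 1024 new tokens. As a rough, hypothetical illustration of how such a figure can be timed (not the script behind the table; a text-only `input_ids` prompt and a standard `generate()` interface are assumptions), one might wrap the generation call like this:

```python
# Hypothetical tokens/s measurement sketch. Assumes a CUDA GPU and a
# transformers-style generate() interface; the real benchmark may differ.
import time
import torch

@torch.inference_mode()
def tokens_per_second(model, input_ids, max_new_tokens=1024):
    torch.cuda.synchronize()
    start = time.perf_counter()
    output_ids = model.generate(
        input_ids,
        max_new_tokens=max_new_tokens,
        do_sample=False,  # greedy decoding for a deterministic run
    )
    torch.cuda.synchronize()  # ensure all GPU work has finished before stopping the clock
    elapsed = time.perf_counter() - start
    new_tokens = output_ids.shape[1] - input_ids.shape[1]
    return new_tokens / elapsed
```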