---
language:
- zh
- en
license: cc-by-sa-4.0
task_categories:
- video-text-to-text
tags:
- multimodal
- video-understanding
- short-video
- benchmark
- e-commerce
- vqa
library_name: transformers
---
<font size=3><div align='center' > [[🍎 Home Page](https://kwai-keye.github.io/)] [[📖 Technical Report](https://huggingface.co/papers/2507.01949)] [[📊 Models](https://huggingface.co/Kwai-Keye)] [[🚀 Demo](https://huggingface.co/spaces/Kwai-Keye/Keye-VL-8B-Preview)] </div></font>
This repository contains **KC-MMBench**, a new benchmark dataset meticulously tailored for real-world short-video scenarios, as presented in the paper "[Kwai Keye-VL Technical Report](https://huggingface.co/papers/2507.01949)". Constructed from [Kuaishou](https://www.kuaishou.com/) short video data, KC-MMBench comprises 6 distinct datasets designed to evaluate the performance of Vision-Language Models (VLMs) like [**Kwai Keye-VL-8B**](https://huggingface.co/Kwai-Keye/Keye-VL-8B-Preview), Qwen2.5-VL, and InternVL in comprehending dynamic, information-dense short-form videos.
For the associated code, detailed documentation, and evaluation scripts, please refer to the official [Kwai Keye-VL GitHub repository](https://github.com/Kwai-Keye/Kwai-Keye-VL).
To use KC-MMBench, download it with:
```bash
git clone https://huggingface.co/datasets/Kwai-Keye/KC-MMbench
```
## Tasks
| Task | Description |
| -------------- | --------------------------------------------------------------------------- |
| CPV | The task of predicting product attributes in e-commerce. |
| Hot_Videos_Aggregation | The task of determining whether multiple videos belong to the same topic. |
| Collection_Order | The task of determining the logical order between multiple videos with the same topic. |
| Pornographic_Comment | The task of determining whether short-video comments contain pornographic content. |
| High_Like | A binary classification task to predict whether a short video achieves a high like rate. |
| SPU | The task of determining whether two items are the same product in e-commerce. |
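The yes-or-no tasks (Pornographic_Comment, High_Like, SPU) lend themselves to simple accuracy scoring. Below is a minimal sketch of such a metric; the `yorn_accuracy` helper is hypothetical, and the official metrics live in the repository's evaluation scripts:

```python
def yorn_accuracy(predictions, labels):
    """Accuracy for yes-or-no (YORN) style tasks: the fraction of
    predictions that exactly match the gold 'yes'/'no' labels."""
    assert len(predictions) == len(labels), "predictions and labels must align"
    correct = sum(
        p.strip().lower() == l.strip().lower()
        for p, l in zip(predictions, labels)
    )
    return correct / len(labels)

# Toy example: 3 of 4 answers match the gold labels
print(yorn_accuracy(["yes", "no", "yes", "no"], ["yes", "no", "no", "no"]))  # 0.75
```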
## Performance
| Task | Qwen2.5-VL-3B | Qwen2.5-VL-7B | InternVL-3-8B | MiMo-VL-7B | Kwai Keye-VL-8B |
| -------------- | ------------- | ------------- | ------------- | ------- | ---- |
| CPV | 12.39 | 20.08 | 14.95 | 16.66 | 55.13 |
| Hot_Videos_Aggregation | 42.38 | 46.35 | 52.31 | 49.00 | 54.30 |
| Collection_Order | 36.88 | 59.83 | 64.75 | 78.68 | 84.43 |
| Pornographic_Comment | 56.61 | 56.08 | 57.14 | 68.25 | 71.96 |
| High_Like | 48.85 | 47.94 | 47.03 | 51.14 | 55.25 |
| SPU | 74.09 | 81.34 | 75.64 | 81.86 | 87.05 |
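As a rough summary of the table above, the six per-task scores can be averaged per model. The unweighted mean below is computed purely for illustration and is not an official aggregate metric of the benchmark:

```python
# Per-task scores copied from the performance table above
# (order: CPV, Hot_Videos_Aggregation, Collection_Order,
#  Pornographic_Comment, High_Like, SPU)
scores = {
    "Qwen2.5-VL-3B":   [12.39, 42.38, 36.88, 56.61, 48.85, 74.09],
    "Qwen2.5-VL-7B":   [20.08, 46.35, 59.83, 56.08, 47.94, 81.34],
    "InternVL-3-8B":   [14.95, 52.31, 64.75, 57.14, 47.03, 75.64],
    "MiMo-VL-7B":      [16.66, 49.00, 78.68, 68.25, 51.14, 81.86],
    "Kwai Keye-VL-8B": [55.13, 54.30, 84.43, 71.96, 55.25, 87.05],
}
for model, vals in scores.items():
    print(f"{model}: {sum(vals) / len(vals):.2f}")
```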
## Usage
This section provides a quick guide on how to interact with models using the `keye-vl-utils` library, which is essential for processing and integrating visual language information with Keye Series Models like Kwai Keye-VL-8B.
### Install `keye-vl-utils`
First, install the necessary utility library:
```bash
pip install keye-vl-utils
```
### Keye-VL Inference Example
Here's an example of performing inference with a Kwai Keye-VL model, demonstrating how to prepare inputs for both image and video scenarios.
```python
from transformers import AutoModel, AutoProcessor
from keye_vl_utils import process_vision_info
# default: Load the model on the available device(s)
model_path = "Kwai-Keye/Keye-VL-8B-Preview"
model = AutoModel.from_pretrained(
    model_path,
    torch_dtype="auto",
    device_map="auto",  # places the model automatically; no manual .to('cuda') needed
    attn_implementation="flash_attention_2",
    trust_remote_code=True,
)
# Example messages demonstrating various input types (image, video)
messages = [
# Image Input Examples
[{"role": "user", "content": [{"type": "image", "image": "file:///path/to/your/image.jpg"}, {"type": "text", "text": "Describe this image."}]}],
[{"role": "user", "content": [{"type": "image", "image": "http://path/to/your/image.jpg"}, {"type": "text", "text": "Describe this image."}]}],
[{"role": "user", "content": [{"type": "image", "image": "data:image;base64,/9j/..."}, {"type": "text", "text": "Describe this image."}]}],
# Video Input Examples (most relevant for KC-MMBench)
[{"role": "user", "content": [{"type": "video", "video": "file:///path/to/video1.mp4"}, {"type": "text", "text": "Describe this video."}]}],
[{"role": "user", "content": [{"type": "video", "video": ["file:///path/to/extracted_frame1.jpg", "file:///path/to/extracted_frame2.jpg", "file:///path/to/extracted_frame3.jpg"],}, {"type": "text", "text": "Describe this video."},],}],
[{"role": "user", "content": [{"type": "video", "video": "file:///path/to/video1.mp4", "fps": 2.0, "resized_height": 280, "resized_width": 280}, {"type": "text", "text": "Describe this video."}]}],
]
processor = AutoProcessor.from_pretrained(model_path)
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
images, videos, video_kwargs = process_vision_info(messages, return_video_kwargs=True)
inputs = processor(text=text, images=images, videos=videos, padding=True, return_tensors="pt", **video_kwargs).to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=256)
# Trim the prompt tokens and decode only the newly generated text
generated_ids_trimmed = [out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)]
output_text = processor.batch_decode(generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(output_text)
```
### Evaluation
For detailed instructions on how to evaluate models using the KC-MMBench datasets, including setup and running evaluation scripts, please refer to the `evaluation/KC-MMBench/README.md` file in the official [Kwai Keye-VL GitHub repository](https://github.com/Kwai-Keye/Kwai-Keye-VL/tree/main/evaluation/KC-MMBench).
Below is an example configuration for evaluating VLMs on our datasets:
```python
{
    "model": "...",  # Specify your model
    "data": {
        "CPV": {
            "class": "KwaiVQADataset",
            "dataset": "CPV"
        },
        "Hot_Videos_Aggregation": {
            "class": "KwaiVQADataset",
            "dataset": "Hot_Videos_Aggregation"
        },
        "Collection_Order": {
            "class": "KwaiVQADataset",
            "dataset": "Collection_Order"
        },
        "Pornographic_Comment": {
            "class": "KwaiYORNDataset",
            "dataset": "Pornographic_Comment"
        },
        "High_like": {
            "class": "KwaiYORNDataset",
            "dataset": "High_like"
        },
        "SPU": {
            "class": "KwaiYORNDataset",
            "dataset": "SPU"
        }
    }
}
```
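Since each entry in the `data` mapping pairs a dataset name with an evaluator class, a small sanity check over the config can catch typos before launching a run. This is a sketch only; the `validate_config` helper is hypothetical, and the class names are taken from the example above:

```python
# Example config mirroring the structure above (truncated to two entries)
config = {
    "model": "...",
    "data": {
        "CPV": {"class": "KwaiVQADataset", "dataset": "CPV"},
        "SPU": {"class": "KwaiYORNDataset", "dataset": "SPU"},
    },
}

KNOWN_CLASSES = {"KwaiVQADataset", "KwaiYORNDataset"}

def validate_config(cfg):
    """Return a list of problems found in the evaluation config (empty if OK)."""
    problems = []
    for name, entry in cfg.get("data", {}).items():
        if entry.get("class") not in KNOWN_CLASSES:
            problems.append(f"{name}: unknown class {entry.get('class')!r}")
        if entry.get("dataset") != name:
            problems.append(f"{name}: dataset field {entry.get('dataset')!r} does not match key")
    return problems

print(validate_config(config))  # [] when every entry is consistent
```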