Richard1999 committed · verified · Commit 6d28ffd · 1 Parent(s): 250b810

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +14 -40
README.md CHANGED
@@ -1,3 +1,13 @@
 # Towards Event-oriented Long Video Understanding
 
 ![VideoQA](https://img.shields.io/badge/Task-VideoQA-red)
@@ -5,24 +15,18 @@
 
 ![Event-Bench](https://img.shields.io/badge/Dataset-EventBench-green)
 ![VIM](https://img.shields.io/badge/Model-VIM-green)
- <!-- <p align="center">
- <img src="./asset/icon.png" width="15%" height="15%">
- </p> -->
 
 
- <font size=3><div align='center' > [[📖 arXiv Paper]()] [[📊 Dataset]()] </div></font>
 
 ---
 
- ## 🔥 News
- * **`2024.06.20`** 🌟 Benchmark, evaluation code, training data, and model are released!
-
 
 
 ## 👀 Overview
 
- we introduce **Event-Bench**, an event-oriented long video understanding benchmark built on existing datasets and human annotations. **Event-Bench** consists of three event understanding abilities and six event-related tasks, including 2,190 test instances to comprehensively evaluate the ability to understand video events.
 <p align="center">
 <img src="./asset/fig_benchmark.jpg" width="100%" height="100%">
 </p>
@@ -33,7 +37,7 @@ we introduce **Event-Bench**, an event-oriented long video understanding benchma
 
 ## 🔍 Dataset
 Download the raw videos in VNBench from the [google drive link](https://drive.google.com/file/d/1wjjH2dK-KpaObFdS1yc-TBUTCvXsaLwc/view?usp=sharing).
- Download the annotation of VNBench from the [huggingface link](https://huggingface.co/datasets/Richard1999/Event-Bench)
 **License**:
 ```
 Event-Bench is only used for academic research. Commercial use in any form is prohibited.
@@ -41,39 +45,9 @@ Event-Bench is only used for academic research. Commercial use in any form is pr
 
 
 ## 🔮 Evaluation Pipeline
- **Prompt**:
-
- The common prompt used in our evaluation follows this format:
-
- ```
- <QUESTION>
- A. <OPTION1>
- B. <OPTION2>
- C. <OPTION3>
- D. <OPTION4>
- Answer with the option's letter from the given choices directly.
- ```
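The multiple-choice prompt template in the removed section is easy to reproduce programmatically. Here is a minimal sketch; `build_prompt` is a hypothetical helper for illustration, not a function from the Event-Bench repository:

```python
# Hypothetical helper that renders the multiple-choice prompt template
# described in the removed README; not part of the Event-Bench codebase.
def build_prompt(question, options):
    letters = "ABCD"
    lines = [question]
    lines += [f"{letters[i]}. {opt}" for i, opt in enumerate(options)]
    lines.append("Answer with the option's letter from the given choices directly.")
    return "\n".join(lines)
```

For example, `build_prompt("What happens first?", ["run", "jump", "sit", "eat"])` yields the question followed by options `A.` through `D.` and the answer instruction, one per line.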
 
 
-
- **Evaluation**:
-
- We recommend saving the inference results in the same format as [example_result.jsonl](./evaluation/example_result.jsonl). Once the model responses are in this format, run our evaluation script [evaluate_em.py](./evaluation/evaluate_em.py) to obtain the accuracy scores.
-
-
- ```bash
- python evaluate_em.py \
-     --path $RESULTS_FILE
- ```
-
- If you want to use GPT-4-turbo for evaluation, please use the script [evaluate_gpt.py](./evaluation/evaluate_gpt.py).
-
- ```bash
- python evaluate_gpt.py \
-     --input_file $INPUT_FILE \
-     --output_file $OUTPUT_FILE
- ```
-
 ## 📈 Experimental Results
 - **Evaluation results of different Video MLLMs.**
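The exact-match scoring that the removed `evaluate_em.py` instructions describe can be sketched as below. This is a hedged reconstruction: the `response`/`answer` field names are assumptions about the jsonl schema, not the repository's actual format.

```python
import json

def exact_match_accuracy(path):
    """Score a results file with one JSON object per line.

    Assumed (hypothetical) schema: {"response": "A", "answer": "A"}.
    The predicted letter is taken as the first character of the response.
    """
    correct = total = 0
    with open(path) as f:
        for line in f:
            item = json.loads(line)
            pred = item["response"].strip().upper()[:1]
            correct += pred == item["answer"].strip().upper()
            total += 1
    return correct / total if total else 0.0
```

Taking only the first character tolerates responses like `"B."` or `"b"`, matching the "answer with the option's letter" instruction in the prompt.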
 
 
+ ---
+ task_categories:
+ - visual-question-answering
+ language:
+ - en
+ size_categories:
+ - 1K<n<10K
+ license: mit
+ ---
+
 # Towards Event-oriented Long Video Understanding
 
 ![VideoQA](https://img.shields.io/badge/Task-VideoQA-red)
 
 ![Event-Bench](https://img.shields.io/badge/Dataset-EventBench-green)
 ![VIM](https://img.shields.io/badge/Model-VIM-green)
 
 
+ <font size=3><div align='center' > [[📖 arXiv Paper]()] </div></font>
 
 ---
 
 
 ## 👀 Overview
 
+ We introduce **Event-Bench**, an event-oriented long video understanding benchmark built on existing datasets and human annotations. **Event-Bench** consists of three event understanding abilities and six event-related tasks, including 2,190 test instances to comprehensively evaluate the ability to understand video events.
 <p align="center">
 <img src="./asset/fig_benchmark.jpg" width="100%" height="100%">
 </p>
 
 
 ## 🔍 Dataset
 Download the raw videos in VNBench from the [google drive link](https://drive.google.com/file/d/1wjjH2dK-KpaObFdS1yc-TBUTCvXsaLwc/view?usp=sharing).
+
 **License**:
 ```
 Event-Bench is only used for academic research. Commercial use in any form is prohibited.
 
 
 
 ## 🔮 Evaluation Pipeline
+ Please refer to https://github.com/RUCAIBox/Event-Bench
 
 
 ## 📈 Experimental Results
 - **Evaluation results of different Video MLLMs.**
53