Modalities: Text, Video
Formats: webdataset
Languages: English
Libraries: Datasets, WebDataset
Alex committed · Commit 107f445 · 1 Parent: 2300025

new data format and 2025 videos
README.md CHANGED
@@ -30,9 +30,9 @@ git clone https://huggingface.co/datasets/hltcoe/wikivideo
 ```
 I would also tmux this because it might take a while.
 
-
 #### Step 3: Untar the videos
-In the `data/` folder, you will see places for `data/audios` and `data/videos`. You need to untar the videos and audios into these folders. The audios file is `audios.tar.gz` and the videos file is `videos.tar.gz`.
+##### Videos from MultiVENT2.0 (WikiVideo 2024)
+These videos range from 2015-2024.
 ```bash
 # untar the videos
 tar -xvzf videos.tar.gz -C data/videos
@@ -40,6 +40,15 @@ tar -xvzf videos.tar.gz -C data/videos
 tar -xvzf audios.tar.gz -C data/audios
 ```
 
+##### Videos from the year 2025 (WikiVideo25)
+These are the videos for the year 2025; they are also the videos used in the MAGMaR shared task at ACL 2026.
+```bash
+# untar the videos
+tar -xvzf videos_2025.tar.gz -C data/videos
+# untar the audios
+tar -xvzf audios_2025.tar.gz -C data/audios
+```
+
 #### Finish
 Now you should be done. You will see an `annotations` folder in the huggingface repo, but this also exists in the `data/` folder already, in the `data/wikivideo` directory.
 
@@ -49,42 +58,41 @@ In the `data/wikivideo` directory, you will find the file `final_data.json` whic
 ```json
 {
     "Wikipedia Title": {
-        "article": "The article text",
-        "query_id": "test_pt2_query_XXX",
+        "claims": [["claim1", "claim2", ...], ...],
         "original_article": ["sent1", "sent2", ...],
-        "audio_lists": [[true, false, true, ...], ...],
-        "video_lists": [[true, false, true, ...], ...],
-        "ocr_lists": [[true, false, true, ...], ...],
-        "neither_lists": [[true, false, true, ...], ...],
-        "video_ocr_lists": [[true, false, true, ...], ...],
-        "claims": [["claim1", "claim2", ...], ...],
-        "videos": {
-            "video_id": {
+        "claims_to_supporting_videos": {
+            "claim1": {
+                "supporting_videos": ["video_id1", "video_id2", ...],
+                "videos_modalities": {
+                    "video_id1": {
+                        "video": true,
+                        "audio": false,
+                        "ocr": true
+                    },
+                    ...
+                }
+            }
+        },
+        "article": "The article text",
+        "query_id": "query id",
+        "videos": {
+            "video_id1": {
                 "anon_scale_id": "XXX",
                 "language": "english",
                 "video_type": "Professional | Edited | Diet Raw | Raw",
-                "relevance": 3,
-            }
+                "relevance": 3
             }
         },
     ...
 }
 ```
 In this json, you see that the top-level key is the Wikipedia Article Title. Each other key is defined as follows:
-- `article`: This is the human-written article on the topic, written using the video data.
+- `claims`: The claims from the original article
+- `original_article`: The original article split into sentences
+- `claims_to_supporting_videos`: A mapping from each claim to the videos that support it, along with the modalities present in each video
+- `article`: This is the human-written article on the topic, written using the video data.
 - `query_id`: This is the query id for the article from the MultiVENT 2.0 dataset. This will be helpful when doing RAG experiments.
-- `original_article`: This is the original Wikipedia article from MegaWika 2.0. It is sentence-tokenized. Each sentence corresponds to an index in the audio_lists, video_lists, ocr_lists, neither_lists, video_ocr_lists, and claims lists.
-- `claims`: This is a list of lists of claims in the article. Each index corresponds to a sentence in the original article, and each index in the sublist corresponds to a human-written claim from that sentence. These claims correspond to the boolean elements in the audio_lists, video_lists, ocr_lists, neither_lists, and video_ocr_lists.
-- `audio_lists`: A list of lists of booleans. Each index corresponds to a sentence in the original article, and each index in the sublist corresponds to a claim. If the boolean is true, the claim is supported by the audio; otherwise it is not.
-- `video_lists`: Same structure; true means the claim is supported by the video.
-- `ocr_lists`: Same structure; true means the claim is supported by the OCR.
-- `neither_lists`: Same structure; true means the claim is not supported by any of the modalities.
-- `video_ocr_lists`: Same structure; true means the claim is supported by the video or the OCR.
-- `videos`: A dictionary of videos relevant to the query topic. The key is the video id and the value is a dictionary of video metadata, defined as follows:
-    - `anon_scale_id`: The anonymous ID used in MultiVENT 2.0 for the video. This will help you deduplicate the test set when doing video retrieval in the RAG experiments.
-    - `language`: The language of the video.
-    - `video_type`: The type of video.
-    - `relevance`: The relevance of the video to the article, ranging from 0-3.
+- `videos`: Metadata about each video used in the article
 
 # RAG Data
 The videos that we used as the distractor set (these videos are also included in videos.tar.gz) can be found here: MultiVENT 2.0 (https://huggingface.co/datasets/hltcoe/MultiVENT2.0)
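As a quick sanity check on the new annotation format described in the diff above, here is a minimal sketch of collecting, for each claim, its supporting videos and the modalities that support it. The helper name is my own, and the toy record below is only shaped like one `final_data` entry, not real data:

```python
def supporting_evidence(record):
    """For one article record in the new format, map each claim to
    (video_id, supporting-modalities) pairs."""
    evidence = {}
    for claim, info in record["claims_to_supporting_videos"].items():
        pairs = []
        for vid in info["supporting_videos"]:
            mods = info["videos_modalities"].get(vid, {})
            # keep only the modalities flagged true for this video
            pairs.append((vid, [m for m, ok in mods.items() if ok]))
        evidence[claim] = pairs
    return evidence

# Toy record shaped like one value of final_data (illustrative only)
record = {
    "claims": [["claim1"]],
    "original_article": ["sent1"],
    "claims_to_supporting_videos": {
        "claim1": {
            "supporting_videos": ["video_id1"],
            "videos_modalities": {
                "video_id1": {"video": True, "audio": False, "ocr": True}
            },
        }
    },
    "article": "The article text",
    "query_id": "query id",
    "videos": {
        "video_id1": {
            "anon_scale_id": "XXX",
            "language": "english",
            "video_type": "Raw",
            "relevance": 3,
        }
    },
}

print(supporting_evidence(record))
# {'claim1': [('video_id1', ['video', 'ocr'])]}
```

On real data you would load the file first, e.g. `record = json.load(open("annotations/final_data_2015-2025.json"))["Some Title"]`, and loop over all titles.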
annotations/final_data.json DELETED
The diff for this file is too large to render. See raw diff
 
annotations/final_data_2015-2025.json ADDED
The diff for this file is too large to render. See raw diff
 
annotations/final_data_2025.json ADDED
The diff for this file is too large to render. See raw diff
 
audios_2025.tar.gz ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:90a5faa0c0580c50c03df5ea6e3a67e739be34eb0e4644bdca9a82d902051dcb
+size 38665183
metadata_2025.tar.gz ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6a38daa99329606877a6cef700445e69875dd78e89839820c2a7e4f93c65b811
+size 544692
videos_2025.tar.gz ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f73fa64585658e025d4c2d35113a4a183553300d7d235082a1ef9dd96b7baf21
+size 656591186
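The `ADDED` archives above are git-LFS pointer files: the three `version`/`oid`/`size` lines stand in for the real payloads until `git lfs pull` fetches them. A small sketch (my own hypothetical helper, not part of this repo) for reading a pointer to learn the expected download size before fetching:

```python
def parse_lfs_pointer(text):
    """Parse a git-LFS pointer file into a dict of its key/value lines."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer contents as shown for videos_2025.tar.gz
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:f73fa64585658e025d4c2d35113a4a183553300d7d235082a1ef9dd96b7baf21
size 656591186
"""

info = parse_lfs_pointer(pointer)
print(int(info["size"]))  # 656591186 (roughly 0.66 GB to download)
```

The same `size`/`oid` fields can be checked against the downloaded archive to verify a complete, uncorrupted transfer.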