---
license: mit
language:
- en
pretty_name: LongTVQA+
---

# LongTVQA+ Dataset

This repository contains the **LongTVQA+** dataset in JSON format.

LongTVQA+ is built upon the original **TVQA+** dataset, with the key difference that it **extends the question grounding scope from short clip-level segments (≈1 minute) to long episode-level videos (up to ~20 minutes)**.
This enables research on long-form video understanding, long-range temporal reasoning, and fine-grained spatio-temporal grounding in realistic TV show episodes.

In addition to the extended temporal scope, LongTVQA+ preserves and leverages the rich annotations provided in TVQA+, including:

1. Frame-level bounding box annotations for visual concept words appearing in questions and correct answers.
2. Refined timestamp annotations aligned with the long episode-level context.

Please refer to the original **TVQA+ paper** for details on the annotation protocol and baseline evaluations.

---

## Files

- `LongTVQA_plus_train.json` — training split (23,545 QA samples)
- `LongTVQA_plus_val.json` — validation split (3,017 QA samples)
- `LongTVQA_plus_subtitle_clip_level.json` — clip-level subtitles indexed by video clip (4,198 clips)
- `LongTVQA_plus_subtitle_episode_level.json` — episode-level subtitles indexed by episode (220 episodes)

---

## QA JSON Format

Each entry in `LongTVQA_plus_train.json` and `LongTVQA_plus_val.json` is a dictionary with the following fields:

| Key | Type | Description |
| --- | --- | --- |
| `qid` | int | Question ID (same as in TVQA+). |
| `q` | str | Question text. |
| `a0` ... `a4` | str | Five multiple-choice answers. |
| `answer` | str | Correct answer key (`"a0"`–`"a4"`). |
| `ts` | list | Refined timestamp annotation. For example, `[0, 5.4]` indicates that the localized temporal span starts at 0 s and ends at 5.4 s. |
| `episode_name` | str | Episode ID (e.g. `s01e02`). |
| `occur_clip` | str | Video clip name. Format: `{show_name_abbr}_s{season}e{episode}_seg{segment}_clip_{clip}`. Episodes are typically divided into two segments separated by the opening theme. For **The Big Bang Theory**, `{show_name_abbr}` is omitted (e.g. `s05e02_seg02_clip_00`). |
| `bbox` | dict | Frame-level bounding box annotations sampled at 3 FPS. Keys are frame indices; values are lists of bounding boxes with `img_id`, `top`, `left`, `width`, `height`, and `label`. |
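
The `occur_clip` naming scheme can be parsed mechanically. Below is a minimal sketch (the regex and helper name are ours, not part of the dataset tooling), assuming only the format string documented in the table; the `friends` abbreviation in the comment is a hypothetical example:

```python
import re

# Hypothetical helper: parse an `occur_clip` name of the form
#   {show_name_abbr}_s{season}e{episode}_seg{segment}_clip_{clip}
# The show abbreviation group is optional, since it is omitted for
# The Big Bang Theory (e.g. "s05e02_seg02_clip_00"); other shows would
# carry a prefix such as "friends_s01e02_seg01_clip_03".
CLIP_RE = re.compile(
    r"^(?:(?P<show>[a-z]+)_)?"
    r"s(?P<season>\d+)e(?P<episode>\d+)"
    r"_seg(?P<segment>\d+)_clip_(?P<clip>\d+)$"
)

def parse_occur_clip(name: str) -> dict:
    m = CLIP_RE.match(name)
    if m is None:
        raise ValueError(f"unrecognized clip name: {name!r}")
    return m.groupdict()

print(parse_occur_clip("s05e02_seg02_clip_00"))
# -> {'show': None, 'season': '05', 'episode': '02', 'segment': '02', 'clip': '00'}
```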

---

### QA Sample

```json
{
  "answer": "a1",
  "qid": 134094,
  "ts": [5.99, 11.98],
  "a1": "Howard is talking to Raj and Leonard",
  "a0": "Howard is talking to Bernadette",
  "a3": "Howard is talking to Leonard and Penny",
  "a2": "Howard is talking to Sheldon , and Raj",
  "q": "Who is Howard talking to when he is in the lab room ?",
  "episode_name": "s05e02",
  "occur_clip": "s05e02_seg02_clip_00",
  "a4": "Howard is talking to Penny and Bernadette",
  "bbox": {
    "14": [
      {
        "img_id": 14,
        "top": 153,
        "label": "Howard",
        "width": 180,
        "height": 207,
        "left": 339
      },
      {
        "img_id": 14,
        "top": 6,
        "label": "lab",
        "width": 637,
        "height": 354,
        "left": 3
      }
    ],
    "20": [],
    "26": [],
    "32": [],
    "38": []
  }
}
```
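
Entries like the one above can be consumed with nothing beyond the standard library. The following is a minimal sketch using an abbreviated copy of the sample entry; in practice you would `json.load` one of the split files (that each split is a list of such dicts is our assumption, not stated in this card):

```python
import json

# Abbreviated copy of the QA Sample above, parsed in place for illustration.
sample = json.loads("""
{
  "answer": "a1",
  "qid": 134094,
  "ts": [5.99, 11.98],
  "a0": "Howard is talking to Bernadette",
  "a1": "Howard is talking to Raj and Leonard",
  "a2": "Howard is talking to Sheldon , and Raj",
  "a3": "Howard is talking to Leonard and Penny",
  "a4": "Howard is talking to Penny and Bernadette",
  "q": "Who is Howard talking to when he is in the lab room ?",
  "episode_name": "s05e02",
  "occur_clip": "s05e02_seg02_clip_00",
  "bbox": {"14": [{"img_id": 14, "top": 153, "left": 339,
                   "width": 180, "height": 207, "label": "Howard"}],
           "20": []}
}
""")

correct_text = sample[sample["answer"]]   # "answer" holds the key, e.g. "a1"
start_s, end_s = sample["ts"]             # grounded span in seconds

# Collect every labeled box as (frame_index, label, (left, top, width, height)).
boxes = [(int(fidx), b["label"], (b["left"], b["top"], b["width"], b["height"]))
         for fidx, blist in sample["bbox"].items() for b in blist]

print(correct_text)   # -> Howard is talking to Raj and Leonard
print(boxes)          # -> [(14, 'Howard', (339, 153, 180, 207))]
```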

---

## Subtitles JSON Format

Two subtitle files are provided to support different temporal granularities:

| File | Key | Type | Description |
| --- | --- | --- | --- |
| `LongTVQA_plus_subtitle_clip_level.json` | `vid_name` | str | Clip-level subtitle text, with utterances separated by `<eos>`. |
| `LongTVQA_plus_subtitle_episode_level.json` | `episode_name` | str | Episode-level subtitle text, including clip markers such as `<seg01_clip_00>`, with utterances separated by `<eos>`. |

---

### Subtitles Sample

```json
{
  "s09e14_seg02_clip_04": "Sheldon : That 's a risk I'm willing to take ! <eos> Amy : Well , this is so nice . <eos> ..."
}
```
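
An episode-level subtitle string can be split back into per-clip utterance lists using the markers described above. A minimal sketch (the function name and the short demo string are ours; it assumes markers have exactly the `<segXX_clip_YY>` shape and `<eos>` separates utterances):

```python
import re

def split_episode_subs(text: str) -> dict:
    """Split an episode-level subtitle string into {clip_name: [utterances]}.

    Clip markers look like <seg01_clip_00>; utterances within a clip are
    separated by <eos>.
    """
    clips, current = {}, None
    # A capturing group in re.split keeps the markers in the output.
    for token in re.split(r"(<seg\d+_clip_\d+>)", text):
        token = token.strip()
        if not token:
            continue
        if re.fullmatch(r"<seg\d+_clip_\d+>", token):
            current = token.strip("<>")
            clips[current] = []
        elif current is not None:
            clips[current].extend(
                u.strip() for u in token.split("<eos>") if u.strip())
    return clips

# Synthetic demo string, not taken from the dataset.
demo = ("<seg01_clip_00> Sheldon : Hello . <eos> Amy : Hi . <eos> "
        "<seg01_clip_01> Leonard : Okay . <eos>")
print(split_episode_subs(demo))
# -> {'seg01_clip_00': ['Sheldon : Hello .', 'Amy : Hi .'],
#     'seg01_clip_01': ['Leonard : Okay .']}
```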

---

## 📝 Citation

If you find our work helpful, please cite:

```bibtex
@misc{liu2025longvideoagentmultiagentreasoninglong,
  title={LongVideoAgent: Multi-Agent Reasoning with Long Videos},
  author={Runtao Liu and Ziyi Liu and Jiaqi Tang and Yue Ma and Renjie Pi and Jipeng Zhang and Qifeng Chen},
  year={2025},
  eprint={2512.20618},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2512.20618},
}
```

---

## License

This dataset is released under the **MIT License**.