---
license: mit
language:
- en
pretty_name: LongTVQA+
---
# LongTVQA+ Dataset
This repository contains the LongTVQA+ dataset in JSON format.
LongTVQA+ is built upon the original TVQA+ dataset, with the key difference that it extends the question grounding scope from short clip-level segments (≈1 minute) to long episode-level videos (up to ~20 minutes).
This enables research on long-form video understanding, long-range temporal reasoning, and fine-grained spatio-temporal grounding in realistic TV show episodes.
In addition to the extended temporal scope, LongTVQA+ preserves and leverages the rich annotations provided in TVQA+, including:
- Frame-level bounding box annotations for visual concept words appearing in questions and correct answers.
- Refined timestamp annotations aligned with long episode-level context.
Please refer to the original TVQA+ paper for details on the annotation protocol and baseline evaluations.
## Files

- `LongTVQA_plus_train.json` — training split (23,545 QA samples)
- `LongTVQA_plus_val.json` — validation split (3,017 QA samples)
- `LongTVQA_plus_subtitle_clip_level.json` — clip-level subtitles indexed by video clip (4,198 clips)
- `LongTVQA_plus_subtitle_episode_level.json` — episode-level subtitles indexed by episode (220 episodes)
## QA JSON Format

Each entry in `LongTVQA_plus_train.json` and `LongTVQA_plus_val.json` is a dictionary with the following fields:
| Key | Type | Description |
|---|---|---|
| `qid` | int | Question ID (same as in TVQA+). |
| `q` | str | Question text. |
| `a0` ... `a4` | str | Five multiple-choice answers. |
| `answer` | str | Correct answer key (`"a0"`–`"a4"`). |
| `ts` | list | Refined timestamp annotation. For example, `[0, 5.4]` indicates the localized temporal span starts at 0s and ends at 5.4s. |
| `episode_name` | str | Episode ID (e.g. `s01e02`). |
| `occur_clip` | str | Video clip name, in the format `{show_name_abbr}_s{season}e{episode}_seg{segment}_clip_{clip}`. Episodes are typically divided into two segments separated by the opening theme. For The Big Bang Theory, `{show_name_abbr}` is omitted (e.g. `s05e02_seg02_clip_00`). |
| `bbox` | dict | Frame-level bounding box annotations sampled at 3 FPS. Keys are frame indices; values are lists of bounding boxes with `img_id`, `top`, `left`, `width`, `height`, and `label`. |
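As a convenience, the `occur_clip` naming scheme above can be parsed back into its components. The sketch below is not part of the dataset release; the character class used for the show abbreviation is an assumption, and the optional group handles The Big Bang Theory clips, whose names omit the abbreviation.

```python
import re

# Hypothetical parser for occur_clip names such as "s05e02_seg02_clip_00".
# The show abbreviation is optional (omitted for The Big Bang Theory);
# assuming abbreviations are lowercase letters.
CLIP_RE = re.compile(
    r"^(?:(?P<show>[a-z]+)_)?"            # optional show abbreviation
    r"s(?P<season>\d+)e(?P<episode>\d+)"  # season/episode, e.g. s05e02
    r"_seg(?P<segment>\d+)"               # segment within the episode
    r"_clip_(?P<clip>\d+)$"               # clip index within the segment
)

def parse_occur_clip(name: str) -> dict:
    """Return the components of an occur_clip name as a dict."""
    m = CLIP_RE.match(name)
    if m is None:
        raise ValueError(f"unrecognized clip name: {name}")
    return m.groupdict()

print(parse_occur_clip("s05e02_seg02_clip_00"))
```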
### QA Sample

```json
{
  "answer": "a1",
  "qid": 134094,
  "ts": [5.99, 11.98],
  "a1": "Howard is talking to Raj and Leonard",
  "a0": "Howard is talking to Bernadette",
  "a3": "Howard is talking to Leonard and Penny",
  "a2": "Howard is talking to Sheldon , and Raj",
  "q": "Who is Howard talking to when he is in the lab room ?",
  "episode_name": "s05e02",
  "occur_clip": "s05e02_seg02_clip_00",
  "a4": "Howard is talking to Penny and Bernadette",
  "bbox": {
    "14": [
      {
        "img_id": 14,
        "top": 153,
        "label": "Howard",
        "width": 180,
        "height": 207,
        "left": 339
      },
      {
        "img_id": 14,
        "top": 6,
        "label": "lab",
        "width": 637,
        "height": 354,
        "left": 3
      }
    ],
    "20": [],
    "26": [],
    "32": [],
    "38": []
  }
}
```
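Since the `answer` field stores the key of the correct choice rather than its text, resolving the answer takes one level of indirection. The sketch below illustrates this on a trimmed version of the sample above; the file path and the assumption that each split is a JSON array of entries are ours, so adjust them to your local copy.

```python
import json

def load_split(path):
    """Load a QA split; assumes the file is a JSON array of entries."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def correct_answer(sample):
    # "answer" holds the key ("a0"-"a4") of the correct choice,
    # so the answer text is looked up with one indirection.
    return sample[sample["answer"]]

# Trimmed version of the QA sample shown above:
sample = {
    "answer": "a1",
    "a0": "Howard is talking to Bernadette",
    "a1": "Howard is talking to Raj and Leonard",
}
print(correct_answer(sample))  # Howard is talking to Raj and Leonard
```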
## Subtitles JSON Format

Two subtitle files are provided to support different temporal granularities:
| File | Key | Value type | Description |
|---|---|---|---|
| `LongTVQA_plus_subtitle_clip_level.json` | `vid_name` | str | Clip-level subtitle text, with utterances separated by `<eos>`. |
| `LongTVQA_plus_subtitle_episode_level.json` | `episode_name` | str | Episode-level subtitle text, including clip markers such as `<seg01_clip_00>`, with utterances separated by `<eos>`. |
### Subtitles Sample

```json
{
  "s09e14_seg02_clip_04": "Sheldon : That 's a risk I'm willing to take ! <eos> Amy : Well , this is so nice . <eos> ..."
}
```
## License
This dataset is released under the MIT License.
## 📝 Citation
If you find our work helpful, please cite:
```bibtex
@misc{liu2025longvideoagentmultiagentreasoninglong,
  title={LongVideoAgent: Multi-Agent Reasoning with Long Videos},
  author={Runtao Liu and Ziyi Liu and Jiaqi Tang and Yue Ma and Renjie Pi and Jipeng Zhang and Qifeng Chen},
  year={2025},
  eprint={2512.20618},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2512.20618},
}
```