---
dataset_info:
  features:
  - name: video_id
    dtype: string
  - name: video_start
    dtype: string
  - name: video_end
    dtype: string
  - name: merged_caption
    dtype: string
  - name: video_caption
    dtype: string
  - name: video_transcript
    dtype: string
  - name: clip_timestamps
    list:
      list: float64
  - name: clip_captions
    list: string
  - name: clip_transcripts
    list: string
  - name: frame_timestamps
    list: float64
  - name: frame_indices
    list: int64
  - name: frame_captions
    list: string
  - name: video_frame_merged_caption
    dtype: string
  - name: annotation_score
    dtype: int64
  splits:
  - name: train
    num_bytes: 3297177981
    num_examples: 104002
  - name: val
    num_bytes: 104656523
    num_examples: 3815
  download_size: 1676535243
  dataset_size: 3401834504
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: val
    path: data/val-*
license: odc-by
---


# Molmo2-Cap
Molmo2-Cap is a dataset of very long, detailed video captions (900 words on average).
It can be used to pre-train and fine-tune vision-language models. 

Molmo2-Cap is part of the [Molmo2 dataset collection](https://huggingface.co/collections/allenai/molmo2-data) and was used to train the [Molmo2 family of models](https://huggingface.co/collections/allenai/molmo2).

Quick links:
- 📃 [Paper](https://allenai.org/papers/molmo2)
- 🎥 [Blog with Videos](https://allenai.org/blog/molmo2)


## Data Format
Videos are referenced by their YouTube video IDs and need to be downloaded separately.
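
For reference, here is a minimal loading sketch with the 🤗 `datasets` library; the repository id below is an assumption about this dataset's Hub path, so adjust it if needed:

```python
from datasets import load_dataset

# Assumed Hub path for this dataset; adjust if the repository id differs.
ds = load_dataset("allenai/Molmo2-Cap", split="train")

example = ds[0]
# The videos themselves are not included; download them using the YouTube id.
print(example["video_id"], example["video_start"], example["video_end"])
print(example["merged_caption"][:300])
```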

### Clip Annotations

`clip_timestamps`: a list of video clips' start and end timestamps.

`clip_transcripts`: a list of audio transcripts of an annotator describing each video clip.

`clip_captions`: a list of clean captions converted from the above audio transcripts using an LLM.
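
The clip-level fields line up index by index. A small sketch, continuing from the loading example above and assuming each `clip_timestamps` entry is a `[start, end]` pair in seconds:

```python
example = ds[0]

# Each clip has a [start, end] span, a raw spoken transcript, and a cleaned caption.
for (start, end), transcript, caption in zip(
    example["clip_timestamps"],
    example["clip_transcripts"],
    example["clip_captions"],
):
    print(f"[{start:7.1f}s - {end:7.1f}s] {caption[:80]}")
```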

### Frame Annotations

`frame_timestamps`/`frame_indices`: lists of the timestamps/indices of the sampled frames.

`frame_captions`: a list of frame captions generated by Molmo.
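
A minimal sketch for pulling the annotated frames out of a locally downloaded video with OpenCV; the local file path is hypothetical, and the timestamps are assumed to be in seconds:

```python
import cv2

example = ds[0]  # continuing from the loading sketch above

# Hypothetical local path; the YouTube video must be downloaded separately.
cap = cv2.VideoCapture(f"videos/{example['video_id']}.mp4")
for ts, caption in zip(example["frame_timestamps"], example["frame_captions"]):
    cap.set(cv2.CAP_PROP_POS_MSEC, ts * 1000)  # seek to the annotated timestamp (s -> ms)
    ok, frame = cap.read()
    if ok:
        print(f"frame at {ts:.2f}s: {caption[:80]}")
cap.release()
```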

### Video Annotations

`video_transcript`: the audio transcript of an annotator describing the whole video.

`video_caption`: a clean caption converted from the above audio transcript using an LLM.

`merged_caption`: a long caption combining `clip_captions` and `video_caption` using an LLM.

`video_frame_merged_caption`: a long caption combining `clip_captions`, `video_caption`, and `frame_captions` using an LLM.

We suggest using either `merged_caption` or `video_frame_merged_caption`.

### Other

`annotation_score`: an LLM-judged quality score (0-10) for the caption. A high score indicates a good amount of visual detail; a low score indicates off-topic meta-commentary or content irrelevant to the video.
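
If you want to filter on this score, here is a small sketch; the threshold of 7 is an arbitrary illustration, not a recommendation from the paper:

```python
# Keep only examples whose caption the LLM judge scored at least 7 out of 10.
high_quality = ds.filter(lambda ex: ex["annotation_score"] >= 7)
print(f"kept {len(high_quality)} of {len(ds)} examples")
```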

## License
This dataset is licensed under ODC-BY. It is intended for research and educational use in accordance with Ai2's [Responsible Use Guidelines](https://allenai.org/responsible-use). This dataset includes text captions generated by GPT-4.1 and GPT-5, which are subject to OpenAI's [Terms of Use](https://openai.com/policies/row-terms-of-use/). Some contents of this dataset were created based on the use of third-party datasets that are subject to academic and noncommercial research use only. Please refer to the Source Attribution file for more information about these sources.