---
dataset_info:
  features:
    - name: video_id
      dtype: string
    - name: video_start
      dtype: string
    - name: video_end
      dtype: string
    - name: merged_caption
      dtype: string
    - name: video_caption
      dtype: string
    - name: video_transcript
      dtype: string
    - name: clip_timestamps
      list:
        list: float64
    - name: clip_captions
      list: string
    - name: clip_transcripts
      list: string
    - name: frame_timestamps
      list: float64
    - name: frame_indices
      list: int64
    - name: frame_captions
      list: string
    - name: video_frame_merged_caption
      dtype: string
    - name: annotation_score
      dtype: int64
  splits:
    - name: train
      num_bytes: 3297177981
      num_examples: 104002
    - name: val
      num_bytes: 104656523
      num_examples: 3815
  download_size: 1676535243
  dataset_size: 3401834504
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: val
        path: data/val-*
license: odc-by
---

# Molmo2-Cap

Molmo2-Cap is a dataset of very long (averaging 900 words), detailed video captions. It can be used to pre-train and fine-tune vision-language models.

Molmo2-Cap is part of the Molmo2 dataset collection and was used to train the Molmo2 family of models.


## Data Format

Videos are stored as YouTube video IDs and must be downloaded separately.
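For reference, a video's watch URL can be reconstructed from its `video_id`; the snippet below is a minimal sketch (the example record is made up, and the choice of downloader tool is up to you):

```python
# Minimal sketch: turn a dataset video_id into the standard YouTube watch URL.
# The record below is a hypothetical example, not a real dataset entry.

def youtube_url(video_id: str) -> str:
    """Build the watch URL for a YouTube video ID."""
    return f"https://www.youtube.com/watch?v={video_id}"

record = {"video_id": "dQw4w9WgXcQ"}
url = youtube_url(record["video_id"])
print(url)  # https://www.youtube.com/watch?v=dQw4w9WgXcQ
```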

### Clip Annotations

- `clip_timestamps`: a list of the video clips' start and end timestamps.
- `clip_transcripts`: a list of audio transcripts of an annotator describing each video clip.
- `clip_captions`: a list of clean captions converted from the above audio transcripts using an LLM.

### Frame Annotations

- `frame_timestamps` / `frame_indices`: lists of timestamps/indices of the sampled frames.
- `frame_captions`: a list of frame captions generated by Molmo.

### Video Annotations

- `video_transcript`: audio transcript of an annotator describing the video.
- `video_caption`: a clean caption converted from the above audio transcript using an LLM.
- `merged_caption`: a long caption combining `clip_captions` and `video_caption` using an LLM.
- `video_frame_merged_caption`: a long caption combining `clip_captions`, `video_caption`, and `frame_captions` using an LLM.

We suggest using either `merged_caption` or `video_frame_merged_caption`.

### Other

- `annotation_score`: the quality of the caption on a 0-10 scale, as judged by an LLM. A high score indicates a good amount of visual detail; a low score indicates off-topic meta-commentary or content irrelevant to the video.
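One common use of this field is to filter for high-quality examples before training. A minimal sketch, assuming a threshold of 7 (the threshold and toy records are illustrative, not a recommendation from the dataset authors):

```python
# Minimal sketch: keep only captions whose annotation_score clears a threshold.
# The threshold (>= 7) and the toy records are assumptions for illustration.

records = [
    {"merged_caption": "Detailed caption ...", "annotation_score": 9},
    {"merged_caption": "Off-topic commentary ...", "annotation_score": 3},
]

kept = [r["merged_caption"] for r in records if r["annotation_score"] >= 7]
print(len(kept))  # 1
```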

## License

This dataset is licensed under ODC-BY. It is intended for research and educational use in accordance with Ai2's Responsible Use Guidelines. This dataset includes text captions generated with GPT-4.1 and GPT-5, which are subject to OpenAI's Terms of Use. Some contents of this dataset have been created based on the use of third-party datasets that are subject to academic and noncommercial research use only. Please refer to the Source Attribution file for more information about these sources.