Temporally and Spatially Aligned Infrared-Visible Video Dataset
English | 简体中文
VidLLVIP is an unofficial processed paired infrared-visible video dataset derived from the raw LLVIP videos. The dataset provides temporally aligned, spatially registered, quality-checked, 5-second video pairs for video fusion, cross-modal registration, and multimodal video understanding.
VidLLVIP is derived from LLVIP. Please follow the original LLVIP license and citation requirements when using or redistributing this dataset.
📰 News
- 🚀 2026-05-06: We released the VidLLVIP dataset.
- 🎉 2026-02-05: Our multimodal video fusion paper CMVF was accepted by Information Fusion. The code is available in the CMVF GitHub repository.
Download
The large video files are distributed separately:
- Option 1: Download from Hugging Face
- Option 2: Download from Quark Drive
After downloading:

- To reproduce the full pipeline, extract `datamaker.zip` and `matrix.zip` into the corresponding `datamaker/` directory, and extract `raw.zip` into the corresponding `raw/` directory.
- To use the final dataset directly, extract `dataset.zip` into the corresponding `dataset/` directory.
Highlights
- Built from `14` source infrared-visible video pairs, numbered `01` to `14`.
- Provides `894` final 5-second paired clips, with one IR video and one VI video per sample.
- Uses same-name files under `dataset/ir` and `dataset/vi` as the pairing rule.
- Final clip format: `1280 x 1024`, `25` FPS, `125` frames, no audio.
- Includes scripts for temporal alignment, spatial registration, checkerboard quality inspection, and 5-second clip generation.
Dataset Snapshot
| Item | Value |
|---|---|
| Source | LLVIP raw infrared-visible videos |
| Processed source pairs | 14 pairs, IDs 01-14 |
| Final paired clips | 894 pairs |
| Modalities | Infrared (ir) and visible (vi) |
| Clip length | 5 seconds |
| Resolution | 1280 x 1024 |
| Frame rate | 25 FPS |
| Frames per clip | 125 |
| Pairing rule | Same file name under dataset/ir and dataset/vi |
Repository Layout
```
VidLLVIP/
  README.md
  README_zh-CN.md
  raw/
    videos/{ir,vi}/          # Original LLVIP videos before alignment
  datamaker/
    01_time_align.py         # Temporal alignment
    02_space_align.py        # Spatial registration
    03_checkerboard.py       # Checkerboard QA videos
    04_split_5s_videos.py    # 5-second clip generation
    requirements.txt
    matrix/                  # 3x3 perspective matrices for IDs 01-14
    01_align/                # Time-aligned full videos and timestamp sheets
    02_warp/                 # Spatially registered full videos
    03_ckboard/              # Checkerboard QA videos
  dataset/
    ir/                      # Final infrared clips
    vi/                      # Final visible clips
  figs/                      # README figures
```
Data Format
Final clips are stored as paired files:
```
dataset/
  ir/01_0000_0005.mp4
  vi/01_0000_0005.mp4
```
The file name format is:
`{source_id}_{start_second}_{end_second}.mp4`
For example, 01_0000_0005.mp4 means source video 01, from 0s to 5s. The same file name in dataset/ir and dataset/vi forms one paired sample.
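The naming scheme can be parsed programmatically. The helper below is a hypothetical sketch (not part of the released scripts) that splits a clip name such as `01_0000_0005.mp4` into its source ID and start/end seconds:

```python
def parse_clip_name(name: str) -> tuple[str, int, int]:
    """Parse a VidLLVIP clip name into (source_id, start_second, end_second)."""
    stem = name.rsplit(".", 1)[0]            # drop the .mp4 extension
    source_id, start, end = stem.split("_")  # e.g. "01", "0000", "0005"
    return source_id, int(start), int(end)

print(parse_clip_name("01_0000_0005.mp4"))  # ('01', 0, 5)
```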
Quick Start
If you only need the final paired clips, read dataset/ir and dataset/vi directly:
```python
from pathlib import Path

root = Path("dataset")
for ir_path in sorted((root / "ir").glob("*.mp4")):
    vi_path = root / "vi" / ir_path.name
    assert vi_path.exists(), f"Missing visible pair: {vi_path}"
    # Load ir_path and vi_path with your video reader.
```
To reproduce the preprocessing pipeline, install the Python dependencies:
```shell
cd datamaker
conda create -n vidllvip python=3.10 -y
conda activate vidllvip
pip install -r requirements.txt
```
The system also needs ffmpeg and ffprobe on PATH.
Reproduce the Dataset
1. Temporal Alignment
python 01_time_align.py
Inputs:
- `raw/videos/ir/{id}.mp4`
- `raw/videos/vi/{id}.mp4`
Outputs:
- `datamaker/01_align/{id}/ir.mp4`
- `datamaker/01_align/{id}/vi.mp4`
- `datamaker/01_align/{id}/timestamp.xlsx`
The script reads frame timestamps, chooses the shorter stream as the base, and matches the other modality with monotone nearest-frame matching. The default maximum timestamp gap is 0.08s.
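The matching step can be sketched as follows. This is a minimal illustration of monotone nearest-frame matching, not the released `01_time_align.py`: a single pointer advances through the other stream's timestamps, so matches can never go backwards in time, and candidates farther than `max_gap` (0.08 s by default) are dropped.

```python
def match_frames(base_ts, other_ts, max_gap=0.08):
    """Monotone nearest-frame matching between two timestamp lists (sketch)."""
    pairs = []
    j = 0
    for i, t in enumerate(base_ts):
        # Advance the pointer while the next candidate is at least as close.
        while j + 1 < len(other_ts) and abs(other_ts[j + 1] - t) <= abs(other_ts[j] - t):
            j += 1
        # Keep the match only if it falls within the maximum timestamp gap.
        if abs(other_ts[j] - t) <= max_gap:
            pairs.append((i, j))
    return pairs

# Two 25 FPS streams with a small constant offset match frame-for-frame:
base = [k / 25 for k in range(5)]
other = [k / 25 + 0.01 for k in range(5)]
print(match_frames(base, other))  # [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
```

Because the pointer only moves forward, the output index pairs are monotone in both streams, which is the property the alignment relies on.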
2. Spatial Registration
python 02_space_align.py
Inputs:
- `datamaker/01_align/{id}/ir.mp4`
- `datamaker/01_align/{id}/vi.mp4`
- `datamaker/matrix/{id}.csv`
Outputs:
- `datamaker/02_warp/{id}/ir.mp4`
- `datamaker/02_warp/{id}/vi.mp4`
The script warps IR frames into the VI coordinate system with a 3x3 perspective matrix, then crops both modalities to 1280 x 1024.
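The underlying mapping is a standard planar homography. The sketch below shows how a 3x3 perspective matrix moves a single pixel coordinate (the released script presumably applies this to every pixel via an image-warping routine); the matrix values here are illustrative, not taken from `datamaker/matrix/`:

```python
import numpy as np

# Hypothetical perspective matrix: a pure translation of +12 px in x, -8 px in y.
H = np.array([[1.0, 0.0, 12.0],
              [0.0, 1.0, -8.0],
              [0.0, 0.0,  1.0]])

def project(H, x, y):
    """Map pixel (x, y) through the 3x3 homography H (homogeneous divide)."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

print(project(H, 100.0, 200.0))  # (112.0, 192.0)
```

After warping IR into VI coordinates, cropping both modalities to the same 1280 x 1024 window keeps the pairing pixel-aligned.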
3. Checkerboard QA
python 03_checkerboard.py
Inputs:
- `datamaker/02_warp/{id}/ir.mp4`
- `datamaker/02_warp/{id}/vi.mp4`
Outputs:
- `datamaker/03_ckboard/{id}.mp4`
The checkerboard videos alternate IR and VI blocks, making edge continuity and object alignment easier to inspect by eye.
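A checkerboard composite of this kind can be sketched with a block mask. This is an assumed reconstruction of the idea (tile size and layout may differ from `03_checkerboard.py`): alternating fixed-size tiles take pixels from IR or VI, so any misregistration shows up as broken edges at tile borders.

```python
import numpy as np

def checkerboard(ir, vi, tile=128):
    """Compose alternating tile x tile blocks of IR and VI frames (sketch)."""
    h, w = ir.shape[:2]
    ys, xs = np.indices((h, w))
    take_ir = (ys // tile + xs // tile) % 2 == 0  # True on "IR" tiles
    if ir.ndim == 3:
        take_ir = take_ir[..., None]  # broadcast mask over color channels
    return np.where(take_ir, ir, vi)

# Toy 4x4 frames: IR is all 0, VI is all 255, tiles of 2x2 pixels.
ir = np.zeros((4, 4), dtype=np.uint8)
vi = np.full((4, 4), 255, dtype=np.uint8)
board = checkerboard(ir, vi, tile=2)
```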
4. Split Into 5-Second Clips
python 04_split_5s_videos.py
Inputs:
- `datamaker/02_warp/{id}/ir.mp4`
- `datamaker/02_warp/{id}/vi.mp4`
Outputs:
- `dataset/ir/{id}_{start}_{end}.mp4`
- `dataset/vi/{id}_{start}_{end}.mp4`
The default window and stride are both 5s. Tails shorter than 5s are skipped.
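With window and stride both 5 s and short tails skipped, the set of clip windows for a given duration is deterministic. The sketch below (hypothetical helpers, not the released `04_split_5s_videos.py`) computes the windows and the zero-padded clip names seen in `dataset/`:

```python
def clip_windows(duration_s: float, window_s: int = 5):
    """Non-overlapping windows of window_s seconds; tails shorter than window_s are skipped."""
    n = int(duration_s // window_s)
    return [(k * window_s, (k + 1) * window_s) for k in range(n)]

def clip_name(source_id: str, start: int, end: int) -> str:
    """Build a clip file name like 01_0000_0005.mp4 (zero-padded seconds, assumed 4 digits)."""
    return f"{source_id}_{start:04d}_{end:04d}.mp4"

print(clip_windows(17.3))     # [(0, 5), (5, 10), (10, 15)] -- the 2.3 s tail is dropped
print(clip_name("01", 0, 5))  # 01_0000_0005.mp4
```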
Suggested Uses
- Video fusion: use same-name clips from `dataset/ir` and `dataset/vi`.
- Cross-modal registration: use `datamaker/01_align` as temporally aligned but spatially unregistered input, and `datamaker/02_warp` as the registered reference.
- Joint fusion and registration: train registration on `datamaker/01_align`, then train or evaluate fusion on `datamaker/02_warp` or `dataset/`.
Figures
The figs/ directory is ordered by first appearance in this README:
| File | Purpose |
|---|---|
| figs/01_overview.jpg | Dataset overview and key statistics. |
| figs/02_sample_pairs.jpg | Representative IR/VI/fusion frame examples. |
| figs/03_pipeline.jpg | End-to-end construction pipeline. |
| figs/04_dataset_structure.jpg | Released file structure and pairing rule. |
| figs/05_alignment_quality.jpg | Temporal and spatial alignment quality checks. |
Citation
VidLLVIP is an unofficial processed version derived from the raw LLVIP infrared and visible videos. If you use VidLLVIP or the processing scripts, registration matrices, or paired video clips in this repository, please also follow the original LLVIP license and citation requirements.
1. Original Dataset Citation
VidLLVIP is derived from LLVIP. When using this dataset, please first cite the original LLVIP dataset:
@inproceedings{jia2021llvip,
title = {LLVIP: A visible-infrared paired dataset for low-light vision},
author = {Jia, Xinyu and Zhu, Chuang and Li, Minzhen and Tang, Wenqi and Zhou, Wenli},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages = {3496--3504},
year = {2021}
}
2. VidLLVIP Citation
If you use the processed VidLLVIP dataset, registration matrices, or preprocessing pipeline provided by this project, please also cite VidLLVIP:
@dataset{ding2026vidllvip,
author = {Ding, Jianfeng},
title = {VidLLVIP: A visible-infrared paired video dataset for low-light vision},
year = {2026},
version = {v1.0.0},
url = {https://github.com/jianfeng0369/VidLLVIP}
}
3. Related Paper Citation
CMVF is an infrared and visible video fusion method based on spatio-temporal consistency and designed for unregistered inputs. If your research uses the CMVF method or code, or is related to infrared and visible video fusion, please also consider citing the following paper:
@article{cmvf2026ding,
title = {CMVF: Cross-modal unregistered video fusion via spatio-temporal consistency},
journal = {Information Fusion},
volume = {132},
pages = {104212},
year = {2026},
issn = {1566-2535},
author = {Jianfeng Ding and Hao Zhang and Zhongyuan Wang and Jinsheng Xiao and Xin Tian and Zhen Han and Jiayi Ma}
}
Contact
If you have any questions, please contact: jianfeng0369@gmail.com.