<p align="center">
  <img src="figs/vidllvip_title.svg" alt="VidLLVIP" width="560">
</p>

<p align="center">
  <strong>Temporally and Spatially Aligned Infrared-Visible Video Dataset</strong>
</p>

<p align="center">
  English | <a href="README_zh-CN.md">简体中文</a>
</p>

<p align="center">
  <a href="https://github.com/jianfeng0369/VidLLVIP"><img alt="VidLLVIP GitHub" src="https://img.shields.io/badge/VidLLVIP-GitHub-181717?style=for-the-badge&logo=github&logoColor=ffffff"></a>
  <a href="https://huggingface.co/datasets/jianfeng0369/VidLLVIP"><img alt="Hugging Face Dataset" src="https://img.shields.io/badge/VidLLVIP-Hugging%20Face-FFD21E?style=for-the-badge&logo=huggingface&logoColor=000000"></a>
  <a href="https://pan.quark.cn/s/e3abe425aa5f?pwd=E5gv"><img alt="Quark Drive Download" src="https://img.shields.io/badge/Quark%20Drive-Download-14A7F5?style=for-the-badge&logo=icloud&logoColor=ffffff"></a>
  <a href="https://arxiv.org/abs/2108.10831"><img alt="LLVIP Paper" src="https://img.shields.io/badge/LLVIP-Paper-B31B1B?style=for-the-badge&logo=arxiv&logoColor=ffffff"></a>
  <a href="https://github.com/bupt-ai-cz/LLVIP"><img alt="LLVIP GitHub" src="https://img.shields.io/badge/LLVIP-GitHub-181717?style=for-the-badge&logo=github&logoColor=ffffff"></a>
  <a href="https://doi.org/10.1016/j.inffus.2026.104212"><img alt="CMVF Paper" src="https://img.shields.io/badge/CMVF-Paper-FF6C00?style=for-the-badge&logo=elsevier&logoColor=ffffff"></a>
  <a href="https://github.com/jianfeng0369/CMVF"><img alt="CMVF GitHub" src="https://img.shields.io/badge/CMVF-GitHub-181717?style=for-the-badge&logo=github&logoColor=ffffff"></a>
</p>
|
|
VidLLVIP is an unofficial processed paired infrared-visible video dataset derived from the raw [LLVIP](https://github.com/bupt-ai-cz/LLVIP) videos. It provides temporally aligned, spatially registered, quality-checked 5-second video pairs for video fusion, cross-modal registration, and multimodal video understanding.
|
|
![Dataset overview](figs/01_overview.jpg)
|
|
> VidLLVIP is derived from LLVIP. Please follow the original LLVIP license and citation requirements when using or redistributing this dataset.
|
|
## 📰 News

- 🚀 **2026-05-06**: We released the [VidLLVIP dataset](https://github.com/jianfeng0369/VidLLVIP).
- 🎉 **2026-02-05**: Our multimodal video fusion paper [CMVF](https://doi.org/10.1016/j.inffus.2026.104212) was accepted by *Information Fusion*. The code is available in the [CMVF GitHub repository](https://github.com/jianfeng0369/CMVF).
|
|
## Download

The large video files are distributed separately:

- Option 1: Download from [Hugging Face](https://huggingface.co/datasets/jianfeng0369/VidLLVIP)
- Option 2: Download from [Quark Drive](https://pan.quark.cn/s/e3abe425aa5f?pwd=E5gv)

After downloading: to reproduce the full pipeline, extract `datamaker.zip` and `matrix.zip` into the corresponding `datamaker/` directory and `raw.zip` into the corresponding `raw/` directory. To use the final dataset directly, extract only `dataset.zip` into the corresponding `dataset/` directory.
|
|
## Highlights

- Built from `14` source infrared-visible video pairs, numbered `01` to `14`.
- Provides `894` final 5-second paired clips, with one IR video and one VI video per sample.
- Uses same-name files under `dataset/ir` and `dataset/vi` as the pairing rule.
- Final clip format: `1280 x 1024`, `25 FPS`, `125` frames, no audio.
- Includes scripts for temporal alignment, spatial registration, checkerboard quality inspection, and 5-second clip generation.
|
|
## Dataset Snapshot

| Item | Value |
| --- | --- |
| Source | LLVIP raw infrared-visible videos |
| Processed source pairs | 14 pairs, IDs `01`-`14` |
| Final paired clips | 894 pairs |
| Modalities | Infrared (`ir`) and visible (`vi`) |
| Clip length | 5 seconds |
| Resolution | `1280 x 1024` |
| Frame rate | 25 FPS |
| Frames per clip | 125 |
| Pairing rule | Same file name under `dataset/ir` and `dataset/vi` |
|
|
![Sample pairs](figs/02_sample_pairs.jpg)
|
|
## Repository Layout

```text
VidLLVIP/
  README.md
  README_zh-CN.md
  raw/
    videos/{ir,vi}/        # Original LLVIP videos before alignment
  datamaker/
    01_time_align.py       # Temporal alignment
    02_space_align.py      # Spatial registration
    03_checkerboard.py     # Checkerboard QA videos
    04_split_5s_videos.py  # 5-second clip generation
    requirements.txt
    matrix/                # 3x3 perspective matrices for IDs 01-14
    01_align/              # Time-aligned full videos and timestamp sheets
    02_warp/               # Spatially registered full videos
    03_ckboard/            # Checkerboard QA videos
  dataset/
    ir/                    # Final infrared clips
    vi/                    # Final visible clips
  figs/                    # README figures
```
|
|
![Construction pipeline](figs/03_pipeline.jpg)
|
|
## Data Format

Final clips are stored as paired files:

```text
dataset/
  ir/01_0000_0005.mp4
  vi/01_0000_0005.mp4
```

The file name format is:

```text
{source_id}_{start_second}_{end_second}.mp4
```

For example, `01_0000_0005.mp4` means source video `01`, from `0s` to `5s`. The same file name in `dataset/ir` and `dataset/vi` forms one paired sample.
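For convenience, the naming scheme can also be parsed programmatically. A minimal sketch; the `parse_clip_name` helper and the four-digit zero padding are assumptions based on the example above, not part of the released tooling:

```python
import re
from typing import NamedTuple

class ClipKey(NamedTuple):
    source_id: str  # "01" .. "14"
    start_s: int    # clip start in seconds
    end_s: int      # clip end in seconds

# Assumes four-digit zero-padded seconds, as in "01_0000_0005.mp4".
_CLIP_RE = re.compile(r"^(\d{2})_(\d{4})_(\d{4})\.mp4$")

def parse_clip_name(name: str) -> ClipKey:
    """Parse "{source_id}_{start_second}_{end_second}.mp4" into its fields."""
    m = _CLIP_RE.match(name)
    if m is None:
        raise ValueError(f"Unexpected clip name: {name}")
    sid, start, end = m.groups()
    return ClipKey(sid, int(start), int(end))

print(parse_clip_name("01_0000_0005.mp4"))  # ClipKey(source_id='01', start_s=0, end_s=5)
```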
|
|
![Dataset structure](figs/04_dataset_structure.jpg)
|
|
## Quick Start

If you only need the final paired clips, read `dataset/ir` and `dataset/vi` directly:

```python
from pathlib import Path

root = Path("dataset")

for ir_path in sorted((root / "ir").glob("*.mp4")):
    vi_path = root / "vi" / ir_path.name
    assert vi_path.exists(), f"Missing visible pair: {vi_path}"
    # Load ir_path and vi_path with your video reader.
```

To reproduce the preprocessing pipeline, install the Python dependencies:

```bash
cd datamaker
conda create -n vidllvip python=3.10 -y
conda activate vidllvip
pip install -r requirements.txt
```

The system also needs `ffmpeg` and `ffprobe` on `PATH`.
|
|
## Reproduce the Dataset

### 1. Temporal Alignment

```bash
python 01_time_align.py
```

Inputs:

- `raw/videos/ir/{id}.mp4`
- `raw/videos/vi/{id}.mp4`

Outputs:

- `datamaker/01_align/{id}/ir.mp4`
- `datamaker/01_align/{id}/vi.mp4`
- `datamaker/01_align/{id}/timestamp.xlsx`

The script reads frame timestamps, chooses the shorter stream as the base, and matches the other modality with monotone nearest-frame matching. The default maximum timestamp gap is `0.08s`.
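Monotone nearest-frame matching can be sketched as follows. This is our own simplified illustration of the idea, not the released script: a single pointer advances through the other stream's timestamps so that matched indices never move backwards in time, and pairs whose gap exceeds `max_gap` are dropped.

```python
def match_frames(base_ts, other_ts, max_gap=0.08):
    """Pair each base timestamp with the nearest frame in other_ts, advancing
    a single monotone pointer. Note the mapping is monotone but not injective:
    two base frames may share the same other frame. Pairs with a timestamp
    gap above max_gap are discarded."""
    pairs = []
    j = 0
    for i, t in enumerate(base_ts):
        # Advance while the next candidate is at least as close to t.
        while j + 1 < len(other_ts) and abs(other_ts[j + 1] - t) <= abs(other_ts[j] - t):
            j += 1
        if abs(other_ts[j] - t) <= max_gap:
            pairs.append((i, j))
    return pairs

print(match_frames([0.00, 0.04, 0.08], [0.01, 0.05, 0.30]))  # [(0, 0), (1, 1), (2, 1)]
```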
|
|
### 2. Spatial Registration

```bash
python 02_space_align.py
```

Inputs:

- `datamaker/01_align/{id}/ir.mp4`
- `datamaker/01_align/{id}/vi.mp4`
- `datamaker/matrix/{id}.csv`

Outputs:

- `datamaker/02_warp/{id}/ir.mp4`
- `datamaker/02_warp/{id}/vi.mp4`

The script warps IR frames into the VI coordinate system with a 3x3 perspective matrix, then crops both modalities to `1280 x 1024`.
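The underlying coordinate mapping is a planar homography: a pixel `(x, y)` in homogeneous form is multiplied by the 3x3 matrix, then divided by the resulting third component. A minimal numpy illustration of the point mapping (the script itself resamples whole frames, e.g. with an OpenCV-style warp; `warp_points` is our own helper):

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 perspective (homography) matrix H to an (N, 2) array of
    pixel coordinates; returns the warped (N, 2) coordinates."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = homog @ H.T                              # x' ~ H x
    return mapped[:, :2] / mapped[:, 2:3]             # perspective divide

# Identity rotation plus a pure translation just shifts the pixels.
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0, -5.0],
              [0.0, 0.0, 1.0]])
print(warp_points(H, [[0, 0], [100, 200]]))  # [[ 10.  -5.] [110. 195.]]
```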
|
|
![Alignment quality](figs/05_alignment_quality.jpg)
|
|
### 3. Checkerboard QA

```bash
python 03_checkerboard.py
```

Inputs:

- `datamaker/02_warp/{id}/ir.mp4`
- `datamaker/02_warp/{id}/vi.mp4`

Outputs:

- `datamaker/03_ckboard/{id}.mp4`

The checkerboard videos alternate IR and VI blocks, making edge continuity and object alignment easier to inspect by eye.
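The compositing idea can be sketched with numpy: a block-wise parity mask selects IR or VI pixels per tile, so any misregistration shows up as broken edges at tile boundaries. The `block` size here is an arbitrary choice for illustration, not necessarily the script's setting:

```python
import numpy as np

def checkerboard(ir, vi, block=64):
    """Compose a QA frame that alternates IR and VI tiles of size `block`."""
    h, w = ir.shape[:2]
    ys, xs = np.indices((h, w))
    mask = (ys // block + xs // block) % 2 == 0  # True on "IR" tiles
    # Broadcast the mask over the channel axis for color frames.
    return np.where(mask[..., None] if ir.ndim == 3 else mask, ir, vi)

ir = np.zeros((4, 4), dtype=np.uint8)        # stand-in IR frame (all black)
vi = np.full((4, 4), 255, dtype=np.uint8)    # stand-in VI frame (all white)
print(checkerboard(ir, vi, block=2))
```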
|
|
### 4. Split Into 5-Second Clips

```bash
python 04_split_5s_videos.py
```

Inputs:

- `datamaker/02_warp/{id}/ir.mp4`
- `datamaker/02_warp/{id}/vi.mp4`

Outputs:

- `dataset/ir/{id}_{start}_{end}.mp4`
- `dataset/vi/{id}_{start}_{end}.mp4`

The default window and stride are both `5s`. Tails shorter than `5s` are skipped.
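The windowing logic amounts to the following sketch (our own illustration of the stated defaults, not the released script):

```python
def clip_windows(duration_s, window=5, stride=5):
    """Return (start, end) second pairs covering duration_s.
    Tails shorter than `window` are skipped, matching the stated default."""
    windows = []
    start = 0
    while start + window <= duration_s:
        windows.append((start, start + window))
        start += stride
    return windows

print(clip_windows(17))  # [(0, 5), (5, 10), (10, 15)] -- the 2 s tail is dropped
```

Each `(start, end)` pair can then be cut from the full video, e.g. with `ffmpeg`'s `-ss`/`-t` options.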
|
|
## Suggested Uses

- Video fusion: use same-name clips from `dataset/ir` and `dataset/vi`.
- Cross-modal registration: use `datamaker/01_align` as temporally aligned but spatially unregistered input, and `datamaker/02_warp` as the registered reference.
- Joint fusion and registration: train registration on `datamaker/01_align`, then train or evaluate fusion on `datamaker/02_warp` or `dataset/`.
|
|
## Figures

The `figs/` directory is ordered by first appearance in this README:

| File | Purpose |
| --- | --- |
| [`figs/01_overview.jpg`](figs/01_overview.jpg) | Dataset overview and key statistics. |
| [`figs/02_sample_pairs.jpg`](figs/02_sample_pairs.jpg) | Representative IR/VI/fusion frame examples. |
| [`figs/03_pipeline.jpg`](figs/03_pipeline.jpg) | End-to-end construction pipeline. |
| [`figs/04_dataset_structure.jpg`](figs/04_dataset_structure.jpg) | Released file structure and pairing rule. |
| [`figs/05_alignment_quality.jpg`](figs/05_alignment_quality.jpg) | Temporal and spatial alignment quality checks. |
|
|
## Citation

VidLLVIP is an unofficial processed version derived from the raw LLVIP infrared and visible videos. If you use VidLLVIP, or the processing scripts, registration matrices, or paired video clips in this repository, please also follow the original LLVIP license and citation requirements.

### 1. Original Dataset Citation

VidLLVIP is derived from LLVIP. When using this dataset, please first cite the original LLVIP dataset:

```bibtex
@inproceedings{jia2021llvip,
  title     = {LLVIP: A visible-infrared paired dataset for low-light vision},
  author    = {Jia, Xinyu and Zhu, Chuang and Li, Minzhen and Tang, Wenqi and Zhou, Wenli},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages     = {3496--3504},
  year      = {2021}
}
```
|
|
### 2. VidLLVIP Citation

If you use the processed VidLLVIP dataset, registration matrices, or preprocessing pipeline provided by this project, please also cite VidLLVIP:

```bibtex
@dataset{ding2026vidllvip,
  author  = {Ding, Jianfeng},
  title   = {VidLLVIP: A visible-infrared paired video dataset for low-light vision},
  year    = {2026},
  version = {v1.0.0},
  url     = {https://github.com/jianfeng0369/VidLLVIP}
}
```
|
|
### 3. Related Paper Citation

CMVF is an infrared and visible video fusion method based on spatio-temporal consistency and designed for unregistered inputs. If your research uses the CMVF method or code, or is related to infrared and visible video fusion, please also consider citing the following paper:

```bibtex
@article{cmvf2026ding,
  title   = {CMVF: Cross-modal unregistered video fusion via spatio-temporal consistency},
  author  = {Jianfeng Ding and Hao Zhang and Zhongyuan Wang and Jinsheng Xiao and Xin Tian and Zhen Han and Jiayi Ma},
  journal = {Information Fusion},
  volume  = {132},
  pages   = {104212},
  year    = {2026},
  issn    = {1566-2535}
}
```
|
|
## Contact

If you have any questions, please contact: <jianfeng0369@gmail.com>.
|
|