---
task_categories:
  - image-segmentation
language:
  - en
tags:
  - multimodal
  - referring-image-segmentation
  - infrared
  - visible
  - image-fusion
size_categories:
  - 10K<n<100K
---

# MM-RIS: Multimodal Referring Image Segmentation Dataset

The MM-RIS dataset was introduced in the paper RIS-FUSION: Rethinking Text-Driven Infrared and Visible Image Fusion from the Perspective of Referring Image Segmentation.

This large-scale benchmark supports the multimodal referring image segmentation (RIS) task by providing a goal-aligned approach to supervise and evaluate how effectively natural language contributes to infrared and visible image fusion outcomes.

## Paper

RIS-FUSION: Rethinking Text-Driven Infrared and Visible Image Fusion from the Perspective of Referring Image Segmentation

## Code

The official code repository for the associated RIS-FUSION project can be found on GitHub: https://github.com/SijuMa2003/RIS-FUSION

## Introduction

Text-driven infrared and visible image fusion has gained attention for enabling natural language to guide the fusion process. However, existing methods often lack a goal-aligned task to supervise and evaluate how effectively the input text contributes to the fusion outcome.

We observe that referring image segmentation (RIS) and text-driven fusion share a common objective: highlighting the object referred to by the text. Motivated by this, we propose RIS-FUSION, a cascaded framework that unifies fusion and RIS through joint optimization.

To support the multimodal referring image segmentation task, we introduce MM-RIS, a large-scale benchmark with 12.5k training and 3.5k testing triplets, each consisting of an infrared-visible image pair, a segmentation mask, and a referring expression.

## Dataset Structure

The MM-RIS dataset is available in this Hugging Face repository and consists of the following Parquet files:

- `mm_ris_test.parquet`
- `mm_ris_val.parquet`
- `mm_ris_train_part1.parquet`
- `mm_ris_train_part2.parquet`

These files together comprise 12.5k training and 3.5k testing triplets. Each triplet includes an infrared image, a visible image, a segmentation mask, and a natural language referring expression.

## Sample Usage

To prepare the MM-RIS dataset for use with the RIS-FUSION code, download all the dataset files from this repository and merge the training partitions:

1. **Download the dataset files.** Download `mm_ris_test.parquet`, `mm_ris_val.parquet`, `mm_ris_train_part1.parquet`, and `mm_ris_train_part2.parquet` from this Hugging Face repository and place them in a `data/` directory, ideally inside a cloned RIS-FUSION repository.

2. **Merge the partitioned Parquet files.** The RIS-FUSION GitHub repository provides a script to merge the partitioned training data. Assuming you have cloned the repository and placed the Parquet files in `./data/`:

   ```shell
   python ./data/merge_parquet.py
   ```

   This script combines `mm_ris_train_part1.parquet` and `mm_ris_train_part2.parquet` into a single `mm_ris_train.parquet` file.

Once the dataset is prepared, you can use it for training and testing models as shown in the examples below.

## Training Example

```shell
python train_with_lavt.py \
  --train_parquet ./data/mm_ris_train.parquet \
  --val_parquet ./data/mm_ris_val.parquet \
  --prefusion_model unet_fuser --prefusion_base_ch 32 \
  --epochs 10 -b 16 -j 16 \
  --img_size 480 \
  --swin_type base \
  --pretrained_swin_weights ./pretrained_weights/swin_base_patch4_window12_384_22k.pth \
  --bert_tokenizer ./bert/pretrained_weights/bert-base-uncased \
  --ck_bert ./bert/pretrained_weights/bert-base-uncased \
  --init_from_lavt_one ./pretrained_weights/lavt_one_8_cards_ImgNet22KPre_swin-base-window12_refcoco+_adamw_b32lr0.00005wd1e-2_E40.pth \
  --lr_seg 5e-5 --wd_seg 1e-2 --lr_pf 1e-4 --wd_pf 1e-2 \
  --lambda_prefusion 3.0 \
  --w_sobel_vis 0.0 \
  --w_sobel_ir 1.0 \
  --w_grad 1.0 \
  --w_ssim_vis 0.5 \
  --w_ssim_ir 0.0 \
  --w_mse_vis 0.5 \
  --w_mse_ir 2.0 \
  --eval_vis_dir ./eval_vis \
  --output-dir ./ckpts/risfusion
```

## Testing Example

```shell
python test.py \
  --ckpt ./ckpts/risfusion/model_best_lavt.pth \
  --test_parquet ./data/mm_ris_test.parquet \
  --out_dir ./your_output_dir \
  --bert_tokenizer ./bert/pretrained_weights/bert-base-uncased \
  --ck_bert ./bert/pretrained_weights/bert-base-uncased
```

## Citation

If you find this dataset or the associated paper useful, please consider citing:

```bibtex
@article{RIS-FUSION2025,
  title   = {RIS-FUSION: Rethinking Text-Driven Infrared and Visible Image Fusion from the Perspective of Referring Image Segmentation},
  author  = {Ma, Siju and Gong, Changsiyu and Fan, Xiaofeng and Ma, Yong and Jiang, Chengjie},
  journal = {...},
  year    = {2025}
}
```

## Acknowledgements