---
license: mit
pretty_name: L-Mind
size_categories:
  - 10K<n<100K
task_categories:
  - image-to-image
language:
  - en
  - zh
tags:
  - eeg
  - fnirs
  - bci
  - image-editing
  - multimodal
configs:
  - config_name: speech
    data_files:
      - split: train
        path: train_speech.jsonl
      - split: test
        path: test_speech.jsonl
  - config_name: legacy
    data_files:
      - split: train
        path: train_0424.jsonl
      - split: test
        path: test_0424.jsonl
---

# L-Mind: A Multimodal Dataset for Neural-Driven Image Editing

This dataset accompanies the NeurIPS 2025 paper "Neural-Driven Image Editing," which introduces LoongX, a hands-free image-editing approach driven by multimodal neurophysiological signals.

## 📄 Overview

L-Mind is a large-scale multimodal dataset designed to bridge Brain-Computer Interfaces (BCIs) with generative AI. It enables research into accessible, intuitive image editing for individuals with limited motor control or language abilities.

- **Total Samples:** 23,928 image-editing pairs
- **Participants:** 12 subjects (plus cross-subject evaluation data)
- **Task:** Instruction-based image editing, driven by what users view and mentally conceive
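The train/test splits declared in the card's front matter are plain JSONL files (one JSON record per line). A minimal sketch for streaming one of them locally, assuming the files have been downloaded; any field names inside the records are dataset-specific and not shown here:

```python
import json

# Minimal sketch: stream records from a split file such as train_speech.jsonl.
# Each non-blank line is one JSON record; inspect a file for the real schema.
def load_jsonl(path):
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)
```

Alternatively, because the `speech` and `legacy` configs are declared in the front matter, the Hugging Face `datasets` library can load them by name, e.g. `load_dataset("<repo_id>", "speech")` (the repo id is omitted here, as it may vary).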

## 🧠 Modalities

The dataset features synchronized recordings of:

  1. EEG (Electroencephalography): Captures rapid neural dynamics (4 channels: Pz, Fp2, Fpz, Oz).
  2. fNIRS (Functional Near-Infrared Spectroscopy): Measures hemodynamic responses (cognitive load/emotion).
  3. PPG (Photoplethysmography): Monitors physiological state (heart rate/stress).
  4. Head Motion: 6-axis IMU data tracking user movement.
  5. Speech: Audio instructions provided by users.
  6. Visuals & Text: Source image, target image, and text instruction.
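Because EEG and fNIRS are sampled at very different rates, a common first step when working with such synchronized streams is resampling the slow modality onto the fast modality's clock. A minimal sketch; the sampling rates below (250 Hz EEG, 10 Hz fNIRS) are illustrative assumptions, not values specified by the dataset:

```python
import numpy as np

# Illustrative sampling rates -- not taken from the dataset's documentation.
eeg_sr, fnirs_sr = 250, 10
duration = 2.0  # seconds

t_eeg = np.arange(0, duration, 1 / eeg_sr)
t_fnirs = np.arange(0, duration, 1 / fnirs_sr)
fnirs = np.sin(2 * np.pi * 0.5 * t_fnirs)  # stand-in fNIRS channel

# Linearly interpolate the slow fNIRS stream onto the EEG timeline so the
# two modalities share a single time index.
fnirs_on_eeg_clock = np.interp(t_eeg, t_fnirs, fnirs)
assert fnirs_on_eeg_clock.shape == t_eeg.shape
```

Linear interpolation is only a baseline; hemodynamic signals are slow enough that it is often adequate, but filtering-based resampling is also common.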

## 🚀 Applications

This dataset supports the training of neural-driven generative models (like LoongX) that can interpret user intent directly from brain and physiological signals to perform:

- Background replacement
- Object manipulation
- Global stylistic changes
- Text editing

## 🔗 Resources

## 📚 Citation

If you use this dataset, please cite:

```bibtex
@inproceedings{zhouneural,
  title={Neural-Driven Image Editing},
  author={Zhou, Pengfei and Xia, Jie and Peng, Xiaopeng and Zhao, Wangbo and Ye, Zilong and Li, Zekai and Yang, Suorong and Pan, Jiadong and Chen, Yuanxiang and Wang, Ziqiao and others},
  booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems},
  year={2025}
}
```