---
license: mit
pretty_name: L-Mind
size_categories:
- 10K<n<100K

task_categories:
- image-to-image

language:
- en
- zh

tags:
- eeg
- fnirs
- bci
- image-editing
- multimodal

configs:
- config_name: speech
  data_files:
  - split: train
    path: train_speech.jsonl
  - split: test
    path: test_speech.jsonl

- config_name: legacy
  data_files:
  - split: train
    path: train_0424.jsonl
  - split: test
    path: test_0424.jsonl
---

# L-Mind: A Multimodal Dataset for Neural-Driven Image Editing

This dataset is part of the **NeurIPS 2025** paper: **"Neural-Driven Image Editing"**, which introduces **LoongX**, a hands-free image editing approach driven by multimodal neurophysiological signals.

## 📄 Overview
**L-Mind** is a large-scale multimodal dataset designed to bridge Brain-Computer Interfaces (BCIs) with generative AI. It enables research into accessible, intuitive image editing for individuals with limited motor control or language abilities.

- **Total Samples:** 23,928 image editing pairs
- **Participants:** 12 subjects (plus cross-subject evaluation data)
- **Task:** Instruction-based image editing, where users view the source image and conceive the intended edit
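
Each split declared in the card metadata points to a JSONL file (one JSON object per line), so individual records can be read with the standard library alone. The field names in this sketch (`instruction`, `source_image`, `target_image`) are hypothetical placeholders; inspect the actual JSONL files for the real schema.

```python
import json
import io

# Hypothetical record mimicking one line of train_speech.jsonl; the real
# field names may differ -- inspect the JSONL files for the actual schema.
sample_jsonl = io.StringIO(
    '{"instruction": "turn the sky a sunset orange", '
    '"source_image": "imgs/0001_src.png", '
    '"target_image": "imgs/0001_tgt.png"}\n'
)

# JSONL convention: parse each non-empty line as a standalone JSON object.
records = [json.loads(line) for line in sample_jsonl if line.strip()]
print(records[0]["instruction"])  # -> turn the sky a sunset orange
```

With the Hugging Face `datasets` library, the configs declared above can be loaded directly, e.g. `load_dataset("<repo_id>", "speech", split="train")`, where `<repo_id>` is this dataset's repository id on the Hub.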

## 🧠 Modalities
The dataset features synchronized recordings of:
1.  **EEG** (Electroencephalography): Captures rapid neural dynamics (4 channels: Pz, Fp2, Fpz, Oz).
2.  **fNIRS** (Functional Near-Infrared Spectroscopy): Measures hemodynamic responses (cognitive load/emotion).
3.  **PPG** (Photoplethysmography): Monitors physiological state (heart rate/stress).
4.  **Head Motion**: 6-axis IMU data tracking user movement.
5.  **Speech**: Audio instructions provided by users.
6.  **Visuals**: Source Image, Target Image, and Text Instruction.

## 🚀 Applications
This dataset supports the training of neural-driven generative models (like LoongX) that can interpret user intent directly from brain and physiological signals to perform:
- Background replacement
- Object manipulation
- Global stylistic changes
- Text editing

## 🔗 Resources
- **Project Website:** [https://loongx1.github.io](https://loongx1.github.io)
- **Paper:** [Neural-Driven Image Editing](https://arxiv.org/abs/2507.05397)

## 📚 Citation
If you use this dataset, please cite:
```bibtex
@inproceedings{zhouneural,
  title={Neural-Driven Image Editing},
  author={Zhou, Pengfei and Xia, Jie and Peng, Xiaopeng and Zhao, Wangbo and Ye, Zilong and Li, Zekai and Yang, Suorong and Pan, Jiadong and Chen, Yuanxiang and Wang, Ziqiao and others},
  booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems},
  year={2025}
}
```