Enhance dataset card: Add paper/code links, tasks, sample usage, correct license, and citations

#2
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +139 -14
README.md CHANGED
@@ -1,29 +1,154 @@
  ---
- license: apache-2.0
  ---

  ## 📖 Dataset Description

- This dataset contains the images and preprocessed data from THINGS-EEG1THINGS-fMRITHINGS-MEG dataset.

  ## 🧾 Supported Tasks

- - **Task 1** (e.g., THINGS-EEG1)
- - **Input**: EEG
- - **Output**: Image label
- - **Evaluation metric**: Accuracy

- - **Task 2** (e.g., THINGS-MEG)
- - **Input**: MEG
- - **Output**: Image label
- - **Evaluation metric**: Accuracy

- - **Task 3** (e.g., THINGS-fMRI)
- - **Input**: fMRI
- - **Output**: Image label
- - **Evaluation metric**: Accuracy

  ---

  ---
+ license: mit
+ task_categories:
+ - image-feature-extraction
+ - image-to-image
+ - image-to-text
+ language:
+ - en
+ tags:
+ - neuroscience
+ - brain-decoding
+ - eeg
+ - meg
+ - fmri
+ - multimodal
+ - bci
  ---

+ # BrainFLORA: Uncovering Brain Concept Representation via Multimodal Neural Embeddings
+
+ This repository contains the images and preprocessed data from the THINGS-EEG1, THINGS-fMRI, and THINGS-MEG datasets, associated with the paper [BrainFLORA: Uncovering Brain Concept Representation via Multimodal Neural Embeddings](https://huggingface.co/papers/2507.09747).
+
+ * **Paper**: [https://huggingface.co/papers/2507.09747](https://huggingface.co/papers/2507.09747)
+ * **Code**: [https://github.com/ncclab-sustech/BrainFLORA](https://github.com/ncclab-sustech/BrainFLORA)
+
  ## 📖 Dataset Description

+ This dataset contains the images and preprocessed data from the THINGS-EEG1, THINGS-fMRI, and THINGS-MEG datasets, which can be used directly for training.
+
+ To download the raw data, you can follow these links:

+ * **THINGS-EEG1**: [Download](https://openneuro.org/datasets/ds003825/versions/1.1.0)
+ * **THINGS-EEG2**: [Download](https://osf.io/3jk45/)
+ * **THINGS-MEG**: [Download](https://openneuro.org/datasets/ds004212/versions/2.0.0)
+ * **THINGS-fMRI**: [Download](https://openneuro.org/datasets/ds004192/versions/1.0.7)
+ * **THINGS-Images**: [Download](https://osf.io/rdxy2)

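+ For the preprocessed files hosted in this repository, a minimal sketch of a programmatic download via `huggingface_hub` is shown below; the repo id is a placeholder, not part of the released code:
+
+ ```python
+ from huggingface_hub import snapshot_download
+
+ # Placeholder repo id; replace with this dataset's actual id on the Hub.
+ local_dir = snapshot_download(repo_id="<org>/<dataset-name>", repo_type="dataset")
+ print("Preprocessed data downloaded to:", local_dir)
+ ```
+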
  ## 🧾 Supported Tasks

+ - **Visual Retrieval**
+   - **Input**: EEG, MEG, or fMRI brain signals
+   - **Output**: Image label (retrieved image)
+   - **Evaluation metric**: Accuracy (see the sketch after this list)

+ - **Visual Reconstruction**
+   - **Input**: EEG, MEG, or fMRI brain signals
+   - **Output**: Reconstructed image
+   - **Evaluation metric**: Not specified in the current card, but typically image similarity metrics.

+ - **Visual Captioning**
+   - **Input**: EEG, MEG, or fMRI brain signals
+   - **Output**: Text caption describing the image
+   - **Evaluation metric**: Not specified in the current card, but typically text generation metrics such as BLEU and ROUGE.

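+ As a concrete reference for the retrieval metric, here is a minimal, repo-agnostic sketch of top-k retrieval accuracy computed from cosine similarity between decoded neural embeddings and candidate image embeddings; the array shapes and names are illustrative, not part of the released code:
+
+ ```python
+ import numpy as np
+
+ # Illustrative placeholders: row i of image_emb is the ground-truth image for trial i.
+ brain_emb = np.random.randn(200, 1024).astype(np.float32)   # decoded neural embeddings
+ image_emb = np.random.randn(200, 1024).astype(np.float32)   # candidate image embeddings (e.g., CLIP)
+
+ # Cosine similarity between every trial and every candidate image.
+ brain_emb /= np.linalg.norm(brain_emb, axis=1, keepdims=True)
+ image_emb /= np.linalg.norm(image_emb, axis=1, keepdims=True)
+ sim = brain_emb @ image_emb.T                                # shape (n_trials, n_candidates)
+
+ def topk_accuracy(sim: np.ndarray, k: int) -> float:
+     """Fraction of trials whose matching image ranks within the k most similar candidates."""
+     topk = np.argsort(-sim, axis=1)[:, :k]
+     return float(np.mean([i in topk[i] for i in range(sim.shape[0])]))
+
+ print(f"top-1: {topk_accuracy(sim, 1):.3f}  top-5: {topk_accuracy(sim, 5):.3f}")
+ ```
+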
  ---

+ ## 🚀 Sample Usage
+
+ The following snippets demonstrate how to use the BrainFLORA code for training and evaluation of the different tasks. First, ensure your environment is set up by following the instructions in the [GitHub repository](https://github.com/ncclab-sustech/BrainFLORA#%EF%B8%8Fenvironment-setup).
+
+ ### Environment Setup
+
+ Run `setup.sh` to quickly create a conda environment that contains the packages necessary to run our scripts; activate the environment with `conda activate BrainFLORA`.
+
+ ```bash
+ . setup.sh
+ ```
+
+ You can also create a new conda environment and install the required dependencies by running:
+
+ ```bash
+ conda env create -f environment.yml
+ conda activate BrainFLORA
+ ```
+
+ ### Visual Retrieval
+
+ Train modality encoders for joint-subject training on the THINGS-EEG2 dataset (modify the dataset path to your local copy):
+
+ ```bash
+ cd Retrieval/
+ python retrieval_joint_train_medformer.py --logger True --gpu cuda:0 --output_dir ./outputs/contrast
+ ```
+
+ Replicate the results for other modalities (e.g., MEG, fMRI):
+
+ ```bash
+ cd Retrieval/
+ python retrieval_joint_train_MEG_rerank_medformer.py --logger True --gpu cuda:0 --output_dir ./outputs/contrast
+ ```
+
+ Evaluate the models using the provided notebook:
+
+ ```bash
+ cd eval/
+ # Open FLORA_inference.ipynb
+ ```
+
+ ### Visual Reconstruction
+
+ Train the unified encoder to obtain multimodal neural embeddings aligned with CLIP embeddings (used by both the high-level and low-level visual reconstruction pipelines):
+
+ ```bash
+ python train_unified_encoder_highlevel_diffprior.py --modalities ['eeg', 'meg', 'fmri'] --gpu cuda:0 --output_dir ./outputs/contrast
+ ```
+
+ Reconstruct images by specifying the modalities and subjects:
+
+ ```bash
+ python FLORA_inference_reconst.py
+ ```
+
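+ A generic way to sanity-check reconstruction quality (not the paper's evaluation protocol) is an image-similarity metric such as SSIM; a minimal sketch with placeholder arrays:
+
+ ```python
+ import numpy as np
+ from skimage.metrics import structural_similarity as ssim
+
+ # Placeholder images in [0, 1]; in practice load a ground-truth stimulus and its reconstruction.
+ gt = np.random.rand(224, 224, 3).astype(np.float32)
+ recon = np.random.rand(224, 224, 3).astype(np.float32)
+
+ score = ssim(gt, recon, channel_axis=-1, data_range=1.0)
+ print(f"SSIM: {score:.3f}")
+ ```
+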
+ ### Visual Captioning
+
+ **Step 1**: Train the feature adapter (same as the first step of Visual Reconstruction):
+
+ ```bash
+ python train_unified_encoder_highlevel_diffprior.py --modalities ['eeg', 'meg', 'fmri'] --gpu cuda:0 --output_dir ./outputs/contrast
+ ```
+
+ **Step 2**: Generate captions from the prior latent space:
+
+ ```bash
+ # Open FLORA_inference_caption.ipynb
+ ```
+
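+ Caption quality is commonly scored with text-generation metrics; a minimal, generic BLEU sketch with `nltk` (placeholder sentences, not the paper's protocol):
+
+ ```python
+ from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
+
+ # Placeholder caption pair; in practice compare a generated caption with the reference caption.
+ reference = "a brown dog running across a grassy field".split()
+ candidate = "a dog runs across the grass".split()
+
+ score = sentence_bleu([reference], candidate, smoothing_function=SmoothingFunction().method1)
+ print(f"BLEU: {score:.3f}")
+ ```
+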
+ ## 📚 Citations
+
+ If you find our work useful, please consider citing:
+
+ ```bibtex
+ @article{li2025brainflora,
+   title={BrainFLORA: Uncovering Brain Concept Representation via Multimodal Neural Embeddings},
+   author={Li, Dongyang and Qin, Haoyang and Wu, Mingyang and Wei, Chen and Liu, Quanying},
+   journal={arXiv preprint arXiv:2507.09747},
+   year={2025}
+ }
+
+ @article{li2024visual,
+   title={Visual Decoding and Reconstruction via EEG Embeddings with Guided Diffusion},
+   author={Li, Dongyang and Wei, Chen and Li, Shiying and Zou, Jiachen and Liu, Quanying},
+   journal={Advances in Neural Information Processing Systems},
+   volume={37},
+   pages={102822--102864},
+   year={2024}
+ }
+
+ @inproceedings{wei2024cocog,
+   title={CoCoG: controllable visual stimuli generation based on human concept representations},
+   author={Wei, Chen and Zou, Jiachen and Heinke, Dietmar and Liu, Quanying},
+   booktitle={Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence},
+   pages={3178--3186},
+   year={2024}
+ }
+ ```