Improve dataset card: Add paper/code links, metadata, and detailed description
#2 by nielsr - opened

README.md CHANGED
@@ -1,3 +1,151 @@
---
license: mit
task_categories:
- image-feature-extraction
language:
- en
tags:
- neuroscience
- multimodal
- eeg
- meg
- fmri
- brain-computer-interface
- visual-retrieval
- visual-reconstruction
- visual-captioning
---

# BrainFLORA: Uncovering Brain Concept Representation via Multimodal Neural Embeddings

**Paper:** [BrainFLORA: Uncovering Brain Concept Representation via Multimodal Neural Embeddings](https://huggingface.co/papers/2507.09747)

**Code (GitHub):** [https://github.com/LidongYang/BrainFLORA](https://github.com/LidongYang/BrainFLORA)
## Abstract

Understanding how the brain represents visual information is a fundamental challenge in neuroscience and artificial intelligence. While AI-driven decoding of neural data has provided insights into the human visual system, integrating multimodal neuroimaging signals such as EEG, MEG, and fMRI remains a critical hurdle due to their inherent spatiotemporal misalignment. Current approaches often analyze these modalities in isolation, limiting a holistic view of neural representation. In this study, we introduce BrainFLORA, a unified framework for integrating cross-modal neuroimaging data to construct a shared neural representation. Our approach leverages multimodal large language models (MLLMs) augmented with modality-specific adapters and task decoders, achieving state-of-the-art performance on the joint-subject visual retrieval task and showing potential to extend to multitasking. Combining neuroimaging analysis methods, we further reveal how visual concept representations align across neural modalities and with real-world object perception. We demonstrate that the brain's structured visual concept representations exhibit an implicit mapping to physical-world stimuli, bridging neuroscience and machine learning across different neuroimaging modalities. Beyond methodological advances, BrainFLORA offers novel implications for cognitive neuroscience and brain-computer interfaces (BCIs).
## Overview
<div align="center">
<div>
<img src="https://github.com/LidongYang/BrainFLORA/blob/main/imgs/fig-overview_00.png?raw=true" alt="fig-genexample" style="max-width: 75%; height: auto;"/>
</div>
</div>
A comparative overview of multimodal decoding paradigms.

## Framework
<div align="center">
<div>
<img src="https://github.com/LidongYang/BrainFLORA/blob/main/imgs/fig-framework_00.png?raw=true" alt="Framework" style="max-width: 70%; height: auto;"/>
</div>
</div>
Overall architecture of BrainFLORA.
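The central mechanism — modality-specific adapters that map EEG, MEG, and fMRI responses into a shared embedding space aligned with image (CLIP) embeddings — can be illustrated with a minimal PyTorch sketch. This is a conceptual illustration only, not the BrainFLORA implementation: the plain MLP adapters, input dimensions, and symmetric InfoNCE objective below are assumptions made for exposition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityAdapter(nn.Module):
    """Project flattened neural features of one modality into a shared space."""
    def __init__(self, in_dim: int, shared_dim: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, shared_dim),
            nn.GELU(),
            nn.Linear(shared_dim, shared_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

# Hypothetical flattened input sizes per modality (channels x time points / voxels).
adapters = nn.ModuleDict({
    "eeg":  ModalityAdapter(in_dim=63 * 250),
    "meg":  ModalityAdapter(in_dim=271 * 200),
    "fmri": ModalityAdapter(in_dim=8000),
})

def clip_contrastive_loss(brain_emb, image_emb, temperature=0.07):
    """Symmetric InfoNCE loss pulling each brain embedding toward its paired image."""
    logits = brain_emb @ image_emb.t() / temperature
    targets = torch.arange(len(brain_emb))
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

# Toy batch: 8 EEG trials paired with (pre-computed) 1024-d CLIP image embeddings.
eeg_batch = torch.randn(8, 63 * 250)
image_emb = F.normalize(torch.randn(8, 1024), dim=-1)
loss = clip_contrastive_loss(adapters["eeg"](eeg_batch), image_emb)
print(f"contrastive loss: {loss.item():.3f}")
```

The actual framework additionally feeds these shared embeddings into MLLM-based task decoders for retrieval, reconstruction, and captioning.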
## Update
* **2025/07/15**, the [arXiv](https://arxiv.org/abs/2507.09747) paper is publicly available.
* **2025/07/12**, we officially released the code.
* **2025/07/05**, BrainFLORA was accepted to *ACM MM 2025*.
## Environment Setup
Run ``setup.sh`` to quickly create a conda environment that contains the packages necessary to run our scripts, then activate it with ``conda activate BrainFLORA``.

```bash
. setup.sh
```

You can also create a new conda environment and install the required dependencies by running:
```bash
conda env create -f environment.yml
conda activate BrainFLORA
```
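As an optional sanity check after activation, you can verify that PyTorch can see your GPU; this snippet assumes PyTorch is among the dependencies installed by the environment (the training commands below pass a CUDA device):

```python
# check_env.py -- optional sanity check (assumes PyTorch is installed by the environment)
import torch

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```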
## Prepare for Dataset
To download the raw data, you can follow these links:

| Dataset | Download path | Dataset | Download path |
| :---: | :---: | :---: | :---: |
| THINGS-EEG1 | [Download](https://openneuro.org/datasets/ds003825/versions/1.1.0) | THINGS-EEG2 | [Download](https://osf.io/3jk45/) |
| THINGS-MEG | [Download](https://openneuro.org/datasets/ds004212/versions/2.0.0) | THINGS-fMRI | [Download](https://openneuro.org/datasets/ds004192/versions/1.0.7) |
| THINGS-Images | [Download](https://osf.io/rdxy2) | | |
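The training scripts expect these datasets under a local path that you configure yourself (the commands below ask you to modify the dataset path). The folder names in the small check below are illustrative assumptions, not a required layout; adapt them to wherever you place the downloads:

```python
# check_data.py -- verify the raw datasets are present before training.
# The directory names are illustrative assumptions; match them to your own layout
# and to the dataset path configured in the training scripts.
from pathlib import Path

DATA_ROOT = Path("./data")  # adjust to your dataset path

expected = [
    "THINGS-EEG1",    # https://openneuro.org/datasets/ds003825
    "THINGS-EEG2",    # https://osf.io/3jk45/
    "THINGS-MEG",     # https://openneuro.org/datasets/ds004212
    "THINGS-fMRI",    # https://openneuro.org/datasets/ds004192
    "THINGS-Images",  # https://osf.io/rdxy2
]

for name in expected:
    path = DATA_ROOT / name
    print(f"{'ok' if path.exists() else 'MISSING':7s} {path}")
```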
## Quick Training and Test

#### 1. Visual Retrieval
We provide a script to train the modality encoders with ``joint subject training`` on the *THINGS-EEG2* dataset. Please modify your dataset path and run:
```bash
cd Retrieval/
python retrieval_joint_train_medformer.py --logger True --gpu cuda:0 --output_dir ./outputs/contrast
```
Additionally, you can replicate the results for other modalities (e.g. MEG, fMRI) by running:
```bash
cd Retrieval/
python retrieval_joint_train_MEG_rerank_medformer.py --logger True --gpu cuda:0 --output_dir ./outputs/contrast
```

We provide a notebook to evaluate the trained models:
```bash
cd eval/
FLORA_inference.ipynb
```
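Conceptually, retrieval evaluation ranks the candidate test images by the similarity between their CLIP embeddings and the embedding decoded from the neural data, then reports top-k accuracy. The sketch below shows that metric in isolation; it is a simplified illustration (embedding dimension and variable names are assumptions), not the notebook's exact code:

```python
import torch
import torch.nn.functional as F

def topk_retrieval_accuracy(brain_emb, image_emb, k=5):
    """Fraction of trials whose paired image is among the k nearest candidates
    by cosine similarity. brain_emb and image_emb are (N, D) and row-aligned."""
    brain_emb = F.normalize(brain_emb, dim=-1)
    image_emb = F.normalize(image_emb, dim=-1)
    sims = brain_emb @ image_emb.t()                      # (N, N) similarity matrix
    topk = sims.topk(k, dim=-1).indices                   # (N, k) candidate indices
    targets = torch.arange(len(brain_emb)).unsqueeze(1)   # ground-truth index per row
    return (topk == targets).any(dim=-1).float().mean().item()

# Toy example with random embeddings (200 test images, 1024-d).
brain = torch.randn(200, 1024)
images = torch.randn(200, 1024)
print("top-5 accuracy:", topk_retrieval_accuracy(brain, images, k=5))
```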
#### 2. Visual Reconstruction
We provide quick training and inference scripts for the ``high-level and low-level pipelines`` of visual reconstruction. Please modify your dataset path and run zero-shot inference on the test set:
```bash
# Train and obtain multimodal neural embeddings aligned with CLIP embeddings:
python train_unified_encoder_highlevel_diffprior.py --modalities ['eeg', 'meg', 'fmri'] --gpu cuda:0 --output_dir ./outputs/contrast
```
```bash
# Reconstruct images by assigning modalities and subjects:
python FLORA_inference_reconst.py
```
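For orientation, the high-level pipeline first trains a prior that maps the unified neural embeddings into the CLIP image-embedding space, and the reconstruction script then conditions an image generator on those predicted embeddings. The sketch below replaces the actual diffusion prior with a plain MLP regressor purely to show the data flow; all names, dimensions, and the MSE objective are simplifying assumptions rather than the BrainFLORA code:

```python
import torch
import torch.nn as nn

# Simplified stand-in for the prior: regress CLIP image embeddings (1024-d assumed)
# from unified neural embeddings. The real pipeline uses a diffusion prior instead.
prior = nn.Sequential(
    nn.Linear(1024, 2048), nn.GELU(),
    nn.Linear(2048, 1024),
)
optimizer = torch.optim.AdamW(prior.parameters(), lr=1e-4)

neural_emb = torch.randn(32, 1024)   # embeddings from the unified encoder (toy data)
clip_target = torch.randn(32, 1024)  # CLIP embeddings of the seen images (toy data)

for step in range(10):               # toy training loop
    pred = prior(neural_emb)
    loss = nn.functional.mse_loss(pred, clip_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The predicted CLIP-space embeddings would then condition the image generator
# during reconstruction (the inference step shown above).
print("final loss:", loss.item())
```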
#### 3. Visual Captioning

We provide scripts for visual caption generation.
```bash
# step 1: train feature adapter
python train_unified_encoder_highlevel_diffprior.py --modalities ['eeg', 'meg', 'fmri'] --gpu cuda:0 --output_dir ./outputs/contrast

# step 2: get caption from prior latent
FLORA_inference_caption.ipynb
```
## Citations
If you find our work useful, please consider citing:

```bibtex
@article{li2025brain,
  title={BrainFLORA: Uncovering Brain Concept Representation via Multimodal Neural Embeddings},
  author={Li, Dongyang and Qin, Haoyang and Wu, Mingyang and Wei, Chen and Liu, Quanying},
  journal={arXiv preprint arXiv:2507.09747},
  year={2025},
  url={https://arxiv.org/abs/2507.09747}
}
@article{li2024visual,
  title={Visual Decoding and Reconstruction via EEG Embeddings with Guided Diffusion},
  author={Li, Dongyang and Wei, Chen and Li, Shiying and Zou, Jiachen and Liu, Quanying},
  journal={Advances in Neural Information Processing Systems},
  volume={37},
  pages={102822--102864},
  year={2024}
}
@inproceedings{wei2024cocog,
  title={CoCoG: controllable visual stimuli generation based on human concept representations},
  author={Wei, Chen and Zou, Jiachen and Heinke, Dietmar and Liu, Quanying},
  booktitle={Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence},
  pages={3178--3186},
  year={2024}
}
```
## Acknowledgements
1. Thanks to Y. Song et al. for their contributions to dataset preprocessing and the neural network architecture; we refer to their work:<br/>"[Decoding Natural Images from EEG for Object Recognition](https://arxiv.org/pdf/2308.13234.pdf)".<br/> Yonghao Song, Bingchuan Liu, Xiang Li, Nanlin Shi, Yijun Wang, and Xiaorong Gao.

2. We also thank the authors of [SDRecon](https://github.com/yu-takagi/StableDiffusionReconstruction) for providing their code and results. Some parts of the training scripts are based on [MindEye](https://medarc-ai.github.io/mindeye/) and [MindEye2](https://github.com/MedARC-AI/MindEyeV2). Thanks for these awesome research works.

3. The THINGS-EEG2 dataset used in the paper is described in:<br/>"[A large and rich EEG dataset for modeling human visual object recognition](https://www.sciencedirect.com/science/article/pii/S1053811922008758?via%3Dihub)".<br/> Alessandro T. Gifford, Kshitij Dwivedi, Gemma Roig, Radoslaw M. Cichy.

4. The THINGS-MEG and THINGS-fMRI datasets are described in:<br/>"[THINGS-data, a multimodal collection of large-scale datasets for investigating object representations in human brain and behavior](https://elifesciences.org/articles/82580.pdf)".<br/> Hebart, Martin N., Oliver Contier, Lina Teichmann, Adam H. Rockter, Charles Y. Zheng, Alexis Kidder, Anna Corriveau, Maryam Vaziri-Pashkam, and Chris I. Baker.

Contact [Dongyang Li](https://github.com/dongyangli-del) if you have any questions or suggestions.