Update README.md
#3
by OlatunjiS - opened
README.md
CHANGED
---
license: apache-2.0
task_categories:
- multimodal
- other
language:
- en
tags:
- neuroscience
- fMRI
- EEG
- ECG
- brain-imaging
- multimodal
- science
- huggingscience
size_categories:
- 100K<n<1M
---

# Dataset Card for CineBrain

[arXiv:2503.06940](https://arxiv.org/abs/2503.06940)

## Dataset Description

### Dataset Summary

CineBrain is a large-scale multimodal brain dataset that includes fMRI, EEG, and ECG recordings collected while participants watched episodes of The Big Bang Theory. Each participant viewed 20 episodes, and for each episode only the first 18 minutes were used; in total, each participant watched approximately 6 hours of video. The fMRI was acquired with a TR of 0.8 seconds, and the EEG was recorded at 1000 Hz.

### Supported Tasks and Leaderboards

- **Multimodal Brain Analysis**: Analyze relationships between visual/auditory stimuli and brain responses
- **Neural Decoding**: Decode brain states from neuroimaging data during naturalistic viewing
- **Cross-modal Learning**: Learn mappings between different brain imaging modalities
- **Temporal Dynamics**: Study temporal patterns in brain activity during narrative processing

### Languages

The dataset contains brain recordings during English audiovisual narrative processing (The Big Bang Theory episodes).

## Dataset Structure

### Repository Structure

- **videos.tar**: Contains the video stimuli viewed by participants. Subjects 1, 2, and 6 watched the first 20 episodes, while subjects 3, 4, and 5 watched the first 10 and the last 10 episodes.
- **sub-00xx**: Each folder corresponds to a specific participant and includes their raw and processed fMRI data, as well as the processed EEG data.
- **captions-qwen-2.5-vl-7b.json**: Video captions generated using the Qwen-2.5-VL-7B model
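Since the modality data ship as `.tar` archives inside the per-subject folders, a minimal extraction sketch may be useful (the root path is a placeholder for wherever you downloaded the repository; on Python 3.12+ you may want to pass `filter="data"` to `extractall`):

```python
import tarfile
from pathlib import Path

def extract_all(root: str) -> list[str]:
    """Extract every .tar archive found under `root`, each into a
    sibling directory named after the archive (e.g. fMRI_raw_data/)."""
    extracted = []
    for tar_path in sorted(Path(root).rglob("*.tar")):
        out_dir = tar_path.with_suffix("")  # strip .tar -> output folder
        out_dir.mkdir(parents=True, exist_ok=True)
        with tarfile.open(tar_path) as tf:
            tf.extractall(out_dir)
        extracted.append(tar_path.name)
    return extracted
```

This walks the downloaded tree recursively, so it handles both the top-level `videos.tar` and the archives inside each `sub-00xx` folder.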
### Data Instances

Each participant folder contains:
- **fMRI_raw_data.tar**: Raw functional MRI data
- **fMRI_preprocessed_data.tar**: Preprocessed functional MRI data
- **EEG_preprocessed_data.tar**: Preprocessed EEG recordings

### Data Fields

- **fMRI data**: 4D neuroimaging data (x, y, z, time) with TR = 0.8 s
- **EEG data**: Multi-channel EEG recordings at a 1000 Hz sampling rate
- **ECG data**: Electrocardiogram recordings for physiological monitoring
- **Video stimuli**: 20 episodes of The Big Bang Theory (first 18 minutes each)

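The sampling parameters above imply a fixed correspondence between the modalities. A small arithmetic sketch (integer math in milliseconds, derived only from TR = 0.8 s, EEG at 1000 Hz, and 18 minutes per episode):

```python
# Timing arithmetic implied by the acquisition parameters above.
TR_MS = 800                   # fMRI repetition time (0.8 s), in milliseconds
EEG_HZ = 1000                 # EEG sampling rate
EPISODE_MS = 18 * 60 * 1000   # the first 18 minutes of each episode were used

volumes_per_episode = EPISODE_MS // TR_MS               # fMRI volumes per episode
eeg_samples_per_episode = EPISODE_MS * EEG_HZ // 1000   # EEG samples per episode
eeg_samples_per_tr = TR_MS * EEG_HZ // 1000             # EEG samples per fMRI volume

def eeg_window_for_volume(k: int) -> tuple[int, int]:
    """Half-open EEG sample range [start, stop) recorded during fMRI volume k."""
    return k * eeg_samples_per_tr, (k + 1) * eeg_samples_per_tr

print(volumes_per_episode, eeg_samples_per_episode, eeg_samples_per_tr)
# 1350 1080000 800
```

So each episode spans 1350 fMRI volumes and about 1.08 million EEG samples, with 800 EEG samples acquired per volume; actual alignment in the released files may of course differ, so treat this as a back-of-the-envelope guide.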
### Data Splits

The dataset includes 6 subjects with varying episode coverage:
- **Subjects 1, 2, 6**: episodes 1-20 (full coverage)
- **Subjects 3, 4, 5**: the first 10 and the last 10 episodes (split coverage)

Total: ~36 hours of brain recordings across all subjects

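The "~36 hours" figure follows directly from the design described above (a sanity-check sketch assuming 20 episodes of 18 usable minutes per subject):

```python
# Back-of-the-envelope check for the total recording time quoted above.
SUBJECTS = 6
EPISODES_PER_SUBJECT = 20
MINUTES_PER_EPISODE = 18

hours_per_subject = EPISODES_PER_SUBJECT * MINUTES_PER_EPISODE / 60
total_hours = SUBJECTS * hours_per_subject
print(hours_per_subject, total_hours)  # 6.0 36.0
```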
## Dataset Creation

### Curation Rationale

This dataset was created to support neuroscience research on naturalistic audiovisual narrative processing. It provides high-quality multimodal brain data under ecological viewing conditions, enabling studies of:
- Neural mechanisms of narrative comprehension
- Cross-modal sensory integration
- Individual differences in brain responses to media content
- Temporal dynamics of attention and engagement

### Source Data

Brain recordings were collected from healthy participants while they watched sitcom episodes in a controlled laboratory environment. The use of popular media content ensures ecological validity while maintaining experimental control.

#### Initial Data Collection and Normalization

- **fMRI**: Collected with high temporal resolution (TR = 0.8 s) for detailed temporal dynamics
- **EEG**: Recorded at 1000 Hz for precise temporal resolution of neural events
- **Preprocessing**: Standard neuroimaging preprocessing pipelines applied
- **Quality control**: Data quality checks and artifact removal procedures applied

### Personal and Sensitive Information

⚠️ **Neuroimaging Data**: This dataset contains brain imaging data from human subjects. While anonymized, users should follow appropriate ethical guidelines and data use agreements when working with neuroimaging data.

## Important Notes

- **Data Release**: All data has been released and is available for download
- **Cross-dataset Correspondence**: Subjects 1, 2, 3, and 4 in this dataset correspond to Subjects 6, 8, 1, and 4 in the [fMRI-Shape](https://huggingface.co/datasets/Fudan-fMRI/fMRI-Shape) and [fMRI-Objaverse](https://huggingface.co/datasets/Fudan-fMRI/fMRI-Objaverse) datasets

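For convenience when cross-referencing those datasets, the correspondence above can be written as a lookup table (a small helper sketch; only the mapping itself comes from the note above):

```python
# Subject correspondence between CineBrain and the fMRI-Shape /
# fMRI-Objaverse datasets, per the note above.
CINEBRAIN_TO_FMRI_SHAPE = {1: 6, 2: 8, 3: 1, 4: 4}

def shape_subject(cinebrain_subject: int) -> int:
    """Map a CineBrain subject ID to its fMRI-Shape/fMRI-Objaverse ID.

    Raises KeyError for subjects 5 and 6, which have no stated counterpart.
    """
    return CINEBRAIN_TO_FMRI_SHAPE[cinebrain_subject]

print(shape_subject(2))  # 8
```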
## Considerations for Using the Data

### Social Impact of Dataset

This dataset enables fundamental neuroscience research that could lead to a better understanding of:
- How the brain processes complex narrative content
- Individual differences in media consumption and comprehension
- Development of brain-computer interfaces for communication
- Improved treatments for attention and comprehension disorders

### Discussion of Biases

- **Demographic bias**: The sample may not represent global population diversity
- **Cultural bias**: The content is an English-language Western sitcom
- **Selection bias**: Participants were likely university-affiliated volunteers
- **Temporal bias**: Data collected at specific time points may not generalize

## Additional Information

### Dataset Curators

Jianxiong Gao, Yichang Liu, Baofeng Yang, Jianfeng Feng, Yanwei Fu

### Licensing Information

This dataset is released under the Apache-2.0 license.

### Citation Information

If you find our paper useful for your research and applications, please cite using this BibTeX:

```bibtex
@misc{gao2025cinebrain,
      title={CineBrain: A Large-Scale Multi-Modal Brain Dataset During Naturalistic Audiovisual Narrative Processing},
      author={Jianxiong Gao and Yichang Liu and Baofeng Yang and Jianfeng Feng and Yanwei Fu},
      year={2025},
      eprint={2503.06940},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2503.06940},
}
```

### Contributions

Thanks to the neuroscience research community and the original authors for creating and sharing this valuable dataset for advancing our understanding of brain function during naturalistic conditions.