Darioli committed on
Commit 97158a8 · verified · 1 Parent(s): 5471b6d

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +154 -3
README.md CHANGED
@@ -1,3 +1,154 @@
1
- ---
2
- license: cc-by-nc-4.0
3
- ---
1
+ # MICrONS Functional Activity Dataset & Reader
2
+
3
+ This repository contains a curated portion of the MICrONS (Machine Intelligence from Cortical Networks) dataset: functional calcium imaging data from the visual cortex of mice in response to visual stimuli, including natural movie clips (Clip) and parametric videos (Monet2, Trippy).
4
+ Videos have been downsampled to match the neural activity scan frequency; for each scan time, the selected frame is the one appearing at least 66 ms before that scan.
5
+ The data is stored in an indexed HDF5 file, enabling rapid cross-session analysis by either stimulus identity or brain anatomy.
6
+
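The frame-selection rule above can be sketched in a few lines. All timestamps below are hypothetical stand-ins for illustration; only the 66 ms offset comes from this dataset's description:

```python
import numpy as np

# Hypothetical time bases (seconds): video frames at 30 Hz, scans at ~6.3 Hz
frame_times = np.arange(0, 10, 1 / 30)   # time each video frame appears
scan_times = np.arange(0.5, 10, 0.158)   # time of each calcium scan

# For each scan, pick the latest frame shown at least 66 ms earlier
offset = 0.066
frame_idx = np.searchsorted(frame_times, scan_times - offset, side="right") - 1
frame_idx = np.clip(frame_idx, 0, len(frame_times) - 1)
```

The same indexing works for any pair of monotonically increasing time bases, so it generalizes to other scan rates.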
7
+ ## 📊 Dataset Overview
8
+
9
+ - Sessions: 14 sessions of registered neural activity.
10
+ - Stimuli: Three categories of videos (Clip, Monet2, Trippy) identified by unique condition hashes.
11
+ - Neural Data: Calcium traces (responses) from thousands of neurons across multiple visual areas (V1, AL, LM, RL).
12
+ - Behavioral Data: Synchronized treadmill speed and pupil radius.
13
+ - Eye Tracking: Pupil center coordinates (x, y) for gaze analysis.
14
+
15
+ ## ⚙️ Setup & Installation
16
+
17
+ There are two ways to access the contents of this repository.
18
+
19
+ ### 1. Clone the Repository
20
+ Since the dataset is stored as a large HDF5 file (.h5), you must have Git LFS installed.
21
+
22
+ ```bash
23
+ # Install git-lfs if you haven't already
24
+ git lfs install
25
+
26
+ # Clone the repository
27
+ git clone https://huggingface.co/datasets/NeuroBLab/MICrONS
28
+ cd MICrONS
29
+
30
+ # Install required packages
31
+
32
+ pip install -r requirements.txt
33
+ ```
34
+ ### 2. Programmatic Access (Python)
35
+
36
+ If you don't want to clone the full repository, you can download the reader and the data file directly from a Python script using the huggingface_hub library.
37
+
38
+ **1. Install the library**
39
+
40
+ ```bash
41
+ pip install huggingface_hub h5py numpy
42
+ ```
43
+
44
+ **2. Download and Run**
45
+
46
+ ```python
47
+ import sys
48
+ import importlib.util
49
+ from huggingface_hub import hf_hub_download
50
+
51
+ # 1. Define Repository Info
52
+ REPO_ID = "NeuroBLab/MICrONS"
53
+ DATA_FILENAME = "microns.h5"
54
+ READER_FILENAME = "reader.py"
55
+
56
+ print("Downloading files from Hugging Face...")
57
+
58
+ # 2. Download the Reader script
59
+ reader_path = hf_hub_download(repo_id=REPO_ID, filename=READER_FILENAME)
60
+
61
+ # 3. Download the HDF5 Data file (this handles Git LFS automatically)
62
+ data_path = hf_hub_download(repo_id=REPO_ID, filename=DATA_FILENAME)
63
+
64
+ # 4. Dynamically import the MicronsReader class from the downloaded file
65
+ spec = importlib.util.spec_from_file_location("reader", reader_path)
66
+ reader_module = importlib.util.module_from_spec(spec)
67
+ sys.modules["reader"] = reader_module
68
+ spec.loader.exec_module(reader_module)
69
+
70
+ from reader import MicronsReader
71
+
72
+ # 5. Use the reader
73
+ with MicronsReader(data_path) as reader:
74
+ print("File downloaded and reader initialized!")
75
+ reader.print_structure(max_items=1)
76
+ ```
77
+
78
+ ## 🛠️ Reader API Demo
79
+
80
+ The MicronsReader class is designed to handle the hierarchical structure of the HDF5 file transparently, including internal hash encoding and SoftLink navigation.
81
+
82
+ ### 1. Initialize the Reader
83
+
84
+ The best way to use the reader is via a context manager to ensure the HDF5 file handle is closed properly.
85
+
86
+ ```python
87
+ from reader import MicronsReader
88
+
89
+ path = "microns.h5"
90
+
91
+ with MicronsReader(path) as reader:
92
+ # Your analysis code here
93
+ pass
94
+ ```
95
+
96
+ ### 2. Overview of Dataset Structure
97
+
98
+ To see the internal organization of the file without loading the actual data into RAM:
99
+
100
+ ```python
101
+ with MicronsReader(path) as reader:
102
+ reader.print_structure(max_items=3)
103
+ ```
104
+
105
+ ### 3. Exploring Stimuli and Sessions
106
+
107
+ You can query the database by session, stimulus type, or brain area.
108
+
109
+ ```python
110
+ with MicronsReader(path) as reader:
111
+ # List available stimulus types
112
+ types = reader.get_video_types() # ['Clip', 'Monet2', 'Trippy']
113
+
114
+ # Get all unique hashes for a specific type
115
+ monet_hashes = reader.get_hashes_by_type('Monet2')
116
+
117
+ # Find which videos were shown in a specific session
118
+ session_hashes = reader.get_hashes_by_session('4_7', return_unique=True)
119
+
120
+ # Check which brain areas are recorded in a session
121
+ areas = reader.get_available_brain_areas('4_7') # ['V1', 'AL', 'LM', 'RL']
122
+ ```
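These queries can be combined. For example, intersecting the hash lists finds which Monet2 videos were shown in a given session; the hash values below are hypothetical placeholders standing in for the lists the reader returns:

```python
# Hypothetical hash values for illustration only
monet_hashes = ["hashA", "hashB", "hashC"]      # e.g. from get_hashes_by_type('Monet2')
session_hashes = ["hashB", "hashD", "hashA"]    # e.g. from get_hashes_by_session('4_7')

# Monet2 videos shown in this session, preserving session order
monet_in_session = [h for h in session_hashes if h in set(monet_hashes)]
```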
123
+
124
+ ### 4. Loading Full Data (Stimulus + Responses)
125
+
126
+ The get_full_data_by_hash method is the most powerful tool in the library. It aggregates the video pixels and every recorded neural/behavioral repeat across all 14 sessions.
127
+
128
+ ```python
129
+ target_hash = "0JcYLY6eaQxNgD0AqyHf"
130
+
131
+ with MicronsReader(path) as reader:
132
+ # Load all data for this video, filtering for V1 neurons only
133
+ data = reader.get_full_data_by_hash(target_hash, brain_area='V1')
134
+
135
+ if data:
136
+ print(f"Video Shape: {data['clip'].shape}") # (Frames, H, W)
137
+
138
+ for trial in data['trials']:
139
+ print(f"Session: {trial['session']}")
140
+ print(f"Neural Responses: {trial['responses'].shape}") # (Neurons, Frames)
141
+ print(f"Running Speed: {trial['behavior'][0, :]}")
142
+ ```
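The per-trial layout returned above lends itself to trial averaging. A minimal sketch, assuming the shapes documented above and that the averaged repeats share the same neuron set (e.g. repeats within one session); the arrays are random stand-ins for real responses:

```python
import numpy as np

# Hypothetical stand-in for data['trials'], repeats from one session
trials = [
    {"session": "4_7", "responses": np.random.rand(120, 90)},  # (Neurons, Frames)
    {"session": "4_7", "responses": np.random.rand(120, 90)},
]

# Average response across repeats, per neuron and frame
stacked = np.stack([t["responses"] for t in trials])   # (Trials, Neurons, Frames)
mean_response = stacked.mean(axis=0)                   # (Neurons, Frames)
```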
143
+
144
+ ## 📂 Internal HDF5 Structure
145
+
146
+ The database minimizes redundancy by storing each video clip once and linking it to its multiple trials across sessions.
147
+ - `/videos/`: Contains the raw video arrays and links to their session instances.
148
+ - `/sessions/`: The "source of truth" for neural activity, organized by session ID and trial index.
149
+ - `/types/`: An index group for fast lookup of videos by category (Clip, Monet2, etc.).
150
+ - `/brain_areas/`: An index group linking brain regions (V1, LM...) to the sessions where they were recorded.
151
+
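The top-level layout can be inspected directly with h5py. This sketch only lists group names and makes no assumptions about the datasets inside them:

```python
import h5py

def list_top_level(path):
    """Return sorted names of the top-level HDF5 groups
    (expected here: brain_areas, sessions, types, videos)."""
    with h5py.File(path, "r") as f:
        return sorted(f.keys())

# e.g. list_top_level("microns.h5")
```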
152
+ ## 📝 Citation
153
+
154
+ If you use this dataset or reader in your research, please cite the original MICrONS Phase 3 release and this repository.