---
license: cc-by-4.0
task_categories:
- feature-extraction
- time-series-forecasting
viewer: false
language:
- en
tags:
- neuroscience
- calcium-imaging
- biology
- multimodal
pretty_name: MICrONS Functional Activity Dataset (14 Sessions)
size_categories:
- 1K<n<10K
---
# MICrONS Functional Activity Dataset & Reader
This repository contains a curated portion of the MICrONS (Machine Intelligence from Cortical Networks) dataset. It consists of functional calcium imaging data from mouse visual cortex recorded in response to various visual stimuli: natural clips (`Clip`) and parametric videos (`Monet2`, `Trippy`).

Videos have been downsampled to match the neural activity scan frequency; for each scan time, the selected frame is the one that appeared at least 66 ms before that scan time.

The data is organized into an indexed HDF5 format, allowing rapid cross-session analysis based on either stimulus identity or brain anatomy.
## 📊 Dataset Overview
- Sessions: 14 sessions of registered neural activity.
- Stimuli: Three categories of videos (Clip, Monet2, Trippy) identified by unique condition hashes.
- Neural Data: Calcium traces (responses) from thousands of neurons across multiple visual areas (V1, AL, LM, RL).
- Behavioral Data: Synchronized treadmill speed and pupil radius.
- Eye Tracking: Pupil center coordinates (x, y) for gaze analysis.
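As a small illustration of the behavioral channels listed above, the per-trial `behavior` array stores two synchronized time series. This is a minimal sketch with made-up values; the channel order (speed first) follows the usage example later in this document, and the frame count is illustrative.

```python
import numpy as np

# Hypothetical behavior array for one trial: shape (2, F_trial),
# row 0 = treadmill speed, row 1 = pupil radius (order assumed).
F_trial = 100
behavior = np.vstack([
    np.linspace(0.0, 5.0, F_trial),   # mock treadmill speed
    np.full(F_trial, 0.3),            # mock pupil radius
])

speed, pupil_radius = behavior[0], behavior[1]
print(speed.shape, pupil_radius.shape)  # (100,) (100,)
```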
```
root/
├── 📂 BRAIN_AREAS/                  # Anatomical Index
│   └── 📂 <area_name>/              # e.g., V1, AL, LM, RL
│       └── 🔗 <session_id> -> /sessions/<session_id>
│
├── 📂 SESSIONS/                     # Primary Data Storage
│   └── 📂 <session_id>/             # e.g., 4_7, 5_6
│       ├── 📂 META/                 # Session-wide Metadata
│       │   ├── 📂 AREA_INDICES/     # Pre-calculated neuron masks
│       │   │   └── 📄 <area_name>        [Dataset: (N_area_neurons,)]
│       │   ├── 📄 brain_areas            [Dataset: (N_total_neurons,)]
│       │   ├── 📄 coordinates            [Dataset: (N_total_neurons, 3)]
│       │   ├── 📄 unit_ids               [Dataset: (N_total_neurons,)]
│       │   ├── 📄 condition_hashes       [Dataset: (N_trials,)]
│       │   └── (Attr) fps                [Float: Sampling rate]
│       └── 📂 TRIALS/               # Individual trial folders
│           └── 📂 <trial_idx>/      # Chronological trial index
│               ├── 📄 responses          [Dataset: (N_neurons, F_trial)]
│               ├── 📄 behavior           [Dataset: (2, F_trial)]
│               ├── 📄 pupil_center       [Dataset: (2, F_trial)]
│               └── (Attr) condition_hash [String: Reference to video]
│
├── 📂 TYPES/                        # Stimulus Category Index
│   └── 📂 <stim_type>/              # e.g., Clip, Monet2, Trippy
│       └── 🔗 <encoded_hash> -> /videos/<encoded_hash>
│
└── 📂 VIDEOS/                       # Stimulus Library (saved once)
    └── 📂 <encoded_hash>/           # Encoded version of condition_hash
        ├── 📄 clip                  [Dataset: (Frames, H, W)]
        ├── 📂 INSTANCES/            # Reverse-index to trials
        │   └── 🔗 <session_id>_tr<trial_idx> -> /sessions/<session_id>/trials/<trial_idx>
        ├── (Attr) original_hash     [String: The raw hash]
        └── (Attr) type              [String: Stimulus type]
```
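The layout above can also be navigated with plain `h5py`, without the bundled reader. The sketch below builds a tiny mock file mirroring the hierarchy and then reads it back; the session id, group casing (lowercase, as in the link targets above), and all values are stand-ins, not real dataset contents.

```python
import os
import tempfile

import h5py
import numpy as np

# Build a tiny mock file that mirrors the documented layout
# (mock session "4_7", one trial, 3 neurons, 10 frames).
path = os.path.join(tempfile.mkdtemp(), "mock_microns.h5")
with h5py.File(path, "w") as f:
    meta = f.create_group("sessions/4_7/meta")
    meta.attrs["fps"] = 8.0
    trial = f.create_group("sessions/4_7/trials/0")
    trial.create_dataset("responses", data=np.zeros((3, 10)))
    trial.attrs["condition_hash"] = "mockhash"

# Navigate the raw hierarchy directly
with h5py.File(path, "r") as f:
    fps = f["sessions/4_7/meta"].attrs["fps"]
    responses = f["sessions/4_7/trials/0/responses"][:]

print(fps, responses.shape)  # 8.0 (3, 10)
```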
## ⚙️ Setup & Installation
There are two ways to access the contents of this repository.
### 1. Clone the Repository
Since the dataset is stored as a large HDF5 file (`.h5`), you must have Git LFS installed.
```bash
# Install git-lfs if you haven't already
git lfs install

# Clone the repository
git clone https://huggingface.co/datasets/NeuroBLab/MICrONS
cd MICrONS

# Install required packages
pip install -r requirements.txt
```
### 2. Programmatic Access (Python)
If you don't want to clone the full repository, you can download the reader and the data file directly from your Python script using the `huggingface_hub` library.
**1. Install the libraries**

```bash
pip install huggingface_hub h5py numpy
```
**2. Download and Run**
```python
import sys
import importlib.util

from huggingface_hub import hf_hub_download

# 1. Define repository info
REPO_ID = "NeuroBLab/MICrONS"
DATA_FILENAME = "microns.h5"
READER_FILENAME = "reader.py"

print("Downloading files from Hugging Face...")

# 2. Download the reader script (repo_type="dataset" is required for dataset repos)
reader_path = hf_hub_download(repo_id=REPO_ID, filename=READER_FILENAME, repo_type="dataset")

# 3. Download the HDF5 data file (this handles Git LFS automatically)
data_path = hf_hub_download(repo_id=REPO_ID, filename=DATA_FILENAME, repo_type="dataset")

# 4. Dynamically import the MicronsReader class from the downloaded file
spec = importlib.util.spec_from_file_location("reader", reader_path)
reader_module = importlib.util.module_from_spec(spec)
sys.modules["reader"] = reader_module
spec.loader.exec_module(reader_module)

from reader import MicronsReader

# 5. Use the reader
with MicronsReader(data_path) as reader:
    print("File downloaded and reader initialized!")
    reader.print_structure(max_items=1)
```
## 🛠️ Reader API Demo
The `MicronsReader` class handles the hierarchical structure of the HDF5 file transparently, including internal hash encoding and SoftLink navigation.
### 1. Initialize the Reader
The best way to use the reader is via a context manager to ensure the HDF5 file handle is closed properly.
```python
from reader import MicronsReader

path = "microns.h5"

with MicronsReader(path) as reader:
    # Your analysis code here
    pass
```
### 2. Overview of Dataset Structure
To see the internal organization of the file without loading the actual data into RAM:
```python
with MicronsReader(path) as reader:
    reader.print_structure(max_items=3)
```
### 3. Exploring Stimuli and Sessions
You can query the database by session, stimulus type, or brain area.
```python
with MicronsReader(path) as reader:
    # List available stimulus types
    types = reader.get_video_types()  # ['Clip', 'Monet2', 'Trippy']

    # Get all unique hashes for a specific type
    monet_hashes = reader.get_hashes_by_type('Monet2')

    # Find which videos were shown in a specific session
    session_hashes = reader.get_hashes_by_session('4_7', return_unique=True)

    # Check which brain areas were recorded in a session
    areas = reader.get_available_brain_areas('4_7')  # ['V1', 'AL', 'LM', 'RL']
```
### 4. Loading Full Data (Stimulus + Responses)
The get_full_data_by_hash method is the most powerful tool in the library. It aggregates the video pixels and every recorded neural/behavioral repeat across all 14 sessions.
```python
target_hash = "0JcYLY6eaQxNgD0AqyHf"

with MicronsReader(path) as reader:
    # Load all data for this video, filtering for V1 neurons only
    data = reader.get_full_data_by_hash(target_hash, brain_area='V1')

    if data:
        print(f"Video Shape: {data['clip'].shape}")  # (Frames, H, W)
        for trial in data['trials']:
            print(f"Session: {trial['session']}")
            print(f"Neural Responses: {trial['responses'].shape}")  # (Neurons, Frames)
            print(f"Running Speed: {trial['behavior'][0, :]}")
```
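Once repeats of a video are loaded, a common next step is to stack them into a single `(repeats, neurons, frames)` array. Repeats can differ by a few frames, so one approach is to trim to the shortest. This is a hedged, NumPy-only sketch with synthetic arrays standing in for `trial['responses']`; it assumes repeats come from one session, since neuron counts differ across sessions.

```python
import numpy as np

# Synthetic repeats of one video within a single session: same neuron
# count, slightly different frame counts (shapes are made up).
responses_per_trial = [
    np.random.rand(120, 300),  # (neurons, frames), repeat 1
    np.random.rand(120, 298),  # repeat 2 ends two frames early
    np.random.rand(120, 301),  # repeat 3
]

# Trim every repeat to the shortest frame count, then stack.
min_frames = min(r.shape[1] for r in responses_per_trial)
stacked = np.stack([r[:, :min_frames] for r in responses_per_trial])
print(stacked.shape)  # (3, 120, 298)
```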
## 📂 Internal HDF5 Structure
The database is structured to minimize redundancy by storing the video "Clip" once and linking it to multiple "Trials" across sessions.
- `/videos/`: Contains the raw video arrays and links to their session instances.
- `/sessions/`: The "source of truth" for neural activity, organized by session ID and trial index.
- `/types/`: An index group for fast lookup of videos by category (`Clip`, `Monet2`, etc.).
- `/brain_areas/`: An index group linking brain regions (`V1`, `LM`, ...) to the sessions where they were recorded.
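The store-once-and-link pattern can be reproduced with `h5py` soft links. In this minimal sketch (hash, stimulus type, and clip shape are all made-up), the clip lives only under `/videos`, and an index entry under `/types` resolves to the same data:

```python
import os
import tempfile

import h5py
import numpy as np

path = os.path.join(tempfile.mkdtemp(), "links.h5")

# Store the clip once under /videos, then index it from /types via a
# soft link instead of duplicating the pixel data.
with h5py.File(path, "w") as f:
    f.create_dataset("videos/abc123/clip",
                     data=np.zeros((60, 36, 64), dtype=np.uint8))
    grp = f.require_group("types/Monet2")
    grp["abc123"] = h5py.SoftLink("/videos/abc123")

# Reading through the link resolves to the single stored clip.
with h5py.File(path, "r") as f:
    clip = f["types/Monet2/abc123/clip"][:]

print(clip.shape)  # (60, 36, 64)
```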
## 📝 Citation
If you use this dataset or reader in your research, please cite the original MICrONS Phase 3 release and this repository.