---
configs:
- config_name: worm_data_short
  data_files:
  - split: train
    path: worm_data_short.parquet
language:
- en
license: mit
---
## Citation

## Dataset Processing
`worm_data_short.parquet` is generated by aggregating information from 12 neural activity source datasets. Each source dataset is processed as follows:
- Loading raw data in various formats (MATLAB files, JSON files, etc.).
- Extracting relevant data fields (neuron IDs, traces, time vectors, etc.).
- Cleaning the data.
- Resampling the data to a common time resolution (if requested).
- Smoothing the data using one of several methods (if requested).
- Normalizing the data.
- Creating dictionaries that map neuron indices to neuron IDs and vice versa.
- Saving the preprocessed data in a standardized format.
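The resample / smooth / normalize steps above can be sketched as follows. This is an illustrative NumPy example on a synthetic trace; the function names are hypothetical, and only the default values echo the hyperparameters listed under Dataset Config, not the repository's actual code.

```python
import numpy as np


def resample(time, trace, dt=0.333):
    """Linearly interpolate an irregularly sampled trace onto a uniform grid."""
    new_time = np.arange(time[0], time[-1], dt)
    return new_time, np.interp(new_time, time, trace)


def moving_average(trace, window_size=15):
    """Smooth a trace with a centered moving-average filter."""
    kernel = np.ones(window_size) / window_size
    return np.convolve(trace, kernel, mode="same")


def standardize(trace):
    """Zero-mean, unit-variance normalization of a trace."""
    return (trace - trace.mean()) / trace.std()


# Synthetic, irregularly sampled stand-in for a calcium trace
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 60.0, size=300))
y = np.sin(t / 5.0) + 0.1 * rng.normal(size=t.size)

t_uniform, y_uniform = resample(t, y)
y_smooth = moving_average(y_uniform)
y_norm = standardize(y_smooth)
```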
## Dataset Config
This dataset was preprocessed with the following hyperparameters. To modify or reproduce the dataset with new settings, refer to the source code.
- `resample_dt`: 0.333 (time step for resampling)
- `interpolate`: "linear" (method used to fill missing data)
- `smooth`:
  - `method`: "moving" (smoothing algorithm: none, gaussian, exponential, moving)
  - `alpha`: 0.5 (exponential smoothing factor)
  - `sigma`: 5 (Gaussian kernel width)
  - `window_size`: 15 (window size for moving average)
- `norm_transform`: "standard" (type of normalization: standard or causal)
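For reference, these hyperparameters correspond to a config of roughly this shape. The nesting under `smooth` follows the grouping above; the exact file layout in the preprocessing repo may differ.

```yaml
resample_dt: 0.333          # time step for resampling
interpolate: "linear"       # method used to fill missing data
smooth:
  method: "moving"          # one of: none, gaussian, exponential, moving
  alpha: 0.5                # exponential smoothing factor
  sigma: 5                  # Gaussian kernel width
  window_size: 15           # window size for moving average
norm_transform: "standard"  # "standard" or "causal"
```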
*Figure: Compiled neural activity dataset from GCaMP calcium imaging of C. elegans from multiple experimental sources, standardized to a common sampling rate and organization format.*

## Example Usage
To use this dataset, you can load it with the `datasets` library or download it with the `huggingface_hub` library. See the accompanying Google Colab notebook for an example of downloading the dataset and plotting a few samples of calcium data.

## Original Data Files
We provide a Dropbox link to download the original data that we obtained from various sources, including publicly available datasets and unpublished data shared with us by researchers. The `raw_data_file` column in the dataset table references these files.
If you'd like to preprocess the data from scratch with different preprocessing settings or datasets, you can do so using the code in the `worm-data-preprocess` repo.