---
configs:
- config_name: worm_data_short
  data_files:
  - split: train
    path: "worm_data_short.parquet"
language:
- en
license: mit
---
**CITATION**
[Q. Simeon, L. Venâncio, M. A. Skuhersky, A. Nayebi, E. S. Boyden and G. R. Yang, "Scaling Properties for Artificial Neural Network Models of a Small Nervous System," SoutheastCon 2024, Atlanta, GA, USA, 2024, pp. 516-524, doi: 10.1109/SoutheastCon52093.2024.10500049.](https://ieeexplore.ieee.org/document/10500049)
<a href="https://github.com/qsimeon/worm-data-preprocess">
<img src="https://img.shields.io/badge/GitHub-100000?style=for-the-badge&logo=github&logoColor=white" />
</a>
**DATASET PROCESSING**
The file `worm_data_short.parquet` is generated by aggregating 12 source datasets of neural activity.
Each source dataset is processed as follows:
1. **Loading** raw data in various formats (MATLAB files, JSON files, etc.).
1. **Extracting** relevant data fields (neuron IDs, traces, time vectors, etc.).
1. **Cleaning** the data.
1. **Resampling** the data to a common time resolution (if requested).
1. **Smoothing** the data using one of several methods (if requested).
1. **Normalizing** the data.
1. **Creating** dictionaries that map neuron indices to neuron IDs and vice versa.
1. **Saving** the preprocessed data in a standardized format.
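The resample, smooth, and normalize steps above can be sketched as follows. This is an illustrative NumPy sketch with our own function and variable names, not the repo's actual API:

```python
import numpy as np

def preprocess_trace(t, y, resample_dt=0.333, window_size=15):
    """Resample, smooth, and normalize one calcium trace (illustrative)."""
    # Resample to a common time step via linear interpolation.
    t_new = np.arange(t[0], t[-1], resample_dt)
    y_new = np.interp(t_new, t, y)
    # Smooth with a moving average of the given window size.
    kernel = np.ones(window_size) / window_size
    y_smooth = np.convolve(y_new, kernel, mode="same")
    # Standard normalization: zero mean, unit variance.
    return t_new, (y_smooth - y_smooth.mean()) / y_smooth.std()

t = np.linspace(0, 10, 200)              # raw time vector (seconds)
t_res, trace = preprocess_trace(t, np.sin(t))
```

The actual pipeline additionally handles missing data, neuron-ID bookkeeping, and per-dataset quirks; see the source code for details.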
**DATASET CONFIG**
This dataset was preprocessed with the following hyperparameters. To modify or reproduce the dataset with new settings, refer to the [source code](https://github.com/qsimeon/worm-data-preprocess).
- `resample_dt`: `0.333` — Time step for resampling
- `interpolate`: `"linear"` — Method used to fill missing data
- `smooth`:
- `method`: `"moving"` — Smoothing algorithm (`none`, `gaussian`, `exponential`, `moving`)
- `alpha`: `0.5` — Exponential smoothing factor
- `sigma`: `5` — Gaussian kernel width
- `window_size`: `15` — Window size for moving average
- `norm_transform`: `"standard"` — Type of normalization (`standard` or `causal`)
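For reference, the same settings collected as a plain Python dict (an illustrative representation; the repo's actual config schema may differ):

```python
# Illustrative grouping of the hyperparameters listed above.
preprocess_config = {
    "resample_dt": 0.333,        # time step (s) for resampling
    "interpolate": "linear",     # method used to fill missing data
    "smooth": {
        "method": "moving",      # one of: none, gaussian, exponential, moving
        "alpha": 0.5,            # exponential smoothing factor
        "sigma": 5,              # gaussian kernel width
        "window_size": 15,       # window size for moving average
    },
    "norm_transform": "standard",  # "standard" or "causal"
}
```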
**FIGURE**
Compiled neural activity dataset from GCaMP calcium imaging of _C. elegans_ from multiple experimental sources, standardized to a common sampling rate and organization format.

**EXAMPLE USAGE**
To use this dataset, you may load or download it using the [`datasets`](https://huggingface.co/docs/datasets/en/loading) and [`huggingface_hub`](https://huggingface.co/docs/huggingface_hub/en/guides/download) libraries, respectively.
See the linked Colab notebook for an example of how to download the dataset and begin working with the data.
<a target="_blank" href="https://colab.research.google.com/drive/1z7h2gGuWhupRtjpYc7IHFD4rJ4kIsyuD#scrollTo=ZiZXMRc931oy">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
The notebook walks through loading this dataset and plotting a few samples of calcium activity data.

**ORIGINAL DATA FILES**
We provide a [Dropbox link](https://www.dropbox.com/scl/fi/vfygz1twi1jg62cfssc0w/opensource_data.zip?rlkey=qa4vpwcoza3k9v5o2watwblth&dl=0) to download the original data that we obtained from various sources, including publicly available and unpublished data shared with us by researchers. The `raw_data_file` in the dataset table references these files.
If you'd like to preprocess the data from scratch using different preprocessing settings or datasets, you may do so using the code in the [worm-data-preprocess](https://github.com/qsimeon/worm-data-preprocess) repo. |