
MUniverse Dynamic EMG Benchmark v2

416 synthetic non-stationary surface EMG recordings for benchmarking motor-unit decomposition algorithms under dynamic joint motion. Split into 336 train (subjects 0–4, 21 MU pools) and 80 held-out val (subjects 5–9, 5 MU pools) — different simulated subjects, so the val set is a clean generalisation test.

Both splits share the same 8 conditions (sinusoidal/triangular wrist Flexion–Extension at two amplitudes × two SNR levels) and ship in 70-channel (ch070) and 320-channel (ch320) electrode configurations.

Files

| File | Size | Purpose |
|---|---|---|
| recordings.tar.gz | 4.7 GB | Train: 336 .npz recordings, subjects 0–4, 21 MU pools (benchmark_v2/{ch070,ch320}/) |
| recordings_val.tar.gz | 1.2 GB | Val: 80 .npz recordings, subjects 5–9, 5 MU pools (benchmark_v2_val/{ch070,ch320}/) |
| reproducibility_bundle.zip | 36 KB | Scripts, manifests, NeuroMotion patch, PBS templates |
| benchmark_spec.json | 7 KB | Declarative spec: subjects, muscles, conditions, combos |
| README.md | | this page (also inside the bundle, as the canonical reproduction guide) |

SHA256:

  • recordings.tar.gz: 715458efd70ca8932e1e45a642698fa11a4446e0352dbe1840e0a7e251544f56
  • recordings_val.tar.gz: e4a14c291f4087a90ee10a376b57c359e3507ba4cdd68bb94b6f7b9aa6cc3756
  • reproducibility_bundle.zip: f4e034726a5c2afd60f408d3e32d63be074a586235316c38926feafd3d3e0988
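To verify the downloads, a minimal Python sketch using `hashlib` (filenames and digests are taken from the list above; the helper name is illustrative):

```python
import hashlib
import os

EXPECTED = {
    "recordings.tar.gz": "715458efd70ca8932e1e45a642698fa11a4446e0352dbe1840e0a7e251544f56",
    "recordings_val.tar.gz": "e4a14c291f4087a90ee10a376b57c359e3507ba4cdd68bb94b6f7b9aa6cc3756",
    "reproducibility_bundle.zip": "f4e034726a5c2afd60f408d3e32d63be074a586235316c38926feafd3d3e0988",
}

def sha256sum(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so multi-GB archives don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

for name, digest in EXPECTED.items():
    if os.path.exists(name):
        status = "OK" if sha256sum(name) == digest else "MISMATCH"
        print(f"{name}: {status}")
```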

Splits

| Split | Subjects | Pools | Recordings (per ch config) | Recordings total |
|---|---|---|---|---|
| Train | seeds 0–4 (5 subjects) | 21 (6 small + 8 medium + 7 large) | 168 | 336 |
| Val | seeds 5–9 (5 subjects) | 5 (1 small + 2 medium + 2 large) | 40 | 80 |

Subjects 5–9 are completely disjoint from training subjects, sampled from the same NeuroMotion+BioMime generator with the same fiber-density distribution. Val combos:

| Tier | Subject | Muscle | Threshold | N MUs |
|---|---|---|---|---|
| small | sub5 | PL | 0.85 | 29 |
| medium | sub6 | ECU | 0.85 | 50 |
| medium | sub7 | EDI | 0.85 | 62 |
| large | sub8 | FCU_u | 0.85 | 79 |
| large | sub9 | ECRB | 0.90 | 92 |

Quick load

import numpy as np

# Train recording
d = np.load("benchmark_v2/ch320/SA10-hi_ECU_sub03_N055.npz", allow_pickle=True)

# Val recording
d = np.load("benchmark_v2_val/ch320/SA10-hi_ECU_sub06_N050.npz", allow_pickle=True)

emg            = d["emg"]              # (T, M) float32
spike_mu_0     = d["spike_mu_0"]       # (n_spikes,) int64 — ground-truth indices
angle_profile  = d["angle_profile"]    # (T,) joint angle in degrees
effort_profile = d["effort_profile"]   # (T,) effort, fraction MVC
fs             = int(d["fs"])          # 2048

See benchmark_spec.json for the full parameter grid.
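For decomposition metrics you usually need binary spike trains rather than index arrays; a minimal sketch that scans the spike_mu_{i} keys of a loaded recording (the helper name is illustrative, not part of the dataset):

```python
import numpy as np

def spike_trains(d, n_samples):
    """Stack per-MU ground-truth spike indices into a binary (n_mus, T) matrix."""
    mu_keys = sorted(
        (k for k in d.files if k.startswith("spike_mu_")),
        key=lambda k: int(k.rsplit("_", 1)[-1]),
    )
    trains = np.zeros((len(mu_keys), n_samples), dtype=np.uint8)
    for row, key in enumerate(mu_keys):
        idx = np.asarray(d[key], dtype=np.int64)
        trains[row, idx] = 1  # inactive MUs have empty index arrays
    return trains
```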

Reproduction

The dataset is fully reproducible from open code: NeuroMotion + BioMime → MUAP libraries → crosstalk filtering → NeuroMotion synthesis. Stage-by-stage scripts and PBS templates are inside reproducibility_bundle.zip. The val split was generated with the same pipeline using seeds 5–9 instead of 0–4. The original README inside the bundle walks through all five stages.


Original reproducibility README

Below is the canonical reproduction guide as it appears inside reproducibility_bundle.zip.

Complete configuration, scripts, and metadata to regenerate the v2 benchmark: 336 non-stationary EMG recordings across 21 motor-unit pool combinations (26–113 MUs), 5 subjects, 7 muscle types, and 8 conditions (sinusoid/triangular angle × 2 difficulties × 2 SNR).

The benchmark was generated using MUniverse as-is — no modifications to the core package except a single small patch to NeuroMotion's Triangular angle-profile generator (included in scripts/patches/). The pipeline of this bundle is:

  1. Generate 40 MUAP libraries using NeuroMotion+BioMime (5 subjects × 8 muscles at Flex-Ext DOF).
  2. Analyze pairwise MUAP similarity, filter each library to a "crosstalk-clean" subset.
  3. Use NeuroMotion to synthesize 336 EMG recordings from those clean subsets, each exposed to different rotation profiles and noise levels.

Contents

experiments/v2/
├── README.md                              # This file
├── benchmark_spec.json                    # Declarative specification (reviewer-readable)
├── combos.json                            # The 21 subject-muscle-threshold combos used
├── scripts/                               # Scripts used, in execution order
│   ├── 01_generate_muap_caches.py         # Stage 1: Run BioMime to make 40 caches (GPU)
│   ├── 02_analyze_crosstalk.py            # Stage 2: Pairwise similarity analysis
│   ├── 03_create_clean_caches.py          # Stage 3: Filter each cache to a clean subset
│   ├── 04_generate_recordings.py          # Stage 4: Run NeuroMotion on clean caches
│   ├── 05_fix_triangular.py               # Stage 5: Bug fix, regenerate triangular only
│   ├── build_metadata.py                  # (meta) Generates the manifest CSVs below
│   └── patches/
│       └── neuromotion_triangular_symmetric.patch    # NeuroMotion fix
└── metadata/
    ├── muap_caches_manifest.csv           # 40 caches with shapes, fiber density, sha256
    ├── combos_manifest.csv                # 21 clean combo pools
    ├── conditions_manifest.csv            # 8 condition definitions
    └── recordings_manifest.csv            # 336 recordings with full parameters

Prerequisites

  1. MUniverse — tested against dynamic_zi_resampling branch (see git log). Clone the repo and install:

    git clone https://github.com/dfarinagroup/muniverse muniverse-live
    cd muniverse-live
    git checkout dynamic_zi_resampling
    pip install -e .
    
  2. NeuroMotion Singularity/Docker image pranavm19/muniverse-test:neuromotion:

    cd src/environment
    singularity pull muniverse-test_neuromotion.sif docker://pranavm19/muniverse-test:neuromotion
    
  3. NeuroMotion triangular patch — apply to src/muniverse/data_generation/_run_neuromotion.py:

    cd muniverse-live
    git apply experiments/v2/scripts/patches/neuromotion_triangular_symmetric.patch
    

    (Or just copy the Triangular branch from that file into your _run_neuromotion.py line ~434.)

  4. Hardware:

    • Stage 1 (MUAP generation): GPU — each of 40 caches takes ~2-3 min on A40 / RTX6000.
    • Stages 2–5: CPU only.
  5. Disk:

    • MUAP caches: ~95 GB permanent
    • Clean caches: ~8 GB
    • Recordings: ~5 GB
    • Raw outputs + configs: ~10 GB ephemeral (safe to delete after gen)

Reproduction Steps

Stage 1: Generate 40 MUAP caches (GPU, ~1.5 hours)

Each cache is a (num_mus, 130, 10, 32, 96) numpy array — MUAPs over 130 angle steps for each MU on a 10×32 electrode grid.

# Generate all 40 (5 subjects × 8 muscles at Flex-Ext DOF)
python experiments/v2/scripts/01_generate_muap_caches.py

The script contains hardcoded paths; adjust these before running:

  • CACHE_ROOT: where to write the caches
  • RAW_ROOT: scratch dir for container outputs (ephemeral is fine)
  • SIF_PATH: path to the NeuroMotion Singularity image

Output: <CACHE_ROOT>/cache_sub-simXX/subject_X_{MUSCLE}_Flexion-Extension_muaps.npy + _metadata.json + _mn_properties.csv.
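A quick way to sanity-check a Stage 1 cache against the documented (num_mus, 130, 10, 32, 96) layout (the helper name and the memory-mapped load are illustrative, not part of the bundle):

```python
import numpy as np

def check_cache(path, n_angle_steps=130, grid=(10, 32), n_time=96):
    """Verify a MUAP cache matches the documented shape; return its MU count."""
    muaps = np.load(path, mmap_mode="r")  # memory-map to avoid loading the full array
    num_mus, steps, rows, cols, t = muaps.shape
    assert (steps, (rows, cols), t) == (n_angle_steps, grid, n_time), muaps.shape
    return num_mus
```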

Stage 2: Crosstalk analysis (CPU, ~5 min)

python experiments/v2/scripts/02_analyze_crosstalk.py

Discovers all Flex-Ext caches, computes pairwise cosine similarity at mid-pose, and writes clean-subset selections at thresholds 0.75 / 0.80 / 0.85 / 0.90 to: experiments/results/crosstalk_v3/exp_crosstalk_clean_subsets.json
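The mid-pose similarity step can be sketched as follows (a plain reimplementation for illustration, not the script itself; filtering then keeps MUs whose maximum off-diagonal similarity stays below the chosen threshold):

```python
import numpy as np

def pairwise_cosine_mid_pose(cache):
    """Cosine similarity between all MU pairs, using MUAPs at the middle angle step.

    cache: (num_mus, n_angle_steps, rows, cols, time) array.
    Returns an (num_mus, num_mus) similarity matrix.
    """
    mid = cache.shape[1] // 2
    flat = cache[:, mid].reshape(cache.shape[0], -1).astype(np.float64)
    norms = np.linalg.norm(flat, axis=1, keepdims=True)
    unit = flat / np.clip(norms, 1e-12, None)  # guard against all-zero MUAPs
    return unit @ unit.T
```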

Stage 3: Create 21 clean caches (CPU, ~1 min)

python experiments/v2/scripts/03_create_clean_caches.py

Reads the crosstalk JSON and writes 21 filtered caches to experiments/clean_caches_v2/ (organized as {tier}/sub{N}_{muscle}_thr{XX}/...). Also writes combos.json — the master list consumed by Stage 4.

Stage 4: Generate 336 recordings (CPU, ~55 min)

python experiments/v2/scripts/04_generate_recordings.py \
    --config experiments/clean_caches_v2/combos.json

For each combo, runs the container 8 times (one per condition), post-processes into 2 channel configs (ch070 + ch320), and writes: <BENCHMARK_ROOT>/ch{070,320}/{cond}_{muscle}_sub{N}_N{K}.npz
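The recording filenames encode all four identifiers; a hypothetical parser for the pattern above (the regex assumes condition IDs of the SA/TA-hi/lo form and allows underscores inside muscle names such as FCU_u):

```python
import re

# Matches {cond}_{muscle}_sub{N}_N{K}.npz, e.g. SA10-hi_ECU_sub03_N055.npz.
_NAME = re.compile(
    r"(?P<cond>[ST]A\d{2}-(?:hi|lo))_(?P<muscle>.+)_sub(?P<subject>\d+)_N(?P<n_mus>\d+)\.npz"
)

def parse_recording_name(name):
    """Split a recording filename into condition, muscle, subject, and MU count."""
    m = _NAME.fullmatch(name)
    return m.groupdict() if m else None
```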

Stage 5 (only if you hit the bug): Fix triangular recordings

If you run Stage 4 without the NeuroMotion patch applied, the 168 triangular recordings will have flat 0° angle profiles. To detect: check any TA*.npz file — angle_profile should oscillate ±sin_amplitude.
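The flat-profile check described above can be sketched as (the function name and the 1° tolerance are illustrative):

```python
import numpy as np

def triangular_looks_flat(npz_path, tol_deg=1.0):
    """True if the angle profile never moves, i.e. the pre-patch triangular bug."""
    d = np.load(npz_path, allow_pickle=True)
    angle = np.asarray(d["angle_profile"], dtype=np.float64)
    return float(np.ptp(angle)) < tol_deg
```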

If buggy, patch NeuroMotion and regenerate just the triangular recordings:

# Submit as array job (21 combos × ~3 min each)
qsub experiments/v2/run_regenerate_triangular.pbs

# Or run interactively, one combo at a time
for i in {0..20}; do
    python experiments/v2/scripts/05_fix_triangular.py --combo-idx $i
done

Rebuild metadata CSVs

python experiments/v2/scripts/build_metadata.py

Regenerates metadata/*.csv from the actual files on disk (useful for validation).

Data Format

Each recording is a .npz file following /BENCHMARK_DATA_FORMAT.md with:

| Key | Shape | Description |
|---|---|---|
| emg | (T, M) float32 | EMG, samples-first |
| spike_mu_{i} | (n_spikes,) int64 | GT spike sample indices per MU (empty for inactive) |
| angle_profile | (T,) | Joint angle over time (deg) |
| effort_profile | (T,) | Effort over time (fraction MVC) |
| fs, n_channels, n_samples, muscle, subject_seed, af, ... | scalars | Full generation parameters |

Conditions Summary

All 8 conditions use constant effort at 50% MVC with a 0.3 Hz sinusoid/triangle angle profile. Duration 10 s, fs = 2048 Hz.

| ID | Angle profile | af (fraction of ±65°) | ±deg | SNR (dB) |
|---|---|---|---|---|
| SA05-hi | Sinusoid | 0.5 | ±32.5 | 25 |
| SA05-lo | Sinusoid | 0.5 | ±32.5 | 20 |
| SA10-hi | Sinusoid | 1.0 | ±65.0 | 25 |
| SA10-lo | Sinusoid | 1.0 | ±65.0 | 20 |
| TA05-hi | Triangular | 0.5 | ±32.5 | 25 |
| TA05-lo | Triangular | 0.5 | ±32.5 | 20 |
| TA10-hi | Triangular | 1.0 | ±65.0 | 25 |
| TA10-lo | Triangular | 1.0 | ±65.0 | 20 |
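For reference, the sinusoidal profiles in the table can be reconstructed as below (the zero-phase start is an assumption for illustration; the triangular case would use a symmetric ramp instead):

```python
import numpy as np

def sinusoid_angle_profile(af, duration_s=10.0, fs=2048, base_amp_deg=65.0, freq_hz=0.3):
    """Angle profile in degrees: af scales the ±65° base amplitude (SA05 → ±32.5°)."""
    t = np.arange(int(duration_s * fs)) / fs
    return af * base_amp_deg * np.sin(2 * np.pi * freq_hz * t)
```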

Combo Summary

21 combos grouped into 3 pool-size tiers:

| Tier | # Combos | Pool range | Muscles |
|---|---|---|---|
| Small | 6 | 26–45 MUs | PL × 3 subjects + FCU_h + ECRL + FDSI |
| Medium | 8 | 47–74 MUs | ECU, FDSI, EDI, ECRB, FCU_u |
| Large | 7 | 77–113 MUs | EDI, ECU, ECRB, FCU_u at threshold 0.90 |

See combos.json / metadata/combos_manifest.csv for the exact list.

Verification

After reproduction, verify the dataset is correct:

# Compare against recordings_manifest.csv (336 rows)
python experiments/v2/scripts/build_metadata.py
diff <(sort experiments/v2/metadata/recordings_manifest.csv) <(sort <your-regenerated>/recordings_manifest.csv)

MUAP cache SHA256 hashes (first 16 hex chars) are in metadata/muap_caches_manifest.csv for integrity checking of the Stage 1 outputs.

Citation

<pending — paper in prep>

License

Same as MUniverse core (GPL-3.0).
