---
tags:
- neuroscience
- connectomics
- timeseries
- matlab
- functional-connectivity
---
# PPMI Connectivity Graphs — HF Staging (Derivatives)

This dataset ships **ready-to-use functional brain connectivity graphs** derived from the PPMI cohort in a BIDS-ish *derivatives* layout. For each subject and parcellation, we include:

- **ROI time-series** (`*_desc-timeseries_parc-<name>.mat`)
- **Pearson correlation connectivity matrix** (`*_desc-correlation_matrix_parc-<name>.mat`)
- **JSON sidecars** with summary fields (nodes, measure, symmetric/weighted flags)
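The sidecars need only the standard library to read. A minimal sketch, assuming illustrative key names (`nodes`, `measure`, `symmetric`, `weighted`) matching the summary fields listed above; inspect an actual `*.json` before relying on exact keys:

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

def read_sidecar(path: Path) -> dict:
    """Load the JSON sidecar that sits next to a .mat file."""
    return json.loads(Path(path).read_text())

# Demo on a synthetic sidecar (field names are illustrative assumptions,
# not guaranteed by the dataset).
with TemporaryDirectory() as tmp:
    p = Path(tmp) / "sub-x_desc-correlation_matrix_parc-ward100.json"
    p.write_text(json.dumps({"nodes": 100, "measure": "pearson",
                             "symmetric": True, "weighted": True}))
    meta = read_sidecar(p)
    print(meta["nodes"], meta["measure"])  # 100 pearson
```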
## Contents

```
data/
  parc-<schema>/
    sub-<id>/
      sub-<id>_desc-timeseries_parc-<schema>.mat
      sub-<id>_desc-correlation_matrix_parc-<schema>.mat
      *.json
manifests/
  manifest.jsonl        # one JSON object per raw file (sha256, bytes, target_rel)
participants.tsv
phenotype/              # subject-level variables (if present)
metadata/
  raw/                  # resources & summaries used to build the set
  artifacts/            # inventory, checks, B5 manifest & reports
  provenance/           # author notes, dataset summaries, exclusions
```
## Quick start (Python)

```python
from pathlib import Path

from huggingface_hub import snapshot_download
from scipy.io import loadmat

root = Path(snapshot_download(repo_id="<org>/<dataset>", repo_type="dataset", revision="<tag>"))
pid, parc = "sub-prodromal75492", "ward100"
ts = loadmat(root / f"data/parc-{parc}/{pid}/{pid}_desc-timeseries_parc-{parc}.mat")
cm = loadmat(root / f"data/parc-{parc}/{pid}/{pid}_desc-correlation_matrix_parc-{parc}.mat")

# common variable names (fallback-friendly)
X = next((ts.get(k) for k in ["features_timeseries", "timeseries", "X"] if k in ts), None)  # (nodes × time)
A = next((cm.get(k) for k in ["correlation_matrix", "corr", "A"] if k in cm), None)         # (nodes × nodes)
print("Timeseries:", None if X is None else X.shape, " Connectivity:", None if A is None else A.shape)
```
## Use with `datasets` (viewer-ready, no scripts)

Note: modern `datasets` (>= 3.x) does not execute local Python dataset scripts. Use `data_files=` with Parquet/JSONL as shown below.

You can explore a tiny, fast preview split directly via the `datasets` library. The preview embeds a small 8×8 top-left slice of the correlation matrix so the Hugging Face viewer renders rows/columns quickly. Paths to the full on-disk arrays are included for downstream loading.
```python
from datasets import load_dataset

# Root-level Viewer splits (recommended on the Hub):
#   train.parquet      — tiny preview with embedded 8×8 matrices
#   validation.parquet — metadata-only dev slice

ds = load_dataset("pakkinlau/multi-modal-derived-brain-network", data_files="train.parquet", split="train")
row = ds[0]
print(row["parcellation"], row["subject"])  # e.g., 'AAL116', 'sub-control3351'
print(row["corr_shape"], row["ts_shape"])   # e.g., [116, 116], [116]
corr8 = row["correlation_matrix"]           # 8×8 nested list (for display)

# Light dev slice (metadata + paths only). Stream to avoid downloads in CI.
dev = load_dataset("pakkinlau/multi-modal-derived-brain-network", data_files="validation.parquet", split="train", streaming=True)
for ex in dev.take(3):
    _ = (ex["parcellation"], ex["subject"], ex["corr_path"])  # no embedded arrays
```
You can also use the manifest entrypoints under `manifests/`:

```python
from datasets import load_dataset

preview = load_dataset("pakkinlau/multi-modal-derived-brain-network", data_files="manifests/preview.parquet", split="train")
dev = load_dataset("pakkinlau/multi-modal-derived-brain-network", data_files="manifests/dev.parquet", split="train", streaming=True)
```
To access the full arrays, load from the returned `corr_path` / `ts_path` using SciPy or `mat73` with variable-name fallbacks:

```python
from pathlib import Path

from scipy.io import loadmat

root = Path(ds.cache_files[0]["filename"]).parents[2]  # dataset snapshot root (one way to locate it)
row = ds[0]
cm = loadmat(root / row["corr_path"])  # correlation matrix (.mat)
ts = loadmat(root / row["ts_path"])    # timeseries (.mat)
A = next((cm.get(k) for k in ["correlation_matrix", "corr", "A"] if k in cm), None)
X = next((ts.get(k) for k in ["features_timeseries", "timeseries", "X"] if k in ts), None)
```
### Preview vs. dev vs. full

- preview: tiny split meant for the HF viewer; includes an 8×8 `correlation_matrix` as a nested list plus shapes and file paths (see `manifests/preview.parquet`).
- dev: small metadata-only slice across 1–2 parcellations; yields `parcellation`, `subject`, shapes, and file paths (see `manifests/dev.parquet`).
- full arrays: kept in-repo under `data/` and referenced by the manifests; load them locally using the variable-name fallbacks above.
If you use our main analysis repo, you can also load pairs via its adapters (if installed):

```python
from pathlib import Path

from brain_graph.data import hf_pair  # provided by the main repo

# hf_pair(parcellation, subject, root=Path(...)) returns (timeseries, correlation) arrays
X, A = hf_pair("AAL116", "sub-control3351", root=Path("/path/to/local/snapshot"))
```
### Parcellations

- AAL116 — 116 ROIs
- harvard48 — 48 ROIs
- kmeans100 — 100 ROIs
- schaefer100 — 100 ROIs
- ward100 — 100 ROIs
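Given the naming scheme documented in this README, the on-disk paths for any subject/parcellation pair follow mechanically; a small helper sketch (`root` is a placeholder for a local snapshot path):

```python
from pathlib import Path

PARCELLATIONS = ["AAL116", "harvard48", "kmeans100", "schaefer100", "ward100"]

def pair_paths(root: Path, parc: str, sub: str) -> tuple[Path, Path]:
    """Build the (timeseries, correlation) .mat paths for one subject/parcellation."""
    base = root / "data" / f"parc-{parc}" / sub
    return (base / f"{sub}_desc-timeseries_parc-{parc}.mat",
            base / f"{sub}_desc-correlation_matrix_parc-{parc}.mat")

root = Path("/path/to/local/snapshot")  # placeholder
for parc in PARCELLATIONS:
    ts_p, cm_p = pair_paths(root, parc, "sub-control3351")
    print(cm_p)
```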
### File layout

```
data/
  parc-<parc>/
    sub-<id>/
      sub-<id>_desc-timeseries_parc-<parc>.mat
      sub-<id>_desc-correlation_matrix_parc-<parc>.mat
      *.json            # sidecars
manifests/
  manifest.jsonl        # machine inventory (sha256, bytes, target_rel per file)
  preview.jsonl         # tiny viewer split (subject + paths + 8×8 tile)
  preview.parquet       # Parquet version (fast viewer)
  dev.jsonl             # optional light split (metadata + paths only)
  dev.parquet           # Parquet version (fast viewer)
```
### Data files

- Root (used by the Viewer):
  - `train.parquet` — tiny viewer-ready preview with embedded 8×8 correlation matrices
  - `validation.parquet` — dev metadata-only slice (no embedded arrays)
- Manifests (secondary entrypoints):
  - `manifests/preview.parquet` — same content as `train.parquet` (if duplicated)
  - `manifests/dev.parquet` — same as `validation.parquet` (if duplicated)
### Integrity & Checksums

Rows in the preview/dev manifests include `*_sha256` and `*_bytes` for both `corr_path` and `ts_path`, derived from `manifests/manifest.jsonl`. You can verify a local copy by recomputing SHA-256 and matching the values.

Example (verify a correlation .mat):

```python
import hashlib
from pathlib import Path

def sha256(path: Path, buf: int = 131072) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            b = f.read(buf)
            if not b:
                break
            h.update(b)
    return h.hexdigest()

# compare with row["corr_sha256"], e.g.:
# assert sha256(root / row["corr_path"]) == row["corr_sha256"]
```
### Scripts (optional)

- `scripts/enrich_manifests.py`: Enrich preview/dev JSONL with shapes (from sidecars), embedded 8×8 tiles (from `preview/`), and checksums (from `manifests/manifest.jsonl`).
- `scripts/jsonl_to_parquet.py`: Convert any JSONL to Parquet with a stable schema.
- `scripts/scan_to_manifest.py`: Scan `data/` to produce a metadata-only JSONL (parcellation, subject, shapes, paths, checksums). Useful for making new dev slices.
- `scripts/make_preview.py`: Generate 8×8 correlation previews from `.mat` files for rows in a manifest. Requires SciPy or `mat73` locally.
## Cohort & Metadata

* `participants.tsv` (+ optional `participants.json`)
* `phenotype/` (subject-level variables)
* `metadata/raw/`, `metadata/artifacts/`, `metadata/provenance/` (provenance, inventories, checks)
* JSON sidecars colocated with `.mat` under `data/`
* Parquet mirrors (optional, if you add them later)
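`participants.tsv` can be grouped with the standard library alone. The column names below (`participant_id`, `group`) are illustrative assumptions; inspect the real header row first:

```python
import csv
import io

# Synthetic two-row example standing in for participants.tsv;
# the real column set may differ, so check the file's header.
tsv = ("participant_id\tgroup\n"
       "sub-control3351\tcontrol\n"
       "sub-prodromal75492\tprodromal\n")

rows = list(csv.DictReader(io.StringIO(tsv), delimiter="\t"))
by_group: dict[str, list[str]] = {}
for r in rows:
    by_group.setdefault(r["group"], []).append(r["participant_id"])
print(by_group)  # {'control': [...], 'prodromal': [...]}
```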
## Integrity

* A machine manifest lives at `manifests/manifest.jsonl` (one JSON object per raw file) with SHA-256 and byte size.
* You can re-compute and verify locally if needed.
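That verification can be sketched end-to-end: read `manifests/manifest.jsonl` line by line, recompute each file's SHA-256 and size, and report mismatches. The demo runs on a synthetic file; the field names (`sha256`, `bytes`, `target_rel`) follow the manifest description in this README:

```python
import hashlib
import json
from pathlib import Path
from tempfile import TemporaryDirectory

def verify_manifest(root: Path, manifest: Path) -> list[str]:
    """Return relative paths whose on-disk sha256/bytes disagree with the manifest."""
    bad = []
    for line in manifest.read_text().splitlines():
        entry = json.loads(line)
        data = (root / entry["target_rel"]).read_bytes()
        if len(data) != entry["bytes"] or hashlib.sha256(data).hexdigest() != entry["sha256"]:
            bad.append(entry["target_rel"])
    return bad

# Self-contained demo on a synthetic file.
with TemporaryDirectory() as tmp:
    demo_root = Path(tmp)
    f = demo_root / "data" / "x.mat"
    f.parent.mkdir()
    f.write_bytes(b"hello")
    m = demo_root / "manifest.jsonl"
    m.write_text(json.dumps({"target_rel": "data/x.mat", "bytes": 5,
                             "sha256": hashlib.sha256(b"hello").hexdigest()}) + "\n")
    print(verify_manifest(demo_root, m))  # [] (everything matches)
```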
## License

* **Data** (everything under `data/`, `participants.tsv`, `phenotype/`, and `metadata` tables): **CC BY-NC-SA 4.0**.
* **Docs & examples** (this README, helper scripts): **Apache-2.0**.

See `LICENSE` for details.
## How to cite

See `CITATION.cff`. Please also acknowledge **PPMI** and the original derivative providers.

## Changelog

* **v1.0.0** — Initial HF release: multi-schema connectivity with cohort tables & provenance.
|