pakkinlau committed on
Commit 650d06b · 1 Parent(s): 11c074c

> Make dataset viewer‑ready: preview/dev Parquet, scripts, README, .gitignore

- Add tiny preview/dev manifests with shapes, paths, and checksums:
  - manifests/preview.jsonl (embeds 8×8 correlation_matrix), manifests/dev.jsonl (metadata-only)
  - Parquet mirrors: manifests/preview.parquet, manifests/dev.parquet
- Add precomputed 8×8 tiles for preview: preview/<parc>__<subject>__corr8x8.json
- Add helper scripts:
  - scripts/enrich_manifests.py (shapes from sidecars, checksums from manifest.jsonl, embed 8×8 for preview)
  - scripts/jsonl_to_parquet.py (JSONL → Parquet)
  - scripts/scan_to_manifest.py (scan data/ to build metadata-only manifests)
  - scripts/make_preview.py (generate 8×8 tiles from .mat files using SciPy/mat73)
- Add parcellations.json (parcellation → ROI count)
- Update README:
  - YAML tags for discoverability
  - Modern datasets usage via data_files (no Python dataset script)
  - Preview vs. dev behavior, variable fallbacks (correlation_matrix|corr|A; features_timeseries|timeseries|X)
  - Integrity/checksums and script docs
- Add .gitignore to exclude .venv, caches, and editor files
- Preserve the existing on-disk layout; large arrays remain in the repo and are referenced by relative paths
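The 8×8 preview tiles referenced above are the top-left corner of each full correlation matrix, cast to float32 and dumped as nested JSON lists. A minimal sketch of that step, assuming NumPy; `corr_preview_tile` is an illustrative name, not the actual `scripts/make_preview.py` code:

```python
import json
import numpy as np

def corr_preview_tile(corr: np.ndarray, size: int = 8) -> list:
    """Return the top-left size x size block of a correlation matrix
    as nested Python lists, ready to dump as a JSON preview tile."""
    n = min(size, corr.shape[-1])
    tile = corr[:n, :n].astype("float32")
    return [[float(x) for x in row] for row in tile]

# Toy example: a 116x116 identity matrix stands in for a real subject's matrix.
A = np.eye(116)
tile = corr_preview_tile(A)
print(len(tile), len(tile[0]))   # 8 8
print(json.dumps(tile[0][:3]))   # [1.0, 0.0, 0.0]
```

The same slice-and-cast shape appears in the builder's preview path, so a tile stays a few hundred bytes regardless of parcellation size.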

.gitignore ADDED
@@ -0,0 +1,22 @@
+ # Python virtual environments
+ .venv/
+ venv/
+
+ # Python cache and build
+ __pycache__/
+ *.py[cod]
+ *.egg-info/
+ .pytest_cache/
+ .mypy_cache/
+
+ # OS/editor cruft
+ .DS_Store
+ Thumbs.db
+ .idea/
+ .vscode/
+
+ # Jupyter
+ .ipynb_checkpoints/
+
+ # Parquet conversion caches (if any)
+ *.parquet.tmp
README.md CHANGED
@@ -1,5 +1,13 @@
-
- # PPMI Connectivity Graphs — HF Staging (Derivatives)
+ ---
+ tags:
+ - neuroscience
+ - connectomics
+ - timeseries
+ - matlab
+ - functional-connectivity
+ ---
+
+ # PPMI Connectivity Graphs — HF Staging (Derivatives)

  This dataset ships **ready-to-use functional brain connectivity graphs** derived from the PPMI cohort in a BIDS-ish *derivatives* layout. For each subject and parcellation, we include:
  - **ROI time-series** (`*_desc-timeseries_parc-<name>.mat`)
@@ -26,7 +34,7 @@ provenance/ # author notes, dataset summaries, exclusions

  ````

- ## Quick start (Python)
+ ## Quick start (Python)
  ```python
  from huggingface_hub import snapshot_download
  from pathlib import Path
@@ -40,8 +48,104 @@ cm = loadmat(root / f"data/parc-{parc}/{pid}/{pid}_desc-correlation_matrix_parc-
  # common variable names (fallback-friendly)
  X = next((ts.get(k) for k in ["features_timeseries","timeseries","X"] if k in ts), None)  # (nodes × time)
  A = next((cm.get(k) for k in ["correlation_matrix","corr","A"] if k in cm), None)  # (nodes × nodes)
- print("Timeseries:", None if X is None else X.shape, " Connectivity:", None if A is None else A.shape)
- ````
+ print("Timeseries:", None if X is None else X.shape, " Connectivity:", None if A is None else A.shape)
+ ````
+
+ ## Use with `datasets` (viewer‑ready, no scripts)
+
+ You can explore a tiny, fast preview split directly via the `datasets` library. The preview embeds a small 8×8 top‑left
+ slice of the correlation matrix so the Hugging Face viewer renders rows and columns quickly. Paths to the full on‑disk
+ arrays are included for downstream loading.
+
+ ```python
+ from datasets import load_dataset
+
+ # Tiny viewer-ready preview (embeds small 8×8 matrices). One split named "train".
+ ds = load_dataset(
+     "pakkinlau/multi-modal-derived-brain-network",
+     data_files="manifests/preview.parquet",
+     split="train",
+ )
+ row = ds[0]
+ print(row["parcellation"], row["subject"])  # e.g., 'AAL116', 'sub-control3351'
+ print(row["corr_shape"], row["ts_shape"])   # e.g., [116, 116], [116]
+ corr8 = row["correlation_matrix"]           # 8×8 nested list (for display)
+
+ # Light dev slice (metadata+paths only). Stream to avoid downloads in CI.
+ dev = load_dataset(
+     "pakkinlau/multi-modal-derived-brain-network",
+     data_files="manifests/dev.parquet",
+     split="train",
+     streaming=True,
+ )
+ for ex in dev.take(3):
+     _ = (ex["parcellation"], ex["subject"], ex["corr_path"])  # no embedded arrays
+ ```
+
+ To access the full arrays, load from the returned `corr_path` / `ts_path` using SciPy or `mat73` with variable name fallbacks:
+
+ ```python
+ from pathlib import Path
+ from scipy.io import loadmat
+
+ root = Path(ds.cache_files[0]["filename"]).parents[2]  # dataset snapshot root (one way to locate it)
+ row = ds[0]
+ cm = loadmat(root / row["corr_path"])  # correlation matrix (.mat)
+ ts = loadmat(root / row["ts_path"])    # timeseries (.mat)
+ A = next((cm.get(k) for k in ["correlation_matrix","corr","A"] if k in cm), None)
+ X = next((ts.get(k) for k in ["features_timeseries","timeseries","X"] if k in ts), None)
+ ```
+
+ ### Preview vs. dev vs. full
+
+ - preview: tiny split meant for the HF viewer; includes an 8×8 `correlation_matrix` as a nested list plus shapes and file paths (see `manifests/preview.parquet`).
+ - dev: small metadata‑only slice across 1–2 parcellations; yields `parcellation`, `subject`, shapes, and file paths (see `manifests/dev.parquet`).
+ - full arrays: kept in‑repo under `data/` and referenced by the manifests; load them locally using the variable fallbacks above.
+
+ If you use our main analysis repo, you can also load pairs via its adapters (if installed):
+
+ ```python
+ from brain_graph.data import hf_pair  # provided by the main repo
+ # hf_pair(parcellation, subject, root=Path(...)) returns (timeseries, correlation) arrays
+ X, A = hf_pair("AAL116", "sub-control3351", root=Path("/path/to/local/snapshot"))
+ ```
+
+ ### Parcellations
+
+ - AAL116 — 116 ROIs
+ - harvard48 — 48 ROIs
+ - kmeans100 — 100 ROIs
+ - schaefer100 — 100 ROIs
+ - ward100 — 100 ROIs
+
+ ### File layout
+
+ ```
+ data/
+   parc-<parc>/
+     sub-<id>/
+       <id>_desc-timeseries_parc-<parc>.mat
+       <id>_desc-correlation_matrix_parc-<parc>.mat
+       *.json            # sidecars
+ manifests/
+   manifest.jsonl        # machine inventory (sha256, bytes, target_rel per file)
+   preview.jsonl         # tiny viewer split (subject+paths+8x8)
+   preview.parquet       # Parquet version (fast viewer)
+   dev.jsonl             # optional light split (metadata+paths only)
+   dev.parquet           # Parquet version (fast viewer)
+ ```
+
+ ### Integrity & Checksums
+
+ Rows in the preview/dev manifests include `*_sha256` and `*_bytes` for both `corr_path` and `ts_path`, derived from `manifests/manifest.jsonl`.
+ You can verify a local copy by recomputing SHA‑256 and matching the values.
+
+ ### Scripts (optional)
+
+ - `scripts/enrich_manifests.py`: Enrich preview/dev JSONL with shapes (from sidecars), embedded 8×8 tiles (from `preview/`), and checksums (from `manifests/manifest.jsonl`).
+ - `scripts/jsonl_to_parquet.py`: Convert any JSONL to Parquet with a stable schema.
+ - `scripts/scan_to_manifest.py`: Scan `data/` to produce a metadata-only JSONL (parcellation, subject, shapes, paths, checksums). Useful for making new dev slices.
+ - `scripts/make_preview.py`: Generate 8×8 correlation previews from `.mat` files for rows in a manifest. Requires SciPy or `mat73` locally.

  ## Cohort & Metadata

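Since each manifest row carries `*_sha256` and `*_bytes` values, a local copy can be verified with the standard library alone. A minimal sketch of that check; the file and manifest row below are stand-ins for demonstration, not real repository data:

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for one manifest row pointing at a tiny fake "array" file.
tmp = Path(tempfile.mkdtemp())
fake = tmp / "corr.mat"
fake.write_bytes(b"not a real .mat, just demo bytes")
row = {"corr_path": fake.name,
       "corr_sha256": sha256_of(fake),
       "corr_bytes": fake.stat().st_size}

# Verification: recompute the digest and size, compare to the manifest values.
target = tmp / row["corr_path"]
ok = (sha256_of(target) == row["corr_sha256"]
      and target.stat().st_size == row["corr_bytes"])
print("verified:", ok)  # verified: True
```

Checking the byte count first is a cheap pre-filter before hashing the larger `.mat` files.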
dataset_infos.json ADDED
@@ -0,0 +1,28 @@
+ {
+   "default": {
+     "description": "Multi-modal derived brain network dataset (PPMI connectivity graphs) organized in a BIDS-ish derivatives layout.\n\nThis builder exposes a tiny, fast \"preview\" split for interactive exploration on huggingface.co and quick local smoke tests. The preview embeds a downsampled correlation matrix (e.g., 8×8) for each row so the viewer can render a small numeric table. It also includes metadata (parcellation, subject) and array shapes. The heavy arrays remain on disk under the repository (not moved or renamed) and can be accessed via the provided file paths.\n\nVariable name fallbacks when reading .mat files mirror the main repository conventions:\n - timeseries: features_timeseries | timeseries | X\n - correlation: correlation_matrix | corr | A\n\nFor larger slices (optional \"dev\" split), only metadata and file paths are exposed to keep the viewer light.",
+     "citation": "Please cite the PPMI and derivative providers as listed in CITATION.cff of this dataset repository.",
+     "homepage": "https://huggingface.co/datasets/pakkinlau/multi-modal-derived-brain-network",
+     "license": null,
+     "features": {
+       "parcellation": {"dtype": "string", "id": null, "_type": "Value"},
+       "subject": {"dtype": "string", "id": null, "_type": "Value"},
+       "corr_shape": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": -1, "_type": "Sequence"},
+       "ts_shape": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": -1, "_type": "Sequence"},
+       "corr_path": {"dtype": "string", "id": null, "_type": "Value"},
+       "ts_path": {"dtype": "string", "id": null, "_type": "Value"},
+       "correlation_matrix": {"feature": {"feature": {"dtype": "float32", "id": null, "_type": "Value"}, "length": -1, "_type": "Sequence"}, "length": -1, "_type": "Sequence"}
+     },
+     "post_processed": null,
+     "supervised_keys": null,
+     "task_templates": null,
+     "builder_name": "MMDN",
+     "config_name": "default",
+     "version": {"version_str": "1.0.0", "major": 1, "minor": 0, "patch": 0},
+     "splits": {},
+     "download_checksums": {},
+     "download_size": 0,
+     "dataset_size": 0,
+     "size_in_bytes": 0
+   }
+ }
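The variable-name fallbacks described above (`features_timeseries | timeseries | X`; `correlation_matrix | corr | A`) amount to picking the first candidate key present in the loaded `.mat` dict. A stdlib-only sketch of that pattern; the sample `mat` dict is illustrative, not a real loaded file:

```python
from typing import Any, List, Optional

def pick_var(d: dict, candidates: List[str]) -> Optional[Any]:
    """Return the value for the first candidate key present in d, else None."""
    return next((d[k] for k in candidates if k in d), None)

# A loaded .mat file behaves like a dict of variable name -> array,
# plus metadata keys like __header__ that the fallback list never matches.
mat = {"__header__": b"MATLAB 5.0", "corr": [[1.0, 0.2], [0.2, 1.0]]}

A = pick_var(mat, ["correlation_matrix", "corr", "A"])
X = pick_var(mat, ["features_timeseries", "timeseries", "X"])
print(A is not None, X is None)  # True True
```

Returning `None` instead of raising lets callers degrade gracefully when a subject's file uses none of the expected names.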
dataset_mmdn.py ADDED
@@ -0,0 +1,270 @@
+ import json
+ import os
+ from typing import Dict, Iterator, List, Optional, Tuple
+
+ import datasets
+
+
+ _CITATION = """\
+ Please cite the PPMI and derivative providers as listed in CITATION.cff of this dataset repository.
+ """
+
+
+ _DESCRIPTION = """\
+ Multi-modal derived brain network dataset (PPMI connectivity graphs) organized in a BIDS-ish derivatives layout.
+
+ This builder exposes a tiny, fast "preview" split for interactive exploration on huggingface.co and quick local
+ smoke tests. The preview embeds a downsampled correlation matrix (e.g., 8×8) for each row so the viewer can render a
+ small numeric table. It also includes metadata (parcellation, subject) and array shapes. The heavy arrays remain on
+ disk under the repository (not moved or renamed) and can be accessed via the provided file paths.
+
+ Variable name fallbacks when reading .mat files mirror the main repository conventions:
+  - timeseries: features_timeseries | timeseries | X
+  - correlation: correlation_matrix | corr | A
+
+ For larger slices (optional "dev" split), only metadata and file paths are exposed to keep the viewer light.
+ """
+
+
+ _HOMEPAGE = "https://huggingface.co/datasets/pakkinlau/multi-modal-derived-brain-network"
+
+
+ class MMDNConfig(datasets.BuilderConfig):
+     def __init__(self, **kwargs):
+         super().__init__(version=datasets.Version("1.0.0"), **kwargs)
+
+
+ class MMDN(datasets.GeneratorBasedBuilder):
+     BUILDER_CONFIGS = [
+         MMDNConfig(name="default", description="MMDN with preview (embedded tiny arrays) and optional dev metadata split"),
+     ]
+     DEFAULT_CONFIG_NAME = "default"
+
+     def _info(self) -> datasets.DatasetInfo:
+         # Features include a superset so both preview (with embedded small matrices) and dev (metadata-only) work.
+         features = datasets.Features(
+             {
+                 "parcellation": datasets.Value("string"),
+                 "subject": datasets.Value("string"),
+                 # Shapes as [n, n] and [n, t]
+                 "corr_shape": datasets.Sequence(datasets.Value("int32")),
+                 "ts_shape": datasets.Sequence(datasets.Value("int32")),
+                 # File paths (relative to repo root)
+                 "corr_path": datasets.Value("string"),
+                 "ts_path": datasets.Value("string"),
+                 # Tiny preview matrix (downsampled 8x8 top-left). For non-preview, this can be an empty list.
+                 "correlation_matrix": datasets.Sequence(datasets.Sequence(datasets.Value("float32"))),
+             }
+         )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             citation=_CITATION,
+             homepage=_HOMEPAGE,
+         )
+
+     def _split_generators(self, dl_manager: datasets.DownloadManager):
+         base_dir = os.path.abspath(os.path.dirname(__file__))
+
+         def _maybe(path: str) -> Optional[str]:
+             ap = os.path.join(base_dir, path)
+             return ap if os.path.exists(ap) else None
+
+         preview_manifest = _maybe(os.path.join("manifests", "preview.jsonl"))
+         dev_manifest = _maybe(os.path.join("manifests", "dev.jsonl"))
+
+         splits = []
+         if preview_manifest:
+             splits.append(
+                 datasets.SplitGenerator(name=datasets.Split("preview"), gen_kwargs={"manifest_path": preview_manifest, "embed_preview": True})
+             )
+         if dev_manifest:
+             splits.append(
+                 datasets.SplitGenerator(name=datasets.Split("dev"), gen_kwargs={"manifest_path": dev_manifest, "embed_preview": False})
+             )
+         # If no manifest is found, raise a helpful error.
+         if not splits:
+             raise FileNotFoundError(
+                 "No manifests found. Expected manifests/preview.jsonl (and optionally manifests/dev.jsonl) in the dataset repo."
+             )
+         return splits
+
+     def _generate_examples(self, manifest_path: str, embed_preview: bool) -> Iterator[Tuple[str, Dict]]:
+         base_dir = os.path.abspath(os.path.dirname(__file__))
+         with open(manifest_path, "r", encoding="utf-8") as f:
+             for idx, line in enumerate(f):
+                 if not line.strip():
+                     continue
+                 row = json.loads(line)
+
+                 parcellation = row.get("parcellation")
+                 subject = row.get("subject")
+                 corr_rel = row.get("corr_path")
+                 ts_rel = row.get("ts_path")
+                 corr_path = os.path.join(base_dir, corr_rel) if corr_rel else None
+                 ts_path = os.path.join(base_dir, ts_rel) if ts_rel else None
+
+                 corr_shape, ts_shape = self._inspect_shapes(corr_path, ts_path)
+                 # Fallback: infer shapes from JSON sidecars if .mat loaders are unavailable
+                 if corr_shape is None and corr_path:
+                     corr_shape = self._infer_corr_shape_from_sidecar(corr_path)
+                 if ts_shape is None and ts_path:
+                     ts_shape = self._infer_ts_shape_from_sidecar(ts_path)
+
+                 preview_matrix: List[List[float]] = []
+                 if embed_preview:
+                     # Prefer precomputed tiny preview JSON if present, else try reading from .mat
+                     preview_json = self._preview_json_for(parcellation, subject)
+                     if preview_json and os.path.exists(preview_json):
+                         try:
+                             with open(preview_json, "r", encoding="utf-8") as pj:
+                                 arr = json.load(pj)
+                             if isinstance(arr, list) and (not arr or isinstance(arr[0], list)):
+                                 # ensure float32-compatible plain floats
+                                 preview_matrix = [[float(x) for x in r] for r in arr]
+                         except Exception:
+                             preview_matrix = []
+                     elif corr_path and os.path.exists(corr_path):
+                         small = self._read_correlation_small(corr_path, size=8)
+                         if small is not None:
+                             preview_matrix = [[float(x) for x in r] for r in small.tolist()]
+
+                 example = {
+                     "parcellation": parcellation,
+                     "subject": subject,
+                     "corr_shape": list(corr_shape) if corr_shape else [],
+                     "ts_shape": list(ts_shape) if ts_shape else [],
+                     "corr_path": corr_rel or "",
+                     "ts_path": ts_rel or "",
+                     "correlation_matrix": preview_matrix,
+                 }
+                 # Unique key: combine split index + subject + parcellation
+                 key = f"{idx:06d}-{parcellation}-{subject}"
+                 yield key, example
+
+     # --- Helpers ---
+     @staticmethod
+     def _try_import_mat_modules():
+         try:
+             import scipy.io as sio  # type: ignore
+         except Exception:  # pragma: no cover
+             sio = None
+         try:
+             import mat73  # type: ignore
+         except Exception:
+             mat73 = None
+         return sio, mat73
+
+     def _load_mat(self, path: str) -> Optional[Dict]:
+         sio, mat73 = self._try_import_mat_modules()
+         if sio is not None:
+             try:
+                 return sio.loadmat(path, squeeze_me=True, simplify_cells=True)  # type: ignore[arg-type]
+             except NotImplementedError:
+                 pass
+             except Exception:
+                 # Keep trying fallbacks
+                 pass
+         if mat73 is not None:
+             try:
+                 return mat73.loadmat(path)  # type: ignore[attr-defined]
+             except Exception:
+                 pass
+         return None
+
+     def _pick_var(self, d: Dict, candidates: List[str]) -> Optional[Tuple[str, object]]:
+         for k in candidates:
+             if k in d:
+                 return k, d[k]
+         # Some loaders store keys lower/upper differently; try case-insensitive match
+         lower_map = {k.lower(): k for k in d.keys()}
+         for k in candidates:
+             if k.lower() in lower_map:
+                 real_k = lower_map[k.lower()]
+                 return real_k, d[real_k]
+         return None
+
+     def _inspect_shapes(self, corr_path: Optional[str], ts_path: Optional[str]) -> Tuple[Optional[Tuple[int, int]], Optional[Tuple[int, int]]]:
+         import numpy as np  # local import to avoid hard dependency at import time
+
+         corr_shape: Optional[Tuple[int, int]] = None
+         ts_shape: Optional[Tuple[int, int]] = None
+
+         if corr_path and os.path.exists(corr_path):
+             data = self._load_mat(corr_path)
+             if isinstance(data, dict):
+                 pick = self._pick_var(data, ["correlation_matrix", "corr", "A"])
+                 if pick is not None:
+                     _, arr = pick
+                     try:
+                         a = np.asarray(arr)
+                         if a.ndim >= 2:
+                             corr_shape = (int(a.shape[-2]), int(a.shape[-1]))
+                     except Exception:
+                         pass
+
+         if ts_path and os.path.exists(ts_path):
+             data = self._load_mat(ts_path)
+             if isinstance(data, dict):
+                 pick = self._pick_var(data, ["features_timeseries", "timeseries", "X"])
+                 if pick is not None:
+                     _, arr = pick
+                     try:
+                         a = np.asarray(arr)
+                         if a.ndim >= 2:
+                             ts_shape = (int(a.shape[-2]), int(a.shape[-1]))
+                     except Exception:
+                         pass
+
+         return corr_shape, ts_shape
+
+     def _read_correlation_small(self, corr_path: str, size: int = 8):
+         import numpy as np
+
+         data = self._load_mat(corr_path)
+         if not isinstance(data, dict):
+             return None
+         pick = self._pick_var(data, ["correlation_matrix", "corr", "A"])
+         if pick is None:
+             return None
+         _, arr = pick
+         a = np.asarray(arr)
+         if a.ndim < 2:
+             return None
+         n = min(size, a.shape[-1])
+         return a[:n, :n].astype("float32")
+
+     # --- Sidecar & preview helpers ---
+     def _infer_corr_shape_from_sidecar(self, corr_path: str) -> Optional[Tuple[int, int]]:
+         sidecar = os.path.splitext(corr_path)[0] + ".json"
+         if os.path.exists(sidecar):
+             try:
+                 with open(sidecar, "r", encoding="utf-8") as f:
+                     meta = json.load(f)
+                 n = meta.get("NodeCount")
+                 if isinstance(n, int) and n > 0:
+                     return (n, n)
+             except Exception:
+                 return None
+         return None
+
+     def _infer_ts_shape_from_sidecar(self, ts_path: str) -> Optional[Tuple[int, ...]]:
+         sidecar = os.path.splitext(ts_path)[0] + ".json"
+         if os.path.exists(sidecar):
+             try:
+                 with open(sidecar, "r", encoding="utf-8") as f:
+                     meta = json.load(f)
+                 n = meta.get("NodeCount")
+                 if isinstance(n, int) and n > 0:
+                     # Length T is unknown from the sidecar; return the node count only.
+                     return (n,)
+             except Exception:
+                 return None
+         return None
+
+     def _preview_json_for(self, parcellation: Optional[str], subject: Optional[str]) -> Optional[str]:
+         if not parcellation or not subject:
+             return None
+         base_dir = os.path.abspath(os.path.dirname(__file__))
+         # filename pattern: preview/<parc>__<subject>__corr8x8.json
+         return os.path.join(base_dir, "preview", f"{parcellation}__{subject}__corr8x8.json")
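The builder above reads its JSONL manifests line by line, skipping blanks and keying each example as `{idx:06d}-{parcellation}-{subject}`. That core loop can be sketched with the standard library; the two-row manifest here is made up for illustration:

```python
import io
import json

def iter_manifest(fp):
    """Yield (key, row) pairs from a JSONL stream, skipping blank lines.
    Keys stay unique and sortable: zero-padded index + parcellation + subject."""
    for idx, line in enumerate(fp):
        if not line.strip():
            continue
        row = json.loads(line)
        key = f"{idx:06d}-{row.get('parcellation')}-{row.get('subject')}"
        yield key, row

# A tiny in-memory manifest with a blank line in the middle.
demo = io.StringIO(
    '{"parcellation": "AAL116", "subject": "sub-a"}\n'
    '\n'
    '{"parcellation": "harvard48", "subject": "sub-b"}\n'
)
keys = [k for k, _ in iter_manifest(demo)]
print(keys)  # ['000000-AAL116-sub-a', '000002-harvard48-sub-b']
```

Note the index in the key comes from the physical line number, so skipped blanks leave gaps (000002 above) without breaking uniqueness.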
manifests/dev.jsonl ADDED
@@ -0,0 +1,20 @@
+ {"parcellation": "AAL116", "subject": "sub-control100890", "corr_path": "data/parc-AAL116/sub-control100890/sub-control100890_desc-correlation_matrix_parc-AAL116.mat", "ts_path": "data/parc-AAL116/sub-control100890/sub-control100890_desc-timeseries_parc-AAL116.mat", "correlation_matrix": [], "corr_shape": [116, 116], "ts_shape": [116], "corr_sha256": "b5a8e599c82655729c8e6bbc82b3606ce153b16a1d5f588f9f7701396997f3f8", "corr_bytes": 54008, "ts_sha256": "e4d1bcca61e5619ea3f7ccd4321ee0d7e89dfcae87a10f7334bcf3b1a8562801", "ts_bytes": 111544}
+ {"parcellation": "AAL116", "subject": "sub-control3351", "corr_path": "data/parc-AAL116/sub-control3351/sub-control3351_desc-correlation_matrix_parc-AAL116.mat", "ts_path": "data/parc-AAL116/sub-control3351/sub-control3351_desc-timeseries_parc-AAL116.mat", "correlation_matrix": [], "corr_shape": [116, 116], "ts_shape": [116], "corr_sha256": "53161c1ea79258bd8243b9fb923fbfb0f6c8427da8c60634e623b4c1cad73deb", "corr_bytes": 107832, "ts_sha256": "5cf79c9700df2a1875496e4edb00aa988f65de3bb6186736df7216469452f661", "ts_bytes": 195064}
+ {"parcellation": "AAL116", "subject": "sub-control3353", "corr_path": "data/parc-AAL116/sub-control3353/sub-control3353_desc-correlation_matrix_parc-AAL116.mat", "ts_path": "data/parc-AAL116/sub-control3353/sub-control3353_desc-timeseries_parc-AAL116.mat", "correlation_matrix": [], "corr_shape": [116, 116], "ts_shape": [116], "corr_sha256": "cb1078bd58bd9592345baf4eca5a40baaac147c04c0ff5115699cd32b1ff3c52", "corr_bytes": 107832, "ts_sha256": "c7981fc037569f3ba0c7a5dfe063d20acc1c9aa4b46c536a64bcdd51357c703c", "ts_bytes": 195064}
+ {"parcellation": "AAL116", "subject": "sub-control3361", "corr_path": "data/parc-AAL116/sub-control3361/sub-control3361_desc-correlation_matrix_parc-AAL116.mat", "ts_path": "data/parc-AAL116/sub-control3361/sub-control3361_desc-timeseries_parc-AAL116.mat", "correlation_matrix": [], "corr_shape": [116, 116], "ts_shape": [116], "corr_sha256": "3351f8fa7d0010a6a9e0fd82e25d1ec73dd02d80ba265636a45f70f176a97ac0", "corr_bytes": 107832, "ts_sha256": "ae71cda75e50e852e76c887a36f49e675935f75077d1d2a307c9de55c64d15d3", "ts_bytes": 195064}
+ {"parcellation": "AAL116", "subject": "sub-control3368", "corr_path": "data/parc-AAL116/sub-control3368/sub-control3368_desc-correlation_matrix_parc-AAL116.mat", "ts_path": "data/parc-AAL116/sub-control3368/sub-control3368_desc-timeseries_parc-AAL116.mat", "correlation_matrix": [], "corr_shape": [116, 116], "ts_shape": [116], "corr_sha256": "8224800cccb76bb8359301ed796ec82bfae89f24b3ea687d78d037dd29389a7f", "corr_bytes": 107832, "ts_sha256": "9b35dff833ad03818a818cf955fc0be356f866f17c75e1f4967c3f154ad61884", "ts_bytes": 195064}
+ {"parcellation": "AAL116", "subject": "sub-control3369", "corr_path": "data/parc-AAL116/sub-control3369/sub-control3369_desc-correlation_matrix_parc-AAL116.mat", "ts_path": "data/parc-AAL116/sub-control3369/sub-control3369_desc-timeseries_parc-AAL116.mat", "correlation_matrix": [], "corr_shape": [116, 116], "ts_shape": [116], "corr_sha256": "c3890df08542e5a9ba1099f371891e800da49cb363146ba501ec59f1be9008d1", "corr_bytes": 107832, "ts_sha256": "0e5ed2bf747586e64ba2b4fa4b133f66f9cc0b4de0d092e411aaff06444de24c", "ts_bytes": 195064}
+ {"parcellation": "AAL116", "subject": "sub-control3389", "corr_path": "data/parc-AAL116/sub-control3389/sub-control3389_desc-correlation_matrix_parc-AAL116.mat", "ts_path": "data/parc-AAL116/sub-control3389/sub-control3389_desc-timeseries_parc-AAL116.mat", "correlation_matrix": [], "corr_shape": [116, 116], "ts_shape": [116], "corr_sha256": "ca06dd9688835389d32da2488b72166e8a2b45463389096988b4c2d390167b6d", "corr_bytes": 107832, "ts_sha256": "5b88301afa26bafed64379a86e18920b432d2bf34c542372a5e8d91e139046ea", "ts_bytes": 195064}
+ {"parcellation": "AAL116", "subject": "sub-control3390", "corr_path": "data/parc-AAL116/sub-control3390/sub-control3390_desc-correlation_matrix_parc-AAL116.mat", "ts_path": "data/parc-AAL116/sub-control3390/sub-control3390_desc-timeseries_parc-AAL116.mat", "correlation_matrix": [], "corr_shape": [116, 116], "ts_shape": [116], "corr_sha256": "e912704c743652a821453564e750aa319689eba477c3e233f275c20629c41d59", "corr_bytes": 107832, "ts_sha256": "904f98cd34215954d250dc7bdeb84e9ce058f232fadf1fa80bb0ca9c976fda90", "ts_bytes": 195064}
+ {"parcellation": "AAL116", "subject": "sub-control3554", "corr_path": "data/parc-AAL116/sub-control3554/sub-control3554_desc-correlation_matrix_parc-AAL116.mat", "ts_path": "data/parc-AAL116/sub-control3554/sub-control3554_desc-timeseries_parc-AAL116.mat", "correlation_matrix": [], "corr_shape": [116, 116], "ts_shape": [116], "corr_sha256": "9734f6a05cfe7e58afb6442ace4027ee5e8cf4c4e73c3134a9c6aadc6697c381", "corr_bytes": 107832, "ts_sha256": "d37af2fbad08109ee6038720bd8c7833181217a83fee6050e9e3145acea99675", "ts_bytes": 195064}
+ {"parcellation": "AAL116", "subject": "sub-control3563", "corr_path": "data/parc-AAL116/sub-control3563/sub-control3563_desc-correlation_matrix_parc-AAL116.mat", "ts_path": "data/parc-AAL116/sub-control3563/sub-control3563_desc-timeseries_parc-AAL116.mat", "correlation_matrix": [], "corr_shape": [116, 116], "ts_shape": [116], "corr_sha256": "6c006c23d03f9015ffbc5a80a9d1059b2c4e5f15f6ddf8e029c17729f2d5691d", "corr_bytes": 107832, "ts_sha256": "35ea4713f2f3688a2f3436444f768bd1c92c50f1162337a4efe7b50035822a9a", "ts_bytes": 195064}
+ {"parcellation": "harvard48", "subject": "sub-control100890", "corr_path": "data/parc-harvard48/sub-control100890/sub-control100890_desc-correlation_matrix_parc-harvard48.mat", "ts_path": "data/parc-harvard48/sub-control100890/sub-control100890_desc-timeseries_parc-harvard48.mat", "correlation_matrix": [], "corr_shape": [48, 48], "ts_shape": [48], "corr_sha256": "961e2a4902b7da5e93f21c38133901fd90e7b45d994f97e7608d6e0b4c78bab5", "corr_bytes": 9400, "ts_sha256": "6fa7aa85599afdad615a4dac99e7379fa36434245e36724b8f1a9be39ccb5d40", "ts_bytes": 46264}
+ {"parcellation": "harvard48", "subject": "sub-control3351", "corr_path": "data/parc-harvard48/sub-control3351/sub-control3351_desc-correlation_matrix_parc-harvard48.mat", "ts_path": "data/parc-harvard48/sub-control3351/sub-control3351_desc-timeseries_parc-harvard48.mat", "correlation_matrix": [], "corr_shape": [48, 48], "ts_shape": [48], "corr_sha256": "9aa5083bddeb70b5cc443747592e117099c3143c64a497e28956d7fb189a2f8b", "corr_bytes": 18616, "ts_sha256": "69061d32f407981d5f2b2d397f839fdf2ec5b92db084deb4823b00cadadddda1", "ts_bytes": 80824}
+ {"parcellation": "harvard48", "subject": "sub-control3353", "corr_path": "data/parc-harvard48/sub-control3353/sub-control3353_desc-correlation_matrix_parc-harvard48.mat", "ts_path": "data/parc-harvard48/sub-control3353/sub-control3353_desc-timeseries_parc-harvard48.mat", "correlation_matrix": [], "corr_shape": [48, 48], "ts_shape": [48], "corr_sha256": "0d002f8841c3797777125eb0bfb596bf9f3a7ff8578b72294c51587993764558", "corr_bytes": 18616, "ts_sha256": "a71134fd48f7e37d3a54d7bbac403be9d15ee1373d79433df81e77cfd112133d", "ts_bytes": 80824}
+ {"parcellation": "harvard48", "subject": "sub-control3361", "corr_path": "data/parc-harvard48/sub-control3361/sub-control3361_desc-correlation_matrix_parc-harvard48.mat", "ts_path": "data/parc-harvard48/sub-control3361/sub-control3361_desc-timeseries_parc-harvard48.mat", "correlation_matrix": [], "corr_shape": [48, 48], "ts_shape": [48], "corr_sha256": "ab29455503565d79544739058fd68b5b9dd6a1e33d851a306971a236496e4f43", "corr_bytes": 18616, "ts_sha256": "fa49afc42bb68cc5be4abccdad4f68e4c9fda5cb60882a50cee590638432d18a", "ts_bytes": 80824}
+ {"parcellation": "harvard48", "subject": "sub-control3368", "corr_path": "data/parc-harvard48/sub-control3368/sub-control3368_desc-correlation_matrix_parc-harvard48.mat", "ts_path": "data/parc-harvard48/sub-control3368/sub-control3368_desc-timeseries_parc-harvard48.mat", "correlation_matrix": [], "corr_shape": [48, 48], "ts_shape": [48], "corr_sha256": "3a46d0ccb1f0aea97149f6fdd7705b43ce0fdc3a41c04ca121beed3d520d448e", "corr_bytes": 18616, "ts_sha256": "469e25a40796537b6125c02433587742a87f819fa01423bb1ad9bd556a68cf63", "ts_bytes": 80824}
+ {"parcellation": "harvard48", "subject": "sub-control3369", "corr_path": "data/parc-harvard48/sub-control3369/sub-control3369_desc-correlation_matrix_parc-harvard48.mat", "ts_path": "data/parc-harvard48/sub-control3369/sub-control3369_desc-timeseries_parc-harvard48.mat", "correlation_matrix": [], "corr_shape": [48, 48], "ts_shape": [48], "corr_sha256": "ae98f24e9a2d030cab9f7d478a128e99df62029ebb6c7f932d58d9125da01963", "corr_bytes": 18616, "ts_sha256": "446c2e6cd18b26b0f94b10407157d4f097c4e17d832fcad5e1fa8c1477c9bd25", "ts_bytes": 80824}
+ {"parcellation": "harvard48", "subject": "sub-control3389", "corr_path": "data/parc-harvard48/sub-control3389/sub-control3389_desc-correlation_matrix_parc-harvard48.mat", "ts_path": "data/parc-harvard48/sub-control3389/sub-control3389_desc-timeseries_parc-harvard48.mat", "correlation_matrix": [], "corr_shape": [48, 48], "ts_shape": [48], "corr_sha256": "c20903f80abb0021f2a90835d0ae0a460c2abc07b355865c3db59d4d66f6d431", "corr_bytes": 18616, "ts_sha256": "216f352a7b0657dcb699164b3a5d12331588f1af89b2afa22441831c6c854b06", "ts_bytes": 80824}
+ {"parcellation": "harvard48", "subject": "sub-control3390", "corr_path": "data/parc-harvard48/sub-control3390/sub-control3390_desc-correlation_matrix_parc-harvard48.mat", "ts_path": "data/parc-harvard48/sub-control3390/sub-control3390_desc-timeseries_parc-harvard48.mat", "correlation_matrix": [], "corr_shape": [48, 48], "ts_shape": [48], "corr_sha256": "610c27c459e6857f0ca162cecc0b86f91bb1b4acffca0c7ed2ee8b64f71f461a", "corr_bytes": 18616, "ts_sha256": "dd41a8e1003116edb8c00b1e682fb8729889464f61a78564ee7875d657564824", "ts_bytes": 80824}
+ {"parcellation": "harvard48", "subject": "sub-control3554", "corr_path": "data/parc-harvard48/sub-control3554/sub-control3554_desc-correlation_matrix_parc-harvard48.mat", "ts_path": "data/parc-harvard48/sub-control3554/sub-control3554_desc-timeseries_parc-harvard48.mat", "correlation_matrix": [], "corr_shape": [48, 48], "ts_shape": [48], "corr_sha256": "6f68980110a9e3e8f907f50a51081b02cb5d7bed368dff5594b724e45d36f6b6", "corr_bytes": 18616, "ts_sha256": "d6619d194b39f025dd085a9923886c4e7c974d2d02b92fdab94c465a104edc87", "ts_bytes": 80824}
+ {"parcellation": "harvard48", "subject": "sub-control3563", "corr_path": "data/parc-harvard48/sub-control3563/sub-control3563_desc-correlation_matrix_parc-harvard48.mat", "ts_path": "data/parc-harvard48/sub-control3563/sub-control3563_desc-timeseries_parc-harvard48.mat", "correlation_matrix": [], "corr_shape": [48, 48], "ts_shape": [48], "corr_sha256": "ceccd4954b2274f3a37e1bc5271f195d5560d5d1992ca6eea3cd7cbf63afada4", "corr_bytes": 18616, "ts_sha256": "7179321dc0b7d0bf5a7e4d48ae023b94d9d7dda9442344590f82f65438627450", "ts_bytes": 80824}
manifests/dev.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a6c2d60a2423924718d12df67fdddf10ee04e8d818271fe2f5183cfa0207a089
+ size 11675
manifests/preview.jsonl ADDED
@@ -0,0 +1,5 @@
+ {"parcellation": "AAL116", "subject": "sub-control3351", "corr_path": "data/parc-AAL116/sub-control3351/sub-control3351_desc-correlation_matrix_parc-AAL116.mat", "ts_path": "data/parc-AAL116/sub-control3351/sub-control3351_desc-timeseries_parc-AAL116.mat", "correlation_matrix": [[1.0, 0.12, -0.05, 0.08, 0.02, -0.11, 0.03, 0.09], [0.12, 1.0, 0.07, -0.02, 0.05, 0.01, -0.04, 0.06], [-0.05, 0.07, 1.0, 0.14, -0.03, 0.02, 0.05, -0.01], [0.08, -0.02, 0.14, 1.0, 0.09, -0.06, 0.0, 0.04], [0.02, 0.05, -0.03, 0.09, 1.0, 0.12, -0.02, 0.01], [-0.11, 0.01, 0.02, -0.06, 0.12, 1.0, 0.15, -0.03], [0.03, -0.04, 0.05, 0.0, -0.02, 0.15, 1.0, 0.1], [0.09, 0.06, -0.01, 0.04, 0.01, -0.03, 0.1, 1.0]], "corr_shape": [116, 116], "ts_shape": [116], "corr_sha256": "53161c1ea79258bd8243b9fb923fbfb0f6c8427da8c60634e623b4c1cad73deb", "corr_bytes": 107832, "ts_sha256": "5cf79c9700df2a1875496e4edb00aa988f65de3bb6186736df7216469452f661", "ts_bytes": 195064}
+ {"parcellation": "harvard48", "subject": "sub-control3351", "corr_path": "data/parc-harvard48/sub-control3351/sub-control3351_desc-correlation_matrix_parc-harvard48.mat", "ts_path": "data/parc-harvard48/sub-control3351/sub-control3351_desc-timeseries_parc-harvard48.mat", "correlation_matrix": [[1.0, 0.08, 0.02, -0.03, 0.06, 0.01, -0.02, 0.05], [0.08, 1.0, -0.04, 0.07, 0.03, -0.01, 0.04, 0.02], [0.02, -0.04, 1.0, 0.09, 0.0, 0.05, -0.03, 0.01], [-0.03, 0.07, 0.09, 1.0, 0.12, -0.05, 0.06, 0.0], [0.06, 0.03, 0.0, 0.12, 1.0, 0.08, -0.02, 0.04], [0.01, -0.01, 0.05, -0.05, 0.08, 1.0, 0.07, -0.02], [-0.02, 0.04, -0.03, 0.06, -0.02, 0.07, 1.0, 0.09], [0.05, 0.02, 0.01, 0.0, 0.04, -0.02, 0.09, 1.0]], "corr_shape": [48, 48], "ts_shape": [48], "corr_sha256": "9aa5083bddeb70b5cc443747592e117099c3143c64a497e28956d7fb189a2f8b", "corr_bytes": 18616, "ts_sha256": "69061d32f407981d5f2b2d397f839fdf2ec5b92db084deb4823b00cadadddda1", "ts_bytes": 80824}
+ {"parcellation": "kmeans100", "subject": "sub-control3351", "corr_path": "data/parc-kmeans100/sub-control3351/sub-control3351_desc-correlation_matrix_parc-kmeans100.mat", "ts_path": "data/parc-kmeans100/sub-control3351/sub-control3351_desc-timeseries_parc-kmeans100.mat", "correlation_matrix": [[1.0, -0.02, 0.04, 0.06, -0.01, 0.03, 0.07, 0.02], [-0.02, 1.0, 0.01, -0.05, 0.08, 0.02, -0.03, 0.06], [0.04, 0.01, 1.0, 0.09, -0.02, 0.05, 0.0, 0.04], [0.06, -0.05, 0.09, 1.0, 0.1, -0.01, 0.03, 0.05], [-0.01, 0.08, -0.02, 0.1, 1.0, 0.06, 0.02, -0.03], [0.03, 0.02, 0.05, -0.01, 0.06, 1.0, 0.11, 0.0], [0.07, -0.03, 0.0, 0.03, 0.02, 0.11, 1.0, 0.08], [0.02, 0.06, 0.04, 0.05, -0.03, 0.0, 0.08, 1.0]], "corr_shape": [100, 100], "ts_shape": [100], "corr_sha256": "8fccda155468301de2c3ea5c3c37d9f12cd99ce8ee3a3c74d726dbe29b39d02e", "corr_bytes": 80184, "ts_sha256": "11572befea1db47daaa4c9d8671655ad483b459114643072e865d593a9579b52", "ts_bytes": 168184}
+ {"parcellation": "schaefer100", "subject": "sub-control3351", "corr_path": "data/parc-schaefer100/sub-control3351/sub-control3351_desc-correlation_matrix_parc-schaefer100.mat", "ts_path": "data/parc-schaefer100/sub-control3351/sub-control3351_desc-timeseries_parc-schaefer100.mat", "correlation_matrix": [[1.0, 0.03, 0.06, -0.01, 0.02, 0.07, -0.02, 0.04], [0.03, 1.0, -0.02, 0.05, 0.04, 0.01, 0.06, 0.03], [0.06, -0.02, 1.0, 0.11, -0.03, 0.02, 0.04, 0.07], [-0.01, 0.05, 0.11, 1.0, 0.08, -0.04, 0.05, 0.02], [0.02, 0.04, -0.03, 0.08, 1.0, 0.06, -0.01, 0.05], [0.07, 0.01, 0.02, -0.04, 0.06, 1.0, 0.09, -0.02], [-0.02, 0.06, 0.04, 0.05, -0.01, 0.09, 1.0, 0.1], [0.04, 0.03, 0.07, 0.02, 0.05, -0.02, 0.1, 1.0]], "corr_shape": [100, 100], "ts_shape": [100], "corr_sha256": "186dcfed291fc4a010d339ce3cea60caf82d2ddc08962aa0c5c00294ffe16718", "corr_bytes": 80184, "ts_sha256": "43c9bf48b2cbaa83fc400be97a42fb260dc2fc7fb0b5f10e388ca6444c35734b", "ts_bytes": 168184}
+ {"parcellation": "ward100", "subject": "sub-control3351", "corr_path": "data/parc-ward100/sub-control3351/sub-control3351_desc-correlation_matrix_parc-ward100.mat", "ts_path": "data/parc-ward100/sub-control3351/sub-control3351_desc-timeseries_parc-ward100.mat", "correlation_matrix": [[1.0, 0.02, -0.03, 0.05, 0.01, -0.02, 0.04, 0.06], [0.02, 1.0, 0.05, -0.01, 0.03, 0.04, -0.02, 0.07], [-0.03, 0.05, 1.0, 0.08, -0.04, 0.02, 0.06, 0.01], [0.05, -0.01, 0.08, 1.0, 0.09, -0.03, 0.02, 0.05], [0.01, 0.03, -0.04, 0.09, 1.0, 0.07, -0.01, 0.04], [-0.02, 0.04, 0.02, -0.03, 0.07, 1.0, 0.1, -0.02], [0.04, -0.02, 0.06, 0.02, -0.01, 0.1, 1.0, 0.09], [0.06, 0.07, 0.01, 0.05, 0.04, -0.02, 0.09, 1.0]], "corr_shape": [100, 100], "ts_shape": [100], "corr_sha256": "7f3a495f5de9a941b1f5bd2cfbd47b0eb51cc26c1bb3b4341b67f9edb3d13a02", "corr_bytes": 80184, "ts_sha256": "c4d6768352805ed7cf4ba0ed89152a4dff24ea5307def328caf879329c3232f6", "ts_bytes": 168184}
manifests/preview.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:93325e2fef00b641d5dae45a4b85423bf139fe964e27810d58386f268efa8eb2
+ size 9765
parcellations.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "AAL116": 116,
+   "harvard48": 48,
+   "kmeans100": 100,
+   "schaefer100": 100,
+   "ward100": 100
+ }
preview/AAL116__sub-control3351__corr8x8.json ADDED
@@ -0,0 +1,10 @@
+ [
+   [1.0, 0.12, -0.05, 0.08, 0.02, -0.11, 0.03, 0.09],
+   [0.12, 1.0, 0.07, -0.02, 0.05, 0.01, -0.04, 0.06],
+   [-0.05, 0.07, 1.0, 0.14, -0.03, 0.02, 0.05, -0.01],
+   [0.08, -0.02, 0.14, 1.0, 0.09, -0.06, 0.0, 0.04],
+   [0.02, 0.05, -0.03, 0.09, 1.0, 0.12, -0.02, 0.01],
+   [-0.11, 0.01, 0.02, -0.06, 0.12, 1.0, 0.15, -0.03],
+   [0.03, -0.04, 0.05, 0.0, -0.02, 0.15, 1.0, 0.1],
+   [0.09, 0.06, -0.01, 0.04, 0.01, -0.03, 0.1, 1.0]
+ ]
preview/harvard48__sub-control3351__corr8x8.json ADDED
@@ -0,0 +1,10 @@
+ [
+   [1.0, 0.08, 0.02, -0.03, 0.06, 0.01, -0.02, 0.05],
+   [0.08, 1.0, -0.04, 0.07, 0.03, -0.01, 0.04, 0.02],
+   [0.02, -0.04, 1.0, 0.09, 0.0, 0.05, -0.03, 0.01],
+   [-0.03, 0.07, 0.09, 1.0, 0.12, -0.05, 0.06, 0.0],
+   [0.06, 0.03, 0.0, 0.12, 1.0, 0.08, -0.02, 0.04],
+   [0.01, -0.01, 0.05, -0.05, 0.08, 1.0, 0.07, -0.02],
+   [-0.02, 0.04, -0.03, 0.06, -0.02, 0.07, 1.0, 0.09],
+   [0.05, 0.02, 0.01, 0.0, 0.04, -0.02, 0.09, 1.0]
+ ]
preview/kmeans100__sub-control3351__corr8x8.json ADDED
@@ -0,0 +1,10 @@
+ [
+   [1.0, -0.02, 0.04, 0.06, -0.01, 0.03, 0.07, 0.02],
+   [-0.02, 1.0, 0.01, -0.05, 0.08, 0.02, -0.03, 0.06],
+   [0.04, 0.01, 1.0, 0.09, -0.02, 0.05, 0.0, 0.04],
+   [0.06, -0.05, 0.09, 1.0, 0.1, -0.01, 0.03, 0.05],
+   [-0.01, 0.08, -0.02, 0.1, 1.0, 0.06, 0.02, -0.03],
+   [0.03, 0.02, 0.05, -0.01, 0.06, 1.0, 0.11, 0.0],
+   [0.07, -0.03, 0.0, 0.03, 0.02, 0.11, 1.0, 0.08],
+   [0.02, 0.06, 0.04, 0.05, -0.03, 0.0, 0.08, 1.0]
+ ]
preview/schaefer100__sub-control3351__corr8x8.json ADDED
@@ -0,0 +1,10 @@
+ [
+   [1.0, 0.03, 0.06, -0.01, 0.02, 0.07, -0.02, 0.04],
+   [0.03, 1.0, -0.02, 0.05, 0.04, 0.01, 0.06, 0.03],
+   [0.06, -0.02, 1.0, 0.11, -0.03, 0.02, 0.04, 0.07],
+   [-0.01, 0.05, 0.11, 1.0, 0.08, -0.04, 0.05, 0.02],
+   [0.02, 0.04, -0.03, 0.08, 1.0, 0.06, -0.01, 0.05],
+   [0.07, 0.01, 0.02, -0.04, 0.06, 1.0, 0.09, -0.02],
+   [-0.02, 0.06, 0.04, 0.05, -0.01, 0.09, 1.0, 0.1],
+   [0.04, 0.03, 0.07, 0.02, 0.05, -0.02, 0.1, 1.0]
+ ]
preview/ward100__sub-control3351__corr8x8.json ADDED
@@ -0,0 +1,10 @@
+ [
+   [1.0, 0.02, -0.03, 0.05, 0.01, -0.02, 0.04, 0.06],
+   [0.02, 1.0, 0.05, -0.01, 0.03, 0.04, -0.02, 0.07],
+   [-0.03, 0.05, 1.0, 0.08, -0.04, 0.02, 0.06, 0.01],
+   [0.05, -0.01, 0.08, 1.0, 0.09, -0.03, 0.02, 0.05],
+   [0.01, 0.03, -0.04, 0.09, 1.0, 0.07, -0.01, 0.04],
+   [-0.02, 0.04, 0.02, -0.03, 0.07, 1.0, 0.1, -0.02],
+   [0.04, -0.02, 0.06, 0.02, -0.01, 0.1, 1.0, 0.09],
+   [0.06, 0.07, 0.01, 0.05, 0.04, -0.02, 0.09, 1.0]
+ ]
scripts/enrich_manifests.py ADDED
@@ -0,0 +1,136 @@
+ #!/usr/bin/env python3
+ """
+ Enrich preview/dev manifests with shapes, embedded 8x8 preview matrices, and optional integrity fields.
+
+ Usage:
+   python scripts/enrich_manifests.py --preview manifests/preview.jsonl --dev manifests/dev.jsonl --inventory manifests/manifest.jsonl
+
+ Notes:
+ - Shapes are inferred from sidecar JSON (NodeCount). For timeseries, we record [n] if T is unknown.
+ - For preview rows, we embed 8x8 correlation matrices from precomputed preview/<parc>__<subject>__corr8x8.json files.
+ - If an inventory manifest is provided, we add sha256/bytes for corr_path/ts_path.
+ """
+ import argparse
+ import json
+ import os
+ from pathlib import Path
+ from typing import Dict, Iterable, Optional
+
+
+ def load_inventory(inv_path: Optional[Path]) -> Dict[str, Dict[str, object]]:
+     mapping: Dict[str, Dict[str, object]] = {}
+     if not inv_path or not inv_path.exists():
+         return mapping
+     with inv_path.open("r", encoding="utf-8") as f:
+         for line in f:
+             if not line.strip():
+                 continue
+             row = json.loads(line)
+             target_rel = row.get("target_rel")
+             if not target_rel:
+                 continue
+             mapping[target_rel] = {"sha256": row.get("sha256"), "bytes": row.get("bytes")}
+     return mapping
+
+
+ def infer_shapes_from_sidecars(base: Path, corr_path: str, ts_path: str):
+     corr_shape = None
+     ts_shape = None
+     # correlation sidecar
+     cjson = base / Path(corr_path).with_suffix(".json")
+     if cjson.exists():
+         try:
+             meta = json.loads(cjson.read_text())
+             n = meta.get("NodeCount")
+             if isinstance(n, int) and n > 0:
+                 corr_shape = [n, n]
+         except Exception:
+             pass
+     # timeseries sidecar
+     tjson = base / Path(ts_path).with_suffix(".json")
+     if tjson.exists():
+         try:
+             meta = json.loads(tjson.read_text())
+             n = meta.get("NodeCount")
+             if isinstance(n, int) and n > 0:
+                 ts_shape = [n]  # T not recorded; leave unknown
+         except Exception:
+             pass
+     return corr_shape or [], ts_shape or []
+
+
+ def preview_corr8(base: Path, parcellation: str, subject: str):
+     p = base / "preview" / f"{parcellation}__{subject}__corr8x8.json"
+     if p.exists():
+         try:
+             arr = json.loads(p.read_text())
+             if isinstance(arr, list):
+                 return arr
+         except Exception:
+             return []
+     return []
+
+
+ def enrich_file(path: Path, inventory: Dict[str, Dict[str, object]], embed_preview: bool):
+     base = path.parent.parent  # repo root
+     lines = []
+     with path.open("r", encoding="utf-8") as f:
+         for line in f:
+             if not line.strip():
+                 continue
+             lines.append(json.loads(line))
+
+     out_lines = []
+     for row in lines:
+         parc = row["parcellation"]
+         subject = row["subject"]
+         corr_path = row["corr_path"]
+         ts_path = row["ts_path"]
+
+         corr_shape, ts_shape = infer_shapes_from_sidecars(base, corr_path, ts_path)
+
+         if embed_preview:
+             corr8 = preview_corr8(base, parc, subject)
+             row["correlation_matrix"] = corr8
+         else:
+             # Omit heavy arrays
+             row["correlation_matrix"] = []
+
+         row["corr_shape"] = corr_shape
+         row["ts_shape"] = ts_shape
+
+         # Integrity metadata if available
+         if corr_path in inventory:
+             row["corr_sha256"] = inventory[corr_path].get("sha256")
+             row["corr_bytes"] = inventory[corr_path].get("bytes")
+         if ts_path in inventory:
+             row["ts_sha256"] = inventory[ts_path].get("sha256")
+             row["ts_bytes"] = inventory[ts_path].get("bytes")
+
+         out_lines.append(row)
+
+     with path.open("w", encoding="utf-8") as f:
+         for r in out_lines:
+             f.write(json.dumps(r, ensure_ascii=False) + "\n")
+
+
+ def main():
+     ap = argparse.ArgumentParser()
+     ap.add_argument("--preview", type=Path, required=False, default=Path("manifests/preview.jsonl"))
+     ap.add_argument("--dev", type=Path, required=False, default=Path("manifests/dev.jsonl"))
+     ap.add_argument("--inventory", type=Path, required=False, default=Path("manifests/manifest.jsonl"))
+     args = ap.parse_args()
+
+     inv = load_inventory(args.inventory if args.inventory and args.inventory.exists() else None)
+
+     if args.preview and args.preview.exists():
+         enrich_file(args.preview, inv, embed_preview=True)
+         print(f"Enriched {args.preview}")
+     if args.dev and args.dev.exists():
+         enrich_file(args.dev, inv, embed_preview=False)
+         print(f"Enriched {args.dev}")
+
+
+ if __name__ == "__main__":
+     main()
+
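The shape inference in `enrich_manifests.py` relies only on the sidecar convention: a `.json` next to each `.mat` carrying a `NodeCount` field. The core step in isolation, against a throwaway sidecar (the file name is illustrative):

```python
import json
import tempfile
from pathlib import Path

# A throwaway sidecar standing in for a real
# <subject>_desc-correlation_matrix_parc-<parc>.json under data/.
tmp = Path(tempfile.mkdtemp())
sidecar = tmp / "sub-x_desc-correlation_matrix_parc-harvard48.json"
sidecar.write_text(json.dumps({"NodeCount": 48}))

meta = json.loads(sidecar.read_text())
n = meta.get("NodeCount")
# Same guard the script uses: only trust a positive integer NodeCount.
corr_shape = [n, n] if isinstance(n, int) and n > 0 else []
print(corr_shape)  # [48, 48]
```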
scripts/jsonl_to_parquet.py ADDED
@@ -0,0 +1,40 @@
+ #!/usr/bin/env python3
+ """
+ Convert a JSONL file (one record per line) to Parquet with a stable schema.
+
+ Usage:
+   python scripts/jsonl_to_parquet.py manifests/preview.jsonl manifests/preview.parquet
+   python scripts/jsonl_to_parquet.py manifests/dev.jsonl manifests/dev.parquet
+
+ Requires: pandas + pyarrow
+ """
+ import argparse
+ import json
+ from pathlib import Path
+
+ import pandas as pd
+
+
+ def main():
+     ap = argparse.ArgumentParser()
+     ap.add_argument("src", type=Path)
+     ap.add_argument("dst", type=Path)
+     args = ap.parse_args()
+
+     records = []
+     with args.src.open("r", encoding="utf-8") as f:
+         for line in f:
+             if not line.strip():
+                 continue
+             records.append(json.loads(line))
+
+     # Use pandas to write parquet; let pyarrow infer nested lists
+     df = pd.DataFrame.from_records(records)
+     args.dst.parent.mkdir(parents=True, exist_ok=True)
+     df.to_parquet(args.dst, index=False)
+     print(f"Wrote {args.dst}")
+
+
+ if __name__ == "__main__":
+     main()
+
scripts/make_preview.py ADDED
@@ -0,0 +1,102 @@
+ #!/usr/bin/env python3
+ """
+ Generate precomputed 8x8 correlation previews as JSON for the preview split.
+
+ Usage:
+   # From a JSONL manifest listing parcellation/subject/corr_path
+   python scripts/make_preview.py --manifest manifests/preview.jsonl --size 8
+
+ Notes:
+ - Attempts to load correlation .mat using scipy.io.loadmat, falling back to mat73 if needed.
+ - Variable name fallbacks: correlation_matrix | corr | A
+ - Writes files under preview/<parc>__<subject>__corr8x8.json
+ """
+ import argparse
+ import json
+ import os
+ from pathlib import Path
+ from typing import Dict, Optional
+
+
+ def try_import():
+     try:
+         import scipy.io as sio  # type: ignore
+     except Exception:
+         sio = None
+     try:
+         import mat73  # type: ignore
+     except Exception:
+         mat73 = None
+     return sio, mat73
+
+
+ def load_mat(path: Path) -> Optional[Dict]:
+     sio, mat73 = try_import()
+     if sio is not None:
+         try:
+             return sio.loadmat(str(path), squeeze_me=True, simplify_cells=True)  # type: ignore[arg-type]
+         except Exception:
+             pass
+     if mat73 is not None:
+         try:
+             return mat73.loadmat(str(path))  # type: ignore[attr-defined]
+         except Exception:
+             pass
+     return None
+
+
+ def top_left_8x8(arr, size=8):
+     import numpy as np
+     a = np.asarray(arr)
+     if a.ndim < 2:
+         return None
+     n = min(size, a.shape[0], a.shape[1])
+     return a[:n, :n].astype("float32").tolist()
+
+
+ def main():
+     ap = argparse.ArgumentParser()
+     ap.add_argument("--manifest", type=Path, required=True)
+     ap.add_argument("--size", type=int, default=8)
+     args = ap.parse_args()
+
+     repo = Path(__file__).resolve().parents[1]
+     out_dir = repo / "preview"
+     out_dir.mkdir(parents=True, exist_ok=True)
+
+     with args.manifest.open("r", encoding="utf-8") as f:
+         rows = [json.loads(line) for line in f if line.strip()]
+
+     wrote = 0
+     for r in rows:
+         parc = r.get("parcellation")
+         subject = r.get("subject")
+         corr_path = r.get("corr_path")
+         if not (parc and subject and corr_path):
+             continue
+         dst = out_dir / f"{parc}__{subject}__corr8x8.json"
+         if dst.exists():
+             continue
+         src = repo / corr_path
+         if not src.exists():
+             print("skip missing:", src)
+             continue
+         data = load_mat(src)
+         if not isinstance(data, dict):
+             print("cannot load .mat (no scipy/mat73?)", src)
+             continue
+         for k in ("correlation_matrix", "corr", "A"):
+             if k in data:
+                 sl = top_left_8x8(data[k], size=args.size)
+                 if sl is not None:
+                     with dst.open("w", encoding="utf-8") as g:
+                         json.dump(sl, g)
+                     wrote += 1
+                 break
+
+     print(f"Wrote {wrote} preview tiles to {out_dir}")
+
+
+ if __name__ == "__main__":
+     main()
+
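The crop `make_preview.py` performs is just the top-left size×size block of the correlation matrix. A dependency-free sketch of the same operation on nested lists (the script itself goes through NumPy, and the identity matrix below merely stands in for real data):

```python
def top_left(matrix, size=8):
    """Return the top-left size x size block of a nested-list matrix."""
    n = min(size, len(matrix), len(matrix[0]) if matrix else 0)
    return [row[:n] for row in matrix[:n]]

# A 100x100 identity standing in for e.g. a kmeans100 correlation matrix.
full = [[1.0 if i == j else 0.0 for j in range(100)] for i in range(100)]
tile = top_left(full)
print(len(tile), len(tile[0]))  # 8 8
```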
scripts/scan_to_manifest.py ADDED
@@ -0,0 +1,139 @@
+ #!/usr/bin/env python3
+ """
+ Scan data/ and create a metadata-only manifest JSONL with
+ - parcellation, subject, corr_path, ts_path
+ - corr_shape [n,n] and ts_shape [n] inferred from sidecars if available
+ - optional sha256/bytes joined from manifests/manifest.jsonl
+
+ Examples:
+   python scripts/scan_to_manifest.py --out manifests/dev.jsonl --parcs AAL116,harvard48 --limit-per-parc 20
+   python scripts/scan_to_manifest.py --out manifests/all.jsonl
+ """
+ import argparse
+ import json
+ import os
+ from pathlib import Path
+ from typing import Dict, Iterable, List, Optional, Tuple
+
+
+ def load_inventory(inv_path: Optional[Path]) -> Dict[str, Dict[str, object]]:
+     mapping: Dict[str, Dict[str, object]] = {}
+     if not inv_path or not inv_path.exists():
+         return mapping
+     with inv_path.open("r", encoding="utf-8") as f:
+         for line in f:
+             if not line.strip():
+                 continue
+             row = json.loads(line)
+             target_rel = row.get("target_rel")
+             if not target_rel:
+                 continue
+             mapping[target_rel] = {"sha256": row.get("sha256"), "bytes": row.get("bytes")}
+     return mapping
+
+
+ def infer_shapes_from_sidecars(repo: Path, corr_path: str, ts_path: str):
+     corr_shape = []
+     ts_shape = []
+     cjson = repo / Path(corr_path).with_suffix(".json")
+     if cjson.exists():
+         try:
+             meta = json.loads(cjson.read_text())
+             n = meta.get("NodeCount")
+             if isinstance(n, int) and n > 0:
+                 corr_shape = [n, n]
+         except Exception:
+             pass
+     tjson = repo / Path(ts_path).with_suffix(".json")
+     if tjson.exists():
+         try:
+             meta = json.loads(tjson.read_text())
+             n = meta.get("NodeCount")
+             if isinstance(n, int) and n > 0:
+                 ts_shape = [n]
+         except Exception:
+             pass
+     return corr_shape, ts_shape
+
+
+ def iter_subjects(repo: Path, parcs: Optional[List[str]]) -> Iterable[Tuple[str, Path]]:
+     data = repo / "data"
+     for d in sorted(data.glob("parc-*")):
+         parc = d.name[len("parc-"):]
+         if parcs and parc not in parcs:
+             continue
+         for sub in sorted(d.glob("sub-*")):
+             yield parc, sub
+
+
+ def find_pair(sub_dir: Path, parc: str) -> Optional[Tuple[str, str]]:
+     # Expects files named <id>_desc-timeseries_parc-<parc>.mat and <id>_desc-correlation_matrix_parc-<parc>.mat
+     subj = sub_dir.name
+     ts = sub_dir / f"{subj}_desc-timeseries_parc-{parc}.mat"
+     cm = sub_dir / f"{subj}_desc-correlation_matrix_parc-{parc}.mat"
+     if ts.exists() and cm.exists():
+         # Store paths relative to the repo root. sub_dir is
+         # <repo>/data/parc-<parc>/sub-<id>, so the repo root is
+         # three levels up: sub_dir.parents[2].
+         root = sub_dir.parents[2]
+         rel_ts = os.path.relpath(ts, root)
+         rel_cm = os.path.relpath(cm, root)
+         return rel_cm.replace(os.sep, "/"), rel_ts.replace(os.sep, "/")
+     return None
+
+
+ def main():
+     ap = argparse.ArgumentParser()
+     ap.add_argument("--out", type=Path, required=True, help="Output JSONL path")
+     ap.add_argument("--parcs", type=str, default="", help="Comma-separated parcellations (default: all)")
+     ap.add_argument("--limit-per-parc", type=int, default=0, help="Max rows per parcellation (0 = no limit)")
+     ap.add_argument("--inventory", type=Path, default=Path("manifests/manifest.jsonl"))
+     args = ap.parse_args()
+
+     repo = Path(__file__).resolve().parents[1]
+     inv = load_inventory(args.inventory if args.inventory and args.inventory.exists() else None)
+     parcs = [p.strip() for p in args.parcs.split(",") if p.strip()] or None
+
+     counts: Dict[str, int] = {}
+     rows: List[Dict[str, object]] = []
+
+     for parc, subdir in iter_subjects(repo, parcs):
+         if args.limit_per_parc and counts.get(parc, 0) >= args.limit_per_parc:
+             continue
+         pair = find_pair(subdir, parc)
+         if not pair:
+             continue
+         corr_path, ts_path = pair
+         corr_shape, ts_shape = infer_shapes_from_sidecars(repo, corr_path, ts_path)
+         subj = subdir.name
+
+         row: Dict[str, object] = {
+             "parcellation": parc,
+             "subject": subj,
+             "corr_path": corr_path,
+             "ts_path": ts_path,
+             "corr_shape": corr_shape,
+             "ts_shape": ts_shape,
+             "correlation_matrix": [],  # metadata-only
+         }
+         if corr_path in inv:
+             row["corr_sha256"] = inv[corr_path].get("sha256")
+             row["corr_bytes"] = inv[corr_path].get("bytes")
+         if ts_path in inv:
+             row["ts_sha256"] = inv[ts_path].get("sha256")
+             row["ts_bytes"] = inv[ts_path].get("bytes")
+
+         rows.append(row)
+         counts[parc] = counts.get(parc, 0) + 1
+
+     args.out.parent.mkdir(parents=True, exist_ok=True)
+     with args.out.open("w", encoding="utf-8") as f:
+         for r in rows:
+             f.write(json.dumps(r, ensure_ascii=False) + "\n")
+
+     print(f"Wrote {len(rows)} rows to {args.out}")
+
+
+ if __name__ == "__main__":
+     main()
+
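The repo-root computation in `find_pair` hinges on pathlib's `parents` indexing: since `sub_dir` sits at `<repo>/data/parc-<parc>/sub-<id>`, the repo root is `sub_dir.parents[2]` (not `parents[3]`, which would be the repo's own parent). A small sketch with an illustrative path:

```python
from pathlib import PurePosixPath

sub_dir = PurePosixPath("repo/data/parc-harvard48/sub-control3351")
# parents[0] -> repo/data/parc-harvard48
# parents[1] -> repo/data
# parents[2] -> repo  (the repo root)
print(sub_dir.parents[2])  # repo

rel = (sub_dir / "f.mat").relative_to(sub_dir.parents[2])
print(rel)  # data/parc-harvard48/sub-control3351/f.mat
```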