---
license: other
pretty_name: ConnectomeBench2
tags:
  - connectomics
  - proofreading
  - 3d
  - electron-microscopy
  - mesh
size_categories:
  - 100K<n<1M
configs:
  - config_name: default
    data_files:
      - split: train
        path: train/train-*.parquet
      - split: validation
        path: val/val-*.parquet
      - split: test
        path: test/test-*.parquet
---

# ConnectomeBench2

ConnectomeBench2 is a unified benchmark for automated proofreading of connectomic neural-segmentation data. It comprises 401,170 samples across four species (mouse, fly, human, zebrafish) and five sample types (real merge edits, real split edits, and synthetic adjacent / junction / synapse controls), together with the associated mesh geometry and electron-microscopy (EM) renderings.

Downstream trainers should treat this dataset as the single source of truth for sample identity, labels, train/validation/test split, and which task(s) a row is valid for.

## Context: Connectomic Proofreading

Connectomics builds large-scale brain maps at cellular resolution by imaging neural tissue and automatically segmenting the neurons. Two types of segmentation errors occur in this process and must be corrected, a step known as proofreading:

  • False Splits — corrected via merge corrections
  • False Merges — corrected via split corrections

Merge corrections (of false splits) are applied to multiple segments that need to be correctly merged together. Split corrections (of false merges) are applied to single segments that need to be correctly split apart.

For this reason, the dataset contains renderings of both single-segment (pre-split or post-merge) and dual-segment (post-split or pre-merge) mesh geometry where possible. EM data is provided in the dual format only — segmentation at the imaging level is contiguous, so the single-segment version can be derived as the union of the two dual masks.

## Renderings (geometry and EM imaging data)

![Channel decomposition: synapse 2-mask vs. junction single-mask](figures/channel_decomposition.png)

*Top: synapse merge pair — both masks populated. Bottom: junction control — single mask only; mask B / segment B empty.*

Geometry files (the `geometry` and `geometry_single` columns) are compressed `.npz` payloads that decode to `(3, 7, 224, 224)` float16 arrays — three 2D views (front, side, top) × seven channels:

| ch | content |
|----|---------|
| 0 | silhouette |
| 1 | depth |
| 2 | normal_x |
| 3 | normal_y |
| 4 | normal_z |
| 5 | mask A |
| 6 | mask B (empty in single-segment renders) |

Note that single- and dual-segment renders differ not only in the mask channels but also subtly in all other channels, due to slight differences in mesh geometry after merging/splitting.
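A decoded geometry array can be unpacked into named views and channels following the table above. A minimal sketch (the zero array stands in for a real decoded payload; the `views` structure is illustrative, not part of the dataset API):

```python
import numpy as np

CHANNELS = ["silhouette", "depth", "normal_x", "normal_y", "normal_z", "mask_a", "mask_b"]
VIEWS = ["front", "side", "top"]

# Stand-in for a decoded geometry payload (a real one comes from np.load).
geom = np.zeros((3, 7, 224, 224), dtype=np.float16)

# Index as views[view][channel] -> (224, 224) array.
views = {
    v: {c: geom[i, j] for j, c in enumerate(CHANNELS)}
    for i, v in enumerate(VIEWS)
}
```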

**Free split-mask labels.** For `split_edit` rows, the dual-segment render (post-split) provides ground-truth split-mask labels (mask A / mask B channels) for the corresponding single-segment render (pre-split) — split-mask-generation tasks get pixel-level supervision without extra labeling.
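Extracting those pixel targets from a dual render can be sketched as follows (the function name and toy arrays are hypothetical; channel indices follow the table above):

```python
import numpy as np

MASK_A, MASK_B = 5, 6  # mask channel indices in the geometry layout

def split_mask_targets(geometry_dual: np.ndarray) -> np.ndarray:
    """Extract per-pixel split labels from a post-split dual render.

    Returns (3, 2, 224, 224): for each view, the two ground-truth split
    masks to predict from the corresponding pre-split single render.
    """
    return geometry_dual[:, [MASK_A, MASK_B], :, :]

# Toy dual render: segment A occupies the top half, segment B the bottom.
dual = np.zeros((3, 7, 224, 224), dtype=np.float16)
dual[:, MASK_A, :112, :] = 1.0
dual[:, MASK_B, 112:, :] = 1.0
targets = split_mask_targets(dual)
```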

**EM coverage.** EM views are not present on every sample. Coverage by `sample_type` (full dataset):

| sample_type | rows | has_em |
|---|---:|---|
| adjacent_control | 121,333 | 100% |
| junction_control | 38,272 | 100% |
| synapse_control | 18,182 | 100% |
| merge_edit | 146,461 | 38% |
| split_edit | 77,213 | 23% |
| **total** | **401,170** | **63% (37% null)** |

Real human edits (`merge_edit`, `split_edit`) have EM rendered only for a stratified subset; synthetic controls all have EM. Filter on `has_em` if your task requires it.

EM imaging files (the `em_xy` / `em_xz` / `em_yz` / `em_best` columns) are PNG-encoded three-channel slices:

| ch | content |
|----|---------|
| 0 | raw EM intensity |
| 1 | segment A mask |
| 2 | segment B mask |

Four imaging views per sample: three cardinal slices (xy, xz, yz) plus a best slice at an oblique angle that maximizes the visible area of both segments (the sum of the logs of their visible areas).
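The exact slicing procedure is not spelled out here, but the stated objective (sum of the logs of the two visible areas) can be sketched as a scoring function. Everything below is an illustrative reconstruction, not the dataset's actual rendering code:

```python
import numpy as np

def best_view_score(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Score a candidate slice by the stated objective:
    sum of the logs of the visible pixel areas of both segments."""
    area_a, area_b = float(mask_a.sum()), float(mask_b.sum())
    if area_a == 0.0 or area_b == 0.0:
        return float("-inf")  # a slice missing either segment never wins
    return float(np.log(area_a) + np.log(area_b))

# Toy masks: a fully visible segment A, and a small patch of segment B.
a = np.ones((224, 224), dtype=bool)
b = np.zeros((224, 224), dtype=bool)
b[:10, :10] = True

balanced = best_view_score(a, b)                  # both segments visible
one_sided = best_view_score(a, np.zeros_like(b))  # segment B invisible
```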

For single-segment tasks, the segment A and B masks should be merged (and channel B zeroed). The best view may leak dual-label information (its angle is chosen using both labels); we advise against evaluating single-segment tasks on `em_best`.
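The fusion rule amounts to taking the union of the two mask channels and zeroing channel 2. A NumPy sketch on a synthetic slice (real slices come from the decoded PNGs; `fuse_masks` is a name I chose for illustration):

```python
import numpy as np

# Toy stand-in for a decoded EM slice: (224, 224, 3) uint8 with
# channel 0 = raw EM, channel 1 = segment A mask, channel 2 = segment B mask.
em = np.zeros((224, 224, 3), dtype=np.uint8)
em[10:60, 10:60, 1] = 255   # segment A
em[40:90, 40:90, 2] = 255   # segment B (overlaps A near the interface)

def fuse_masks(em: np.ndarray) -> np.ndarray:
    """Merge segments A and B into one mask (channel 1) and zero channel 2,
    so single-segment tasks see no dual-label information."""
    out = em.copy()
    out[..., 1] = np.maximum(em[..., 1], em[..., 2])
    out[..., 2] = 0
    return out

fused = fuse_masks(em)
```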

## Loading

```python
import io

import numpy as np
from datasets import load_dataset

ds = load_dataset("jeffbbrown2/connectomebench2-smoke", split="train")
sample = ds[0]

# sample["em_xy"] is a PIL Image (HF auto-decodes PNG columns).
# sample["geometry"] is raw bytes — decode the npz payload:
geom = np.load(io.BytesIO(sample["geometry"]))["arr_0"]  # (3, 7, 224, 224) float16
```

Or with raw pyarrow:

```python
import io

import numpy as np
import pyarrow.parquet as pq

df = pq.read_table("train/train-00000.parquet").to_pandas()
geom = np.load(io.BytesIO(df.iloc[0]["geometry"]))["arr_0"]
```

The `metadata/{train,val,test}.parquet` sidecars contain identifier/label/modality columns only (no image bytes) — useful for fast filtering or inspection.
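For example, a task-specific subset can be selected from a sidecar before touching the heavy shards. The toy frame below stands in for `metadata/train.parquet` (column names from this card; values invented):

```python
import pandas as pd

# Toy stand-in for a metadata sidecar (identifier/label columns only).
meta = pd.DataFrame({
    "combined_sample_hash": ["a" * 32, "b" * 32, "c" * 32],
    "sample_type": ["merge_edit", "split_edit", "junction_control"],
    "has_em": [True, False, True],
})

# Select the rows a task needs, then fetch only those from the shards.
em_merge = meta[meta["has_em"] & (meta["sample_type"] == "merge_edit")]
wanted = set(em_merge["combined_sample_hash"])
```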

## Columns

### Identifiers

  • `combined_sample_hash` — primary key: the 32-character MD5 hex digest of `f"{source_archive}|{source_archive_sample_hash}"`; guaranteed unique across the dataset.
  • `source_archive_sample_hash` — legacy 12-character hex hash from upstream; kept for traceability, not unique on its own.
  • `source_archive` — name of the originating render archive (e.g. `edits_and_adj_controls_fly`, `junction_controls_mouse`, `synapse_controls_fly`). 10 distinct values (5 archives × species).
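The primary key can be recomputed from the two traceability columns; a sketch using Python's `hashlib` (the example values are made up):

```python
import hashlib

def combined_sample_hash(source_archive: str, source_archive_sample_hash: str) -> str:
    """32-char MD5 hex digest of "{source_archive}|{source_archive_sample_hash}"."""
    key = f"{source_archive}|{source_archive_sample_hash}"
    return hashlib.md5(key.encode("utf-8")).hexdigest()

h = combined_sample_hash("junction_controls_mouse", "0123456789ab")
```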

### Sample identity

  • sample_type: str — single source of truth for what kind of sample this row is. Five values:
    • merge_edit — positive merge-correction edit
    • split_edit — positive split-correction edit
    • adjacent_control — synthetic negative for merge-correction (segments adjacent to genuine correction)
    • junction_control — putative junction in proofread neuron (negative merge-error-id sample)
    • synapse_control — synapse pair across neurons (negative merge-correction)
  • same_neuron: bool — derived from sample_type:
    • True for merge_edit, junction_control
    • False for split_edit, adjacent_control, synapse_control
  • `species`: str — fly / mouse / human / zebrafish.
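The `same_neuron` derivation is a fixed mapping from `sample_type` and can be reproduced directly (an illustrative sketch; the dataset ships the column precomputed):

```python
# Mapping stated above: edits/controls on the same neuron vs. across neurons.
SAME_NEURON = {
    "merge_edit": True,
    "junction_control": True,
    "split_edit": False,
    "adjacent_control": False,
    "synapse_control": False,
}

def same_neuron(sample_type: str) -> bool:
    """Derive the same_neuron flag from sample_type."""
    return SAME_NEURON[sample_type]
```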

### Image content

  • geometry — bytes; compressed npz (key "arr_0") decoding to (3, 7, 224, 224) float16. Null when the sample has no dual-segment render.
  • geometry_single — same shape/dtype, single-segment version. Null when not present.
  • em_xy / em_xz / em_yz / em_best — PIL Images (3-channel PNG, (224, 224, 3) uint8). Null when the row has no EM views.
  • has_single_mask: bool — convenience flag.
  • has_dual_mask: bool — convenience flag.
  • has_em: bool — true if any em_* column is non-null.
  • present_slots: list[str] — modality tags actually present (e.g. ["geometry", "geometry_single", "em_xy", "em_xz", "em_yz", "em_best"]).

### Task routing & labels

  • task_routing: list[str] — which downstream task(s) this row can serve as training data for:
    • false_split_correction — merge-correction task; fires when sample_type ∈ {merge_edit, synapse_control, adjacent_control} AND has_dual_mask.
    • false_merge_identification — merge-error binary classification; fires when sample_type ∈ {split_edit, junction_control} AND has_single_mask.
    • split_mask_generation — pixel-level split prediction; fires when sample_type == split_edit AND has_single_mask.
  • false_split_correction_label: bool = same_neuron. Populated for all rows; trainers filter by task_routing.
  • false_merge_identification_label: bool = not same_neuron. Populated for all rows; trainers filter by task_routing.

Usage note. Downstream training scripts must load the appropriate geometry render per task:

  • Merge Correction of false splits should use dual-segment renders
  • Split Correction of false merges should use single-segment renders
    • Furthermore, fuse the A/B channels of EM images and discard `em_best` (its oblique angle is chosen using both labels and can leak ground truth)

Otherwise, ground-truth task or label information may leak to the model and bias performance.

### Train/val/test split

  • `split`: str — train / validation / test. A ~80/10/10 split assigned by the spatial location of the proofreading sample (`interface_point_nm`), via cube splits: 50 µm cubes tiling the volume are randomly assigned to splits, so nearby samples always share a split.
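One way to make such a cube-based assignment deterministic is to hash cube indices. The sketch below is a hypothetical reconstruction with an arbitrary hash; the dataset's real assignment used its own random procedure, so this will NOT reproduce the published split:

```python
import hashlib

CUBE_NM = 50_000  # 50 µm cube edge, in nanometres

def cube_of(point_nm: tuple[float, float, float]) -> tuple[int, int, int]:
    """Map an interface point (nm) to the index of its 50 µm cube."""
    return tuple(int(c // CUBE_NM) for c in point_nm)

def split_of(cube: tuple[int, int, int]) -> str:
    """Deterministic ~80/10/10 assignment by hashing the cube index
    (hypothetical; not the dataset's actual seed or procedure)."""
    h = int(hashlib.md5(repr(cube).encode()).hexdigest(), 16) % 100
    return "train" if h < 80 else ("validation" if h < 90 else "test")

# Two samples in the same cube always land in the same split:
a = split_of(cube_of((12_345.0, 67_890.0, 101_112.0)))
b = split_of(cube_of((12_999.0, 67_001.0, 101_999.0)))  # same 50 µm cube
```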

### Other

  • metadata: str — JSON-stringified original metadata struct. Parse with json.loads. Useful keys: operation_id, source_operation_id, strategy, image_types, interface_point_nm, before_root_ids, after_root_ids, …
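Parsing is a one-liner with the standard library. The raw string below is a made-up stand-in for a real row's `metadata` value (keys from the list above; values invented):

```python
import json

# Toy stand-in for one row's JSON-stringified metadata column.
raw = '{"operation_id": 12345, "strategy": "interface", "interface_point_nm": [10.0, 20.0, 30.0]}'

meta = json.loads(raw)
point = meta["interface_point_nm"]
```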

## Counts

  • 401,170 rows total · ~80/11/9 train (319,727) / validation (43,517) / test (37,926)
  • 251,499 rows with EM views; all 401,170 have geometry
  • ~2.2M model-level samples (EM × 4 views + geom × 3 views), or ~2.8M counting dual + single geom separately
  • 506 parquet shards (~240 MB each)

## Layout

```
README.md
shards.csv                    metadata across shards (path, sha256, n_samples, size)
train/train-*.parquet         WebDataset-style parquet shards with image bytes
val/val-*.parquet
test/test-*.parquet
metadata/                     sidecar parquets with identifiers + labels (no bytes)
  train.parquet
  val.parquet
  test.parquet
demo.parquet                  stratified mini-shard (one-line preview)
figures/
  channel_decomposition.png
```

## Sources & License

Derived from the following upstream connectomic proofreading datasets:

  • MICrONS (mouse cortex)
  • FlyWire (Drosophila brain)
  • H01 (human cortex)
  • Zebrafish larval connectome

The license is set to `other`: users must comply with the upstream licenses, which may differ across species/sources. The final outbound license will be set after upstream license review.

## Citation

If you use ConnectomeBench2, please cite:

Brown, J., Farkas, T., Razgar, G., & Boyden, E. S. (2026, in submission). *ConnectomeBench2: A unified benchmark for automated connectomic proofreading.* (Brown J. and Farkas T. contributed equally as first authors.)

Please also cite the upstream connectome sources used by this dataset: