# TreeUQ
TreeUQ is a large-scale Earth observation benchmark for tree species mapping and tree structure estimation (height, density, variance) with uncertainty-relevant targets, built over Bavaria, Germany (CRS EPSG:25832, 10 m reference grid, 128×128-pixel patches). Each sample bundles four-season Sentinel-2 imagery (10 bands), four-season Sentinel-1 GRD (VV, VH), a tree species raster, per-pixel height / count / density targets derived from the Bavarian Einzelbäume individual-tree inventory, and (when available) a co-registered DOP20 RGB orthophoto at 20 cm as a 6400×6400×3 patch.
**Splits** (spatial blocks, test buffered): train 31,092 | validation 8,150 | test 6,177 (45,419 patches total).

**License:** CC-BY-4.0.

**Croissant 1.0 metadata** (machine-readable, for tooling & venues):
https://huggingface.co/datasets/mammmarahmed/TreeUQ/resolve/main/croissant.json
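As a quick sanity check, the JSON-LD can be fetched and inspected with the standard library alone (a minimal sketch; the `recordSet` key follows the Croissant vocabulary, so check the file if your copy differs):

```python
import json
import urllib.request

CROISSANT_URL = "https://huggingface.co/datasets/mammmarahmed/TreeUQ/resolve/main/croissant.json"

with urllib.request.urlopen(CROISSANT_URL) as resp:
    meta = json.load(resp)

# Top-level JSON-LD fields and the record sets (e.g. "patches")
print(meta.get("name"), meta.get("license"))
print([rs.get("@id") or rs.get("name") for rs in meta.get("recordSet", [])])
```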
## Repository layout
| Path | Role |
|---|---|
| `data/index/train.parquet`, `validation.parquet`, `test.parquet` | Patch index (one row per patch): geometry / block split / quality stats / `sample_key` / `shard_relpath` |
| `data/shards/*.tar` | WebDataset-style shards; members are raw little-endian tensors + optional JSON sidecar per sample (see `data/schema.json`) |
| `data/schema.json` | Tensor schema: member suffix → dtype, shape, band names, units, optional modalities (e.g. DOP20) |
| `croissant.json` | Croissant JSON-LD (record set `patches`, RAI + provenance) |
## Index Parquet structure (`data/index/*.parquet`)
Each row is one patch. Use these columns to filter by location, join to tensors, or download only the shards you need.
| Column | Type (typical) | Meaning |
|---|---|---|
| `patch_id` | int64 | Globally unique patch ID on the Bavaria grid |
| `center_x`, `center_y` | float64 | Patch centre in EPSG:25832 (metres, easting / northing) |
| `row_start`, `row_end`, `col_start`, `col_end` | int64 | Patch extent in the master 10 m pixel grid (`row_end` / `col_end` are exclusive) |
| `valid_pixel_pct` | float64 | Share of pixels with valid Sentinel-2 observations ([0, 1]) |
| `tree_pixel_pct` | float64 | Share of pixels with at least one inventory tree ([0, 1]) |
| `mean_tree_count` | float64 | Mean inventory tree count per 10 m pixel in the patch |
| `mean_tree_count_variance` | float64 | Mean spatial variance of tree count in the patch |
| `split` | string | Split label as stored in the table (e.g. `train`, `val`, `test`); file names are still `train.parquet`, `validation.parquet`, `test.parquet` |
| `block_col`, `block_row`, `block_id` | int64 | Spatial block used for deterministic train/val/test assignment |
| `distance_to_nearest_test_km` | float64 | Distance from patch centre to nearest test patch centre (km) |
| `buffered` | bool | Whether the patch lies in the buffer around test blocks |
| `in_bavaria` | bool | Whether the patch centre lies inside the Bavaria administrative boundary |
| `dop20_available` | bool | Whether a DOP20 RGB chip is expected for this patch (still check tensors in the shard) |
| `sample_key` | string | WebDataset key for this patch inside the tar (e.g. `000410`); use as the prefix for member filenames |
| `shard_relpath` | string | Repo-relative path to the `.tar` file that holds this patch (e.g. `data/shards/train-000000.tar`) |
**Subset downloads:** filter rows (e.g. by `patch_id`, by bounding box on `center_x`/`center_y`, or by polygon in EPSG:25832), then take the unique `shard_relpath` values and download only those tar files from the Hub (see below).
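For the polygon case, a minimal sketch with `shapely` (not in the dependency list below; `idx` is the index DataFrame loaded as in step 2, and the AOI coordinates here are made up):

```python
from shapely.geometry import Point, Polygon

# Hypothetical AOI in EPSG:25832 (metres)
aoi = Polygon([(600_000, 5_400_000), (610_000, 5_400_000),
               (610_000, 5_410_000), (600_000, 5_410_000)])

mask = idx.apply(lambda r: aoi.contains(Point(r["center_x"], r["center_y"])), axis=1)
shards_to_fetch = idx.loc[mask, "shard_relpath"].unique()
```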
## How to load
### Dependencies
```
numpy>=1.23
pandas>=2.0
pyarrow>=14.0
huggingface_hub>=0.20.0
```
Optional: `webdataset`, if you prefer its iterators (not required for the examples below).
### Decode rule
For each tensor member listed under `schema.json` → `members`, decode as:

```python
numpy.frombuffer(raw_bytes, dtype=np.dtype(spec["dtype"])).reshape(tuple(spec["shape"]))
```

Use `band_names` in `schema.json` for channel order where provided.
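For instance, with a hypothetical spec entry for a Sentinel-1 member (the real dtype, shape, and band names come from `data/schema.json`), the decode rule plays out like this:

```python
import numpy as np

# Hypothetical spec, mirroring the structure of schema.json -> members
spec = {"dtype": "float32", "shape": [128, 128, 2], "band_names": ["VV", "VH"]}

raw_bytes = np.zeros((128, 128, 2), dtype=np.float32).tobytes()  # stand-in for tar member bytes
arr = np.frombuffer(raw_bytes, dtype=np.dtype(spec["dtype"])).reshape(tuple(spec["shape"]))
vv = arr[..., spec["band_names"].index("VV")]  # select a band by name
```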
### Loading from Hugging Face Hub
Repository: `mammmarahmed/TreeUQ` (`repo_type="dataset"`).
#### 1. Download small files first (index + schema)

Use these wherever you only need metadata:
```python
from huggingface_hub import hf_hub_download

REPO = "mammmarahmed/TreeUQ"
REV = "main"  # pin a commit hash for exact reproducibility

schema_path = hf_hub_download(
    repo_id=REPO, filename="data/schema.json", repo_type="dataset", revision=REV
)
train_idx_path = hf_hub_download(
    repo_id=REPO, filename="data/index/train.parquet", repo_type="dataset", revision=REV
)
```
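A quick sanity check of the two files just downloaded (this assumes only the `members` mapping described under "Decode rule"):

```python
import json

import pandas as pd

with open(schema_path, "r", encoding="utf-8") as f:
    schema = json.load(f)
print("member suffixes:", sorted(schema["members"].keys()))

idx = pd.read_parquet(train_idx_path)
print(len(idx), "train patches; columns:", idx.columns.tolist())
```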
#### 2. Download only the shard tar files you need

After filtering the index (see the Parquet table above), collect the unique `shard_relpath` strings and download each:
```python
import pandas as pd

idx = pd.read_parquet(train_idx_path)

# Example: bounding box in EPSG:25832 (metres)
sub = idx[
    (idx["center_x"] >= 600_000) & (idx["center_x"] <= 610_000)
    & (idx["center_y"] >= 5_400_000) & (idx["center_y"] <= 5_410_000)
]

shard_paths = sub["shard_relpath"].unique()

# Files land in the default Hugging Face cache (~/.cache/huggingface/hub);
# set HF_HOME to relocate it.
local_tars = [
    hf_hub_download(repo_id=REPO, filename=rel, repo_type="dataset", revision=REV)
    for rel in shard_paths
]
```
#### 3. Read samples from local tar paths

Use the same row's `sample_key` to find members `{sample_key}.<suffix>` inside that tar (suffixes come from `schema.json`), as in the sketch below.
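A minimal sketch for one row (it reuses `REPO`/`REV` from step 1, `sub` from step 2, and the `schema` dict loaded above; the full iterator in the local-clone example below also handles optional members and sidecars):

```python
import tarfile

import numpy as np

row = sub.iloc[0]  # any row from the filtered index
key = str(row["sample_key"])
tar_path = hf_hub_download(
    repo_id=REPO, filename=row["shard_relpath"], repo_type="dataset", revision=REV
)

with tarfile.open(tar_path, "r:*") as tf:
    for member in tf.getmembers():
        fname = member.name.split("/")[-1]
        if not fname.startswith(key + "."):
            continue
        suffix = fname[len(key) + 1 :]
        spec = schema["members"].get(suffix)
        if spec is None:  # e.g. the JSON sidecar
            continue
        raw = tf.extractfile(member).read()
        arr = np.frombuffer(raw, dtype=np.dtype(spec["dtype"])).reshape(tuple(spec["shape"]))
        print(suffix, arr.shape, arr.dtype)
```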
### CLI alternative (single files)
```bash
export HF_TOKEN=hf_xxx  # only needed if the repo is gated; public datasets usually need no token
huggingface-cli download mammmarahmed/TreeUQ \
  data/index/train.parquet data/schema.json \
  --repo-type dataset --local-dir ./treeuq_snippet
```
Download specific shards by path:
```bash
huggingface-cli download mammmarahmed/TreeUQ \
  data/shards/train-000000.tar \
  --repo-type dataset --local-dir ./treeuq_snippet
```
Use `revision="<git_sha>"` in Python (or `--revision <git_sha>` on the CLI) when you need a frozen snapshot for papers.
## Example (local clone): iterate patches from an index split
Point `DATA_ROOT` at the repository root (the folder that contains `data/`).
```python
from pathlib import Path
import json
import tarfile

import numpy as np
import pandas as pd

DATA_ROOT = Path("/path/to/TreeUQ/repo")  # contains data/index, data/shards, data/schema.json


def load_schema(path: Path) -> dict:
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)


def decode_member(schema: dict, suffix: str, raw: bytes) -> np.ndarray:
    spec = schema["members"][suffix]
    dtype = np.dtype(spec["dtype"])
    return np.frombuffer(raw, dtype=dtype).reshape(tuple(spec["shape"]))


def iter_patches(split: str):
    schema = load_schema(DATA_ROOT / "data/schema.json")
    meta_ext = schema.get("meta_member", ".json").lstrip(".")  # e.g. "json"
    idx = pd.read_parquet(DATA_ROOT / "data/index" / f"{split}.parquet")
    for _, row in idx.iterrows():
        tar_path = DATA_ROOT / row["shard_relpath"]
        key = str(row["sample_key"])
        with tarfile.open(tar_path, "r:*") as tf:
            members = {m.name.split("/")[-1]: m for m in tf.getmembers() if m.isfile()}
            arrays = {}
            for suffix, spec in schema["members"].items():
                fname = f"{key}.{suffix}"
                if fname not in members:
                    if spec.get("optional"):
                        continue
                    raise FileNotFoundError(f"Missing required member {fname} in {tar_path}")
                raw = tf.extractfile(members[fname]).read()
                arrays[suffix] = decode_member(schema, suffix, raw)
            meta_name = f"{key}.{meta_ext}"
            sidecar = {}
            if meta_name in members:
                sidecar = json.loads(tf.extractfile(members[meta_name]).read().decode("utf-8"))
        yield {"index": row.to_dict(), "arrays": arrays, "sidecar_json": sidecar}


# Example: first training patch
first = next(iter_patches("train"))
print(first["index"]["patch_id"], list(first["arrays"].keys()))
```
If JSON sidecars use a different naming pattern, inspect one shard (`tar tf data/shards/train-000000.tar | head`) and align `meta_ext` with `schema.json`'s `meta_member`.
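The same inspection from Python, if you prefer not to shell out:

```python
import tarfile

with tarfile.open("data/shards/train-000000.tar", "r:*") as tf:
    for name in tf.getnames()[:10]:  # the first few member names reveal the key/suffix pattern
        print(name)
```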
## Shapes & units (see `data/schema.json` for authoritative detail)
| Kind | Typical shape | Notes |
|---|---|---|
| Sentinel-2 seasonal stacks | (128, 128, 10) | Band order in `schema.json`; DN reflectance, divide by 10 000 where the schema says so |
| Sentinel-1 seasonal stacks | (128, 128, 2) | VV, VH; linear gamma-0, not dB |
| Species / structure rasters | (128, 128) | Per-pixel targets |
| DOP20 RGB (optional) | (6400, 6400, 3) | uint8 RGB; only if present in the shard and `dop20_available` is true in the index row |
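Two conversions implied by the notes above, as a hedged sketch (`s2`/`s1` stand in for decoded seasonal stacks; apply the divide-by-10 000 rule only where `schema.json` flags it):

```python
import numpy as np

# Sentinel-2: DN -> reflectance (only where schema.json says the 10 000 scaling applies)
s2 = np.full((128, 128, 10), 1234, dtype=np.uint16)  # stand-in for a decoded seasonal stack
reflectance = s2.astype(np.float32) / 10_000.0

# Sentinel-1: linear gamma-0 -> dB, e.g. for visualisation or statistics
s1 = np.full((128, 128, 2), 0.05, dtype=np.float32)  # stand-in for (VV, VH)
s1_db = 10.0 * np.log10(np.clip(s1, 1e-6, None))
```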
## Limitations
- Geography: Models trained on TreeUQ may not generalize outside Bavaria without domain adaptation.
- Labels: Supervision is sparse and biased toward inventoried trees (urban / managed contexts); large parts of the landscape have weak or no label signal at 10 m.
- Resolution: Targets aggregate inventory information at 10 m; individual trees are not fully resolved at pixel level.
Extended RAI text is embedded in `croissant.json` (`rai:*` fields).