# calvingdb — Calving Front Benchmark
The first benchmark dataset for data-driven annual calving front forecasting, covering 123 marine-terminating glaciers in Svalbard from 2013 to 2023.
Each benchmark sample asks: given five past calving front observations (spanning roughly four years), predict where the calving front will be 365 days in the future.
## Dataset at a glance
| Property | Value |
|---|---|
| Glaciers | 123 (RGI 6.0, region 07 — Svalbard) |
| Total observations | 17,358 calving front scenes |
| Benchmark samples | 1,234 (866 train / 109 val / 259 test) |
| Temporal coverage | 2013–2023 |
| Spatial resolution | 30 m |
| CRS | EPSG:3995 (Arctic Polar Stereographic) |
| Prediction horizon | 365 days |
| Input sequence length | 5 snapshots |
| Version | 1.0.0 |
| License | CC BY 4.0 |
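To make the framing above concrete, here is a minimal sketch of how one sample could be assembled from a glacier's date-sorted observation series. It assumes the observations are available as a list of `(date, front)` pairs; the function and variable names are illustrative and not part of the dataset, and the exact target selection is defined by the split files, not by this sketch.

```python
from datetime import timedelta

def make_sample(observations, last_input_idx, horizon_days=365):
    """Illustrative benchmark framing: 5 past snapshots in, front at +365 days out.

    `observations` is assumed to be a date-sorted list of (date, front) pairs;
    the real split files define the exact sample indices and targets.
    """
    # Five most recent observations up to and including the anchor index.
    inputs = observations[last_input_idx - 4 : last_input_idx + 1]
    # Nominal target date one prediction horizon after the last input.
    target_date = observations[last_input_idx][0] + timedelta(days=horizon_days)
    # Pick the later observation closest to that date as the target (an assumption).
    target = min(
        observations[last_input_idx + 1 :],
        key=lambda obs: abs((obs[0] - target_date).days),
    )
    return inputs, target
```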
## Repository layout
```
calvdb/
├── zarr_zipped/                   # per-glacier zip archives (for download)
│   └── RGI60-07.XXXXX.zarr.zip
├── splits/
│   ├── train.json                 # 866 benchmark samples
│   ├── val.json                   # 109 benchmark samples
│   ├── test.json                  # 259 benchmark samples
│   ├── normalisation_stats.json   # channel statistics (train split only)
│   └── sampling_params.json       # reproducibility config
├── croissant.json                 # MLCommons Croissant metadata
└── README.md
```
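A minimal loading sketch, assuming zarr-python 3 is installed and that the `.zarr.zip` archives open directly through a `ZipStore`. The glacier ID is left as the placeholder pattern from the tree above, and the split files are only inspected here, since their internal structure is not documented in this card.

```python
import json
import zarr  # zarr-python >= 3 assumed

# Read one of the benchmark split files (a JSON collection of samples).
with open("calvdb/splits/train.json") as f:
    train_samples = json.load(f)
print(f"train split: {len(train_samples)} samples")

# Open one per-glacier archive without unzipping it first.
# The glacier ID below is a placeholder for the RGI60-07.XXXXX pattern above.
glacier_id = "RGI60-07.XXXXX"
store = zarr.storage.ZipStore(f"calvdb/zarr_zipped/{glacier_id}.zarr.zip", mode="r")
group = zarr.open_group(store=store, mode="r")
print(list(group.keys()))  # inspect which arrays are stored for this glacier
```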
## Normalisation
`splits/normalisation_stats.json` contains the per-channel mean and standard deviation computed from the training split only. Apply z-score normalisation before training.
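A short sketch of that normalisation step, assuming the JSON maps each channel name to its `mean` and `std`; the exact key layout is an assumption, so verify it against the file before use.

```python
import json
import numpy as np

# Assumed layout: {"<channel>": {"mean": ..., "std": ...}, ...}
with open("calvdb/splits/normalisation_stats.json") as f:
    stats = json.load(f)

def zscore(x: np.ndarray, channel: str) -> np.ndarray:
    """Standardise one channel with the train-split statistics (z-score)."""
    mean, std = stats[channel]["mean"], stats[channel]["std"]
    return (x - mean) / std
```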
## License
This dataset is released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.