
CarbonGlobe: A Global-Scale, Multi-Decade Dataset and Benchmark for Carbon Forecasting in Forest Ecosystems

CarbonGlobe is a global-scale, multi-decade, machine-learning-ready dataset and benchmark for forecasting carbon dynamics in forest ecosystems. The dataset provides harmonized environmental drivers and carbon-related ecosystem outputs simulated by the Ecosystem Demography model version 3 (ED v3), enabling the development, evaluation, and comparison of deep learning models for global forest carbon forecasting.

CarbonGlobe was accepted to the NeurIPS 2025 Datasets & Benchmarks Track.

Project page: https://github.com/zhwang0/carbon-globe

Dataset Description

Forest ecosystems play a central role in the global carbon cycle, but forecasting their long-term dynamics remains challenging because process-based ecosystem models are computationally expensive and difficult to scale across large spatial domains and many scenarios.

CarbonGlobe addresses this challenge by providing a reproducible benchmark for learning from process-based ecosystem simulations. It bridges Earth system science and machine learning by transforming global ED v3 simulations and associated environmental drivers into a standardized dataset for carbon forecasting.

The dataset is designed to support the development of machine learning emulators and forecasting models that can predict multivariate ecosystem trajectories under diverse climate, regional, and ecological conditions.

Dataset Summary

CarbonGlobe includes:

  • Global 0.5° spatial coverage, with approximately 70,000 grid cells
  • Multi-decade temporal coverage, spanning more than 40 years (1980 to 2020)
  • Monthly time-series data for long-term ecosystem forecasting
  • 100+ environmental input variables from meteorological, atmospheric, soil, and vegetation-related sources
  • ED v3-simulated ecosystem output variables, including carbon stocks, vegetation structure, and ecosystem fluxes
  • ML-ready samples for training and evaluating time-series forecasting models
  • Scenario-based evaluation protocols for assessing model robustness under climate, forest-age, and regional domain shifts
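The figures above give a rough sense of scale; the snippet below is only back-of-envelope arithmetic under the stated coverage (inclusive 1980–2020, monthly steps), and the exact counts should be taken from the released files:

```python
# Back-of-envelope scale of CarbonGlobe based on the summary above.
# These are illustrative assumptions, not authoritative counts.

GRID_CELLS = 70_000          # approximate number of 0.5° grid cells
START_YEAR, END_YEAR = 1980, 2020

# Monthly resolution over the stated span (inclusive of both endpoints).
months = (END_YEAR - START_YEAR + 1) * 12
print(f"{months} monthly steps per grid cell")         # 492
print(f"~{GRID_CELLS * months:,} cell-month records")  # ~34,440,000
```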

Supported Tasks

CarbonGlobe supports the following machine learning tasks:

  • Multivariate time-series forecasting
  • Forest carbon forecasting
  • Ecosystem model emulation
  • Long-horizon sequence prediction
  • Spatiotemporal environmental modeling
  • Domain generalization across climate zones, regions, and forest conditions
  • Benchmarking deep learning models for Earth system applications

Dataset Structure

Each sample contains environmental input drivers, ED-simulated target variables, temporal information, and geographic/ecological metadata.

Input variables include environmental drivers such as:

Meteorological variables, atmospheric CO2, soil properties, vegetation-related variables, and other environmental covariates.

Target variables include ED v3-simulated ecosystem states and fluxes, such as:

Vegetation height, soil carbon (SC), above-ground biomass (AGB), leaf area index (LAI), gross primary productivity (GPP), net primary productivity (NPP), and heterotrophic respiration (RH).
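Concretely, a single sample can be thought of as pairing monthly driver series with the corresponding ED v3 target series plus metadata. The sketch below uses hypothetical field names and made-up values; the real schema is defined by the released files and benchmark code:

```python
# Hypothetical shape of one CarbonGlobe sample. Field names, units, and
# values are illustrative only; consult the released files for the
# actual schema.

sample = {
    # Environmental input drivers (monthly series, one list per variable)
    "inputs": {
        "air_temperature": [280.1, 281.4, 283.0],   # e.g. Kelvin
        "precipitation":   [0.8, 1.2, 0.5],         # e.g. mm/day
        "co2":             [410.0, 410.2, 410.5],   # ppm
    },
    # ED v3-simulated targets named in the card above
    "targets": {
        "AGB": [12.3, 12.4, 12.6],   # above-ground biomass
        "GPP": [2.1, 2.4, 2.7],      # gross primary productivity
        "LAI": [3.0, 3.1, 3.2],      # leaf area index
    },
    # Geographic/ecological metadata
    "metadata": {
        "lat": 45.25, "lon": -72.75,
        "climate_zone": "Dfb",       # Köppen–Geiger code
        "start_month": "1980-01",
    },
}

# Inputs and targets are aligned on the same monthly time axis.
assert len(sample["inputs"]["co2"]) == len(sample["targets"]["GPP"])
```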

Data Sources

CarbonGlobe is constructed from harmonized environmental input variables and outputs simulated by the Ecosystem Demography model version 3 (ED v3).

The dataset is intended to provide a standardized machine learning benchmark for carbon forecasting in forest ecosystems. It enables users to train data-driven forecasting models using the same input-output structure as process-based ecosystem simulations, while supporting controlled evaluation across ecological and geographic domains.

Dataset Splits

CarbonGlobe supports multiple evaluation settings for both standard forecasting and domain generalization.

Recommended split types include:

  • Random split: standard train/validation/test evaluation across global samples
  • Regional split: evaluation across geographic regions
  • Climate-zone split: evaluation across Köppen–Geiger climate domains
  • Forest-age or ecosystem-condition split: evaluation across different ecosystem development stages
  • Temporal forecasting split: training on historical sequences and evaluating long-horizon future prediction

Please refer to the accompanying benchmark code for the exact split definitions and evaluation protocol.
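As one illustration, a random split over grid cells can be built as below. The cell count, ratios, and seed are arbitrary demonstration choices; the benchmark code defines the official splits:

```python
import numpy as np

# Illustrative random train/val/test split over grid-cell IDs.
# Ratios and seed are arbitrary; the benchmark code is authoritative.

rng = np.random.default_rng(seed=0)
cell_ids = np.arange(70_000)          # approximate global cell count
rng.shuffle(cell_ids)

n = len(cell_ids)
train = cell_ids[: int(0.8 * n)]
val   = cell_ids[int(0.8 * n): int(0.9 * n)]
test  = cell_ids[int(0.9 * n):]

print(len(train), len(val), len(test))   # 56000 7000 7000
```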

Metadata

Each sample may include metadata such as:

  • Geographic coordinates
  • Grid identifier
  • Köppen–Geiger climate zone
  • Dominant forest type
  • Monthly time index
  • Train/validation/test split indicator

These metadata enable controlled evaluation of model generalization across environmental domains, including:

  • Tropical to temperate transfer
  • Humid to arid climate transfer
  • Cross-region forecasting
  • Cross-forest-type forecasting
  • Generalization across forest structural or developmental conditions
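A transfer setting such as tropical-to-temperate can be set up by partitioning samples on their Köppen–Geiger code. The `climate_zone` field name and the records below are hypothetical; the zone-letter convention ('A' = tropical, 'C'/'D' = temperate/continental) is standard Köppen–Geiger:

```python
# Sketch of a tropical-to-temperate transfer split using Köppen–Geiger
# codes. Records and the `climate_zone` field name are hypothetical.

records = [
    {"cell": 0, "climate_zone": "Af"},   # tropical rainforest
    {"cell": 1, "climate_zone": "Aw"},   # tropical savanna
    {"cell": 2, "climate_zone": "Cfb"},  # temperate oceanic
    {"cell": 3, "climate_zone": "Dfc"},  # subarctic
]

# Train on tropical zones, evaluate on temperate/continental zones.
train = [r for r in records if r["climate_zone"].startswith("A")]
test  = [r for r in records if r["climate_zone"][0] in ("C", "D")]

print([r["cell"] for r in train])  # [0, 1]
print([r["cell"] for r in test])   # [2, 3]
```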

Benchmark Models

CarbonGlobe includes benchmark results from representative forecasting models across multiple modeling paradigms.

Evaluated models include:

  • LSTM
  • LSTNet
  • DeepED, a physics-guided deep learning emulator for ecosystem dynamics
  • Transformer
  • Informer
  • Crossformer
  • TimeXer
  • DLinear

These models are evaluated for multivariate ecosystem trajectory prediction under both standard and domain-shift settings.

Evaluation

CarbonGlobe is designed for multivariate, long-horizon ecosystem forecasting. Recommended evaluation metrics include:

  • Root mean squared error (RMSE)
  • Mean absolute error (MAE)
  • Delta error
  • Cumulative error

In addition to pointwise prediction accuracy, users are encouraged to report trajectory-level metrics that evaluate whether models preserve long-term ecosystem dynamics, temporal changes, and accumulated carbon-cycle behavior.
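RMSE and MAE are standard; "delta error" and "cumulative error" are less so, and the definitions sketched below (error on month-to-month changes, and error on the accumulated series) are one plausible reading, not necessarily the benchmark's exact formulas:

```python
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(y_true - y_pred)))

def delta_error(y_true, y_pred):
    # Assumed definition: MAE of month-to-month changes, which
    # penalizes wrong dynamics even when levels are close.
    return mae(np.diff(y_true), np.diff(y_pred))

def cumulative_error(y_true, y_pred):
    # Assumed definition: absolute error of the accumulated series,
    # e.g. the total carbon uptake implied by a flux trajectory.
    return float(np.abs(np.sum(y_true) - np.sum(y_pred)))

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.0, 2.5, 2.5, 4.5])
print(mae(y_true, y_pred))               # 0.375
print(cumulative_error(y_true, y_pred))  # 0.5
print(rmse(y_true, y_pred))
print(delta_error(y_true, y_pred))
```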

Intended Uses

CarbonGlobe is intended for research and benchmarking in:

  • Forest carbon forecasting
  • Ecosystem model emulation
  • Climate impact assessment
  • Long-term ecological forecasting
  • Earth system machine learning
  • Development and evaluation of deep learning models for environmental time series
  • Benchmarking generalization under climate, regional, and ecological domain shifts
  • Scalable approximation of process-based ecosystem model outputs

Ethical and Environmental Considerations

CarbonGlobe does not contain personal or sensitive human information. The dataset is based on environmental drivers and process-based ecosystem model simulations.

Potential positive impacts include improved accessibility of global carbon forecasting benchmarks, reduced computational barriers for ecosystem model emulation, and stronger collaboration between machine learning and Earth system science communities.

Potential risks include over-interpreting model predictions as direct real-world forecasts, using outputs without accounting for uncertainty, or applying benchmark-trained models to policy-sensitive decisions without additional validation.

How to Use

Example loading workflow:

from datasets import load_dataset

# Download (or load from cache) the dataset from the Hugging Face Hub
dataset = load_dataset("zhwang1/CarbonGlobe")

# Inspect the available splits and the first training sample
print(dataset)
print(dataset["train"][0])
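Once a monthly multivariate series is in hand, it can be sliced into (history, horizon) pairs for long-horizon forecasting. A minimal sketch on synthetic data; the window lengths are arbitrary choices, not the benchmark's protocol:

```python
import numpy as np

def make_windows(series, history, horizon):
    """Slice a (T, F) multivariate series into (history, horizon) pairs."""
    X, y = [], []
    for t in range(len(series) - history - horizon + 1):
        X.append(series[t : t + history])
        y.append(series[t + history : t + history + horizon])
    return np.stack(X), np.stack(y)

# Synthetic stand-in: 492 monthly steps (1980–2020), 3 variables.
series = np.random.default_rng(0).normal(size=(492, 3))
X, y = make_windows(series, history=120, horizon=12)
print(X.shape, y.shape)   # (361, 120, 3) (361, 12, 3)
```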

The GitHub repository is available here: https://github.com/zhwang0/carbon-globe.

Citation

If you use CarbonGlobe in your research, please cite:

@inproceedings{wang2025carbonglobe,
  title     = {CarbonGlobe: A Global-Scale, Multi-Decade Dataset and Benchmark for Carbon Forecasting in Forest Ecosystems},
  author    = {Wang, Zhihao and Ma, Lei and Hurtt, George and Jia, Xiaowei and Li, Yanhua and Li, Ruohan and Li, Zhili and Xu, Shuo and Xie, Yiqun},
  booktitle = {Proceedings of the 39th Conference on Neural Information Processing Systems (NeurIPS 2025), Datasets and Benchmarks Track},
  year      = {2025}
}

Dataset Contact

For questions, issues, or collaboration inquiries, please open an issue in the associated GitHub repository or contact the dataset authors.

Acknowledgements

CarbonGlobe was developed to support reproducible machine learning research for forest carbon forecasting and ecosystem model emulation. We thank the collaborators, data providers, and research communities that supported dataset development, simulation, benchmarking, and validation.
