
GEIANT Geospatial Agent Benchmark

The first benchmark dataset for geospatial AI agent orchestration.

Built on the GNS Protocol — the decentralized identity system that proves humanity through Proof-of-Trajectory.

Overview

Every AI orchestrator (LangChain, CrewAI, AutoGPT) routes tasks based on capability and availability. None of them understand where the task originates, what regulatory framework governs that location, or whether the geometry the agent produced is actually valid.

GEIANT fixes this. This benchmark tests three capabilities no other orchestrator has:

| Capability | What it tests |
|---|---|
| Jurisdictional Routing | H3 cell → country → regulatory framework → agent selection |
| Geometry Mutation Integrity | Multi-step geometry workflows with injected corruption |
| Delegation Chain Validation | Human→agent authorization cert validity |

Dataset Statistics

Total records: 40

By Family

| Family | Count |
|---|---|
| jurisdictional_routing | 14 |
| geometry_mutation | 11 |
| delegation_chain | 15 |

By Difficulty

| Difficulty | Count |
|---|---|
| easy | 16 |
| medium | 13 |
| hard | 4 |
| adversarial | 7 |

By Expected Outcome

| Outcome | Count |
|---|---|
| route_success | 15 |
| reject_delegation | 10 |
| reject_geometry | 7 |
| reject_no_ant | 4 |
| reject_tier | 2 |
| reject_no_jurisdiction | 1 |
| flag_boundary_crossing | 1 |
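
Evaluation against these outcomes reduces to comparing an orchestrator's prediction with each record's expected_outcome. The scorer below is a hypothetical sketch (the `score` function and sample records are not part of the dataset):

```python
from collections import Counter

# Hypothetical scoring loop: compare a predictor's output against each
# record's expected_outcome and report per-family accuracy.
def score(records: list[dict], predict) -> dict[str, float]:
    hits, totals = Counter(), Counter()
    for rec in records:
        totals[rec["family"]] += 1
        if predict(rec["input"]) == rec["expected_outcome"]:
            hits[rec["family"]] += 1
    return {fam: hits[fam] / totals[fam] for fam in totals}

# Two toy records for illustration only.
records = [
    {"family": "jurisdictional_routing", "input": {}, "expected_outcome": "route_success"},
    {"family": "geometry_mutation", "input": {}, "expected_outcome": "reject_geometry"},
]
print(score(records, lambda _inp: "route_success"))
# {'jurisdictional_routing': 1.0, 'geometry_mutation': 0.0}
```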

Schema

Each record is a DatasetRecord with the following fields:

{
  id: string;                    // UUID
  family: DatasetFamily;         // which benchmark
  description: string;           // human-readable scenario description
  input: object;                 // the task/cert/geometry submitted
  expected_outcome: string;      // what GEIANT should do
  ground_truth: {
    expected_ant_handle?: string;
    expected_country?: string;
    expected_frameworks?: string[];
    geometry_valid?: boolean;
    delegation_valid?: boolean;
    explanation: string;         // WHY this is the correct answer
  };
  difficulty: string;            // easy | medium | hard | adversarial
  tags: string[];
}

Regulatory Frameworks Covered

| Framework | Jurisdiction | Max Autonomy Tier |
|---|---|---|
| GDPR | EU | trusted |
| EU AI Act | EU | trusted |
| eIDAS2 | EU | certified |
| FINMA | Switzerland | certified |
| Swiss DPA | Switzerland | certified |
| UK GDPR | United Kingdom | trusted |
| US EO 14110 | United States | sovereign |
| CCPA | California, USA | sovereign |
| LGPD | Brazil | trusted |
| PDPA-SG | Singapore | trusted |
| Italian Civil Code | Italy | trusted |
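
A minimal tier gate over this table might look like the following. Both the tier ordering (certified < trusted < sovereign) and the reading of the column as a cap on agent autonomy are assumptions for illustration, not confirmed GEIANT semantics:

```python
# Hypothetical autonomy-tier check: an agent may act under a framework
# only if its tier does not exceed the framework's max autonomy tier.
# ASSUMPTION: tiers are ordered certified < trusted < sovereign.
TIER_RANK = {"certified": 0, "trusted": 1, "sovereign": 2}

MAX_TIER = {  # framework -> max autonomy tier (subset of the table above)
    "GDPR": "trusted",
    "eIDAS2": "certified",
    "FINMA": "certified",
    "US EO 14110": "sovereign",
}

def tier_allowed(agent_tier: str, framework: str) -> bool:
    return TIER_RANK[agent_tier] <= TIER_RANK[MAX_TIER[framework]]

print(tier_allowed("trusted", "GDPR"))      # True
print(tier_allowed("sovereign", "eIDAS2"))  # False (cf. reject_tier)
```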

Usage

from datasets import load_dataset

ds = load_dataset("GNS-Foundation/geiant-geospatial-agent-benchmark")

# Filter by family
routing = ds.filter(lambda x: x["family"] == "jurisdictional_routing")

# Filter by difficulty
adversarial = ds.filter(lambda x: x["difficulty"] == "adversarial")

# Get all rejection scenarios
rejections = ds.filter(lambda x: x["expected_outcome"].startswith("reject_"))

Geospatial Moat

This dataset uses H3 hexagonal hierarchical spatial indexing (Uber H3) at resolutions 5–9. Each agent is assigned a territory as a set of H3 cells. Routing validates that the task's origin cell is contained within the agent's territory, not merely within a lat/lng bounding box.

The H3 cells in this dataset are generated from real coordinates:

import h3
rome_cell = h3.latlng_to_cell(41.902, 12.496, 7)
# → '871e805003fffff'
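
Territory containment then reduces to set membership over H3 cells. The territory set below is hypothetical (only the Rome cell comes from the example above), and real GEIANT territories may also exploit H3's parent/child hierarchy across resolutions 5–9:

```python
# Hypothetical containment check: a task routes to an agent only if its
# origin H3 cell is inside the agent's territory (a set of H3 cells).
AGENT_TERRITORY = {
    "871e805003fffff",  # Rome, resolution 7 (from the example above)
    # ... further cells would describe the rest of the territory
}

def cell_in_territory(cell: str, territory: set[str]) -> bool:
    return cell in territory

print(cell_in_territory("871e805003fffff", AGENT_TERRITORY))  # True
```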

Citation

@dataset{geiant_benchmark_2026,
  author    = {Ayerbe, Camilo},
  title     = {GEIANT Geospatial Agent Benchmark},
  year      = {2026},
  version   = {0.1.0},
  publisher = {GNS Foundation / ULISSY s.r.l.},
  url       = {https://huggingface.co/datasets/GNS-Foundation/geiant-geospatial-agent-benchmark}
}

License

Apache 2.0 — free for research and commercial use.


Built with GEIANT — Geo-Identity Agent Navigation & Tasking. Part of the GNS Protocol ecosystem.
