---
language:
- en
license: apache-2.0
tags:
- geospatial
- agent-benchmark
- jurisdictional-routing
- geometry-validation
- delegation-chain
- gdpr
- eu-ai-act
- h3
- gns-protocol
pretty_name: GEIANT Geospatial Agent Benchmark
size_categories:
- n<1K
task_categories:
- text-classification
- question-answering
---

# GEIANT Geospatial Agent Benchmark

The first benchmark dataset for geospatial AI agent orchestration.

Built on the GNS Protocol — the decentralized identity system that proves humanity through Proof-of-Trajectory.
## Overview
Every AI orchestrator (LangChain, CrewAI, AutoGPT) routes tasks based on capability and availability. None of them understand where the task originates, what regulatory framework governs that location, or whether the geometry the agent produced is actually valid.
GEIANT fixes this. This benchmark tests three capabilities no other orchestrator has:
| Capability | What it tests |
|---|---|
| Jurisdictional Routing | H3 cell → country → regulatory framework → agent selection |
| Geometry Mutation Integrity | Multi-step geometry workflows with injected corruption |
| Delegation Chain Validation | Human→agent authorization cert validity |
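
The first capability can be sketched as a pipeline: resolve the task's H3 origin cell to a country, look up the frameworks in force there, then keep only agents certified for all of them. This is a minimal illustration; the cell/country/agent mappings and agent names below are hypothetical, not taken from the dataset.

```python
# Hypothetical sketch of the jurisdictional routing pipeline:
# H3 cell -> country -> regulatory frameworks -> eligible agents.
# All mappings and agent names are illustrative placeholders.

CELL_TO_COUNTRY = {"871e805003fffff": "IT"}  # resolution-7 cell over Rome
COUNTRY_TO_FRAMEWORKS = {"IT": ["GDPR", "EU AI Act", "Italian Civil Code"]}
AGENT_FRAMEWORKS = {
    "agent-eu-1": {"GDPR", "EU AI Act", "Italian Civil Code"},
    "agent-us-1": {"US EO 14110", "CCPA"},
}

def route(cell: str) -> list[str]:
    """Return agents certified for every framework governing the cell's country."""
    country = CELL_TO_COUNTRY.get(cell)
    if country is None:
        return []  # would map to a reject_no_jurisdiction outcome
    required = set(COUNTRY_TO_FRAMEWORKS[country])
    return [a for a, fw in AGENT_FRAMEWORKS.items() if required <= fw]

print(route("871e805003fffff"))  # ['agent-eu-1']
```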
## Dataset Statistics

**Total records:** 40

### By Family
| Family | Count |
|---|---|
| jurisdictional_routing | 14 |
| geometry_mutation | 11 |
| delegation_chain | 15 |
### By Difficulty
| Difficulty | Count |
|---|---|
| easy | 16 |
| medium | 13 |
| hard | 4 |
| adversarial | 7 |
### By Expected Outcome
| Outcome | Count |
|---|---|
| route_success | 15 |
| reject_delegation | 10 |
| reject_geometry | 7 |
| reject_no_ant | 4 |
| reject_tier | 2 |
| reject_no_jurisdiction | 1 |
| flag_boundary_crossing | 1 |
## Schema

Each record is a `DatasetRecord` with the following fields:
```ts
{
  id: string;               // UUID
  family: DatasetFamily;    // which benchmark
  description: string;      // human-readable scenario description
  input: object;            // the task/cert/geometry submitted
  expected_outcome: string; // what GEIANT should do
  ground_truth: {
    expected_ant_handle?: string;
    expected_country?: string;
    expected_frameworks?: string[];
    geometry_valid?: boolean;
    delegation_valid?: boolean;
    explanation: string;    // WHY this is the correct answer
  };
  difficulty: string;       // easy | medium | hard | adversarial
  tags: string[];
}
```
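
A loaded record can be checked against this schema before evaluation. The sketch below is a minimal, hand-rolled validator assuming the field names above; it is illustrative, not part of the dataset tooling.

```python
# Minimal record validator for the schema above. Field names come from the
# schema; the checks themselves are an illustrative sketch.

REQUIRED = {"id", "family", "description", "input",
            "expected_outcome", "ground_truth", "difficulty", "tags"}
DIFFICULTIES = {"easy", "medium", "hard", "adversarial"}

def validate(record: dict) -> list[str]:
    """Return a list of schema violations (empty list means valid)."""
    errors = [f"missing field: {f}" for f in REQUIRED - record.keys()]
    if record.get("difficulty") not in DIFFICULTIES:
        errors.append(f"unknown difficulty: {record.get('difficulty')}")
    if "explanation" not in record.get("ground_truth", {}):
        errors.append("ground_truth.explanation is required")
    return errors
```

Running `validate` over every row is a cheap sanity check before scoring an agent against `expected_outcome`.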
## Regulatory Frameworks Covered
| Framework | Jurisdiction | Max Autonomy Tier |
|---|---|---|
| GDPR | EU | trusted |
| EU AI Act | EU | trusted |
| eIDAS2 | EU | certified |
| FINMA | Switzerland | certified |
| Swiss DPA | Switzerland | certified |
| UK GDPR | United Kingdom | trusted |
| US EO 14110 | United States | sovereign |
| CCPA | California, USA | sovereign |
| LGPD | Brazil | trusted |
| PDPA-SG | Singapore | trusted |
| Italian Civil Code | Italy | trusted |
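
One way to read the "Max Autonomy Tier" column: an agent may not operate above the lowest cap imposed by any framework in force. The sketch below assumes a trusted < certified < sovereign ordering, which is an assumption made for illustration and not defined by the dataset; the framework caps are taken from the table.

```python
# Sketch of tier gating: the agent's tier must not exceed the strictest cap
# among applicable frameworks. The tier ordering is an assumed convention.

TIER_RANK = {"trusted": 0, "certified": 1, "sovereign": 2}
FRAMEWORK_CAP = {"GDPR": "trusted", "eIDAS2": "certified", "US EO 14110": "sovereign"}

def tier_allowed(agent_tier: str, frameworks: list[str]) -> bool:
    """True if the agent's tier is within every applicable framework's cap."""
    cap = min(TIER_RANK[FRAMEWORK_CAP[f]] for f in frameworks)
    return TIER_RANK[agent_tier] <= cap

print(tier_allowed("certified", ["GDPR", "eIDAS2"]))  # False -> reject_tier
print(tier_allowed("trusted", ["GDPR", "eIDAS2"]))    # True
```

A failed check corresponds to the `reject_tier` outcome in the statistics above.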
## Usage
```python
from datasets import load_dataset

ds = load_dataset("GNS-Foundation/geiant-geospatial-agent-benchmark")

# Filter by family
routing = ds.filter(lambda x: x["family"] == "jurisdictional_routing")

# Filter by difficulty
adversarial = ds.filter(lambda x: x["difficulty"] == "adversarial")

# Get all rejection scenarios
rejections = ds.filter(lambda x: x["expected_outcome"].startswith("reject_"))
```
## Geospatial Moat
This dataset uses H3 hexagonal hierarchical spatial indexing (Uber H3) at resolution 5–9. Each agent is assigned a territory as a set of H3 cells. Routing validates that the task origin cell is contained within the agent's territory — not just lat/lng bounding boxes.
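
Territory containment then reduces to set membership over H3 cells rather than point-in-polygon tests on bounding boxes. The sketch below is illustrative; the territory assignments and agent name are placeholders, and a full implementation would also normalize resolutions (e.g. with `h3.cell_to_parent`) before the membership check.

```python
# Sketch of territory containment: an agent's territory is a set of H3 cells,
# and routing checks membership of the task's origin cell. Cell values and
# the agent name are illustrative placeholders.

AGENT_TERRITORY = {
    "agent-rome": {"871e805003fffff", "871e805006fffff"},
}

def in_territory(agent: str, origin_cell: str) -> bool:
    """True if the origin cell lies inside the agent's assigned territory."""
    return origin_cell in AGENT_TERRITORY.get(agent, set())

print(in_territory("agent-rome", "871e805003fffff"))  # True
```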
The H3 cells in this dataset are generated from real coordinates:

```python
import h3

# Resolution-7 H3 cell covering central Rome (h3-py v4 API)
rome_cell = h3.latlng_to_cell(41.902, 12.496, 7)
# → '871e805003fffff'
```
## Citation
```bibtex
@dataset{geiant_benchmark_2026,
  author    = {Ayerbe, Camilo},
  title     = {GEIANT Geospatial Agent Benchmark},
  year      = {2026},
  version   = {0.1.0},
  publisher = {GNS Foundation / ULISSY s.r.l.},
  url       = {https://huggingface.co/datasets/GNS-Foundation/geiant-geospatial-agent-benchmark}
}
```
## License
Apache 2.0 — free for research and commercial use.
*Built with GEIANT — Geo-Identity Agent Navigation & Tasking. Part of the GNS Protocol ecosystem.*