---
language:
- en
license: apache-2.0
tags:
- geospatial
- agent-benchmark
- jurisdictional-routing
- geometry-validation
- delegation-chain
- gdpr
- eu-ai-act
- h3
- gns-protocol
pretty_name: GEIANT Geospatial Agent Benchmark
size_categories:
- n<1K
task_categories:
- text-classification
- question-answering
---
# GEIANT Geospatial Agent Benchmark
**The first benchmark dataset for geospatial AI agent orchestration.**
Built on the [GNS Protocol](https://gcrumbs.com) — the decentralized identity system that proves humanity through Proof-of-Trajectory.
## Overview
Every AI orchestrator (LangChain, CrewAI, AutoGPT) routes tasks based on capability and availability. None of them understand *where* the task originates, *what regulatory framework* governs that location, or *whether the geometry the agent produced is actually valid*.
GEIANT fixes this. This benchmark tests three capabilities no other orchestrator has:
| Capability | What it tests |
|---|---|
| **Jurisdictional Routing** | H3 cell → country → regulatory framework → agent selection |
| **Geometry Mutation Integrity** | Multi-step geometry workflows with injected corruption |
| **Delegation Chain Validation** | Human→agent authorization cert validity |
## Dataset Statistics
**Total records:** 40
### By Family
| Family | Count |
|---|---|
| `jurisdictional_routing` | 14 |
| `geometry_mutation` | 11 |
| `delegation_chain` | 15 |
### By Difficulty
| Difficulty | Count |
|---|---|
| `easy` | 16 |
| `medium` | 13 |
| `hard` | 4 |
| `adversarial` | 7 |
### By Expected Outcome
| Outcome | Count |
|---|---|
| `route_success` | 15 |
| `reject_delegation` | 10 |
| `reject_geometry` | 7 |
| `reject_no_ant` | 4 |
| `reject_tier` | 2 |
| `reject_no_jurisdiction` | 1 |
| `flag_boundary_crossing` | 1 |
## Schema
Each record is a `DatasetRecord` with the following fields:
```typescript
{
  id: string;               // UUID
  family: DatasetFamily;    // which benchmark family
  description: string;      // human-readable scenario description
  input: object;            // the task/cert/geometry submitted
  expected_outcome: string; // what GEIANT should do
  ground_truth: {
    expected_ant_handle?: string;
    expected_country?: string;
    expected_frameworks?: string[];
    geometry_valid?: boolean;
    delegation_valid?: boolean;
    explanation: string;    // WHY this is the correct answer
  };
  difficulty: string;       // easy | medium | hard | adversarial
  tags: string[];
}
```
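A minimal structural check against this schema might look like the following sketch. The function name is ours; it only verifies the required top-level fields and the one mandatory `ground_truth` field (`explanation`), per the schema above.

```python
# Minimal structural validation of a DatasetRecord (illustrative helper).
REQUIRED_FIELDS = {"id", "family", "description", "input",
                  "expected_outcome", "ground_truth", "difficulty", "tags"}

def is_valid_record(record: dict) -> bool:
    """Check that all required fields and the mandatory explanation exist."""
    if not REQUIRED_FIELDS <= record.keys():
        return False
    gt = record["ground_truth"]
    # All ground_truth fields are optional except "explanation".
    return isinstance(gt, dict) and "explanation" in gt
```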
## Regulatory Frameworks Covered
| Framework | Jurisdiction | Max Autonomy Tier |
|---|---|---|
| GDPR | EU | trusted |
| EU AI Act | EU | trusted |
| eIDAS2 | EU | certified |
| FINMA | Switzerland | certified |
| Swiss DPA | Switzerland | certified |
| UK GDPR | United Kingdom | trusted |
| US EO 14110 | United States | sovereign |
| CCPA | California, USA | sovereign |
| LGPD | Brazil | trusted |
| PDPA-SG | Singapore | trusted |
| Italian Civil Code | Italy | trusted |
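The "Max Autonomy Tier" column implies a tier ordering. A tier-gating check could be sketched as below; the numeric ranking (`trusted < certified < sovereign`) is our assumption for illustration, and the framework subset is copied from the table above.

```python
# Hypothetical tier ordering -- an assumption for this sketch, not the
# dataset's canonical definition.
TIER_RANK = {"trusted": 1, "certified": 2, "sovereign": 3}

# Max autonomy tier per framework (subset of the table above).
FRAMEWORK_MAX_TIER = {"GDPR": "trusted", "eIDAS2": "certified", "CCPA": "sovereign"}

def tier_allowed(agent_tier: str, frameworks: list) -> bool:
    """An agent may act only if its tier does not exceed the strictest
    (lowest) max-autonomy tier among the applicable frameworks."""
    cap = min(TIER_RANK[FRAMEWORK_MAX_TIER[f]] for f in frameworks)
    return TIER_RANK[agent_tier] <= cap
```

Under this reading, a `sovereign`-tier agent asked to act in a GDPR jurisdiction would trigger the `reject_tier` outcome listed in the statistics above.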
## Usage
```python
from datasets import load_dataset
ds = load_dataset("GNS-Foundation/geiant-geospatial-agent-benchmark")
# Filter by family
routing = ds.filter(lambda x: x["family"] == "jurisdictional_routing")
# Filter by difficulty
adversarial = ds.filter(lambda x: x["difficulty"] == "adversarial")
# Get all rejection scenarios
rejections = ds.filter(lambda x: x["expected_outcome"].startswith("reject_"))
```
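A scoring harness for the benchmark can be written against plain record dicts, so it works on the loaded dataset or on local fixtures without a download. The `predict` callable below is a placeholder for your orchestrator; `evaluate` and its signature are ours, not part of the dataset.

```python
from collections import defaultdict

def evaluate(records, predict):
    """Score a predictor against expected_outcome, grouped by difficulty.

    records: iterable of benchmark records (plain dicts).
    predict: your orchestrator, mapping a record -> outcome string.
    Returns {difficulty: accuracy}.
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for rec in records:
        totals[rec["difficulty"]] += 1
        if predict(rec) == rec["expected_outcome"]:
            hits[rec["difficulty"]] += 1
    return {d: hits[d] / totals[d] for d in totals}
```

Reporting per-difficulty accuracy matters here because the `adversarial` slice (7 records) is designed to punish orchestrators that pattern-match on surface features.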
## Geospatial Moat
This dataset uses **H3 hexagonal hierarchical spatial indexing** (Uber H3) at resolutions 5–9. Each agent is assigned a territory as a set of H3 cells. Routing validates that the task origin cell is contained within the agent's territory — not just lat/lng bounding boxes.
The H3 cells in this dataset are generated from real coordinates:
```python
import h3
rome_cell = h3.latlng_to_cell(41.902, 12.496, 7)
# → '871e805003fffff'
```
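With territories stored as sets of cell IDs, the simplest containment check is set membership at matching resolution; this dependency-free sketch omits the cross-resolution case, which would need parent/child traversal via `h3.cell_to_parent`.

```python
# Territory containment as plain set membership over H3 cell IDs.
# Assumes the task origin and the territory use the same resolution;
# a full check would also walk coarser resolutions with h3.cell_to_parent.
def cell_in_territory(origin_cell: str, territory: set) -> bool:
    return origin_cell in territory

# Illustrative territory built around the Rome cell from the example above.
rome_territory = {"871e805003fffff"}
```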
## Citation
```bibtex
@dataset{geiant_benchmark_2026,
author = {Ayerbe, Camilo},
title = {GEIANT Geospatial Agent Benchmark},
year = {2026},
version = {0.1.0},
publisher = {GNS Foundation / ULISSY s.r.l.},
url = {https://huggingface.co/datasets/GNS-Foundation/geiant-geospatial-agent-benchmark}
}
```
## License
Apache 2.0 — free for research and commercial use.
---
*Built with [GEIANT](https://github.com/GNS-Foundation/geiant) — Geo-Identity Agent Navigation & Tasking.*
*Part of the [GNS Protocol](https://gcrumbs.com) ecosystem.*