---
task_categories:
- visual-question-answering
language:
- en
size_categories:
- n<1K
configs:
- config_name: default
  data_files:
  - split: data
    path: data/data-*
dataset_info:
  features:
  - name: question_ref
    dtype: string
  - name: images
    list: string
  - name: question_text
    dtype: string
  - name: expected_answer
    dtype: string
  - name: map_count
    dtype: string
  - name: spatial_relationship
    dtype: string
  - name: answer_type
    dtype: string
  - name: domain
    dtype: string
  - name: map_elements
    list: string
  - name: context_images
    list: string
  splits:
  - name: data
    num_bytes: 576010
    num_examples: 500
  download_size: 120231
  dataset_size: 576010
pretty_name: FRIEDA
---

[![arXiv](https://img.shields.io/badge/arXiv-2512.08016-111111?style=for-the-badge&logo=arxiv&logoColor=white)](https://arxiv.org/abs/2512.08016)
[![Website](https://img.shields.io/badge/Website-Webpage-111111?style=for-the-badge&logo=googlechrome&logoColor=white)](https://knowledge-computing.github.io/FRIEDA/)
[![Code](https://img.shields.io/badge/Code-GitHub-111111?style=for-the-badge&logo=github&logoColor=white)](https://github.com/knowledge-computing/FRIEDA)

**FRIEDA** is a multimodal benchmark for **open-ended cartographic reasoning** over real-world map images.  
Each example pairs reference maps (and optional contextual maps) with a natural-language question and a reference answer. The benchmark targets common GIS relation types (i.e., **topological**, **metric**, **directional**) and includes questions that require multi-step reasoning and cross-map grounding.

### Dataset Summary

- **Modality:** image + text  
- **# Examples:** 500  
- **Input:** map image(s) + question text  
- **Output:** expected answer (textual)
- **Metadata:** map_count, domain, relationship type, map elements
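The metadata fields above make it easy to slice the benchmark by question type. As a minimal sketch, the toy records below mirror the dataset schema (the field values are illustrative, not taken from the actual benchmark) and tally questions per GIS relation type:

```python
from collections import Counter

# Toy records mirroring the FRIEDA schema; values are illustrative only.
examples = [
    {"question_ref": "q1", "spatial_relationship": "topological", "map_count": "1"},
    {"question_ref": "q2", "spatial_relationship": "metric", "map_count": "2"},
    {"question_ref": "q3", "spatial_relationship": "topological", "map_count": "1"},
]

# Count how many questions target each relation type.
by_relation = Counter(ex["spatial_relationship"] for ex in examples)
print(by_relation)
```

The same pattern applies directly to the loaded `datasets` object (e.g., `Counter(ds["spatial_relationship"])`).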

### Languages

The dataset questions and answers are in **English**.

---

## How to Use It

```python
from datasets import load_dataset

# Full dataset (split name = "data")
ds = load_dataset("knowledge-computing/FRIEDA", split="data")
print(ds[0].keys())
print(ds[0]["question_text"])    # Actual question being asked
print(ds[0]["images"])           # List of string paths to images (e.g., "images/...png")
print(ds[0]["context_images"])   # List of string paths to contextual images
```
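
Because `images` and `context_images` store relative path strings rather than embedded image data, they need to be resolved against a local copy of the dataset files before opening. A minimal sketch, assuming the repository has been downloaded to a local directory (the root directory name and file names below are illustrative assumptions, not part of the dataset):

```python
from pathlib import Path

# Assumed local checkout of the dataset repo; adjust to your setup.
dataset_root = Path("FRIEDA")

# Illustrative relative paths, standing in for ds[i]["images"].
rel_paths = ["images/map_001.png", "images/map_002.png"]

# Join each relative path onto the dataset root.
abs_paths = [dataset_root / p for p in rel_paths]
print([str(p) for p in abs_paths])
```

The resolved paths can then be opened with any image library (e.g., `PIL.Image.open`).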