Upload folder using huggingface_hub
- .argilla/dataset.json +16 -0
- .argilla/settings.json +102 -0
- .argilla/version.json +3 -0
- README.md +186 -39
.argilla/dataset.json
ADDED
@@ -0,0 +1,16 @@
+{
+    "id": "25a3e230-cc98-4072-8c34-38e2870f18bf",
+    "name": "geotagging_reranker",
+    "guidelines": "# OSM Entity Reranking Annotation Guidelines\n\n## 1 Task Description\nYou will be shown:\n* A **geographic mention** extracted from free text — e.g. “Alexanderplatz”, “Strait of Messina”.\n* A short **context snippet** (± 1–2 sentences) providing local clues.\n* A **candidate list** of OpenStreetMap (OSM) objects that our retrieval pipeline thinks might match, each with:\n - OSM ID, object type (node / way / relation)\n - Primary name and known alternate names\n - Key location tags (place, amenity, natural, boundary, etc.)\n - Lat/long, containing admin areas, and distance to any coordinates mentioned in the text (if available)\n - System-generated similarity score (descending order)\n\nYour job is to **verify and (re)rank** these candidates so that the true match is at rank 1 or, if missing, to supply the correct OSM ID.\n\n---\n\n## 2 What to Deliver\nFor every record you must:\n\n| Field | What to enter |\n|---------------------------|-------------------------------------------------------------------------------------------|\n| `top_candidate_score` | A quality score **1–5** for the best candidate (1 = wrong object, 5 = perfect match). |\n| `correct_osm_id_if_none` | If no candidate is correct, paste the OSM ID (node/way/relation) you found; else leave blank. |\n| `feedback` | Free-text comments, ambiguous cases, or anything helpful for model improvement. |\n\n---\n\n## 3 Detailed Instructions\n1. **Read the mention & context** \n - Note nearby place names, feature type (city, mountain, river, square, etc.), and any coordinate clues.\n\n2. **Open each candidate** (the tool links to the OSM web viewer): \n - Confirm the feature’s geometry, tags, and admin location. \n - Check alternate names (`name:*`, `alt_name`, `official_name`) and language variants.\n\n3. **Decide correctness & rerank** \n - If one candidate is an exact semantic match, place it first. \n - If several are plausible, order them by:\n 1. Name agreement (including abbreviations & translations) \n 2. Correct feature type (e.g., “Lake” ≠ “Town”) \n 3. Spatial closeness to any coordinates or larger place mentioned in context \n 4. Popularity / prominence when all else is equal",
+    "allow_extra_metadata": true,
+    "status": "ready",
+    "distribution": {
+        "strategy": "overlap",
+        "min_submitted": 1
+    },
+    "metadata": null,
+    "workspace_id": "0617b7ed-4e77-492f-bc8f-711684fe73ef",
+    "last_activity_at": "2025-07-25T12:45:42.406729",
+    "inserted_at": "2025-06-24T11:53:45.428326",
+    "updated_at": "2025-06-24T11:53:46.446311"
+}
.argilla/settings.json
ADDED
@@ -0,0 +1,102 @@
+{
+    "guidelines": "# OSM Entity Reranking Annotation Guidelines\n\n## 1 Task Description\nYou will be shown:\n* A **geographic mention** extracted from free text — e.g. “Alexanderplatz”, “Strait of Messina”.\n* A short **context snippet** (± 1–2 sentences) providing local clues.\n* A **candidate list** of OpenStreetMap (OSM) objects that our retrieval pipeline thinks might match, each with:\n - OSM ID, object type (node / way / relation)\n - Primary name and known alternate names\n - Key location tags (place, amenity, natural, boundary, etc.)\n - Lat/long, containing admin areas, and distance to any coordinates mentioned in the text (if available)\n - System-generated similarity score (descending order)\n\nYour job is to **verify and (re)rank** these candidates so that the true match is at rank 1 or, if missing, to supply the correct OSM ID.\n\n---\n\n## 2 What to Deliver\nFor every record you must:\n\n| Field | What to enter |\n|---------------------------|-------------------------------------------------------------------------------------------|\n| `top_candidate_score` | A quality score **1–5** for the best candidate (1 = wrong object, 5 = perfect match). |\n| `correct_osm_id_if_none` | If no candidate is correct, paste the OSM ID (node/way/relation) you found; else leave blank. |\n| `feedback` | Free-text comments, ambiguous cases, or anything helpful for model improvement. |\n\n---\n\n## 3 Detailed Instructions\n1. **Read the mention & context** \n - Note nearby place names, feature type (city, mountain, river, square, etc.), and any coordinate clues.\n\n2. **Open each candidate** (the tool links to the OSM web viewer): \n - Confirm the feature’s geometry, tags, and admin location. \n - Check alternate names (`name:*`, `alt_name`, `official_name`) and language variants.\n\n3. **Decide correctness & rerank** \n - If one candidate is an exact semantic match, place it first. \n - If several are plausible, order them by:\n 1. Name agreement (including abbreviations & translations) \n 2. Correct feature type (e.g., “Lake” ≠ “Town”) \n 3. Spatial closeness to any coordinates or larger place mentioned in context \n 4. Popularity / prominence when all else is equal",
+    "allow_extra_metadata": true,
+    "distribution": {
+        "strategy": "overlap",
+        "min_submitted": 1
+    },
+    "fields": [
+        {
+            "id": "445bebf4-b9e3-4f3c-a0da-fb352d6063bf",
+            "name": "text",
+            "title": "text",
+            "required": false,
+            "settings": {
+                "type": "text",
+                "use_markdown": true
+            },
+            "dataset_id": "25a3e230-cc98-4072-8c34-38e2870f18bf",
+            "inserted_at": "2025-06-24T11:53:45.951732",
+            "updated_at": "2025-06-24T11:53:45.951732"
+        },
+        {
+            "id": "e7d323c2-68fb-4446-8b66-0327200b838d",
+            "name": "candidates",
+            "title": "Candidate organizations",
+            "required": true,
+            "settings": {
+                "type": "text",
+                "use_markdown": true
+            },
+            "dataset_id": "25a3e230-cc98-4072-8c34-38e2870f18bf",
+            "inserted_at": "2025-06-24T11:53:46.058413",
+            "updated_at": "2025-06-24T11:53:46.058413"
+        }
+    ],
+    "questions": [
+        {
+            "id": "55536afb-6e02-41dc-8802-c3a5c01c35f5",
+            "name": "candidate_rating",
+            "title": "Candidate match (0 for no-match)",
+            "description": "Select which of the candidates match the organization mention",
+            "required": true,
+            "settings": {
+                "type": "rating",
+                "options": [
+                    {
+                        "value": 0
+                    },
+                    {
+                        "value": 1
+                    },
+                    {
+                        "value": 2
+                    },
+                    {
+                        "value": 3
+                    },
+                    {
+                        "value": 4
+                    },
+                    {
+                        "value": 5
+                    },
+                    {
+                        "value": 6
+                    },
+                    {
+                        "value": 7
+                    },
+                    {
+                        "value": 8
+                    },
+                    {
+                        "value": 9
+                    },
+                    {
+                        "value": 10
+                    }
+                ]
+            },
+            "dataset_id": "25a3e230-cc98-4072-8c34-38e2870f18bf",
+            "inserted_at": "2025-06-24T11:53:46.182967",
+            "updated_at": "2025-06-24T11:53:46.182967"
+        },
+        {
+            "id": "878efe1c-ea6b-4907-982f-7348dfe31775",
+            "name": "feedback",
+            "title": "Additional feedback",
+            "description": "Any other observations about this record",
+            "required": false,
+            "settings": {
+                "type": "text",
+                "use_markdown": false
+            },
+            "dataset_id": "25a3e230-cc98-4072-8c34-38e2870f18bf",
+            "inserted_at": "2025-06-24T11:53:46.304361",
+            "updated_at": "2025-06-24T11:53:46.304361"
+        }
+    ],
+    "metadata": [],
+    "vectors": []
+}
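For reference, a configuration like the one in `.argilla/settings.json` above can also be reconstructed programmatically. This is a minimal sketch (not part of this commit), assuming the Argilla 2.x Python SDK; class and argument names follow current Argilla releases, and the guidelines string is abbreviated:

```python
import argilla as rg

# Sketch of the settings captured in .argilla/settings.json (Argilla 2.x API assumed).
settings = rg.Settings(
    guidelines="# OSM Entity Reranking Annotation Guidelines ...",  # full text as in the file above
    allow_extra_metadata=True,
    distribution=rg.TaskDistribution(min_submitted=1),
    fields=[
        rg.TextField(name="text", title="text", required=False, use_markdown=True),
        rg.TextField(name="candidates", title="Candidate organizations", required=True, use_markdown=True),
    ],
    questions=[
        rg.RatingQuestion(
            name="candidate_rating",
            title="Candidate match (0 for no-match)",
            description="Select which of the candidates match the organization mention",
            values=list(range(11)),  # 0-10, as in the exported options
            required=True,
        ),
        rg.TextQuestion(
            name="feedback",
            title="Additional feedback",
            description="Any other observations about this record",
            required=False,
            use_markdown=False,
        ),
    ],
)
```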
.argilla/version.json
ADDED
@@ -0,0 +1,3 @@
+{
+    "argilla": "2.6.0"
+}
README.md
CHANGED
@@ -1,41 +1,188 @@
 ---
-dataset_info:
-  features:
-  - name: id
-    dtype: string
-  - name: status
-    dtype: string
-  - name: inserted_at
-    dtype: timestamp[us]
-  - name: updated_at
-    dtype: timestamp[us]
-  - name: _server_id
-    dtype: string
-  - name: text
-    dtype: string
-  - name: candidates
-    dtype: string
-  - name: candidate_rating.responses
-    sequence: int64
-  - name: candidate_rating.responses.users
-    sequence: string
-  - name: candidate_rating.responses.status
-    sequence: string
-  - name: feedback.responses
-    sequence: string
-  - name: feedback.responses.users
-    sequence: string
-  - name: feedback.responses.status
-    sequence: string
-  splits:
-  - name: train
-    num_bytes: 5794260
-    num_examples: 1855
-  download_size: 1146782
-  dataset_size: 5794260
-configs:
-- config_name: default
-  data_files:
-  - split: train
-    path: data/train-*
+tags:
+- rlfh
+- argilla
+- human-feedback
 ---
+
+# Dataset Card for geotagging_reranking
+
+
+
+
+
+
+
+This dataset has been created with [Argilla](https://github.com/argilla-io/argilla). As shown in the sections below, this dataset can be loaded into your Argilla server as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
+
+
+## Using this dataset with Argilla
+
+To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
+
+```python
+import argilla as rg
+
+ds = rg.Dataset.from_hub("SIRIS-Lab/geotagging_reranking", settings="auto")
+```
+
+This will load the settings and records from the dataset repository and push them to your Argilla server for exploration and annotation.
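If you are working against a self-hosted or private Argilla instance, you first need an authenticated client. A minimal sketch (not part of the committed README), assuming the Argilla 2.x SDK where `rg.Dataset.from_hub` accepts a `client` argument; the URL and API key are placeholders:

```python
import argilla as rg

# Placeholders: point these at your own Argilla deployment.
client = rg.Argilla(api_url="https://your-argilla-server", api_key="your-api-key")

# settings="auto" rebuilds the dataset settings from the .argilla folder in this repo.
ds = rg.Dataset.from_hub(
    "SIRIS-Lab/geotagging_reranking",
    settings="auto",
    client=client,
)
```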
+
+## Using this dataset with `datasets`
+
+To load the records of this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
+
+```python
+from datasets import load_dataset
+
+ds = load_dataset("SIRIS-Lab/geotagging_reranking")
+```
+
+This will only load the records of the dataset, but not the Argilla settings.
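The records follow Argilla's flattened export schema: one column per field plus `<question>.responses` columns for the collected answers (the column names below come from this repository's exported schema). A quick inspection sketch:

```python
from datasets import load_dataset

ds = load_dataset("SIRIS-Lab/geotagging_reranking", split="train")

record = ds[0]
print(record["text"])                        # the mention in context (markdown)
print(record["candidates"])                  # rendered candidate list (markdown)
print(record["candidate_rating.responses"])  # submitted ratings (0 = no match, per the question title)
print(record["feedback.responses"])          # optional free-text feedback per annotator
```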
+
+## Dataset Structure
+
+This dataset repo contains:
+
+* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `rg.Dataset.from_hub` and can be loaded independently using the `datasets` library via `load_dataset`.
+* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
+* A dataset configuration folder conforming to the Argilla dataset format in `.argilla`.
+
+The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.
+
+### Fields
+
+The **fields** are the features or text of a dataset's records. For example, the 'text' column of a text classification dataset or the 'prompt' column of an instruction following dataset.
+
+| Field Name | Title | Type | Required |
+| ---------- | ----- | ---- | -------- |
+| text | text | text | False |
+| candidates | Candidate organizations | text | True |
+
+
+### Questions
+
+The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
+
+| Question Name | Title | Type | Required | Description | Values/Labels |
+| ------------- | ----- | ---- | -------- | ----------- | ------------- |
+| candidate_rating | Candidate match (0 for no-match) | rating | True | Select which of the candidates match the organization mention | [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] |
+| feedback | Additional feedback | text | False | Any other observations about this record | N/A |
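As an illustration (not part of the committed README), the distribution of submitted ratings can be tallied directly from the exported response column; per the question title, 0 means no candidate matched and a positive value selects the matching candidate:

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("SIRIS-Lab/geotagging_reranking", split="train")

# One entry per annotator response; records without responses are skipped.
rating_counts = Counter(
    rating
    for responses in ds["candidate_rating.responses"]
    if responses
    for rating in responses
)
print(dict(sorted(rating_counts.items())))
```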
+
+
+<!-- check length of metadata properties -->
+
+
+
+
+### Data Splits
+
+The dataset contains a single split, which is `train`.
+
+## Dataset Creation
+
+### Curation Rationale
+
+[More Information Needed]
+
+### Source Data
+
+#### Initial Data Collection and Normalization
+
+[More Information Needed]
+
+#### Who are the source language producers?
+
+[More Information Needed]
+
+### Annotations
+
+#### Annotation guidelines
+
+# OSM Entity Reranking Annotation Guidelines
+
+## 1 Task Description
+You will be shown:
+* A **geographic mention** extracted from free text — e.g. “Alexanderplatz”, “Strait of Messina”.
+* A short **context snippet** (± 1–2 sentences) providing local clues.
+* A **candidate list** of OpenStreetMap (OSM) objects that our retrieval pipeline thinks might match, each with:
+  - OSM ID, object type (node / way / relation)
+  - Primary name and known alternate names
+  - Key location tags (place, amenity, natural, boundary, etc.)
+  - Lat/long, containing admin areas, and distance to any coordinates mentioned in the text (if available)
+  - System-generated similarity score (descending order)
+
+Your job is to **verify and (re)rank** these candidates so that the true match is at rank 1 or, if missing, to supply the correct OSM ID.
+
+---
+
+## 2 What to Deliver
+For every record you must:
+
+| Field | What to enter |
+|---------------------------|-------------------------------------------------------------------------------------------|
+| `top_candidate_score` | A quality score **1–5** for the best candidate (1 = wrong object, 5 = perfect match). |
+| `correct_osm_id_if_none` | If no candidate is correct, paste the OSM ID (node/way/relation) you found; else leave blank. |
+| `feedback` | Free-text comments, ambiguous cases, or anything helpful for model improvement. |
+
+---
+
+## 3 Detailed Instructions
+1. **Read the mention & context**
+   - Note nearby place names, feature type (city, mountain, river, square, etc.), and any coordinate clues.
+
+2. **Open each candidate** (the tool links to the OSM web viewer):
+   - Confirm the feature’s geometry, tags, and admin location.
+   - Check alternate names (`name:*`, `alt_name`, `official_name`) and language variants.
+
+3. **Decide correctness & rerank**
+   - If one candidate is an exact semantic match, place it first.
+   - If several are plausible, order them by:
+     1. Name agreement (including abbreviations & translations)
+     2. Correct feature type (e.g., “Lake” ≠ “Town”)
+     3. Spatial closeness to any coordinates or larger place mentioned in context
+     4. Popularity / prominence when all else is equal
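To make the tie-breaking order above concrete, here is a small illustrative sketch (not part of the official guidelines or the committed README); the candidate attributes and scores are hypothetical stand-ins for the four criteria:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    # Hypothetical structure mirroring the attributes listed in Section 1.
    osm_id: str
    name_agreement: float   # 1.0 exact name match ... 0.0 unrelated (incl. alt names / translations)
    type_matches: bool      # e.g. a "Lake" mention should not rank a "Town" first
    distance_km: float      # distance to coordinates or the larger place from the context
    prominence: float       # popularity proxy, used only as the final tie-breaker

def rerank(candidates: list[Candidate]) -> list[Candidate]:
    # Apply the criteria of Section 3.3 in order: name agreement first, then feature
    # type, then spatial closeness, and prominence last.
    return sorted(
        candidates,
        key=lambda c: (-c.name_agreement, not c.type_matches, c.distance_km, -c.prominence),
    )
```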
+
+#### Annotation process
+
+[More Information Needed]
+
+#### Who are the annotators?
+
+[More Information Needed]
+
+### Personal and Sensitive Information
+
+[More Information Needed]
+
+## Considerations for Using the Data
+
+### Social Impact of Dataset
+
+[More Information Needed]
+
+### Discussion of Biases
+
+[More Information Needed]
+
+### Other Known Limitations
+
+[More Information Needed]
+
+## Additional Information
+
+### Dataset Curators
+
+[More Information Needed]
+
+### Licensing Information
+
+[More Information Needed]
+
+### Citation Information
+
+[More Information Needed]
+
+### Contributions
+
+[More Information Needed]