tags:
- rlhf
- argilla
- human-feedback
Dataset Card for geotagging_reranking
This dataset has been created with Argilla. As shown in the sections below, this dataset can be loaded into your Argilla server as explained in Load with Argilla, or used directly with the `datasets` library in Load with datasets.
Using this dataset with Argilla
To load with Argilla, you'll just need to install Argilla with `pip install argilla --upgrade` and then use the following code:
import argilla as rg
ds = rg.Dataset.from_hub("SIRIS-Lab/geotagging_reranking", settings="auto")
This will load the settings and records from the dataset repository and push them to your Argilla server for exploration and annotation.
Using this dataset with datasets
To load the records of this dataset with `datasets`, you'll just need to install `datasets` with `pip install datasets --upgrade` and then use the following code:
from datasets import load_dataset
ds = load_dataset("SIRIS-Lab/geotagging_reranking")
This will only load the records of the dataset, but not the Argilla settings.
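Once loaded, each record is a flat row whose columns mirror the fields and questions described in the Dataset Structure section. A minimal sketch of what a row might look like; the column names (`text`, `candidates`, `candidate_rating.suggestion`) are assumptions based on this card's tables and Argilla's usual export layout, so check `ds.column_names` against the real dataset:

```python
# Illustrative record shape only -- the real dataset may use different
# column names for suggestions and metadata.
record = {
    "text": "The new campus sits next to Alexanderplatz.",
    "candidates": "1) node/123 Alexanderplatz (place=square) ...",
    "candidate_rating.suggestion": 1,  # model-suggested matching candidate
}

mention_text = record["text"]
suggested_rank = record["candidate_rating.suggestion"]
print(mention_text)
print(suggested_rank)
```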
Dataset Structure
This dataset repo contains:
- Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `rg.Dataset.from_hub` and can be loaded independently using the `datasets` library via `load_dataset`.
- The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.
- A dataset configuration folder conforming to the Argilla dataset format in `.argilla`.
The dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.
Fields
The fields are the features or text of a dataset's records. For example, the 'text' column of a text classification dataset or the 'prompt' column of an instruction-following dataset.
| Field Name | Title | Type | Required |
|---|---|---|---|
| text | text | text | False |
| candidates | Candidate organizations | text | True |
Questions
The questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
|---|---|---|---|---|---|
| candidate_rating | Candidate match (0 for no-match) | rating | True | Select which of the candidates match the organization mention | [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] |
| feedback | Additional feedback | text | False | Any other observations about this record | N/A |
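Since a rating of 0 means no candidate matched, a downstream consumer will typically want to split records on that value before training a reranker. A minimal sketch, assuming responses arrive as plain dicts with a `candidate_rating` key (the real exported column layout may differ):

```python
def split_by_match(records):
    """Separate no-match records (rating 0) from records where one of
    the candidates at position 1-10 was selected as the match."""
    matched = [r for r in records if r["candidate_rating"] != 0]
    no_match = [r for r in records if r["candidate_rating"] == 0]
    return matched, no_match

records = [
    {"id": "a", "candidate_rating": 0},  # annotator found no match
    {"id": "b", "candidate_rating": 3},  # third candidate matched
]
matched, no_match = split_by_match(records)
print(len(matched), len(no_match))  # -> 1 1
```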
Data Splits
The dataset contains a single split, which is train.
Dataset Creation
Curation Rationale
[More Information Needed]
Source Data
Initial Data Collection and Normalization
[More Information Needed]
Who are the source language producers?
[More Information Needed]
Annotations
Annotation guidelines
OSM Entity Reranking Annotation Guidelines
1 Task Description
You will be shown:
- A geographic mention extracted from free text — e.g. “Alexanderplatz”, “Strait of Messina”.
- A short context snippet (± 1–2 sentences) providing local clues.
- A candidate list of OpenStreetMap (OSM) objects that our retrieval pipeline thinks might match, each with:
- OSM ID, object type (node / way / relation)
- Primary name and known alternate names
- Key location tags (place, amenity, natural, boundary, etc.)
- Lat/long, containing admin areas, and distance to any coordinates mentioned in the text (if available)
- System-generated similarity score (descending order)
Your job is to verify and (re)rank these candidates so that the true match is at rank 1 or, if missing, to supply the correct OSM ID.
2 What to Deliver
For every record you must:
| Field | What to enter |
|---|---|
| top_candidate_score | A quality score 1–5 for the best candidate (1 = wrong object, 5 = perfect match). |
| correct_osm_id_if_none | If no candidate is correct, paste the OSM ID (node/way/relation) you found; otherwise leave blank. |
| feedback | Free-text comments, ambiguous cases, or anything helpful for model improvement. |
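The deliverables above can be checked mechanically before accepting a submission. A minimal validator sketch, assuming each completed row is a dict keyed by the field names in the table; the OSM ID pattern is illustrative, not an official format check:

```python
import re

# Accepts IDs like "node/240109189", "way/5013364", "relation/62422".
OSM_ID_RE = re.compile(r"^(node|way|relation)/\d+$")

def validate_row(row):
    """Return a list of problems with one annotation row; empty = valid."""
    problems = []
    score = row.get("top_candidate_score")
    if not isinstance(score, int) or not 1 <= score <= 5:
        problems.append("top_candidate_score must be an integer in 1-5")
    osm_id = row.get("correct_osm_id_if_none", "")
    if osm_id and not OSM_ID_RE.match(osm_id):
        problems.append("correct_osm_id_if_none must look like node/123")
    return problems

print(validate_row({"top_candidate_score": 5, "correct_osm_id_if_none": ""}))     # -> []
print(validate_row({"top_candidate_score": 0, "correct_osm_id_if_none": "foo"}))  # -> two problems
```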
3 Detailed Instructions
1. **Read the mention & context**
   - Note nearby place names, the feature type (city, mountain, river, square, etc.), and any coordinate clues.
2. **Open each candidate** (the tool links to the OSM web viewer)
   - Confirm the feature’s geometry, tags, and admin location.
   - Check alternate names (`name:*`, `alt_name`, `official_name`) and language variants.
3. **Decide correctness & rerank**
   - If one candidate is an exact semantic match, place it first.
   - If several are plausible, order them by:
     1. Name agreement (including abbreviations & translations)
     2. Correct feature type (e.g., “Lake” ≠ “Town”)
     3. Spatial closeness to any coordinates or larger place mentioned in context
     4. Popularity / prominence when all else is equal
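The tie-break ordering in the reranking step can be expressed as a single sort key. A sketch under simplifying assumptions: name and type agreement are reduced to booleans, and distance and prominence are pre-computed; all attribute names here are illustrative, not part of the dataset schema:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    osm_id: str
    name_matches: bool   # name agreement incl. abbreviations/translations
    type_matches: bool   # correct feature type, e.g. "Lake" vs "Town"
    distance_km: float   # closeness to coordinates/places in the context
    prominence: int      # popularity; higher = more prominent

def rerank(candidates):
    # Sort by the guideline order: name agreement first, then feature
    # type, then spatial closeness (smaller is better), then prominence
    # (larger is better) as the final tie-breaker.
    return sorted(
        candidates,
        key=lambda c: (not c.name_matches, not c.type_matches,
                       c.distance_km, -c.prominence),
    )

cands = [
    Candidate("node/2", name_matches=True, type_matches=False,
              distance_km=1.0, prominence=5),
    Candidate("node/1", name_matches=True, type_matches=True,
              distance_km=3.0, prominence=2),
]
print([c.osm_id for c in rerank(cands)])  # -> ['node/1', 'node/2']
```

Note that feature-type agreement outranks raw distance here, mirroring the guideline that a nearby object of the wrong kind is still a wrong match.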
Annotation process
[More Information Needed]
Who are the annotators?
[More Information Needed]
Personal and Sensitive Information
[More Information Needed]
Considerations for Using the Data
Social Impact of Dataset
[More Information Needed]
Discussion of Biases
[More Information Needed]
Other Known Limitations
[More Information Needed]
Additional Information
Dataset Curators
[More Information Needed]
Licensing Information
[More Information Needed]
Citation Information
[More Information Needed]
Contributions
[More Information Needed]