tags:
- rlhf
- argilla
- human-feedback
Dataset Card for affilgood_el_reranking
This dataset has been created with Argilla. As shown in the sections below, this dataset can be loaded into your Argilla server as explained in Load with Argilla, or used directly with the datasets library in Load with datasets.
Using this dataset with Argilla
To load with Argilla, you'll just need to install Argilla with `pip install argilla --upgrade` and then use the following code:
import argilla as rg
ds = rg.Dataset.from_hub("SIRIS-Lab/affilgood_el_reranking", settings="auto")
This will load the settings and records from the dataset repository and push them to your Argilla server for exploration and annotation.
Using this dataset with datasets
To load the records of this dataset with datasets, you'll just need to install datasets with `pip install datasets --upgrade` and then use the following code:
from datasets import load_dataset
ds = load_dataset("SIRIS-Lab/affilgood_el_reranking")
This will only load the records of the dataset, but not the Argilla settings.
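For orientation, the loaded records follow the field schema documented under Dataset Structure below. The sketch here is purely illustrative: every value is a made-up placeholder, not real data from the dataset.

```python
# Illustrative record shape based on the card's field schema; all values are
# made-up placeholders, not taken from the actual dataset.
record = {
    "span_text": "Dept. of Physics, Example University, Barcelona, Spain",  # required
    "original_text": "Affiliations: Dept. of Physics, Example University",  # optional
    "organization_mention": "Example University",                           # required
    "candidates": "Example University; Example Institute of Technology",    # required
}

# The three required fields per the card's Fields table.
required = {"span_text", "organization_mention", "candidates"}
missing = required - set(record)
print(sorted(missing))  # empty list -> the record has all required fields
```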
Dataset Structure
This dataset repo contains:
- Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `rg.Dataset.from_hub` and can be loaded independently using the `datasets` library via `load_dataset`.
- The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.
- A dataset configuration folder conforming to the Argilla dataset format in `.argilla`.
The dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.
Fields
The fields are the features or text of a dataset's records. For example, the 'text' column of a text classification dataset or the 'prompt' column of an instruction following dataset.
| Field Name | Title | Type | Required |
|---|---|---|---|
| span_text | Affiliation span | text | True |
| original_text | Original affiliation string | text | False |
| organization_mention | Organization mention | text | True |
| candidates | Candidate organizations | text | True |
Questions
The questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
|---|---|---|---|---|---|
| candidate_rating | Candidate match (0 for no-match) | rating | True | Select which of the candidates match the organization mention | [0, 1, 2, 3, 4, 5] |
| error_categories | Errors | multi_label_selection | False | Select any errors you identify in the processing pipeline | ['Translation error', 'Span detection error', 'NER error'] |
| correct_ror_id | Correct ROR ID | text | False | If none of the candidates is correct, provide the correct ROR ID if available (https://ror.org/XXXXXXXX) | N/A |
| feedback | Additional feedback | text | False | Any other observations about this record | N/A |
| entities | Entity spans | span | False | Entity spans identified in the affiliation text | ['ORG', 'SUB', 'SUBORG', 'CITY', 'ADDRESS', 'REGION', 'COUNTRY', 'POSTALCODE'] |
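The question definitions above can be encoded as a small validation helper. The sketch below is our own illustration, not part of the Argilla API; the function name and structure are assumptions, and it only checks the value ranges and labels listed in the tables:

```python
# Hedged sketch: check an annotation response against the question schema
# documented in this card. The helper itself is illustrative, not from Argilla.
VALID_RATINGS = {0, 1, 2, 3, 4, 5}  # 0 means "no match" per candidate_rating
VALID_ERRORS = {"Translation error", "Span detection error", "NER error"}

def validate_response(rating, errors=(), correct_ror_id=None):
    """Validate a single response; only `rating` is required."""
    if rating not in VALID_RATINGS:
        raise ValueError(f"rating must be one of {sorted(VALID_RATINGS)}")
    unknown = set(errors) - VALID_ERRORS
    if unknown:
        raise ValueError(f"unknown error categories: {sorted(unknown)}")
    if correct_ror_id is not None and not correct_ror_id.startswith("https://ror.org/"):
        raise ValueError("correct_ror_id should look like https://ror.org/XXXXXXXX")
    return True

assert validate_response(0)                             # no-match rating is valid
assert validate_response(5, errors=["NER error"])       # known error category
assert validate_response(3, correct_ror_id="https://ror.org/012345678")
```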
Data Splits
The dataset contains a single split, which is train.
Dataset Creation
Curation Rationale
[More Information Needed]
Source Data
Initial Data Collection and Normalization
[More Information Needed]
Who are the source language producers?
[More Information Needed]
Annotations
Annotation guidelines
AffilGood Entity Linking Annotation Guidelines
Task Description
Your task is to evaluate and improve the entity linking capabilities of the AffilGood system. You will be presented with organization mentions extracted from academic affiliation strings, along with candidate organizations from the Research Organization Registry (ROR).
What to Validate
For each record, please evaluate:
- The quality of candidate matches: Rate how well each candidate matches the extracted organization mention
- Potential errors in the processing pipeline: Identify if there are issues with span detection, NER, or candidate generation
- The correct ROR ID: if none of the candidates is correct, supply the correct ROR identifier when available
Instructions
- First, review the span text and the mention text to understand the context
- Examine each candidate organization and its score
- Rate the quality of the candidates from 0-5 (0=no match, 1=poor match, 5=perfect match)
- If you notice any errors in how the text was processed, select the appropriate error categories
- If none of the candidates is the correct organization, provide the correct ROR ID
- Add any additional notes or observations in the feedback field
Tips for Evaluation
- Consider both the textual similarity and semantic match between the mention and candidates
- Remember that abbreviations, alternative names, and translations may be valid matches
- Location information (city, country) can help distinguish between organizations with similar names
- If a mention includes both a department and parent organization, the correct link should usually be to the parent organization
Annotation process
[More Information Needed]
Who are the annotators?
[More Information Needed]
Personal and Sensitive Information
[More Information Needed]
Considerations for Using the Data
Social Impact of Dataset
[More Information Needed]
Discussion of Biases
[More Information Needed]
Other Known Limitations
[More Information Needed]
Additional Information
Dataset Curators
[More Information Needed]
Licensing Information
[More Information Needed]
Citation Information
[More Information Needed]
Contributions
[More Information Needed]