Commit fb797af (verified) by PabloAccuosto
Parent: b7d4b66

Upload folder using huggingface_hub

Files changed (4):
  1. .argilla/dataset.json +16 -0
  2. .argilla/settings.json +218 -0
  3. .argilla/version.json +3 -0
  4. README.md +173 -81
.argilla/dataset.json ADDED
@@ -0,0 +1,16 @@
+ {
+   "id": "393de76d-8f7c-43ff-97f6-080b04604eac",
+   "name": "affilgood_el_reranking",
+   "guidelines": "# AffilGood Entity Linking Annotation Guidelines\n\n## Task Description\nYour task is to evaluate and improve the entity linking capabilities of the AffilGood system. You will be presented with organization mentions extracted from academic affiliation strings, along with candidate organizations from the Research Organization Registry (ROR).\n\n## What to Validate\nFor each record, please evaluate:\n\n1. **The quality of candidate matches**: Rate how well each candidate matches the extracted organization mention\n2. **Potential errors in the processing pipeline**: Identify if there are issues with span detection, NER, or candidate generation\n3. **Provide the correct ROR ID**: If none of the candidates is correct, provide the correct ROR ID\n\n## Instructions\n1. First, review the **span text** and the **mention text** to understand the context\n2. Examine each **candidate organization** and its score\n3. Rate the quality of the candidates from 1-5 (1=poor match, 5=perfect match)\n4. If you notice any errors in how the text was processed, select the appropriate error categories\n5. If none of the candidates is the correct organization, provide the correct ROR ID\n6. Add any additional notes or observations in the feedback field\n\n## Tips for Evaluation\n- Consider both the textual similarity and semantic match between the mention and candidates\n- Remember that abbreviations, alternative names, and translations may be valid matches\n- Location information (city, country) can help distinguish between organizations with similar names\n- If a mention includes both a department and parent organization, the correct link should usually be to the parent organization",
+   "allow_extra_metadata": true,
+   "status": "ready",
+   "distribution": {
+     "strategy": "overlap",
+     "min_submitted": 1
+   },
+   "metadata": null,
+   "workspace_id": "0617b7ed-4e77-492f-bc8f-711684fe73ef",
+   "last_activity_at": "2025-05-26T14:59:29.080301",
+   "inserted_at": "2025-04-23T14:03:48.369581",
+   "updated_at": "2025-04-23T14:03:49.524979"
+ }
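Since the dataset configuration is plain JSON, it can be inspected without the Argilla SDK. The sketch below parses a trimmed excerpt of the file above (the embedded string is abridged, not the full file) and reads the task-distribution settings:

```python
import json

# Abridged excerpt of .argilla/dataset.json ("guidelines" and other
# keys omitted for brevity).
dataset_config = json.loads("""
{
  "name": "affilgood_el_reranking",
  "status": "ready",
  "distribution": {"strategy": "overlap", "min_submitted": 1}
}
""")

# "overlap" with min_submitted=1 means a record counts as complete once
# a single annotator has submitted a response for it.
strategy = dataset_config["distribution"]["strategy"]
min_submitted = dataset_config["distribution"]["min_submitted"]
```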
.argilla/settings.json ADDED
@@ -0,0 +1,218 @@
+ {
+   "guidelines": "# AffilGood Entity Linking Annotation Guidelines\n\n## Task Description\nYour task is to evaluate and improve the entity linking capabilities of the AffilGood system. You will be presented with organization mentions extracted from academic affiliation strings, along with candidate organizations from the Research Organization Registry (ROR).\n\n## What to Validate\nFor each record, please evaluate:\n\n1. **The quality of candidate matches**: Rate how well each candidate matches the extracted organization mention\n2. **Potential errors in the processing pipeline**: Identify if there are issues with span detection, NER, or candidate generation\n3. **Provide the correct ROR ID**: If none of the candidates is correct, provide the correct ROR ID\n\n## Instructions\n1. First, review the **span text** and the **mention text** to understand the context\n2. Examine each **candidate organization** and its score\n3. Rate the quality of the candidates from 1-5 (1=poor match, 5=perfect match)\n4. If you notice any errors in how the text was processed, select the appropriate error categories\n5. If none of the candidates is the correct organization, provide the correct ROR ID\n6. Add any additional notes or observations in the feedback field\n\n## Tips for Evaluation\n- Consider both the textual similarity and semantic match between the mention and candidates\n- Remember that abbreviations, alternative names, and translations may be valid matches\n- Location information (city, country) can help distinguish between organizations with similar names\n- If a mention includes both a department and parent organization, the correct link should usually be to the parent organization",
+   "allow_extra_metadata": true,
+   "distribution": {
+     "strategy": "overlap",
+     "min_submitted": 1
+   },
+   "fields": [
+     {
+       "id": "a7603a75-f062-4bf9-90aa-855c06edaf70",
+       "name": "span_text",
+       "title": "Affiliation span",
+       "required": true,
+       "settings": {
+         "type": "text",
+         "use_markdown": false
+       },
+       "dataset_id": "393de76d-8f7c-43ff-97f6-080b04604eac",
+       "inserted_at": "2025-04-23T14:03:48.574351",
+       "updated_at": "2025-04-23T14:03:48.574351"
+     },
+     {
+       "id": "aa4d0011-3420-4495-b925-888cba7f2136",
+       "name": "original_text",
+       "title": "Original affiliation string",
+       "required": false,
+       "settings": {
+         "type": "text",
+         "use_markdown": false
+       },
+       "dataset_id": "393de76d-8f7c-43ff-97f6-080b04604eac",
+       "inserted_at": "2025-04-23T14:03:48.692745",
+       "updated_at": "2025-04-23T14:03:48.692745"
+     },
+     {
+       "id": "1dd69b98-f535-459a-9843-668528eee88a",
+       "name": "organization_mention",
+       "title": "Organization mention",
+       "required": true,
+       "settings": {
+         "type": "text",
+         "use_markdown": true
+       },
+       "dataset_id": "393de76d-8f7c-43ff-97f6-080b04604eac",
+       "inserted_at": "2025-04-23T14:03:48.825232",
+       "updated_at": "2025-04-23T14:03:48.825232"
+     },
+     {
+       "id": "2ada9250-690c-43f5-8168-34ecd4a54d93",
+       "name": "candidates",
+       "title": "Candidate organizations",
+       "required": true,
+       "settings": {
+         "type": "text",
+         "use_markdown": true
+       },
+       "dataset_id": "393de76d-8f7c-43ff-97f6-080b04604eac",
+       "inserted_at": "2025-04-23T14:03:48.925995",
+       "updated_at": "2025-04-23T14:03:48.925995"
+     }
+   ],
+   "questions": [
+     {
+       "id": "54e40749-e83d-45a2-86b2-fde8cd2a0f1d",
+       "name": "candidate_rating",
+       "title": "Candidate match (0 for no-match)",
+       "description": "Select which of the candidates match the organization mention",
+       "required": true,
+       "settings": {
+         "type": "rating",
+         "options": [
+           { "value": 0 },
+           { "value": 1 },
+           { "value": 2 },
+           { "value": 3 },
+           { "value": 4 },
+           { "value": 5 }
+         ]
+       },
+       "dataset_id": "393de76d-8f7c-43ff-97f6-080b04604eac",
+       "inserted_at": "2025-04-23T14:03:49.025572",
+       "updated_at": "2025-04-23T14:03:49.025572"
+     },
+     {
+       "id": "00945ddb-aa5a-40ec-b2e6-6c106799376f",
+       "name": "error_categories",
+       "title": "Errors",
+       "description": "Select any errors you identify in the processing pipeline",
+       "required": false,
+       "settings": {
+         "type": "multi_label_selection",
+         "options": [
+           { "value": "Translation error", "text": "Translation error", "description": null },
+           { "value": "Span detection error", "text": "Span detection error", "description": null },
+           { "value": "NER error", "text": "NER error", "description": null }
+         ],
+         "visible_options": 3,
+         "options_order": "natural"
+       },
+       "dataset_id": "393de76d-8f7c-43ff-97f6-080b04604eac",
+       "inserted_at": "2025-04-23T14:03:49.132560",
+       "updated_at": "2025-04-23T14:03:49.132560"
+     },
+     {
+       "id": "bd82aafc-cd32-4e24-af8e-2088bd6123f9",
+       "name": "correct_ror_id",
+       "title": "Correct ROR ID",
+       "description": "If none of the candidates is correct, provide the correct ROR ID if available (https://ror.org/XXXXXXXX)",
+       "required": false,
+       "settings": {
+         "type": "text",
+         "use_markdown": false
+       },
+       "dataset_id": "393de76d-8f7c-43ff-97f6-080b04604eac",
+       "inserted_at": "2025-04-23T14:03:49.226173",
+       "updated_at": "2025-04-23T14:03:49.226173"
+     },
+     {
+       "id": "6f661f07-21b7-43d8-8fa5-4a1fb618e9b7",
+       "name": "feedback",
+       "title": "Additional feedback",
+       "description": "Any other observations about this record",
+       "required": false,
+       "settings": {
+         "type": "text",
+         "use_markdown": false
+       },
+       "dataset_id": "393de76d-8f7c-43ff-97f6-080b04604eac",
+       "inserted_at": "2025-04-23T14:03:49.327283",
+       "updated_at": "2025-04-23T14:03:49.327283"
+     },
+     {
+       "id": "625877d2-ed0f-42be-a136-216cfde931c7",
+       "name": "entities",
+       "title": "Entity spans",
+       "description": "Entity spans identified in the affiliation text",
+       "required": false,
+       "settings": {
+         "type": "span",
+         "field": "span_text",
+         "options": [
+           { "value": "ORG", "text": "ORG", "description": null },
+           { "value": "SUB", "text": "SUB", "description": null },
+           { "value": "SUBORG", "text": "SUBORG", "description": null },
+           { "value": "CITY", "text": "CITY", "description": null },
+           { "value": "ADDRESS", "text": "ADDRESS", "description": null },
+           { "value": "REGION", "text": "REGION", "description": null },
+           { "value": "COUNTRY", "text": "COUNTRY", "description": null },
+           { "value": "POSTALCODE", "text": "POSTALCODE", "description": null }
+         ],
+         "visible_options": 8,
+         "allow_overlapping": false,
+         "allow_character_annotation": true
+       },
+       "dataset_id": "393de76d-8f7c-43ff-97f6-080b04604eac",
+       "inserted_at": "2025-04-23T14:03:49.424669",
+       "updated_at": "2025-04-23T14:03:49.424669"
+     }
+   ],
+   "metadata": [],
+   "vectors": []
+ }
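The fields and questions declared in this settings file can likewise be enumerated with the standard library alone. A minimal sketch, using an excerpt abridged to just the keys it reads:

```python
import json

# Abridged excerpt of .argilla/settings.json: one entry per question,
# trimmed to the "name" and "settings.type" keys.
settings = json.loads("""
{
  "questions": [
    {"name": "candidate_rating", "settings": {"type": "rating"}},
    {"name": "error_categories", "settings": {"type": "multi_label_selection"}},
    {"name": "correct_ror_id", "settings": {"type": "text"}},
    {"name": "feedback", "settings": {"type": "text"}},
    {"name": "entities", "settings": {"type": "span"}}
  ]
}
""")

# Map each annotation question to its UI/question type.
question_types = {q["name"]: q["settings"]["type"] for q in settings["questions"]}
```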
.argilla/version.json ADDED
@@ -0,0 +1,3 @@
+ {
+   "argilla": "2.6.0"
+ }
README.md CHANGED
@@ -1,83 +1,175 @@
  ---
- dataset_info:
-   features:
-   - name: id
-     dtype: string
-   - name: status
-     dtype: string
-   - name: inserted_at
-     dtype: timestamp[us]
-   - name: updated_at
-     dtype: timestamp[us]
-   - name: _server_id
-     dtype: string
-   - name: span_text
-     dtype: string
-   - name: original_text
-     dtype: string
-   - name: organization_mention
-     dtype: string
-   - name: candidates
-     dtype: string
-   - name: candidate_rating.responses
-     sequence: int64
-   - name: candidate_rating.responses.users
-     sequence: string
-   - name: candidate_rating.responses.status
-     sequence: string
-   - name: error_categories.responses
-     sequence:
-       sequence: string
-   - name: error_categories.responses.users
-     sequence: string
-   - name: error_categories.responses.status
-     sequence: string
-   - name: correct_ror_id.responses
-     sequence: string
-   - name: correct_ror_id.responses.users
-     sequence: string
-   - name: correct_ror_id.responses.status
-     sequence: string
-   - name: feedback.responses
-     sequence: string
-   - name: feedback.responses.users
-     sequence: string
-   - name: feedback.responses.status
-     sequence: string
-   - name: entities.responses
-     list:
-       list:
-       - name: end
-         dtype: int64
-       - name: label
-         dtype: string
-       - name: start
-         dtype: int64
-   - name: entities.responses.users
-     sequence: string
-   - name: entities.responses.status
-     sequence: string
-   - name: entities.suggestion
-     list:
-     - name: end
-       dtype: int64
-     - name: label
-       dtype: string
-     - name: start
-       dtype: int64
-   - name: entities.suggestion.agent
-     dtype: string
-   - name: entities.suggestion.score
-     dtype: 'null'
-   splits:
-   - name: train
-     num_bytes: 2993217
-     num_examples: 2281
-   download_size: 771334
-   dataset_size: 2993217
-   configs:
-   - config_name: default
-     data_files:
-     - split: train
-       path: data/train-*
+ tags:
+ - rlfh
+ - argilla
+ - human-feedback
  ---
+
+ # Dataset Card for affilgood_el_reranking
+
+ This dataset has been created with [Argilla](https://github.com/argilla-io/argilla). As shown in the sections below, this dataset can be loaded into your Argilla server as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
+
+ ## Using this dataset with Argilla
+
+ To load with Argilla, install Argilla with `pip install argilla --upgrade` and then use the following code:
+
+ ```python
+ import argilla as rg
+
+ ds = rg.Dataset.from_hub("SIRIS-Lab/affilgood_el_reranking", settings="auto")
+ ```
+
+ This will load the settings and records from the dataset repository and push them to your Argilla server for exploration and annotation.
+
+ ## Using this dataset with `datasets`
+
+ To load the records of this dataset with `datasets`, install `datasets` with `pip install datasets --upgrade` and then use the following code:
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("SIRIS-Lab/affilgood_el_reranking")
+ ```
+
+ This will only load the records of the dataset, but not the Argilla settings.
+
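When loaded this way, annotator responses appear as flattened columns such as `candidate_rating.responses`, which hold one value per submitted response. A minimal sketch of tallying rating values; the rows below are made up to stand in for `ds["train"]` (only the column name comes from the dataset's features):

```python
from collections import Counter

# Illustrative rows standing in for ds["train"]; each record carries a
# list of ratings, one per annotator response.
rows = [
    {"candidate_rating.responses": [5]},
    {"candidate_rating.responses": [0]},
    {"candidate_rating.responses": [5, 4]},
]

# Flatten the per-record response lists and tally the rating values.
rating_counts = Counter(
    r for row in rows for r in row["candidate_rating.responses"]
)
```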
+ ## Dataset Structure
+
+ This dataset repo contains:
+
+ * Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `rg.Dataset.from_hub` and can be loaded independently using the `datasets` library via `load_dataset`.
+ * The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
+ * A dataset configuration folder conforming to the Argilla dataset format in `.argilla`.
+
+ The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.
+
+ ### Fields
+
+ The **fields** are the features or text of a dataset's records. For example, the 'text' column of a text classification dataset or the 'prompt' column of an instruction-following dataset.
+
+ | Field Name | Title | Type | Required |
+ | ---------- | ----- | ---- | -------- |
+ | span_text | Affiliation span | text | True |
+ | original_text | Original affiliation string | text | False |
+ | organization_mention | Organization mention | text | True |
+ | candidates | Candidate organizations | text | True |
+
+ ### Questions
+
+ The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
+
+ | Question Name | Title | Type | Required | Description | Values/Labels |
+ | ------------- | ----- | ---- | -------- | ----------- | ------------- |
+ | candidate_rating | Candidate match (0 for no-match) | rating | True | Select which of the candidates match the organization mention | [0, 1, 2, 3, 4, 5] |
+ | error_categories | Errors | multi_label_selection | False | Select any errors you identify in the processing pipeline | ['Translation error', 'Span detection error', 'NER error'] |
+ | correct_ror_id | Correct ROR ID | text | False | If none of the candidates is correct, provide the correct ROR ID if available (https://ror.org/XXXXXXXX) | N/A |
+ | feedback | Additional feedback | text | False | Any other observations about this record | N/A |
+ | entities | Entity spans | span | False | Entity spans identified in the affiliation text | ['ORG', 'SUB', 'SUBORG', 'CITY', 'ADDRESS', 'REGION', 'COUNTRY', 'POSTALCODE'] |
+
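As a sketch of how these question definitions constrain responses, the following checks a hypothetical response payload against the rating scale and error labels from the table above (the payload and variable names are illustrative, not part of the dataset or the Argilla API):

```python
# Allowed answer values, taken from the question definitions above.
ALLOWED_RATINGS = {0, 1, 2, 3, 4, 5}
ALLOWED_ERRORS = {"Translation error", "Span detection error", "NER error"}

# Hypothetical annotator response for one record.
response = {"candidate_rating": 4, "error_categories": ["NER error"]}

# A response is well-formed if the rating is on the 0-5 scale and every
# selected error label is one of the defined categories.
valid = (
    response["candidate_rating"] in ALLOWED_RATINGS
    and set(response.get("error_categories", [])) <= ALLOWED_ERRORS
)
```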
+ ### Data Splits
+
+ The dataset contains a single split, which is `train`.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation guidelines
+
+ # AffilGood Entity Linking Annotation Guidelines
+
+ ## Task Description
+ Your task is to evaluate and improve the entity linking capabilities of the AffilGood system. You will be presented with organization mentions extracted from academic affiliation strings, along with candidate organizations from the Research Organization Registry (ROR).
+
+ ## What to Validate
+ For each record, please evaluate:
+
+ 1. **The quality of candidate matches**: Rate how well each candidate matches the extracted organization mention
+ 2. **Potential errors in the processing pipeline**: Identify if there are issues with span detection, NER, or candidate generation
+ 3. **Provide the correct ROR ID**: If none of the candidates is correct, provide the correct ROR ID
+
+ ## Instructions
+ 1. First, review the **span text** and the **mention text** to understand the context
+ 2. Examine each **candidate organization** and its score
+ 3. Rate the quality of the candidates from 1-5 (1 = poor match, 5 = perfect match)
+ 4. If you notice any errors in how the text was processed, select the appropriate error categories
+ 5. If none of the candidates is the correct organization, provide the correct ROR ID
+ 6. Add any additional notes or observations in the feedback field
+
+ ## Tips for Evaluation
+ - Consider both the textual similarity and semantic match between the mention and candidates
+ - Remember that abbreviations, alternative names, and translations may be valid matches
+ - Location information (city, country) can help distinguish between organizations with similar names
+ - If a mention includes both a department and parent organization, the correct link should usually be to the parent organization
+
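When reviewing `correct_ror_id` answers, a loose format check can catch obvious typos before they reach the registry. The helper below is an illustrative sketch, not part of AffilGood: in practice ROR IDs are nine lowercase alphanumeric characters beginning with "0", and this pattern does not perform the official ROR checksum validation.

```python
import re

# Loose pattern for a ROR ID given as a full https://ror.org/... URL.
# This is a sanity check only, not the official checksum validation.
ROR_URL = re.compile(r"^https://ror\.org/(0[a-z0-9]{8})$")

def extract_ror_id(answer):
    """Return the bare ROR ID from an annotator's answer, or None."""
    match = ROR_URL.match(answer.strip())
    return match.group(1) if match else None
```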
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ [More Information Needed]
+
+ ### Contributions
+
+ [More Information Needed]