---
language:
- de
- en
- fr
- nl
- sv
license: mit
task_categories:
- text-retrieval
- sentence-similarity
tags:
- entity-linking
- skills
- multilingual
- ranking
- information-retrieval
- ESCO
configs:
- config_name: bel_q_fr_c_en
data_files:
- split: queries
path: "bel_q_fr_c_en/queries-00000-of-00001.parquet"
- split: corpus
path: "bel_q_fr_c_en/corpus-00000-of-00001.parquet"
- config_name: bel_q_fr_c_fr
data_files:
- split: queries
path: "bel_q_fr_c_fr/queries-00000-of-00001.parquet"
- split: corpus
path: "bel_q_fr_c_fr/corpus-00000-of-00001.parquet"
- config_name: bel_q_nl_c_en
data_files:
- split: queries
path: "bel_q_nl_c_en/queries-00000-of-00001.parquet"
- split: corpus
path: "bel_q_nl_c_en/corpus-00000-of-00001.parquet"
- config_name: bel_q_nl_c_nl
data_files:
- split: queries
path: "bel_q_nl_c_nl/queries-00000-of-00001.parquet"
- split: corpus
path: "bel_q_nl_c_nl/corpus-00000-of-00001.parquet"
- config_name: deu_q_de_c_de
data_files:
- split: queries
path: "deu_q_de_c_de/queries-00000-of-00001.parquet"
- split: corpus
path: "deu_q_de_c_de/corpus-00000-of-00001.parquet"
- config_name: deu_q_de_c_en
data_files:
- split: queries
path: "deu_q_de_c_en/queries-00000-of-00001.parquet"
- split: corpus
path: "deu_q_de_c_en/corpus-00000-of-00001.parquet"
- config_name: swe_q_sv_c_en
data_files:
- split: queries
path: "swe_q_sv_c_en/queries-00000-of-00001.parquet"
- split: corpus
path: "swe_q_sv_c_en/corpus-00000-of-00001.parquet"
- config_name: swe_q_sv_c_sv
data_files:
- split: queries
path: "swe_q_sv_c_sv/queries-00000-of-00001.parquet"
- split: corpus
path: "swe_q_sv_c_sv/corpus-00000-of-00001.parquet"
---
# MELS: Multilingual Entity Linking of Skills
MELS is a collection of 8 datasets for evaluating the linking of skill mentions to the
ESCO Skills taxonomy. It covers 3 countries (Belgium, Germany, Sweden) and 4 query
languages (French, Dutch, German, Swedish), each paired with both a local-language and
an English ESCO corpus.
## Background
MELS is a sibling dataset to [MELO (Multilingual Entity Linking of Occupations)](https://huggingface.co/datasets/federetyk/MELO-Benchmark).
Both datasets were built using the same methodology and the same type of source data:
crosswalks between national taxonomies and ESCO, published by official labor-related
organizations from EU member states.
The difference is the entity type:
- **MELO** links occupation mentions (job titles) to ESCO Occupations
- **MELS** links skill mentions to ESCO Skills
MELS covers fewer countries than MELO because fewer EU member states have published
ESCO skill crosswalks. While MELO includes crosswalks from 21+ countries, only 3
countries (Belgium, Germany, Sweden) have published skill crosswalks that could be
used for MELS. This limited scope is why MELS was not published as a standalone
benchmark, but the data remains useful for skill entity linking evaluation.
**2026-01-01 Update**: Austria, Czechia, and Estonia have recently uploaded crosswalks for skills
as well [[*](https://esco.ec.europa.eu/en/use-esco/eures-countries-mapping-tables)]. We plan to
include these in a future version of MELS.
## Dataset Structure
Each subset (configuration) contains two splits:
- **`queries`**: Skill mentions from national taxonomies, with indices of matching ESCO skills
- **`corpus`**: ESCO skill labels
### Schema
**queries split:**
| Column | Type | Description |
|--------|------|-------------|
| `text` | `string` | The skill mention (surface form) |
| `labels` | `list[int]` | Indices of relevant corpus elements |
**corpus split:**
| Column | Type | Description |
|--------|------|-------------|
| `text` | `string` | The ESCO skill label (surface form) |
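To make the schema concrete, here is what a row from each split looks like. The values below are invented for illustration; they are not taken from the actual dataset:

```python
# Illustrative (invented) rows matching the schema above
query_row = {"text": "Datenanalyse", "labels": [12, 845]}  # hypothetical values
corpus_row = {"text": "analyse data"}                      # hypothetical value

# Each integer in `labels` is a row index into the corpus split,
# so a query may map to one or several ESCO skill labels.
assert isinstance(query_row["text"], str)
assert all(isinstance(i, int) for i in query_row["labels"])
```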
## Available Subsets
The subset names follow the pattern: `{country}_q_{query_lang}_c_{corpus_lang}`
| Subset | Country | Query Lang | Corpus Lang | # Queries | # Corpus |
|--------|---------|------------|-------------|-----------|----------|
| `bel_q_fr_c_fr` | Belgium | fr | fr | 2,247 | 17,312 |
| `bel_q_fr_c_en` | Belgium | fr | en | 2,247 | 97,520 |
| `bel_q_nl_c_nl` | Belgium | nl | nl | 2,247 | 25,748 |
| `bel_q_nl_c_en` | Belgium | nl | en | 2,247 | 97,520 |
| `deu_q_de_c_de` | Germany | de | de | 1,722 | 19,466 |
| `deu_q_de_c_en` | Germany | de | en | 1,722 | 97,520 |
| `swe_q_sv_c_sv` | Sweden | sv | sv | 4,381 | 19,251 |
| `swe_q_sv_c_en` | Sweden | sv | en | 4,381 | 100,273 |
### Subset Naming Convention
- `{country}`: ISO 3166-1 alpha-3 country code (e.g., `deu` for Germany)
- `q_{lang}`: Query language (ISO 639-1 code)
- `c_{lang}`: Corpus language (ISO 639-1 code)
## Usage
```python
from datasets import load_dataset

# Load a specific subset
ds = load_dataset("Avature/MELS-Benchmark", "deu_q_de_c_de")

# Access the data
query_surface_forms = ds["queries"]["text"]
corpus_surface_forms = ds["corpus"]["text"]
label_lists = ds["queries"]["labels"]

# Example: get the relevant corpus texts for the first query
query_idx = 0
print(f"Query: {query_surface_forms[query_idx]}")
print("Relevant ESCO skills:")
for corpus_idx in label_lists[query_idx]:
    print(f"  - {corpus_surface_forms[corpus_idx]}")
```
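Since this is a ranking/retrieval benchmark, a model is typically scored per query by comparing its ranked list of corpus indices against the `labels` column. A minimal sketch of a recall@k scorer, shown on invented toy indices rather than the real dataset:

```python
def recall_at_k(ranked_indices, relevant, k):
    """Fraction of the relevant corpus indices found in the top-k ranking."""
    top_k = set(ranked_indices[:k])
    hits = sum(1 for idx in relevant if idx in top_k)
    return hits / len(relevant)

# Toy example (invented indices): a query with two relevant corpus entries
ranked = [4, 1, 7, 2, 9]   # corpus indices as ranked by some model
relevant = [1, 9]          # would come from the `labels` column
print(recall_at_k(ranked, relevant, 3))  # 0.5 (index 1 is in the top 3, 9 is not)
print(recall_at_k(ranked, relevant, 5))  # 1.0 (both found within the top 5)
```

The same loop structure extends to MRR or nDCG; the key point is that `labels` holds positional indices into the corpus split, not ESCO URIs.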
## Relation to MELO
MELS uses the same methodology as MELO. For details on how the datasets were
constructed from ESCO crosswalks, see the MELO paper and repository:
- **Paper:** [MELO: An Evaluation Benchmark for Multilingual Entity Linking of Occupations](https://recsyshr.aau.dk/wp-content/uploads/2024/10/RecSysHR2024-paper_2.pdf)
- **Repository:** [github.com/Avature/melo-benchmark](https://github.com/Avature/melo-benchmark)
- **HuggingFace:** [federetyk/MELO-Benchmark](https://huggingface.co/datasets/federetyk/MELO-Benchmark)
## Citation
If you use this dataset, please cite the MELO paper (which describes the methodology
used to construct both MELO and MELS):
```bibtex
@inproceedings{retyk2024melo,
title = {{MELO: An Evaluation Benchmark for Multilingual Entity Linking of Occupations}},
author = {Federico Retyk and Luis Gasco and Casimiro Pio Carrino and Daniel Deniz and Rabih Zbib},
booktitle = {Proceedings of the 4th Workshop on Recommender Systems for Human Resources
(RecSys in {HR} 2024), in conjunction with the 18th {ACM} Conference on
Recommender Systems},
year = {2024},
url = {https://recsyshr.aau.dk/wp-content/uploads/2024/10/RecSysHR2024-paper_2.pdf},
}
```
## License
This dataset is licensed under the MIT License. See the [LICENSE](https://github.com/Avature/melo-benchmark/blob/main/LICENSE) file for more information.