Modalities: Text
Formats: csv
Languages: Catalan
Libraries: Datasets, pandas
carmentano committed · Commit 6d8a4e2 · 1 Parent(s): 6a37dd4

Update README.md

Files changed (1): README.md (+2 −2)
README.md CHANGED
@@ -32,7 +32,7 @@ task_ids:
 
 ### Dataset Summary
 
-STS corpus is a benchmark for evaluating Semantic Text Similarity in Catalan. This dataset was developed by BSC TeMU as part of the AINA project, to enrich the Catalan Language Understanding Benchmark (CLUB).
+STS corpus is a benchmark for evaluating Semantic Text Similarity in Catalan. This dataset was developed by [BSC TeMU](https://temu.bsc.es/) as part of the [projecte Aina](https://politiquesdigitals.gencat.cat/ca/tic/aina-el-projecte-per-garantir-el-catala-en-lera-digital/), to enrich the [Catalan Language Understanding Benchmark (CLUB)](https://temu-bsc.github.io/catalan-language-understanding-benchmark/).
 
 
 ### Supported Tasks and Leaderboards
@@ -82,7 +82,7 @@ This dataset follows [SemEval](https://www.aclweb.org/anthology/S13-1004.pdf) ch
 
 ### Methodology
 
-Random sentences were extracted from 3 Catalan corpus: ACN, Oscar and Wikipedia, and we generated candidate pairs using a combination of metrics from Doc2Vec, Jaccard and a BERT-like model (“[distiluse-base-multilingual-cased-v2](https://huggingface.co/distilbert-base-multilingual-cased)”). Finally, we manually reviewed the generated pairs to reject non-relevant pairs (identical or ungrammatical sentences, etc.) before providing them to the annotation team.
+Random sentences were extracted from 3 Catalan corpus: [ACN](https://www.acn.cat/), [Oscar](https://oscar-corpus.com/) and [Wikipedia](ca.wikipedia.org), and we generated candidate pairs using a combination of metrics from Doc2Vec, Jaccard and a BERT-like model (“[distiluse-base-multilingual-cased-v2](https://huggingface.co/distilbert-base-multilingual-cased)”). Finally, we manually reviewed the generated pairs to reject non-relevant pairs (identical or ungrammatical sentences, etc.) before providing them to the annotation team.
 
 The average of the four annotations was selected as a “ground truth” for each sentence pair, except when an annotator diverged in more than one unit from the average. In these cases, we discarded the divergent annotation and recalculated the average without it. We also discarded 45 sentence pairs because the annotators disagreed too much.
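The methodology above combines several similarity signals (Doc2Vec, Jaccard, and sentence-embedding cosine similarity) to propose candidate pairs. As a minimal sketch of just the Jaccard component — assuming simple whitespace tokenization, which the README does not specify — one of those signals could look like:

```python
def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two sentences.

    Illustrative only: the actual pipeline's tokenization and how the
    three metrics are combined are not described in the dataset card.
    """
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0  # two empty sentences are trivially identical
    return len(ta & tb) / len(ta | tb)
```

In practice such a lexical-overlap score would be mixed with the embedding-based metrics, since Jaccard alone misses paraphrases that share no surface vocabulary.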
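The ground-truth rule described in the last paragraph — average the four annotations, but drop any annotation that diverges by more than one unit from the average and recompute — can be sketched as follows (a hypothetical helper written from the card's description, not the project's actual code):

```python
def ground_truth(scores, max_divergence=1.0):
    """Aggregate annotator scores into a single ground-truth value.

    Scores diverging from the mean by more than `max_divergence` units
    are discarded and the mean is recomputed without them, per the rule
    described in the dataset card. Returns None if no score survives
    (the card reports 45 pairs discarded for excessive disagreement).
    """
    mean = sum(scores) / len(scores)
    kept = [s for s in scores if abs(s - mean) <= max_divergence]
    if not kept:
        return None  # pair discarded: annotators disagreed too much
    return sum(kept) / len(kept)

# Example: one annotator diverges by 1.5 units from the mean of 2.5,
# so their score is dropped and the average is recomputed.
print(ground_truth([1, 3, 3, 3]))  # → 3.0
```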