author stringlengths 2 29 ⌀ | cardData null | citation stringlengths 0 9.58k ⌀ | description stringlengths 0 5.93k ⌀ | disabled bool 1 class | downloads float64 1 1M ⌀ | gated bool 2 classes | id stringlengths 2 108 | lastModified stringlengths 24 24 | paperswithcode_id stringlengths 2 45 ⌀ | private bool 2 classes | sha stringlengths 40 40 | siblings list | tags list | readme_url stringlengths 57 163 | readme stringlengths 0 977k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
naver-clova-conversation-ul | null | null | null | false | 10 | false | naver-clova-conversation-ul/klue-tc-dev | 2021-05-26T08:57:19.000Z | null | false | db43d6f16deec9fbcda9919d25c4f56018175cc9 | [] | [] | https://huggingface.co/datasets/naver-clova-conversation-ul/klue-tc-dev/resolve/main/README.md | This is an in-house development version of the KLUE Topic Classification benchmark, as the test split is not released by the KLUE team.
We randomly split the original validation set (9,107 instances) into in-house validation set (5,107 instances) and the in-house test set (4,000 instances).
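The random split above can be sketched as follows (the helper name and seed are assumptions for illustration, not the authors' actual code):

```python
import random

def split_validation(instances, test_size=4000, seed=0):
    """Shuffle and split instances into an in-house validation set and an in-house test set."""
    rng = random.Random(seed)
    shuffled = list(instances)
    rng.shuffle(shuffled)
    # Everything after the first `test_size` items becomes the validation set.
    return shuffled[test_size:], shuffled[:test_size]

# With the 9,107 original KLUE validation instances this yields 5,107 / 4,000.
val_set, test_set = split_validation(range(9107))
print(len(val_set), len(test_set))
```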
|
navjordj | null | null | null | false | 2 | false | navjordj/nak_nb | 2021-11-18T18:39:13.000Z | null | false | 5a918d7d41b3cdad342d07f2b58d5b87e73c9e37 | [] | [] | https://huggingface.co/datasets/navjordj/nak_nb/resolve/main/README.md | Norsk Avis Korpus 2012-2019
* https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-4/
Extracted the articles in Bokmål.
Parsed the XML and extracted all the text inside `<p>` tags.
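A minimal sketch of that extraction step, using the standard library XML parser (the article markup shown is a hypothetical example, not the corpus' real schema):

```python
import xml.etree.ElementTree as ET

def extract_paragraphs(xml_string):
    """Return the text content of every <p> element in an article's XML."""
    root = ET.fromstring(xml_string)
    return [p.text.strip() for p in root.iter("p") if p.text]

article = "<article><p>Første avsnitt.</p><p>Andre avsnitt.</p></article>"
print(extract_paragraphs(article))
```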
|
ncats | null | John JN, Sid E, Zhu Q. Recurrent Neural Networks to Automatically Identify Rare Disease Epidemiologic Studies from PubMed. AMIA Jt Summits Transl Sci Proc. 2021 May 17;2021:325-334. PMID: 34457147; PMCID: PMC8378621. | INSERT DESCRIPTION | false | 1 | false | ncats/EpiSet4BinaryClassification | 2022-10-25T09:51:14.000Z | glue | false | 956069bd8059d11100450dad2b8d3ec2a0f558d6 | [] | [
"annotations_creators:unknown",
"language_creators:unknown",
"language:en",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:unknown"
] | https://huggingface.co/datasets/ncats/EpiSet4BinaryClassification/resolve/main/README.md | ---
annotations_creators:
- unknown
language_creators:
- unknown
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- unknown
paperswithcode_id: glue
pretty_name: GLUE (General Language Understanding Evaluation benchmark)
---
# DOCUMENTATION UPDATES IN PROGRESS! IGNORE BELOW
# Dataset Card for GLUE
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://nyu-mll.github.io/CoLA/](https://nyu-mll.github.io/CoLA/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 955.33 MB
- **Size of the generated dataset:** 229.68 MB
- **Total amount of disk used:** 1185.01 MB
### Dataset Summary
GLUE, the General Language Understanding Evaluation benchmark (https://gluebenchmark.com/) is a collection of resources for training, evaluating, and analyzing natural language understanding systems.
### Supported Tasks and Leaderboards
The leaderboard for the GLUE benchmark can be found [at this address](https://gluebenchmark.com/). It comprises the following tasks:
#### cola
The Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence.
### Languages
The language data is in English.
## Dataset Structure
### Data Instances
#### cola
- **Size of downloaded dataset files:** 0.36 MB
- **Size of the generated dataset:** 0.58 MB
- **Total amount of disk used:** 0.94 MB
An example of 'train' looks as follows.
```
{
"sentence": "Our friends won't buy this analysis, let alone the next one we propose.",
"label": 1,
  "idx": 0
}
```
### Data Fields
The data fields are the same among all splits.
#### cola
- `sentence`: a `string` feature.
- `label`: a classification label, with possible values including `unacceptable` (0), `acceptable` (1).
- `idx`: an `int32` feature.
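Mapping those label ids back to names can be sketched as (a minimal illustration based on the field description above):

```python
# Label ids for cola, per the field description above.
COLA_LABELS = {0: "unacceptable", 1: "acceptable"}

def label_name(label_id):
    """Translate an integer label into its human-readable name."""
    return COLA_LABELS[label_id]

print(label_name(1))
```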
### Data Splits
|train|validation|test|
|----:|---------:|---:|
| 8551| 1043|1063|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
Rare Disease Curators from the [National Institutes of Health (NIH) Genetic and Rare Diseases Information Center (GARD)](https://rarediseases.info.nih.gov/)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
Rare Disease Curators from the [National Institutes of Health (NIH) Genetic and Rare Diseases Information Center (GARD)](https://rarediseases.info.nih.gov/)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{john2021recurrent,
title={Recurrent Neural Networks to Automatically Identify Rare Disease Epidemiologic Studies from PubMed},
author={John, Jennifer N and Sid, Eric and Zhu, Qian},
booktitle={AMIA Annual Symposium Proceedings},
volume={2021},
pages={325},
year={2021},
organization={American Medical Informatics Association}
}
```
### Contributions
Thanks to [@patpizio](https://github.com/patpizio), [@jeswan](https://github.com/jeswan), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
```
from datasets import load_dataset
epiclassify = load_dataset("ncats/EpiSet4BinaryClassification")
``` |
ncats | null | *REDO*
@inproceedings{wang2019crossweigh,
title={CrossWeigh: Training Named Entity Tagger from Imperfect Annotations},
author={Wang, Zihan and Shang, Jingbo and Liu, Liyuan and Lu, Lihao and Liu, Jiacheng and Han, Jiawei},
booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
pages={5157--5166},
year={2019}
} | **REWRITE*
EpiSet4NER is a dataset generated from 620 rare disease abstracts labeled using statistical and rule-based methods. The test set was then manually corrected by a rare disease expert.
For more details see *INSERT PAPER* and https://github.com/ncats/epi4GARD/tree/master/EpiExtract4GARD#epiextract4gard | false | 1 | false | ncats/EpiSet4NER-v1 | 2022-09-20T14:08:28.000Z | null | false | 5f626f0ef45a3f8f0f0c1224cdb82b596c540d3b | [] | [
"language_creators:found",
"language:en",
"license:other",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"task_ids:named-entity-recognition"
] | https://huggingface.co/datasets/ncats/EpiSet4NER-v1/resolve/main/README.md | ---
annotations_creators:
- train: programmatically-generated
- val: programmatically-generated
- test: programmatically-generated, expert-validated
language_creators:
- found
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
task_categories:
- structure-prediction
task_ids:
- named-entity-recognition
---
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Github](https://github.com/ncats/epi4GARD/tree/master/EpiExtract4GARD#epiextract4gard)
- **Paper:** Pending
### Dataset Summary
EpiSet4NER is a bronze-standard dataset for epidemiological entity recognition of location, epidemiologic types (e.g. "prevalence", "annual incidence", "estimated occurrence"), and epidemiological rates (e.g. "1.7 per 1,000,000 live births", "2.1:1.000.000", "one in five million", "0.03%") created by the [Genetic and Rare Diseases Information Center (GARD)](https://rarediseases.info.nih.gov/), a program in [the National Center for Advancing Translational Sciences](https://ncats.nih.gov/), one of the 27 [National Institutes of Health](https://www.nih.gov/). It was labeled programmatically using spaCy NER and rule-based methods. This weakly-supervised teaching method allowed us to construct this imprecise dataset with minimal manual effort and achieve satisfactory performance on a multi-type token classification problem. The test set was manually corrected by 3 NCATS researchers and a GARD curator (genetic and rare disease expert). It was used to train [EpiExtract4GARD](https://huggingface.co/ncats/EpiExtract4GARD), a BioBERT-based model fine-tuned for NER.
An [example](https://pubmed.ncbi.nlm.nih.gov/24237863/) of 'train' looks as follows.
```
{
"id": "333",
"tokens": ['Conclusions', 'The', 'birth', 'prevalence', 'of', 'CLD', 'in', 'the', 'northern', 'Netherlands', 'was', '21.1/10,000', 'births', '.'],
"ner_tags": [0, 0, 0, 3, 0, 0, 0, 0, 0, 1, 0, 5, 6, 0],
}
```
### Data Fields
The data fields are the same among all splits.
- `id`: a `string` feature that indicates sentence number.
- `tokens`: a `list` of `string` features.
- `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-LOC` (1), `I-LOC` (2), `B-EPI` (3), `I-EPI` (4),`B-STAT` (5),`I-STAT` (6).
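Decoding the integer `ner_tags` back into their IOB2 label strings can be sketched as follows (the helper name is an assumption; the id-to-label order follows the field description above):

```python
# Tag ids follow the order given in the field description above.
ID2LABEL = ["O", "B-LOC", "I-LOC", "B-EPI", "I-EPI", "B-STAT", "I-STAT"]

def decode_tags(ner_tags):
    """Map integer ner_tags back to their IOB2 label strings."""
    return [ID2LABEL[tag] for tag in ner_tags]

# The 'train' example above decodes to:
tags = decode_tags([0, 0, 0, 3, 0, 0, 0, 0, 0, 1, 0, 5, 6, 0])
print(tags)
```

Here `21.1/10,000 births` decodes to the two-token span `B-STAT I-STAT`, and `prevalence` and `Netherlands` to `B-EPI` and `B-LOC` respectively.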
### Data Splits
|name |train |validation|test|
|---------|-----:|----:|----:|
|EpiSet \# of abstracts|456|114|50|
|EpiSet \# of tokens |117888|31262|13910|
## Dataset Creation

*Figure 1:* Creation of EpiSet4NER by NIH/NCATS
Comparing the programmatically labeled test set to the manually corrected test set allowed us to measure the precision, recall, and F1 of the programmatic labeling.
*Table 1:* Programmatic labeling of EpiSet4NER
| Evaluation Level | Entity | Precision | Recall | F1 |
|:----------------:|:------------------------:|:---------:|:------:|:-----:|
| Entity-Level | Overall | 0.559 | 0.662 | 0.606 |
| | Location | 0.597 | 0.661 | 0.627 |
| | Epidemiologic Type | 0.854 | 0.911 | 0.882 |
| | Epidemiologic Rate | 0.175 | 0.255 | 0.207 |
| Token-Level | Overall | 0.805 | 0.710 | 0.755 |
| | Location | 0.868 | 0.713 | 0.783 |
| | Epidemiologic Type | 0.908 | 0.908 | 0.908 |
| | Epidemiologic Rate | 0.739 | 0.645 | 0.689 |
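A token-level evaluation of the kind reported in Table 1 can be sketched as follows. This is a simplified illustration of comparing programmatic tags against corrected gold tags, not the authors' exact evaluation script:

```python
def token_level_scores(gold, pred):
    """Token-level precision/recall/F1 over non-'O' tags.

    A token counts as a true positive when the gold and predicted tags
    match and are not 'O'; mismatches are counted as false positives
    (spurious prediction) and false negatives (missed gold tag).
    """
    tp = sum(1 for g, p in zip(gold, pred) if g == p and g != "O")
    fp = sum(1 for g, p in zip(gold, pred) if p != "O" and g != p)
    fn = sum(1 for g, p in zip(gold, pred) if g != "O" and g != p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = ["O", "B-LOC", "I-LOC", "O", "B-STAT"]
pred = ["O", "B-LOC", "O", "B-EPI", "B-STAT"]
print(token_level_scores(gold, pred))
```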
An example of the text labeling:

*Figure 2:* Text Labeling using spaCy and rule-based labeling. Ideal labeling is bolded on the left. Actual programmatic output is on the right. [\[Figure citation\]](https://pubmed.ncbi.nlm.nih.gov/33649778/)
### Curation Rationale
To train ML/DL models that automate the process of rare disease epidemiological curation. This is crucial information for patients & families, researchers, grantors, and policy makers, primarily for funding purposes.
### Source Data
620 rare disease abstracts classified as epidemiological by an LSTM RNN rare disease epi classifier, drawn from 488 diseases. See Figure 1.
#### Initial Data Collection and Normalization
A random sample of 500 disease names was gathered from a list of ~6061 rare diseases tracked by GARD until ≥50 abstracts had been returned for each disease or the EBI RESTful API results were exhausted. Though we retrieved ~25,000 abstracts from PubMed's database, only 7699 unique abstracts were returned for 488 diseases. Of those 7699 abstracts, only 620 were classified as epidemiological by the LSTM RNN epidemiological classifier.
### Annotations
#### Annotation process
Programmatic labeling. See [here](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/create_labeled_dataset_V2.ipynb) and then [here](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/modify_existing_labels.ipynb). The test set was manually corrected after creation.
#### Who are the annotators?
Programmatic labeling was done by [@William Kariampuzha](https://github.com/wzkariampuzha), one of the NCATS researchers.
The test set was manually corrected by 2 more NCATS researchers and a GARD curator (genetic and rare disease expert).
### Personal and Sensitive Information
None. These are freely available abstracts from PubMed.
## Considerations for Using the Data
### Social Impact of Dataset
Assisting the 25-30 million Americans with rare diseases. It can additionally be useful for Orphanet or CDC researchers/curators.
### Discussion of Biases and Limitations
- There were errors in the source file that contained rare disease synonyms of names, which may have led to some unrelated abstracts being included in the training, validation, and test sets.
- The abstracts were gathered through the EBI API and are thus subject to any biases that the EBI API had. The NCBI API returns very different results, as shown by an API analysis here.
- The [long short-term memory recurrent neural network epi classifier](https://pubmed.ncbi.nlm.nih.gov/34457147/) was used to sift the 7699 rare disease abstracts. This model had a hold-out validation F1 score of 0.886 and a test F1 (which was compared against a GARD curator who used full-text articles to determine truth-value of epidemiological abstract) of 0.701. With 620 epi abstracts filtered from 7699 original rare disease abstracts, there are likely several false positives and false negative epi abstracts.
- Tokenization was done by spaCy which may be a limitation (or not) for current and future models trained on this set.
- The programmatic labeling was very imprecise as seen by Table 1. This is likely the largest limitation of the [BioBERT-based model](https://huggingface.co/ncats/EpiExtract4GARD) trained on this set.
- The test set was difficult to validate even for general NCATS researchers, which is why we relied on a rare disease expert to verify our modifications. Because identifying epidemiological information is quite difficult for non-expert humans, this set (and especially a possible future gold-standard dataset) represents a challenging benchmark for NLP systems, particularly those focused on numeracy.
## Additional Information
### Dataset Curators
[NIH GARD](https://rarediseases.info.nih.gov/about-gard/pages/23/about-gard)
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@William Kariampuzha](https://github.com/wzkariampuzha) at NCATS/Axle Informatics for adding this dataset. |
ncduy | null | null | null | false | 5 | false | ncduy/mt-en-vi | 2022-10-22T15:08:45.000Z | null | false | 7cd5eb8359df27b6e68c9400642764ff07e89e3b | [] | [
"annotations_creators:found",
"language_creators:found",
"language:en",
"language:vi",
"license:mit",
"multilinguality:translation",
"size_categories:1M<n<10M",
"source_datasets:own",
"source_datasets:open_subtitles",
"source_datasets:tatoeba",
"source_datasets:opus_tedtalks",
"source_datasets... | https://huggingface.co/datasets/ncduy/mt-en-vi/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
- vi
license:
- mit
multilinguality:
- translation
pretty_name: "Machine Translation Paired English-Vietnamese Sentences"
size_categories:
- 1M<n<10M
source_datasets:
- own
- open_subtitles
- tatoeba
- opus_tedtalks
- qed_amara
- opus_wikipedia
task_categories:
- conditional-text-generation
task_ids:
- machine-translation
---
# Dataset Card for Machine Translation Paired English-Vietnamese Sentences
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The languages of the dataset are English (`en`) and Vietnamese (`vi`).
## Dataset Structure
### Data Instances
An instance example:
```
{
'en': 'And what I think the world needs now is more connections.',
'vi': 'Và tôi nghĩ điều thế giới đang cần bây giờ là nhiều sự kết nối hơn.',
'source': 'TED2020 v1'
}
```
### Data Fields
- `en` (str): English sentence
- `vi` (str): Vietnamese sentence
- `source` (str): name of the source corpus the pair was drawn from.
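A minimal sketch of filtering parallel pairs by their sub-corpus, using the instance shape documented above (the example pairs and the second `source` value are illustrative assumptions):

```python
pairs = [
    {"en": "Hello.", "vi": "Xin chào.", "source": "TED2020 v1"},
    {"en": "Thank you.", "vi": "Cảm ơn.", "source": "OpenSubtitles"},
]

def by_source(examples, source_name):
    """Keep only the parallel examples coming from a given sub-corpus."""
    return [ex for ex in examples if ex["source"] == source_name]

print(len(by_source(pairs, "TED2020 v1")))
```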
### Data Splits
The dataset is split into train, validation and test sets.
|                    | Train | Validation | Test |
|--------------------|------:|-----------:|-----:|
| Number of examples |2884451| 11316| 11225|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@ncduy0303](https://github.com/ncduy0303) for adding this dataset. |
neelalex | null | \\n@InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | \\nThis dataset contains a corpus of AI papers. The first task is to determine\\n whether or not a datapoint is an AI safety paper. The second task is to\\n determine what type of paper it is. | false | 2 | false | neelalex/raft-predictions | 2021-08-04T22:25:12.000Z | null | false | cd74da4bdfd725d7c21a8ab8d7a128097e0d3771 | [] | [
"benchmark:raft"
] | https://huggingface.co/datasets/neelalex/raft-predictions/resolve/main/README.md | ---
benchmark: raft
---
# Dummy predictions for RAFT |
nferruz | null | null | null | false | 2 | false | nferruz/UR50_2021_04 | 2022-07-22T13:44:04.000Z | null | false | 603e7e491c203758566b0c7d264ba93f8bd72810 | [] | [
"size_categories:unknown"
] | https://huggingface.co/datasets/nferruz/UR50_2021_04/resolve/main/README.md | ---
YAML tags:
annotations_creators: []
language_creators: []
language: []
license: []
multilinguality: []
pretty_name: ''
size_categories:
- unknown
source_datasets: []
task_categories: []
task_ids: []
---
# Dataset Card for UR50_2021_04
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
https://ftp.uniprot.org/pub/databases/uniprot/uniref/uniref50/
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.uniprot.org/
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Uniref50 (UR50) dataset version 2021/04 is a biological dataset taken from the Uniprot database: https://www.uniprot.org/
### Supported Tasks and Leaderboards
The UR50 dataset contains 48 million protein sequences. It is a useful dataset for training protein language models.
### Languages
Proteins
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
Train, validation
## Dataset Creation
### Curation Rationale
FASTA headers were replaced with an `<endoftext>` tag.
The dataset was tokenized using BPE and split into train and validation sets (90/10 ratio), choosing random sequences for the latter.
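The header substitution step can be sketched as follows. The helper name and example sequences are illustrative, and the exact tag string used by the authors is an assumption based on the description above:

```python
def strip_fasta_headers(fasta_text, tag="<endoftext>"):
    """Replace every FASTA header line ('>...') with an end-of-text tag."""
    return "\n".join(
        tag if line.startswith(">") else line
        for line in fasta_text.splitlines()
    )

fasta = ">sp|P12345|EXAMPLE\nMKV\n>sp|P67890|OTHER\nGAT"
print(strip_fasta_headers(fasta))
```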
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
UniProt
### Annotations
#### Annotation process
UniProt contains annotations but no labels/annotations were used for this dataset.
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Citation Information
### Contributions
Thanks to UniProt for curating this dataset. https://www.uniprot.org/
|
nickmuchi | null | null | null | false | 47 | false | nickmuchi/financial-classification | 2022-10-24T01:05:49.000Z | null | false | 3212e172737ca693859a84b347f4606860782b46 | [] | [
"annotations_creators:expert-generated",
"language_creators:found",
"language:en",
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:sentiment-classification"
] | https://huggingface.co/datasets/nickmuchi/financial-classification/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
task_categories:
- text-classification
task_ids:
- multi-class-classification
- sentiment-classification
train-eval-index:
- config: sentences_50agree
  task: text-classification
  task_ids: multi_class_classification
  splits:
    eval_split: train
  col_mapping:
    sentence: text
    label: target
---
## Dataset Creation
This [dataset](https://huggingface.co/datasets/nickmuchi/financial-classification) combines financial phrasebank dataset and a financial text dataset from [Kaggle](https://www.kaggle.com/datasets/percyzheng/sentiment-classification-selflabel-dataset).
Given that the financial phrasebank dataset does not have a validation split, I thought this might help to validate finance models and also capture the impact of COVID on financial earnings through the more recent Kaggle dataset.
nid989 | null | null | null | false | 2 | false | nid989/FNC-1 | 2021-12-27T11:04:06.000Z | null | false | 485bc494bfc0f2bc3c0a878c84a83fc1f156d8b2 | [] | [] | https://huggingface.co/datasets/nid989/FNC-1/resolve/main/README.md | ### Dataset Summary
The data consists of (headline, body, stance) instances, where the stance is one of {unrelated, discuss, agree, disagree}.
**Input**
* A headline and a body text - either from the same news article or from two different articles.
**Output**
* Classify the stance of the body text relative to the claim made in the headline into one of four categories:
* Agrees: The body text agrees with the headline.
* Disagrees: The body text disagrees with the headline.
    * Discusses: The body text discusses the same topic as the headline, but does not take a position.
    * Unrelated: The body text discusses a different topic than the headline.
The distribution of Stance classes in the entire dataset is as follows:
| rows | unrelated | discuss | agree | disagree |
|---------|-----------|---------|-----------|----------- |
| 49972 | 0.73131 | 0.17828 | 0.0736012 | 0.016809 |
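The class fractions above can be recomputed from the raw stance labels with a sketch like this (the helper name is an assumption; the tiny input list is illustrative):

```python
from collections import Counter

def stance_distribution(stances):
    """Fraction of each stance label across all (headline, body, stance) instances."""
    counts = Counter(stances)
    total = len(stances)
    return {label: n / total for label, n in counts.items()}

print(stance_distribution(["unrelated", "unrelated", "discuss", "agree"]))
```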
### Source Data
[FNC-1 Official webpage.](http://www.fakenewschallenge.org/)
- annotations_creators: found
- language_creators: found
- languages: en-US
- licenses: apache-2.0
- multilingualism: monolingual
- pretty_name: FNC-1
- size_categories: unknown
- source_datasets: original
- task_categories:text-classification
- task_ids
- multi-class-classification
- natural-language-inference
- multi-label-classification
- intent-classification |
nielsr | null | @article{Jaume2019FUNSDAD,
title={FUNSD: A Dataset for Form Understanding in Noisy Scanned Documents},
author={Guillaume Jaume and H. K. Ekenel and J. Thiran},
journal={2019 International Conference on Document Analysis and Recognition Workshops (ICDARW)},
year={2019},
volume={2},
pages={1-6}
} | https://guillaumejaume.github.io/FUNSD/ | false | 235 | false | nielsr/FUNSD_layoutlmv2 | 2022-10-25T09:51:20.000Z | funsd | false | 1941df61807cb4a99712d25115704fda9a0f8b25 | [] | [
"arxiv:1905.13538",
"language:en"
] | https://huggingface.co/datasets/nielsr/FUNSD_layoutlmv2/resolve/main/README.md | ---
language:
- en
paperswithcode_id: funsd
---
# Dataset Card for "FUNSD"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
The [FUNSD](https://guillaumejaume.github.io/FUNSD/) dataset, with one difference compared to the original dataset: each document image is resized to 224x224.
The FUNSD dataset is a collection of annotated forms.
This dataset loading script is taken from the [official LayoutLMv2 implementation](https://github.com/microsoft/unilm/blob/master/layoutlmft/layoutlmft/data/datasets/funsd.py), and updated to not include any Detectron2 dependencies.
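Because the page images are resized, word bounding boxes must be rescaled to the same 224x224 canvas. A minimal sketch of that proportional rescaling (the `(x0, y0, x1, y1)` box format and helper name are assumptions for illustration, not the loading script's exact code):

```python
def scale_box(box, orig_width, orig_height, size=224):
    """Rescale an (x0, y0, x1, y1) word box after the page image is resized to size x size."""
    x0, y0, x1, y1 = box
    return (round(x0 * size / orig_width),
            round(y0 * size / orig_height),
            round(x1 * size / orig_width),
            round(y1 * size / orig_height))

# A box on a 1000x1000 page maps onto the 224x224 canvas:
print(scale_box((100, 200, 500, 400), 1000, 1000))
```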
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
We show detailed information for up to 5 configurations of the dataset.
### Data Instances
#### conll2000
- **Size of downloaded dataset files:** 3.32 MB
- **Size of the generated dataset:** 6.25 MB
- **Total amount of disk used:** 9.57 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"chunk_tags": [11, 13, 11, 12, 21, 22, 22, 22, 22, 11, 12, 12, 17, 11, 12, 13, 11, 0, 1, 13, 11, 11, 0, 21, 22, 22, 11, 12, 12, 13, 11, 12, 12, 11, 12, 12, 0],
"id": "0",
"pos_tags": [19, 14, 11, 19, 39, 27, 37, 32, 34, 11, 15, 19, 14, 19, 22, 14, 20, 5, 15, 14, 19, 19, 5, 34, 32, 34, 11, 15, 19, 14, 20, 9, 20, 24, 15, 22, 6],
"tokens": "[\"Confidence\", \"in\", \"the\", \"pound\", \"is\", \"widely\", \"expected\", \"to\", \"take\", \"another\", \"sharp\", \"dive\", \"if\", \"trade\", \"figur..."
}
```
### Data Fields
The data fields are the same among all splits.
### Data Splits
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{DBLP:journals/corr/abs-1905-13538,
author = {Guillaume Jaume and
Hazim Kemal Ekenel and
Jean{-}Philippe Thiran},
title = {{FUNSD:} {A} Dataset for Form Understanding in Noisy Scanned Documents},
journal = {CoRR},
volume = {abs/1905.13538},
year = {2019},
url = {http://arxiv.org/abs/1905.13538},
archivePrefix = {arXiv},
eprint = {1905.13538},
timestamp = {Mon, 03 Jun 2019 13:42:33 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1905-13538.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@vblagoje](https://github.com/vblagoje), [@jplu](https://github.com/jplu) for adding this dataset. |
nlpufg | null | null | null | false | 1 | false | nlpufg/brwac-pt | 2021-07-12T23:25:24.000Z | null | false | e4806d1eaf1dcab208ac30a6c921c710bc104374 | [] | [] | https://huggingface.co/datasets/nlpufg/brwac-pt/resolve/main/README.md | preprocessed removing mojibake texts |
nlpufg | null | null | null | false | 2 | false | nlpufg/oscar-pt | 2021-07-12T23:26:15.000Z | null | false | 73c8546913e7e34d1378c8ac74795539d55aa837 | [] | [] | https://huggingface.co/datasets/nlpufg/oscar-pt/resolve/main/README.md | preprocessed removing mojibake texts |
nlpyeditepe | null | null | null | false | 2 | false | nlpyeditepe/tr-qnli | 2022-07-01T15:28:44.000Z | null | false | 5331951940a16c65ff8a1bfaff0724d2944065a9 | [] | [
"annotations_creators:found",
"language_creators:machine-generated",
"language:tr-TR",
"license:mit",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|glue",
"task_categories:text-classification",
"task_ids:natural-language-inference"
] | https://huggingface.co/datasets/nlpyeditepe/tr-qnli/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- machine-generated
language:
- tr-TR
license:
- mit
multilinguality:
- monolingual
pretty_name: QNLI for Turkish
size_categories:
- unknown
source_datasets:
- extended|glue
task_categories:
- text-classification
task_ids:
- natural-language-inference
--- |
nlpyeditepe | null | null | null | false | 1 | false | nlpyeditepe/tr_rte | 2022-07-01T15:28:27.000Z | null | false | 0996e5fe4b8d41d9ce4c899a0f04d2f801f1fc33 | [] | [
"annotations_creators:found",
"language_creators:machine-generated",
"language:tr-TR",
"license:mit",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|glue",
"task_categories:text-classification",
"task_ids:natural-language-inference"
] | https://huggingface.co/datasets/nlpyeditepe/tr_rte/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- machine-generated
language:
- tr-TR
license:
- mit
multilinguality:
- monolingual
pretty_name: RTE for Turkish
size_categories:
- unknown
source_datasets:
- extended|glue
task_categories:
- text-classification
task_ids:
- natural-language-inference
--- |
nntadotzip | null | null | null | false | 2 | false | nntadotzip/iuQAchatbot | 2022-01-20T07:25:26.000Z | null | false | f9172c55616145ca31d33872003249043c8f805c | [] | [] | https://huggingface.co/datasets/nntadotzip/iuQAchatbot/resolve/main/README.md | annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- found
languages:
- en
licenses:
- cc-by-4.0
multilinguality:
- monolingual
paperswithcode_id: squad
pretty_name: SQuAD
size_categories:
- 10K<n<100K
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa |
notional | null | null | null | false | 1 | false | notional/notional-python | 2022-10-21T13:39:56.000Z | null | false | 65563031418b17855bf1f0c5252faa7c674109f0 | [] | [
"annotations_creators:no-annotation",
"language:py",
"language_creators:found",
"license:unknown",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_ids:language-modeling",
"task_ids:code-generation"
] | https://huggingface.co/datasets/notional/notional-python/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language:
- py
language_creators:
- found
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- code-generation
- conditional-text-generation
task_ids:
- language-modeling
- code-generation
---
# Dataset Card for notional-python
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://notional.ai/
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The Notional-python dataset contains Python code files from 100 well-known repositories gathered from the Google BigQuery GitHub dataset. The dataset was created to test the code-generation ability of programming language models.
Follow [our repo]() to run the model evaluation using the notional-python dataset.
### Languages
Python
## Dataset Creation
### Curation Rationale
Notional-python was built to provide a dataset for testing the ability of machines to generate Python code.
### Source Data
#### Initial Data Collection and Normalization
The data was obtained by filtering code from [Google Bigquery Github data](https://cloud.google.com/blog/topics/public-datasets/github-on-bigquery-analyze-all-the-open-source-code)
In order to improve the quality of the dataset, only Python code files that meet the conditions below were added to the dataset:
- Code with more than 60% executable lines
- Code with logic, not config files or comment-only files
- Code with no more than 30% attribute-declaration lines (e.g., some files contain only class names and their class attributes, usually used for project configuration; these files were not selected)
- Code without `TODO` and `FIXME`.
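The file-level heuristics above could be sketched roughly as follows. Only the executable-line ratio and the `TODO`/`FIXME` checks are shown; the line classification is a crude stand-in for the real filtering pipeline (which is not published here), and the helper name is made up:

```python
def passes_filters(code: str) -> bool:
    """Crude stand-in for two of the card's conditions:
    >60% executable lines, and no TODO/FIXME markers."""
    if "TODO" in code or "FIXME" in code:
        return False
    lines = [ln.strip() for ln in code.splitlines() if ln.strip()]
    if not lines:
        return False
    # treat non-comment, non-empty lines as "executable"
    executable = sum(1 for ln in lines if not ln.startswith("#"))
    return executable / len(lines) > 0.60

good = "def add(a, b):\n    return a + b\n"
bad = "# TODO: implement\ndef add(a, b):\n    pass\n"
print(passes_filters(good), passes_filters(bad))  # True False
```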
#### Who are the source language producers?
The producers are users of GitHub.
|
nucklehead | null | null | null | false | 2 | false | nucklehead/ht-voice-dataset | 2021-04-14T12:34:47.000Z | null | false | 7314a5b74f065dca9d690a5494d3145b531f1d85 | [] | [] | https://huggingface.co/datasets/nucklehead/ht-voice-dataset/resolve/main/README.md | # Creole voice dataset for training DeepSpeech.
This dataset contains more than 7 hours of voice recordings from nearly 100 speakers of Haitian Creole, for building ASR and TTS systems for the Creole language.
Most of the data comes from the "CMU Haitian Creole Speech Recognition Database".
The data has already been filtered and organized so it can be used to train Mozilla's DeepSpeech model.
If you need more information about how the data is organized and what you can do with it, check the [DeepSpeech Readme](https://deepspeech.readthedocs.io/en/r0.9/TRAINING.html).
|
oelkrise | null | null | null | false | 2 | false | oelkrise/CRT | 2021-03-28T15:32:38.000Z | null | false | a5c0dce3bed22d5a2fffb5a26b6f9c349e6b8f6c | [] | [] | https://huggingface.co/datasets/oelkrise/CRT/resolve/main/README.md | egrfdfaffd |
omar-sharif | null | null | null | false | 1 | false | omar-sharif/BAD-Bengali-Aggressive-Text-Dataset | 2022-02-24T15:42:02.000Z | null | false | 2b4f7179a68a023d210c30b5b093b178b5686760 | [] | [] | https://huggingface.co/datasets/omar-sharif/BAD-Bengali-Aggressive-Text-Dataset/resolve/main/README.md | ## Novel Aggressive Text Dataset in Bengali
## Tackling Cyber-Aggression: Identification and Fine-Grained Categorization of Aggressive Texts on Social Media using Weighted Ensemble of Transformers
**Author:** Omar Sharif and Mohammed Moshiul Hoque
**Related Papers:**
[Paper1 in Neurocomputing Journal](https://www.sciencedirect.com/science/article/abs/pii/S0925231221018567)
[Paper2 in CONSTRAINT@AAAI-2021](https://link.springer.com/chapter/10.1007%2F978-3-030-73696-5_2)
[Paper3 in LTEDI@EACL-2021](https://link.springer.com/chapter/10.1007%2F978-3-030-73696-5_2)
## Abstract
The pervasiveness of aggressive content in social media has become a serious concern for government organizations and tech companies because of its pernicious societal effects. In recent years, social media has been repeatedly used as a tool to incite communal aggression, spread distorted propaganda, damage social harmony and demean the identity of individuals or a community in the public spaces. Therefore, restraining the proliferation of aggressive content and detecting them has become an urgent duty. Studies of the identification of aggressive content have mostly been done for English and other resource-high languages. Automatic systems developed for those languages can not accurately identify detrimental contents written in regional languages like Bengali. To compensate this insufficiency, this work presents a novel Bengali aggressive text dataset (called ‘BAD’) with two-level annotation. In level-A, 14158 texts are labeled as either aggressive or non-aggressive. While in level-B, 6807 aggressive texts are categorized into religious, political, verbal and gendered aggression classes each having 2217, 2085, 2043 and 462 texts respectively. This paper proposes a weighted ensemble technique including m-BERT, distil-BERT, Bangla-BERT and XLM-R as the base classifiers to identify and classify the aggressive texts in Bengali. The proposed model can readdress the softmax probabilities of the participating classifiers depending on their primary outcomes. This weighting technique has enabled the model to outdoes the simple average ensemble and all other machine learning (ML), deep learning (DL) baselines. It has acquired the highest weighted f1-score of 93.43% in the identification task and 93.11% in the categorization task.
## Contribution
Major contributions of this work can be illustrated in the following:
- Dataset: present a new Bengali aggressive text dataset which contains 6807 aggressive and 7351 non-aggressive texts. Furthermore, by employing a hierarchical annotation schema, aggressive texts are annotated into religious, political, verbal and gendered aggression classes.
- Insights: provide useful insights and detailed statistics of the data that ensure the quality of the dataset.
- Model: develop a weighted ensemble model using m-BERT, distil-BERT, Bangla-BERT, XLM-R to identify and categorize aggressive Bengali texts. The proposed model emphasizes the participating classifiers' softmax probabilities based on their previous performance on the dataset. This weighting technique outperforms the simple average ensemble approach and enhances the classifier performance in the developed dataset.
- Benchmarking: investigate and compare the performance of the proposed model with other ML, DL baselines and existing techniques, thus setting up a benchmark work to compare in the future.
- Error analysis: deeply analyze the results and errors of the proposed model. Presents qualitative and quantitative analysis that shed light on the reasons behind some of the errors and provide a few directions that might help to mitigate the system's deficiency.
To the best of our exploration, this research is one of the pioneering works aiming to identify and classify aggressive texts in Bengali. We expect that the resources developed in this work will pave the way for researchers working on aggressive text classification in Bengali.
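The weighted-ensemble idea described above, combining each transformer's softmax output with a performance-based weight, can be sketched in a few lines. The weights and probabilities below are illustrative only, not the values derived in the paper:

```python
def weighted_ensemble(prob_lists, weights):
    """Combine per-classifier softmax outputs with performance-based
    weights and return the index of the winning class."""
    total = sum(weights)
    n_classes = len(prob_lists[0])
    combined = [
        sum(w * probs[c] for probs, w in zip(prob_lists, weights)) / total
        for c in range(n_classes)
    ]
    return combined.index(max(combined))

# three classifiers, three classes (e.g. religious / political / verbal)
probs = [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.4, 0.4, 0.2]]
print(weighted_ensemble(probs, [0.5, 0.3, 0.2]))  # 0
```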
## Acknowledgements
We sincerely acknowledge the anonymous reviewers for their insightful suggestions, which helped to improve the work. This work was supported by the ICT Innovation Fund, ICT Division and Directorate of Research & Extension, CUET. Thanks to [Prof. Dr. Mohammed Moshiul Hoque](https://www.researchgate.net/profile/Moshiul_Hoque) sir for his valuable guidance.
## Cite this work
If you find this repository helpful in your work please cite the following
```
@article{SHARIF2021,
title = {Tackling Cyber-Aggression: Identification and Fine-Grained Categorization of Aggressive Texts on Social Media using Weighted Ensemble of Transformers},
journal = {Neurocomputing},
year = {2021},
issn = {0925-2312},
doi = {https://doi.org/10.1016/j.neucom.2021.12.022},
url = {https://www.sciencedirect.com/science/article/pii/S0925231221018567},
author = {Omar Sharif and Mohammed Moshiul Hoque},
keywords = {Natural language processing, Aggressive text classification, Low resource language, Bengali aggressive text corpus, Deep learning, Transformers, Ensemble},
abstract = {The pervasiveness of aggressive content in social media has become a serious concern for government organizations and tech companies because of its pernicious societal effects. In recent years, social media has been repeatedly used as a tool to incite communal aggression, spread distorted propaganda, damage social harmony and demean the identity of individuals or a community in the public spaces. Therefore, restraining the proliferation of aggressive content and detecting them has become an urgent duty. Studies of the identification of aggressive content have mostly been done for English and other resource-high languages. Automatic systems developed for those languages can not accurately identify detrimental contents written in regional languages like Bengali. To compensate this insufficiency, this work presents a novel Bengali aggressive text dataset (called ‘BAD’) with two-level annotation. In level-A, 14158 texts are labeled as either aggressive or non-aggressive. While in level-B, 6807 aggressive texts are categorized into religious, political, verbal and gendered aggression classes each having 2217, 2085, 2043 and 462 texts respectively. This paper proposes a weighted ensemble technique including m-BERT, distil-BERT, Bangla-BERT and XLM-R as the base classifiers to identify and classify the aggressive texts in Bengali. The proposed model can readdress the softmax probabilities of the participating classifiers depending on their primary outcomes. This weighting technique has enabled the model to outdoes the simple average ensemble and all other machine learning (ML), deep learning (DL) baselines. It has acquired the highest weighted f1-score of 93.43% in the identification task and 93.11% in the categorization task.}
}
@InProceedings{sharif2021constraint,
author="Sharif, Omar
and Hoque, Mohammed Moshiul",
editor="Chakraborty, Tanmoy and et al.",
title="Identification and Classification of Textual Aggression in Social Media: Resource Creation and Evaluation",
booktitle="Combating Online Hostile Posts in Regional Languages during Emergency Situation",
year="2021",
publisher="Springer Nature Switzerland AG",
pages="1--12",
doi = {https://doi.org/10.1007/978-3-030-73696-5_2},
}
@inproceedings{sharif-etal-2021-nlp,
title = "{NLP}-{CUET}@{D}ravidian{L}ang{T}ech-{EACL}2021: Offensive Language Detection from Multilingual Code-Mixed Text using Transformers",
author = "Sharif, Omar and
Hossain, Eftekhar and
Hoque, Mohammed Moshiul",
booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
month = apr,
year = "2021",
address = "Kyiv",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.dravidianlangtech-1.35",
pages = "255--261",
abstract = "The increasing accessibility of the internet facilitated social media usage and encouraged individuals to express their opinions liberally. Nevertheless, it also creates a place for content polluters to disseminate offensive posts or contents. Most of such offensive posts are written in a cross-lingual manner and can easily evade the online surveillance systems. This paper presents an automated system that can identify offensive text from multilingual code-mixed data. In the task, datasets provided in three languages including Tamil, Malayalam and Kannada code-mixed with English where participants are asked to implement separate models for each language. To accomplish the tasks, we employed two machine learning techniques (LR, SVM), three deep learning (LSTM, LSTM+Attention) techniques and three transformers (m-BERT, Indic-BERT, XLM-R) based methods. Results show that XLM-R outperforms other techniques in Tamil and Malayalam languages while m-BERT achieves the highest score in the Kannada language. The proposed models gained weighted f{\_}1 score of 0.76 (for Tamil), 0.93 (for Malayalam ), and 0.71 (for Kannada) with a rank of 3rd, 5th and 4th respectively.",
}
```
## Note
If you find any anomaly or have any query/suggestion, feel free to ping.
|
openclimatefix | null | @InProceedings{eumetsat:ocf_uk_hrv,
title = {EUMETSAT SEVIRI RSS UK HRV},
author={EUMETSAT, with preparation by Open Climate Fix
},
year={2022}
} | The EUMETSAT Spinning Enhanced Visible and InfraRed Imager (SEVIRI) rapid scanning service (RSS) takes an image of the northern third of the Meteosat disc every five minutes (see the EUMETSAT website for more information on SEVIRI RSS ). The original EUMETSAT dataset contains data from 2008 to the present day from 12 channels, and for a wide geographical extent covering North Africa, Saudi Arabia, all of Europe, and Western Russia. In contrast, this dataset on Google Cloud is a small subset of the entire SEVIRI RSS dataset: This Google Cloud dataset is from a single channel: the "high resolution visible" (HRV) channel; and contains data from January 2020 to November 2021. The geographical extent of this dataset on Google Cloud is a small subset of the total SEVIRI RSS extent: This Google Cloud dataset includes data over the United Kingdom and over North Western Europe.
This dataset is slightly transformed: It does not contain the original numerical values.
The original data is copyright EUMETSAT. EUMETSAT has given permission to redistribute this transformed data. The data was transformed by Open Climate Fix using satip.
This public dataset is hosted in Google Cloud Storage and available free to use. | false | 2 | false | openclimatefix/eumetsat_uk_hrv | 2022-08-04T11:40:24.000Z | null | false | 6bd475ffbfaef79351083a387c49bb03fc4575d7 | [] | [] | https://huggingface.co/datasets/openclimatefix/eumetsat_uk_hrv/resolve/main/README.md | [Needs More Information]
# Dataset Card for EUMETSAT UK HRV
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://navigator.eumetsat.int/product/EO:EUM:DAT:MSG:MSG15-RSS
- **Repository:** https://huggingface.co/datasets/openclimatefix/eumetsat_uk_hrv
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Jacob Bieker](mailto:jacob@openclimatefix.org)
### Dataset Summary
<p>The EUMETSAT Spinning Enhanced Visible and InfraRed Imager (SEVIRI) rapid scanning service (RSS) takes an image of the northern third of the Meteosat disc every five minutes (see <a href="https://www.eumetsat.int/rapid-scanning-service">the EUMETSAT website for more information on SEVIRI RSS</a>). The <a href="https://navigator.eumetsat.int/product/EO:EUM:DAT:MSG:MSG15-RSS">original EUMETSAT dataset</a> contains data from 2008 to the present day from 12 channels, and for a wide geographical extent covering North Africa, Saudi Arabia, all of Europe, and Western Russia. In contrast, this dataset on Google Cloud is a small subset of the entire SEVIRI RSS dataset: This Google Cloud dataset is from a single channel: the "high resolution visible" (HRV) channel; and contains data from January 2020 to November 2021. The geographical extent of this dataset on Google Cloud is a small subset of the total SEVIRI RSS extent: This Google Cloud dataset includes data over the United Kingdom and over North Western Europe.</p>
<p>This dataset is slightly transformed: It does not contain the original numerical values. See the "samples" section for more technical detail about the dataset.</p>
<p>The original data is copyright <a href="https://www.eumetsat.int">EUMETSAT</a>. EUMETSAT has given permission to redistribute this transformed data. The data was transformed by <a href="https://openclimatefix.org/">Open Climate Fix</a> using <a href="https://github.com/openclimatefix/Satip">satip</a>.</p>
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
This dataset was originally created for helping improve national PV output forecasts in combination with Numerical Weather Predictions and other data sources.
### Source Data
#### Initial Data Collection and Normalization
Data generated by <a href="https://navigator.eumetsat.int/product/EO:EUM:DAT:MSG:MSG15-RSS">EUMETSAT Rapid Scan High Rate SEVIRI Level 1.5 Image Data MSG</a> with preparation by <a href="https://openclimatefix.org/">Open Climate Fix</a>
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
This dataset only includes the High Resolution Visible channel, and covers only the area around the UK. Additionally, the RSS service is shut down for 1 month each year, and so data for most of
February 2020 and 2021 does not exist.
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
<p>Cite EUMETSAT as the data source. This data is redistributed with permission from EUMETSAT under the terms of the <a href="https://www.eumetsat.int/eumetsat-data-licensing">EUMETSAT Data Policy for SEVIRI data with a latency of >3 hours</a>.</p>
### Citation Information
Cite EUMETSAT as the data source. |
openclimatefix | null | null | null | false | 2 | false | openclimatefix/gfs | 2022-02-14T13:09:42.000Z | null | false | b99847afc7bd0a90dece766d0dee1c0ae4d420c4 | [] | [
"license:mit"
] | https://huggingface.co/datasets/openclimatefix/gfs/resolve/main/README.md | ---
license: mit
---
|
openclimatefix | null | null | null | false | 2 | false | openclimatefix/goes-l2 | 2022-02-06T10:21:15.000Z | null | false | a0b8e0658b37bdd81b39e6af5e5cd97d52efd685 | [] | [
"license:mit"
] | https://huggingface.co/datasets/openclimatefix/goes-l2/resolve/main/README.md | ---
license: mit
---
|
openclimatefix | null | null | null | false | 25 | false | openclimatefix/goes-mrms | 2022-02-10T17:26:40.000Z | null | false | ee4a07d09306bbf49880de047ab450fb809ba4c8 | [] | [] | https://huggingface.co/datasets/openclimatefix/goes-mrms/resolve/main/README.md | # Dataset Card for Goes-MRMS
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This dataset is a combination of GOES-16 data and MRMS radar precipitation data to roughly match the unreleased dataset used to train Google Research's MetNet. In the papers they used GOES-16 satellite imagery, Multi-Radar/Multi-Sensor (MRMS) instantaneous precipitation, hourly cumulative precipitation, and High Resolution Rapid Refresh NWP initializations as inputs to predict future MRMS precipitation rates. The precipitation rates were binned into 0.2mm/hr bins to make the output a classification task and allow the models to predict a probability distribution over the region of interest.
Additionally, the input image patches are much larger than the target image patches. For MetNet, the input images covered 512x512 km area, while the target was the center 64x64 km crop. For MetNet-2 the input covered 2048x2048 km with the target being the central 512x512 km.
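A toy sketch of these two preprocessing steps: discretizing rates into 0.2 mm/hr class bins, and taking the central crop of the larger input patch. The grid contents and helper names are illustrative, and kilometre extents are treated as already converted to grid cells:

```python
def rate_to_bin(rate_mm_per_hr, bin_width=0.2):
    """Discretize a precipitation rate into the index of its 0.2 mm/hr bin."""
    return int(rate_mm_per_hr // bin_width)

def center_crop(grid, crop):
    """Take the central crop x crop patch of a square grid
    (e.g. input 512 km context -> central 64 km target)."""
    n = len(grid)
    off = (n - crop) // 2
    return [row[off:off + crop] for row in grid[off:off + crop]]

print(rate_to_bin(0.5))  # 2  (0.5 mm/hr falls in the [0.4, 0.6) bin)
patch = center_crop([[r * 8 + c for c in range(8)] for r in range(8)], 2)
print(patch)  # [[27, 28], [35, 36]]
```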
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
- MetNet (January 2018 - July 2019): 16 days training, 2 days validation, 2 days test
- MetNet-2 (July 2017 - August 2020): non-overlapping time ranges with 12-hour blackouts in between
- Full (July 2017 - January 2022): train: 2017-2020, except the first of each month; validation: the first of each month, July 2017-2020; test: 2021-2022
## Dataset Creation
### Curation Rationale
The original curation rationale was forecasting precipitation rate in a probabilistic way. This dataset covers a different time period than the original paper, going from July 2017 through December 2021. There is a split available to match the temporal coverage of the original MetNet paper (January 2018 to July 2019) or the MetNet-2 paper (July 2017 to August 2020).
### Source Data
#### Initial Data Collection and Normalization
From the MetNet paper: "For both MRMS and GOES we acquired data for the period January 2018 through July 2019. We split the data temporally into three non-overlapping data sets by repeatedly using approximately 16 days for training followed by two days for validation and two days for testing. From these temporal splits we randomly extracted 13,717 test and validation samples and kept increasing the training set size until we observed no over-fitting at 1.72 million training samples."
From the MetNet-2 paper: "The training data consists of 1,230,585 patches of size 2048 km x 2048 km at the input and targets of size 512 km x 512 km including all 360 (2 to 720 minutes) time slices. The training area covers a region of 7000x2500 kilometers. We sample target patches from the input context region minus an all around border of 512 km. The input context is padded for all regions outside of the 7000x2500 CONUS. The validation data used for developing the models consists of 11,991 patches and the test data of 39,864 patches. The training, validation and test data are drawn from non-overlapping ranges of hours, with black out periods of 12 hours in between, over a period of observations of 3 years from July 2017 to August 2020. This ensures that the model does not learn any spurious training and evaluation correlations within any single day. HRRR only generates forecasts starting at full hours."
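The repeating 16/2/2-day temporal split described in the MetNet quote above can be sketched as follows. This is a rough illustration of the scheme, not the authors' code:

```python
from datetime import date, timedelta

def metnet_style_split(start, end, cycle=(16, 2, 2)):
    """Assign each day to train/val/test by repeating 16/2/2-day blocks."""
    period = sum(cycle)  # 20-day repeating cycle
    labels = {}
    day, i = start, 0
    while day <= end:
        pos = i % period
        if pos < cycle[0]:
            labels[day] = "train"
        elif pos < cycle[0] + cycle[1]:
            labels[day] = "val"
        else:
            labels[day] = "test"
        day += timedelta(days=1)
        i += 1
    return labels

split = metnet_style_split(date(2018, 1, 1), date(2018, 1, 20))
print(split[date(2018, 1, 1)], split[date(2018, 1, 17)], split[date(2018, 1, 19)])
# train val test
```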
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
Jacob Bieker (jacob@openclimatefix.org)
MetNet-1 split: MetNet Authors
MetNet-2 split: MetNet-2 Authors
### Licensing Information
All data is open and without restrictions from NOAA.
### Citation Information
Please cite NOAA as the data provider. |
openclimatefix | null | @InProceedings{noaa,
title = {GOES-16 and GOES-17 data},
author={NOAA
},
year={2022}
} | The National Oceanic and Atmospheric Administration (NOAA) operates a constellation of Geostationary Operational Environmental Satellites (GOES) to provide continuous weather imagery and monitoring of meteorological and space environment data for the protection of life and property across the United States. GOES satellites provide critical atmospheric, oceanic, climatic and space weather products supporting weather forecasting and warnings, climatologic analysis and prediction, ecosystems management, safe and efficient public and private transportation, and other national priorities.
The satellites provide advanced imaging with increased spatial resolution, 16 spectral channels, and up to 1 minute scan frequency for more accurate forecasts and timely warnings.
The real-time feed and full historical archive of original resolution Advanced Baseline Imager (ABI) radiance data (Level 1b) and full resolution Cloud and Moisture Imager (CMI) products (Level 2) are freely available on Amazon S3 for anyone to use. | false | 1 | false | openclimatefix/goes | 2022-05-09T16:05:54.000Z | null | false | 074e673ccd419ae07caa72b4584e8dd82b8bcdf7 | [] | [
"license:mit"
] | https://huggingface.co/datasets/openclimatefix/goes/resolve/main/README.md | ---
license: mit
---
|
openclimatefix | null | null | null | false | 2 | false | openclimatefix/hrrr | 2022-02-05T17:38:23.000Z | null | false | 427e4798c7f8638c90a436c54364190c0a7f2729 | [] | [
"license:mit"
] | https://huggingface.co/datasets/openclimatefix/hrrr/resolve/main/README.md | ---
license: mit
---
|
openclimatefix | null | @article{ravuris2021skillful,
author={Suman Ravuri and Karel Lenc and Matthew Willson and Dmitry Kangin and Remi Lam and Piotr Mirowski and Megan Fitzsimons and Maria Athanassiadou and Sheleem Kashem and Sam Madge and Rachel Prudden and Amol Mandhane and Aidan Clark and Andrew Brock and Karen Simonyan and Raia Hadsell and Niall Robinson and Ellen Clancy and Alberto Arribas and Shakir Mohamed},
title={Skillful Precipitation Nowcasting using Deep Generative Models of Radar},
journal={Nature},
volume={597},
pages={672--677},
year={2021}
} | This dataset contains UK Nimrod rainfall radar data for 2016-2019 as used in the Skillful Precipitation Nowcasting Using Deep Generative Model of Radar paper by DeepMind. | false | 184 | false | openclimatefix/nimrod-uk-1km | 2022-06-08T14:49:03.000Z | null | false | f433a990cca7574d4ed4687e7fa969ccad0dbeb3 | [] | [] | https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/resolve/main/README.md | [Needs More Information]
# Dataset Card for UK Nimrod 1km Rainfall Radar Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/deepmind/deepmind-research/tree/master/nowcasting
- **Repository:** https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km
- **Paper:** [Skillful Precipitation Nowcasting using Deep Generative Models of Radar, Ravuri et al. 2021](https://www.nature.com/articles/s41586-021-03854-z)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Jacob Bieker](mailto:jacob@openclimatefix.org)
### Dataset Summary
This dataset contains UK Nimrod rainfall radar data for 2016-2019 as used in the Skillful Precipitation Nowcasting using Deep Generative Models of Radar paper by DeepMind. This dataset is an unofficial mirror of the open-sourced dataset available here: gs://dm-nowcasting/datasets/nowcasting_open_source_osgb/nimrod_osgb_1000m_yearly_splits/radar/20200718
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
The train data is all days except the first of each month for 2016-2018. The validation is the first of every month for 2016-2018. The test data is all of 2019.
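Assuming the standard split names, that rule can be sketched as a small date-to-split helper (`nimrod_split` is a hypothetical name for illustration, not part of the released dataset):

```python
from datetime import date

def nimrod_split(d: date) -> str:
    """Map a day to its dataset split, following the rule described above."""
    if d.year == 2019:
        return "test"  # all of 2019 is held out as the test set
    if 2016 <= d.year <= 2018:
        # the first day of each month is reserved for validation
        return "validation" if d.day == 1 else "train"
    raise ValueError("date outside the 2016-2019 range covered by the dataset")

print(nimrod_split(date(2017, 3, 1)))   # validation
print(nimrod_split(date(2018, 7, 14)))  # train
print(nimrod_split(date(2019, 1, 20)))  # test
```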
## Dataset Creation
### Curation Rationale
This dataset was originally created for training a generative model for forecasting precipitation.
### Source Data
#### Initial Data Collection and Normalization
DeepMind initially collected the data from the UK Met Office and post processed it into this dataset.
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
The provided post-processed nowcasting dataset is licensed under a Creative Commons Attribution 4.0 International License and it contains public sector information licensed by the Met Office under the Open Government Licence v3.0.
### Citation Information
Cite DeepMind, and the authors of [Skillful Precipitation Nowcasting using Deep Generative Models of Radar, Ravuri et al. 2021](https://www.nature.com/articles/s41586-021-03854-z). |
orisuchy | null | null | null | false | 2 | false | orisuchy/Descriptive_Sentences_He | 2022-03-03T10:19:56.000Z | null | false | af9b3be1603dbb85d9b98d3b8db844ed317c85e5 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/orisuchy/Descriptive_Sentences_He/resolve/main/README.md | ---
license: afl-3.0
---
|
ought | null | null | null | false | 4 | false | ought/raft-submission | 2022-06-22T10:02:06.000Z | null | false | 0db66732f368d727d5ec941e95189872a127f336 | [] | [] | https://huggingface.co/datasets/ought/raft-submission/resolve/main/README.md | # RAFT Submission Template
Welcome to the [RAFT benchmark](https://raft.elicit.org/)! RAFT is a few-shot classification benchmark that tests language models:
- across multiple domains (lit review, tweets, customer interaction, etc.)
- on economically valuable classification tasks (someone inherently cares about the task)
- in a setting that mirrors deployment (50 examples per task, info retrieval allowed, hidden test set)
This repository can be used to generate a template so you can submit your predictions for evaluation on [the leaderboard](https://huggingface.co/spaces/ought/raft-leaderboard).
## Quickstart
### 1. Create an account on the Hugging Face Hub
First, create an account on the Hugging Face Hub: you can sign up [here](https://huggingface.co/join) if you haven't already!
### 2. Create a template repository on your machine
The next step is to create a template repository on your local machine that contains various files and a CLI to help you validate and submit your predictions. The Hugging Face Hub uses [Git Large File Storage (LFS)](https://git-lfs.github.com) to manage large files, so first install it if you don't have it already. For example, on macOS you can run:
```bash
brew install git-lfs
git lfs install
```
Next, run the following commands to create the repository. We recommend creating a Python virtual environment for the project, e.g. with Anaconda:
```bash
# Create and activate a virtual environment
conda create -n raft python=3.8 && conda activate raft
# Install the following libraries
pip install cookiecutter huggingface-hub==0.0.16
# Create the template repository
cookiecutter git+https://huggingface.co/datasets/ought/raft-submission
```
This will ask you to specify your Hugging Face Hub username, password, and the name of the repository:
```
hf_hub_username [<huggingface>]:
hf_hub_password [<password>]:
repo_name [<my-raft-submissions>]:
```
This will trigger the following steps:
1. Create a private dataset repository on the Hugging Face Hub under `{hf_hub_username}/{repo_name}`
2. Clone the repository to your local machine
3. Add various template files and commit them locally to the repository
The resulting repository should have the following structure:
```
my-raft-submissions
├── LICENSE
├── README.md <- The README with submission instructions
├── cli.py <- The CLI for validating predictions etc
├── data <- The predictions for each task
├── my-raft-submissions.py <- Script to load predictions. Do not edit!
└── requirements.txt <- The requirements file for the submissions
```
### 3. Install the dependencies
The final step is to install the project's dependencies:
```bash
# Navigate to the template repository
cd my-raft-submissions
# Install dependencies
python -m pip install -r requirements.txt
```
That's it! You're now all set to start generating predictions - see the instructions below on how to submit them to the Hub.
## Submitting to the leaderboard
To make a submission to the [leaderboard](https://huggingface.co/spaces/ought/raft-leaderboard), there are three main steps:
1. Generate predictions on the unlabeled test set of each task
2. Validate the predictions are compatible with the evaluation framework
3. Push the predictions to the Hub!
See the instructions below for more details.
### Rules
1. To prevent overfitting to the public leaderboard, we only evaluate **one submission per week**. You can push predictions to the Hub as many times as you wish, but we will only evaluate the most recent commit in a given week. Submissions are evaluated **every Sunday at 12:00 UTC.**
2. Transfer or meta-learning using other datasets, including further pre-training on other corpora, is allowed.
3. Use of unlabeled test data is allowed, as it is always available in the applied setting. For example, further pre-training using the unlabeled data for a task would be permitted.
4. Systems may be augmented with information retrieved from the internet, e.g. via automated web searches.
### Submission file format
For each task in RAFT, you should create a CSV file called `predictions.csv` with your model's predictions on the unlabeled test set. Each file should have exactly 2 columns:
* ID (int)
* Label (string)
See the dummy predictions in the `data` folder for examples with the expected format. Here is a simple example that creates a majority-class baseline:
```python
from pathlib import Path
import pandas as pd
from collections import Counter
from datasets import load_dataset, get_dataset_config_names
tasks = get_dataset_config_names("ought/raft")
for task in tasks:
# Load dataset
raft_subset = load_dataset("ought/raft", task)
# Compute majority class over training set
counter = Counter(raft_subset["train"]["Label"])
majority_class = counter.most_common(1)[0][0]
# Load predictions file
preds = pd.read_csv(f"data/{task}/predictions.csv")
    # Fill every prediction with the majority class, converted from label ID to name
    preds["Label"] = raft_subset["train"].features["Label"].int2str(majority_class)
# Save predictions
preds.to_csv(f"data/{task}/predictions.csv", index=False)
```
As you can see in the example, each `predictions.csv` file should be stored in the task's subfolder in `data` and at the end you should have something like the following:
```
data
├── ade_corpus_v2
│ ├── predictions.csv <- A CSV file of the predictions with `ID` and `Label` columns
│ └── task.json <- Configuration file for loading the predictions. Do not edit!
├── banking_77
│ ├── predictions.csv
│ └── task.json
├── neurips_impact_statement_risks
│ ├── predictions.csv
│ └── task.json
├── one_stop_english
│ ├── predictions.csv
│ └── task.json
├── overruling
│ ├── predictions.csv
│ └── task.json
├── semiconductor_org_types
│ ├── predictions.csv
│ └── task.json
├── systematic_review_inclusion
│ ├── predictions.csv
│ └── task.json
├── tai_safety_research
│ ├── predictions.csv
│ └── task.json
├── terms_of_service
│ ├── predictions.csv
│ └── task.json
├── tweet_eval_hate
│ ├── predictions.csv
│ └── task.json
└── twitter_complaints
├── predictions.csv
└── task.json
```
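Before running the official validator described below, a quick stdlib sanity check of a single `predictions.csv` might look like the following sketch (`cli.py validate` remains the authoritative check):

```python
import csv
import io

def looks_valid(csv_text: str) -> bool:
    """Rough format check: exactly two columns, ID (int) and Label (non-empty string)."""
    reader = csv.DictReader(io.StringIO(csv_text))
    if reader.fieldnames != ["ID", "Label"]:
        return False
    return all(row["ID"].isdigit() and row["Label"] for row in reader)

sample = "ID,Label\n50,ADE-related\n51,not ADE-related\n"
print(looks_valid(sample))                   # True
print(looks_valid("ID,Prediction\n50,x\n"))  # False: wrong column name
```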
### Validate your submission
To ensure that your submission files are correctly formatted, run the following command from the root of the repository:
```
python cli.py validate
```
If everything is correct, you should see the following message:
```
All submission files validated! ✨ 🚀 ✨
Now you can make a submission 🤗
```
### Push your submission to the Hugging Face Hub!
The final step is to commit your files and push them to the Hub:
```
python cli.py submit
```
If there are no errors, you should see the following message:
```
Submission successful! 🎉 🥳 🎉
Your submission will be evaluated on Sunday 05 September 2021 at 12:00 UTC ⏳
```
where the evaluation is run every Sunday and your results will be visible on the leaderboard. |
ought | null | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | Large pre-trained language models have shown promise for few-shot learning, completing text-based tasks given only a few task-specific examples. Will models soon solve classification tasks that have so far been reserved for human research assistants?
[RAFT](https://raft.elicit.org) is a few-shot classification benchmark that tests language models:
- across multiple domains (lit review, tweets, customer interaction, etc.)
- on economically valuable classification tasks (someone inherently cares about the task)
- in a setting that mirrors deployment (50 examples per task, info retrieval allowed, hidden test set) | false | 8,108 | false | ought/raft | 2022-10-25T09:54:19.000Z | null | false | 9ee50172ea9afda2f1033c6f1b986e568b862fb3 | [] | [
"arxiv:2109.14076",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"language:en",
"license:other",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"source_datasets:extended|ade_corpus_v2",
... | https://huggingface.co/datasets/ought/raft/resolve/main/README.md | ---
annotations_creators:
- expert-generated
- crowdsourced
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
- extended|ade_corpus_v2
- extended|banking77
task_categories:
- text-classification
task_ids:
- multi-class-classification
pretty_name: 'Real-world Annotated Few-shot Tasks: RAFT'
language_bcp47:
- en-US
---
# Dataset Card for RAFT
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://raft.elicit.org
- **Repository:** https://huggingface.co/datasets/ought/raft
- **Paper:** [arxiv.org](https://arxiv.org/abs/2109.14076)
- **Leaderboard:** https://huggingface.co/spaces/ought/raft-leaderboard
- **Point of Contact:** [Eli Lifland](eli.d.lifland@gmail.com)
### Dataset Summary
The Real-world Annotated Few-shot Tasks (RAFT) dataset is an aggregation of English-language datasets found in the real world. Associated with each dataset is a binary or multiclass classification task, intended to improve our understanding of how language models perform on tasks that have concrete, real-world value. Only 50 labeled examples are provided in each dataset.
### Supported Tasks and Leaderboards
- `text-classification`: Each subtask in RAFT is a text classification task, and the provided train and test sets can be used to submit to the [RAFT Leaderboard](https://huggingface.co/spaces/ought/raft-leaderboard) To prevent overfitting and tuning on a held-out test set, the leaderboard is only evaluated once per week. Each task has its macro-f1 score calculated, then those scores are averaged to produce the overall leaderboard score.
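For reference, macro-F1 averages per-class F1 scores with equal weight, and the leaderboard score then averages those per-task scores. A self-contained sketch (roughly what `sklearn.metrics.f1_score(..., average="macro")` computes per task):

```python
from collections import defaultdict

def macro_f1(y_true, y_pred):
    """Average the per-class F1 scores over all classes seen in either list."""
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1
    f1s = []
    for c in set(y_true) | set(y_pred):
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# the overall leaderboard score is the mean of per-task macro-F1 scores
task_scores = [macro_f1(["a", "a", "b"], ["a", "b", "b"])]
print(sum(task_scores) / len(task_scores))  # 0.666...
```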
### Languages
RAFT is entirely in American English (en-US).
## Dataset Structure
### Data Instances
| Dataset | First Example |
| ----------- | ----------- |
| Ade Corpus V2 | <pre>Sentence: No regional side effects were noted.<br>ID: 0<br>Label: 2</pre> |
| Banking 77 | <pre>Query: Is it possible for me to change my PIN number?<br>ID: 0<br>Label: 23<br></pre> |
| NeurIPS Impact Statement Risks | <pre>Paper title: Auto-Panoptic: Cooperative Multi-Component Architecture Search for Panoptic Segmentation...<br>Paper link: https://proceedings.neurips.cc/paper/2020/file/ec1f764517b7ffb52057af6df18142b7-Paper.pdf...<br>Impact statement: This work makes the first attempt to search for all key components of panoptic pipeline and manages to accomplish this via the p...<br>ID: 0<br>Label: 1</pre> |
| One Stop English | <pre>Article: For 85 years, it was just a grey blob on classroom maps of the solar system. But, on 15 July, Pluto was seen in high resolution ...<br>ID: 0<br>Label: 3<br></pre> |
| Overruling | <pre>Sentence: in light of both our holding today and previous rulings in johnson, dueser, and gronroos, we now explicitly overrule dupree....<br>ID: 0<br>Label: 2<br></pre> |
| Semiconductor Org Types | <pre>Paper title: 3Gb/s AC-coupled chip-to-chip communication using a low-swing pulse receiver...<br>Organization name: North Carolina State Univ.,Raleigh,NC,USA<br>ID: 0<br>Label: 3<br></pre> |
| Systematic Review Inclusion | <pre>Title: Prototyping and transforming facial textures for perception research...<br>Abstract: Wavelet based methods for prototyping facial textures for artificially transforming the age of facial images were described. Pro...<br>Authors: Tiddeman, B.; Burt, M.; Perrett, D.<br>Journal: IEEE Comput Graphics Appl<br>ID: 0<br>Label: 2</pre> |
| TAI Safety Research | <pre>Title: Malign generalization without internal search<br>Abstract Note: In my last post, I challenged the idea that inner alignment failures should be explained by appealing to agents which perform ex...<br>Url: https://www.alignmentforum.org/posts/ynt9TD6PrYw6iT49m/malign-generalization-without-internal-search...<br>Publication Year: 2020<br>Item Type: blogPost<br>Author: Barnett, Matthew<br>Publication Title: AI Alignment Forum<br>ID: 0<br>Label: 1</pre> |
| Terms Of Service | <pre>Sentence: Crowdtangle may change these terms of service, as described above, notwithstanding any provision to the contrary in any agreemen...<br>ID: 0<br>Label: 2<br></pre> |
| Tweet Eval Hate | <pre>Tweet: New to Twitter-- any men on here know what the process is to get #verified?...<br>ID: 0<br>Label: 2<br></pre> |
| Twitter Complaints | <pre>Tweet text: @HMRCcustomers No this is my first job<br>ID: 0<br>Label: 2</pre> |
### Data Fields
The ID field is used for indexing data points. It will be used to match your submissions with the true test labels, so you must include it in your submission. All other columns contain textual data. Some contain links and URLs to websites on the internet.
All output fields are designated with the "Label" column header. The 0 value in this column indicates that the entry is unlabeled, and should only appear in the unlabeled test set. Other values in this column are various other labels. To get their textual value for a given dataset:
```python
import datasets

# Load the dataset
dataset = datasets.load_dataset("ought/raft", "ade_corpus_v2")
# First, get the object that holds information about the "Label" feature in the dataset.
label_info = dataset.features["Label"]
# Use the int2str method to access the textual labels.
print([label_info.int2str(i) for i in (0, 1, 2)])
# ['Unlabeled', 'ADE-related', 'not ADE-related']
```
### Data Splits
There are two splits provided: train data and unlabeled test data.
The training examples were chosen at random. No attempt was made to ensure that classes were balanced or proportional in the training data; indeed, the Banking 77 task, with its 77 different classes, cannot fit all of them into the 50 training examples.
| Dataset | Train Size | Test Size | |
|--------------------------------|------------|-----------|---|
| Ade Corpus V2 | 50 | 5000 | |
| Banking 77 | 50 | 5000 | |
| NeurIPS Impact Statement Risks | 50 | 150 | |
| One Stop English | 50 | 516 | |
| Overruling | 50 | 2350 | |
| Semiconductor Org Types | 50 | 449 | |
| Systematic Review Inclusion | 50 | 2243 | |
| TAI Safety Research | 50 | 1639 | |
| Terms Of Service | 50 | 5000 | |
| Tweet Eval Hate | 50 | 2966 | |
| Twitter Complaints | 50 | 3399 | |
| **Total** | **550** | **28712** | |
## Dataset Creation
### Curation Rationale
Generally speaking, the rationale behind RAFT was to create a benchmark for evaluating NLP models that didn't consist of contrived or artificial data sources, i.e. whose tasks weren't originally assembled for the purpose of testing NLP models. However, each individual dataset in RAFT was collected independently. For the majority of datasets, we only collected them second-hand from existing curated sources. The datasets that we curated are:
* NeurIPS impact statement risks
* Semiconductor org types
* TAI Safety Research
Each of these three datasets was sourced from our existing collaborators at Ought. They had used our service, Elicit, to analyze their dataset in the past, and we contacted them to include their dataset and the associated classification task in the benchmark. For all datasets, more information is provided in our paper. For the ones which we did not curate, we provide a link to the dataset. For the ones which we did, we provide a datasheet that elaborates on many of the topics here in greater detail.
For the three datasets that we introduced:
* **NeurIPS impact statement risks** The dataset was created to evaluate the then new requirement for authors to include an "impact statement" in their 2020 NeurIPS papers. Had it been successful? What kind of things did authors mention the most? How long were impact statements on average? Etc.
* **Semiconductor org types** The dataset was originally created to understand better which countries’ organisations have contributed most to semiconductor R\&D over the past 25 years using three main conferences. Moreover, to estimate the share of academic and private sector contributions, the organisations were classified as “university”, “research institute” or “company”.
* **TAI Safety Research** The primary motivations for assembling this database were to: (1) Aid potential donors in assessing organizations focusing on TAI safety by collecting and analyzing their research output. (2) Assemble a comprehensive bibliographic database that can be used as a base for future projects, such as a living review of the field.
**For the following sections, we will only describe the datasets we introduce. All other dataset details, and more details on the ones described here, can be found in our paper.**
### Source Data
#### Initial Data Collection and Normalization
* **NeurIPS impact statement risks** The data was directly observable (raw text scraped) for the most part; although some data was taken from previous datasets (which themselves had taken it from raw text). The data was validated, but only in part, by human reviewers. Cf this link for full details:
* **Semiconductor org types** We used the IEEE API to obtain institutions that contributed papers to semiconductor conferences in the last 25 years. This is a random sample of 500 of them with a corresponding conference paper title. The three conferences were the International Solid-State Circuits Conference (ISSCC), the Symposia on VLSI Technology and Circuits (VLSI) and the International Electron Devices Meeting (IEDM).
* **TAI Safety Research** We asked TAI safety organizations for what their employees had written, emailed some individual authors, and searched Google Scholar. See the LessWrong post for more details: https://www.lesswrong.com/posts/4DegbDJJiMX2b3EKm/tai-safety-bibliographic-database
#### Who are the source language producers?
* **NeurIPS impact statement risks** Language generated from NeurIPS 2020 impact statement authors, generally the authors of submission papers.
* **Semiconductor org types** Language generated from IEEE API. Generally machine-formatted names, and title of academic papers.
* **TAI Safety Research** Language generated by authors of TAI safety research publications.
### Annotations
#### Annotation process
* **NeurIPS impact statement risks** Annotations were entered directly into a Google Spreadsheet with instructions, labeled training examples, and unlabeled testing examples.
* **Semiconductor org types** Annotations were entered directly into a Google Spreadsheet with instructions, labeled training examples, and unlabeled testing examples.
* **TAI Safety Research** N/A
#### Who are the annotators?
* **NeurIPS impact statement risks** Contractors paid by Ought performed the labeling of whether impact statements mention harmful applications. A majority vote was taken from 3 annotators.
* **Semiconductor org types** Contractors paid by Ought performed the labeling of organization types. A majority vote was taken from 3 annotators.
* **TAI Safety Research** The dataset curators annotated the dataset by hand.
### Personal and Sensitive Information
It is worth mentioning that the Tweet Eval Hate dataset, by necessity, contains highly offensive content.
* **NeurIPS impact statement risks** The dataset contains authors' names. These were scraped from publicly available scientific papers submitted to NeurIPS 2020.
* **Semiconductor org types** N/A
* **TAI Safety Research** N/A
## Considerations for Using the Data
### Social Impact of Dataset
* **NeurIPS impact statement risks** N/A
* **Semiconductor org types** N/A
* **TAI Safety Research** N/A
### Discussion of Biases
* **NeurIPS impact statement risks** N/A
* **Semiconductor org types** N/A
* **TAI Safety Research** N/A
### Other Known Limitations
* **NeurIPS impact statement risks** This dataset has limitations that should be taken into consideration when using it. In particular, the method used to collect broader impact statements involved automated downloads, conversions and scraping and was not error-proof. Although care has been taken to identify and correct as many errors as possible, not all texts have been reviewed by a human. This means it is possible some of the broader impact statements contained in the dataset are truncated or otherwise incorrectly extracted from their original article.
* **Semiconductor org types** N/A
* **TAI Safety Research** Don't use it to create a dangerous AI that could bring the end of days.
## Additional Information
### Dataset Curators
The overall RAFT curators are Neel Alex, Eli Lifland, and Andreas Stuhlmüller.
* **NeurIPS impact statement risks** Volunteers working with researchers affiliated to Oxford's Future of Humanity Institute (Carolyn Ashurst, now at The Alan Turing Institute) created the impact statements dataset.
* **Semiconductor org types** The data science unit of Stiftung Neue Verantwortung (Berlin).
* **TAI Safety Research** Angelica Deibel and Jess Riedel. We did not do it on behalf of any entity.
### Licensing Information
RAFT aggregates many other datasets, each of which is provided under its own license. Generally, those licenses permit research and commercial use.
| Dataset | License |
| ----------- | ----------- |
| Ade Corpus V2 | Unlicensed |
| Banking 77 | CC BY 4.0 |
| NeurIPS Impact Statement Risks | MIT License/CC BY 4.0 |
| One Stop English | CC BY-SA 4.0 |
| Overruling | Unlicensed |
| Semiconductor Org Types | CC BY-NC 4.0 |
| Systematic Review Inclusion | CC BY 4.0 |
| TAI Safety Research | CC BY-SA 4.0 |
| Terms Of Service | Unlicensed |
| Tweet Eval Hate | Unlicensed |
| Twitter Complaints | Unlicensed |
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@neel-alex](https://github.com/neel-alex), [@uvafan](https://github.com/uvafan), and [@lewtun](https://github.com/lewtun) for adding this dataset. |
papluca | null | null | null | false | 255 | false | papluca/language-identification | 2022-07-15T10:11:23.000Z | null | false | aa56583bf2bc52b0565770607d6fc3faebecf9e2 | [] | [
"language:ar",
"language:bg",
"language:de",
"language:el",
"language:en",
"language:es",
"language:fr",
"language:hi",
"language:it",
"language:ja",
"language:nl",
"language:pl",
"language:pt",
"language:ru",
"language:sw",
"language:th",
"language:tr",
"language:ur",
"language:... | https://huggingface.co/datasets/papluca/language-identification/resolve/main/README.md | ---
annotations_creators: []
language_creators: []
language:
- ar
- bg
- de
- el
- en
- es
- fr
- hi
- it
- ja
- nl
- pl
- pt
- ru
- sw
- th
- tr
- ur
- vi
- zh
license: []
multilinguality:
- multilingual
pretty_name: Language Identification dataset
size_categories:
- unknown
source_datasets:
- extended|amazon_reviews_multi
- extended|xnli
- extended|stsb_multi_mt
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for Language Identification dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Language Identification dataset is a collection of 90k samples consisting of text passages and corresponding language label.
This dataset was created by collecting data from 3 sources: [Multilingual Amazon Reviews Corpus](https://huggingface.co/datasets/amazon_reviews_multi), [XNLI](https://huggingface.co/datasets/xnli), and [STSb Multi MT](https://huggingface.co/datasets/stsb_multi_mt).
### Supported Tasks and Leaderboards
The dataset can be used to train a model for language identification, which is a **multi-class text classification** task.
The model [papluca/xlm-roberta-base-language-detection](https://huggingface.co/papluca/xlm-roberta-base-language-detection), which is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base), was trained on this dataset and currently achieves 99.6% accuracy on the test set.
### Languages
The Language Identification dataset contains text in 20 languages, which are:
`arabic (ar), bulgarian (bg), german (de), modern greek (el), english (en), spanish (es), french (fr), hindi (hi), italian (it), japanese (ja), dutch (nl), polish (pl), portuguese (pt), russian (ru), swahili (sw), thai (th), turkish (tr), urdu (ur), vietnamese (vi), and chinese (zh)`
## Dataset Structure
### Data Instances
For each instance, there is a string for the text and a string for the label (the language tag). Here is an example:
`{'labels': 'fr', 'text': 'Conforme à la description, produit pratique.'}`
### Data Fields
- **labels:** a string indicating the language label.
- **text:** a string consisting of one or more sentences in one of the 20 languages listed above.
### Data Splits
The Language Identification dataset has 3 splits: *train*, *valid*, and *test*.
The train set contains 70k samples, while the validation and test sets contain 10k samples each.
All splits are perfectly balanced: the train set contains 3,500 samples per language, while the validation and test sets contain 500 each.
## Dataset Creation
### Curation Rationale
This dataset was built during *The Hugging Face Course Community Event*, which took place in November 2021, with the goal of collecting a dataset with enough samples for each language to train a robust language detection model.
### Source Data
The Language Identification dataset was created by collecting data from 3 sources: [Multilingual Amazon Reviews Corpus](https://huggingface.co/datasets/amazon_reviews_multi), [XNLI](https://huggingface.co/datasets/xnli), and [STSb Multi MT](https://huggingface.co/datasets/stsb_multi_mt).
### Personal and Sensitive Information
The dataset does not contain any personal information about the authors or the crowdworkers.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset was developed as a benchmark for evaluating (balanced) multi-class text classification models.
### Discussion of Biases
The possible biases correspond to those of the 3 datasets on which this dataset is based.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@LucaPapariello](https://github.com/LucaPapariello) for adding this dataset.
|
pariajm | null | null | null | false | 2 | false | pariajm/sharif_emotional_speech_dataset | 2022-10-24T16:49:19.000Z | null | false | 34264380029d9aca8c6031b072d6fab6e1f97d10 | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:fa",
"license:apache-2.0",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:radio-plays",
"task_categories:automatic-speech-recognition",
"task_ids:speech-recognition"
] | https://huggingface.co/datasets/pariajm/sharif_emotional_speech_dataset/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- fa
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: Sharif Emotional Speech Dataset (ShEMO)
size_categories:
- 1K<n<10K
source_datasets:
- radio-plays
task_categories:
- automatic-speech-recognition
task_ids:
- speech-recognition
---
## Sharif Emotional Speech Dataset (ShEMO)
## Dataset Summary
The dataset includes 3000 semi-natural utterances, equivalent to 3 hours and 25 minutes of speech data extracted from online Persian radio plays. ShEMO covers speech samples of 87 native Persian speakers for five basic emotions, namely <i>anger</i>, <i>fear</i>, <i>happiness</i>, <i>sadness</i> and <i>surprise</i>, as well as the neutral state. Twelve annotators label the underlying emotional state of each utterance, and majority voting is used to decide on the final labels. According to the kappa measure, the inter-annotator agreement is 64%, which is interpreted as "substantial agreement".
## Languages
Persian (fa)
## Overview of ShEMO
Feature | Status
------------- | ----------
**license** | apache-2.0
**language** | Persian (fa)
**modality** | Speech
**duration** | 3 hours and 25 minutes
**#utterances** | 3000
**#speakers** | 87 (31 females, 56 males)
**#emotions** | 5 basic emotions (anger, fear, happiness, sadness and surprise) and neutral state
**orthographic transcripts** | Available
**phonetic transcripts** | Available
## Data Instances
Here is a sample of data instances:
```json
"F21N37": {
"speaker_id": "F21",
"gender": "female",
"emotion": "neutral",
"transcript": "مگه من به تو نگفته بودم که باید راجع به دورانت سکوت کنی؟",
"ipa": "mӕge mæn be to nægofte budӕm ke bɑyæd rɑdʒeʔ be dorɑnt sokut koni"
}
```
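As a minimal sketch, instances keyed by utterance id like the one above can be tallied by emotion and speaker gender. The field names follow the sample; the surrounding collection shape (a dict of utterance ids) is an assumption of this sketch:

```python
from collections import Counter

# Sketch: count ShEMO utterances per emotion and per speaker gender.
# Field names follow the sample instance shown above; the transcript/ipa
# values here are shortened placeholders.
data = {
    "F21N37": {
        "speaker_id": "F21",
        "gender": "female",
        "emotion": "neutral",
        "transcript": "...",
        "ipa": "...",
    },
    # further utterances would follow the same schema
}

emotion_counts = Counter(u["emotion"] for u in data.values())
gender_counts = Counter(u["gender"] for u in data.values())
print(emotion_counts["neutral"], gender_counts["female"])  # 1 1
```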
## Citation
If you use this dataset, please cite the following paper:
~~~~
@Article{MohamadNezami2019,
author = {Mohamad Nezami, Omid and Jamshid Lou, Paria and Karami, Mansoureh},
title = {ShEMO: a large-scale validated database for Persian speech emotion detection},
journal = {Language Resources and Evaluation},
year = {2019},
volume = {53},
number = {1},
pages = {1--16},
issn = {1574-0218},
doi = {10.1007/s10579-018-9427-x},
url = {https://doi.org/10.1007/s10579-018-9427-x}
}
~~~~
## Download Dataset
To download the dataset, please check the [ShEMO repo](https://github.com/pariajm/sharif-emotional-speech-database)! |
parivartanayurveda | null | null | null | false | 1 | false | parivartanayurveda/Malesexproblemsayurvedictreatment | 2021-02-24T12:23:37.000Z | null | false | 18c0e0854a2e9e36b19f8524f272d861dbafc9ab | [] | [] | https://huggingface.co/datasets/parivartanayurveda/Malesexproblemsayurvedictreatment/resolve/main/README.md | Best ayurvedic medicine for erectile dysfunction. More Info :- https://www.parivartanayurveda.com/male-sexual-problems.php |
pasinit | null | @inproceedings{raganato-etal-2020-xl-wic,
title={XL-WiC: A Multilingual Benchmark for Evaluating Semantic Contextualization},
author={Raganato, Alessandro and Pasini, Tommaso and Camacho-Collados, Jose and Pilehvar, Mohammad Taher},
booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
pages={7193--7206},
year={2020}
} | A system's task on any of the XL-WiC datasets is to identify the intended meaning of a word in a context of a given language. XL-WiC is framed as a binary classification task. Each instance in XL-WiC has a target word w, either a verb or a noun, for which two contexts are provided. Each of these contexts triggers a specific meaning of w. The task is to identify if the occurrences of w in the two contexts correspond to the same meaning or not.
XL-WiC provides dev and test sets in the following 12 languages:
Bulgarian (BG)
Danish (DA)
German (DE)
Estonian (ET)
Farsi (FA)
French (FR)
Croatian (HR)
Italian (IT)
Japanese (JA)
Korean (KO)
Dutch (NL)
Chinese (ZH)
and training sets in the following 3 languages:
German (DE)
French (FR)
Italian (IT) | false | 54 | false | pasinit/xlwic | 2022-10-25T09:54:22.000Z | null | false | cca3cfb747db5bf97b95126ec79d5b7d743f9654 | [] | [
"annotations_creators:expert-generated",
"extended:original",
"language_creators:found",
"language:en",
"language:bg",
"language:zh",
"language:hr",
"language:da",
"language:nl",
"language:et",
"language:fa",
"language:ja",
"language:ko",
"language:it",
"language:fr",
"language:de",
... | https://huggingface.co/datasets/pasinit/xlwic/resolve/main/README.md | ---
annotations_creators:
- expert-generated
extended:
- original
language_creators:
- found
language:
- en
- bg
- zh
- hr
- da
- nl
- et
- fa
- ja
- ko
- it
- fr
- de
license:
- cc-by-nc-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- semantic-similarity-classification
---
# XL-WiC
Huggingface dataset for the XL-WiC paper [https://www.aclweb.org/anthology/2020.emnlp-main.584.pdf](https://www.aclweb.org/anthology/2020.emnlp-main.584.pdf).
Please refer to the official [website](https://pilehvar.github.io/xlwic/) for more information.
## Configurations
When loading one of the XL-WiC datasets one has to specify the training language and the target language (on which dev and test will be performed).
Please refer to [Languages](#languages) section to see in which languages training data is available.
For example, we can load the dataset having English as training language and Italian as target language as follows:
```python
from datasets import load_dataset
dataset = load_dataset('pasinit/xlwic', 'en_it')
```
## Languages
**Training data**
- en (English)
- fr (French)
- de (German)
- it (Italian)
**Dev & Test data**
- fr (French)
- de (German)
- it (Italian)
- bg (Bulgarian)
- zh (Chinese)
- hr (Croatian)
- da (Danish)
- nl (Dutch)
- et (Estonian)
- fa (Farsi)
- ja (Japanese)
- ko (Korean)
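Following the `en_it` example above, config names appear to take the form `"{train}_{target}"`. The sketch below enumerates those combinations; whether every pair is actually provided on the Hub is an assumption:

```python
# Sketch: build XL-WiC config names following the "{train}_{target}" pattern
# shown in the `en_it` example above. Availability of every combination on
# the Hub is an assumption of this sketch.
TRAIN_LANGS = ["en", "fr", "de", "it"]
TARGET_LANGS = ["fr", "de", "it", "bg", "zh", "hr",
                "da", "nl", "et", "fa", "ja", "ko"]

configs = [f"{tr}_{tg}" for tr in TRAIN_LANGS for tg in TARGET_LANGS]
print(len(configs), "en_it" in configs)  # 48 True
```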
|
peixian | null | @article{DBLP:journals/corr/abs-1805-04508,
author = {Svetlana Kiritchenko and
Saif M. Mohammad},
title = {Examining Gender and Race Bias in Two Hundred Sentiment Analysis Systems},
journal = {CoRR},
volume = {abs/1805.04508},
year = {2018},
url = {http://arxiv.org/abs/1805.04508},
archivePrefix = {arXiv},
eprint = {1805.04508},
timestamp = {Mon, 13 Aug 2018 16:47:58 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1805-04508.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | Automatic machine learning systems can inadvertently accentuate and perpetuate inappropriate human biases. Past work on examining inappropriate biases has largely focused on just individual systems and resources. Further, there is a lack of benchmark datasets for examining inappropriate biases in system predictions. Here, we present the Equity Evaluation Corpus (EEC), which consists of 8,640 English sentences carefully chosen to tease out biases towards certain races and genders. We used the dataset to examine 219 automatic sentiment analysis systems that took part in a recent shared task, SemEval-2018 Task 1 ‘Affect in Tweets’. We found that several of the systems showed statistically significant bias; that is, they consistently provide slightly higher sentiment intensity predictions for one race or one gender. We make the EEC freely available, and encourage its use to evaluate biases in sentiment and other NLP tasks. | false | 4 | false | peixian/equity_evaluation_corpus | 2022-10-20T23:35:15.000Z | null | false | 0f68047bb0d5d17e273ea7bd87b8964cdbe00028 | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:en",
"license:unknown",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:text-classification",
"tags:gender-classification"
] | https://huggingface.co/datasets/peixian/equity_evaluation_corpus/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
tags:
- gender-classification
---
# Dataset Card for equity-evaluation-corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Automatic machine learning systems can inadvertently accentuate and perpetuate inappropriate human biases. Past work on examining inappropriate biases has largely focused on just individual systems and resources. Further, there is a lack of benchmark datasets for examining inappropriate biases in system predictions. Here, we present the Equity Evaluation Corpus (EEC), which consists of 8,640 English sentences carefully chosen to tease out biases towards certain races and genders. We used the dataset to examine 219 automatic sentiment analysis systems that took part in a recent shared task, SemEval-2018 Task 1 Affect in Tweets. We found that several of the systems showed statistically significant bias; that is, they consistently provide slightly higher sentiment intensity predictions for one race or one gender. We make the EEC freely available, and encourage its use to evaluate biases in sentiment and other NLP tasks.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
- `sentence`: a `string` feature.
- `template`: a `string` feature.
- `person`: a `string` feature.
- `race`: a `string` feature.
- `emotion`: a `string` feature.
- `emotion word`: a `string` feature.
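Since each instance records a `template`, a `person`, and an `emotion word`, EEC-style sentences can be reconstructed by slot filling. The template string and name below are invented for illustration and are not drawn from the actual corpus:

```python
# Illustrative sketch of EEC-style template filling: a sentence template with
# <person> and <emotion word> slots is instantiated with concrete values.
# The template and the name are invented placeholders, not corpus entries.
template = "<person> feels <emotion word>."

def fill(template, person, emotion_word):
    """Substitute both slots of an EEC-style template."""
    return template.replace("<person>", person).replace("<emotion word>", emotion_word)

print(fill(template, "Alonzo", "angry"))  # Alonzo feels angry.
```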
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information]
|
peixian | null | @inproceedings{voigt-etal-2018-rtgender,
title = "{R}t{G}ender: A Corpus for Studying Differential Responses to Gender",
author = "Voigt, Rob and
Jurgens, David and
Prabhakaran, Vinodkumar and
Jurafsky, Dan and
Tsvetkov, Yulia",
booktitle = "Proceedings of the Eleventh International Conference on Language Resources and Evaluation ({LREC} 2018)",
month = may,
year = "2018",
address = "Miyazaki, Japan",
publisher = "European Language Resources Association (ELRA)",
url = "https://www.aclweb.org/anthology/L18-1445",
} | RtGender is a corpus for studying responses to gender online, including posts and responses from Facebook, TED, Fitocracy, and Reddit where the gender of the source poster/speaker is known. | false | 3 | false | peixian/rtGender | 2022-10-25T09:54:24.000Z | null | false | 74ef139a2d70372a878e406056ff37b1f0d561a5 | [] | [
"annotations_creators:crowdsourced",
"language_creators:found",
"language:en",
"license:unknown",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:text-classification",
"task_ids:multi-label-classification"
] | https://huggingface.co/datasets/peixian/rtGender/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-label-classification
---
# Dataset Card for rtGender
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
RtGender is a corpus for studying responses to gender online, including posts and responses from Facebook, TED, Fitocracy, and Reddit where the gender of the source poster/speaker is known.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
- `source`: a `string` feature.
- `op_gender`: a `string` feature.
- `post_text`: a `string` feature.
- `response_text`: a `string` feature.
- `sentiment`: a `string` feature.
- `relevance`: a `string` feature.
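Given these fields, a basic analysis is to tally response sentiment per poster gender. The rows below are invented placeholders that only reuse the field names listed above:

```python
from collections import defaultdict, Counter

# Sketch: tally response sentiment per original-poster gender, using the
# field names listed above. The example rows are invented placeholders.
rows = [
    {"source": "Facebook", "op_gender": "W", "post_text": "...",
     "response_text": "...", "sentiment": "Positive", "relevance": "ContentRelevant"},
    {"source": "Facebook", "op_gender": "M", "post_text": "...",
     "response_text": "...", "sentiment": "Negative", "relevance": "ContentRelevant"},
]

sentiment_by_gender = defaultdict(Counter)
for row in rows:
    sentiment_by_gender[row["op_gender"]][row["sentiment"]] += 1

print(sentiment_by_gender["W"]["Positive"])  # 1
```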
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] |
persiannlp | null | @article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
year={2020}
journal = {arXiv e-prints},
eprint = {2012.06154},
} | A Persian textual entailment task (deciding `sent1` entails `sent2`). | false | 82 | false | persiannlp/parsinlu_entailment | 2022-10-22T15:13:00.000Z | null | false | c49b2d8fa0d6476520695c52207690b7ec854043 | [] | [
"arxiv:2012.06154",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:fa",
"license:cc-by-nc-sa-4.0",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|translated|mnli",
"task_ids:textual-entailment",
"task_ids:natural-langu... | https://huggingface.co/datasets/persiannlp/parsinlu_entailment/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- fa
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|translated|mnli
task_categories:
- textual-entailment
- natural-language-inference
task_ids:
- textual-entailment
- natural-language-inference
---
# Dataset Card for PersiNLU (Textual Entailment)
## Table of Contents
- [Dataset Card for PersiNLU (Textual Entailment)](#dataset-card-for-persi_nlu_entailment)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/persiannlp/parsinlu/)
- **Repository:** [Github](https://github.com/persiannlp/parsinlu/)
- **Paper:** [Arxiv](https://arxiv.org/abs/2012.06154)
- **Leaderboard:**
- **Point of Contact:** d.khashabi@gmail.com
### Dataset Summary
A Persian textual entailment task (deciding `sent1` entails `sent2`).
The questions are partially translated from the SNLI dataset and partially generated by expert annotators.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text dataset is in Persian (`fa`).
## Dataset Structure
### Data Instances
Here is an example from the dataset:
```json
{
"sent1": "سالها است که کنگره در تلاش است تا اثربخشی مدیریت اطلاعات و فناوری را در دولت فدرال افزایش دهد.",
"sent2": "کنگره بودجه ویژه ای برای مدیریت اطلاعات و فناوری در دولت فدرال دارد.",
"label": "n",
"category": "translation-train"
}
```
### Data Fields
- `sent1`: the first sentence.
- `sent2`: the second sentence.
- `source`: whether the questions are translated from MNLI (`translation-.`) or they're written by native speakers (`natural-.`).
- `label`: `e` if `sent2` is entailed from `sent1`; `c` if `sent2` is contradictory to `sent1`; `n` if the two sentences are neutral.
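The single-letter labels can be expanded into full NLI class names and the two sentences paired as premise/hypothesis. The mapping follows the field description above; the sentence strings here are shortened placeholders:

```python
# Sketch: expand ParsiNLU's single-letter entailment labels into full NLI
# class names and pair the sentences as (premise, hypothesis), following
# the field description above.
LABEL_NAMES = {"e": "entailment", "c": "contradiction", "n": "neutral"}

def to_nli(example):
    return {
        "premise": example["sent1"],
        "hypothesis": example["sent2"],
        "label": LABEL_NAMES[example["label"]],
    }

sample = {"sent1": "...", "sent2": "...", "label": "n", "category": "translation-train"}
print(to_nli(sample)["label"])  # neutral
```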
### Data Splits
The train/dev/test splits contain 756/271/1751 samples.
## Dataset Creation
### Curation Rationale
For details, check [the corresponding draft](https://arxiv.org/abs/2012.06154).
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY-NC-SA 4.0 License
### Citation Information
```bibtex
@article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
year={2020}
journal = {arXiv e-prints},
eprint = {2012.06154},
}
```
### Contributions
Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset.
|
persiannlp | null | @article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
year={2020}
journal = {arXiv e-prints},
eprint = {2012.06154},
} | A Persian query paraphrasing task (paraphrase or not, given two questions).
The questions are partly mined using Google auto-complete, and partly translated from Quora paraphrasing dataset. | false | 15 | false | persiannlp/parsinlu_query_paraphrasing | 2022-10-22T15:13:22.000Z | null | false | ec675bb3ac50c1a52317c101fe1d724b4601f47a | [] | [
"arxiv:2012.06154",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:fa",
"license:cc-by-nc-sa-4.0",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|quora|google",
"task_ids:query-paraphrasing"
] | https://huggingface.co/datasets/persiannlp/parsinlu_query_paraphrasing/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- fa
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|quora|google
task_categories:
- query-paraphrasing
task_ids:
- query-paraphrasing
---
# Dataset Card for PersiNLU (Query Paraphrasing)
## Table of Contents
- [Dataset Card for PersiNLU (Query Paraphrasing)](#dataset-card-for-persi_nlu_query_paraphrasing)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/persiannlp/parsinlu/)
- **Repository:** [Github](https://github.com/persiannlp/parsinlu/)
- **Paper:** [Arxiv](https://arxiv.org/abs/2012.06154)
- **Leaderboard:**
- **Point of Contact:** d.khashabi@gmail.com
### Dataset Summary
A Persian query paraphrasing task (deciding whether two questions are paraphrases of each other).
The questions are partially generated from Google auto-complete, and partially translated from the Quora paraphrasing dataset.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text dataset is in Persian (`fa`).
## Dataset Structure
### Data Instances
Here is an example from the dataset:
```json
{
"q1": "اعمال حج تمتع از چه روزی شروع میشود؟",
"q2": "ویار از چه روزی شروع میشود؟",
"label": "0",
"category": "natural"
}
```
### Data Fields
- `q1`: the first question.
- `q2`: the second question.
- `category`: whether the questions are mined from Quora (`qqp`) or they're extracted from Google auto-complete (`natural`).
- `label`: `1` if the questions are paraphrases; `0` otherwise.
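Since the `category` field records how each pair was sourced, instances can be split by origin (`natural` = mined from Google auto-complete, `qqp` = translated from Quora), following the field description above. The question strings here are shortened placeholders:

```python
# Sketch: partition query-paraphrasing instances by their `category` field,
# following the field description above. Question strings are placeholders.
examples = [
    {"q1": "...", "q2": "...", "label": "0", "category": "natural"},
    {"q1": "...", "q2": "...", "label": "1", "category": "qqp"},
]

by_source = {"natural": [], "qqp": []}
for ex in examples:
    by_source[ex["category"]].append(ex)

print(len(by_source["natural"]), len(by_source["qqp"]))  # 1 1
```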
### Data Splits
The train/dev/test splits contain 1830/898/1916 samples.
## Dataset Creation
### Curation Rationale
For details, check [the corresponding draft](https://arxiv.org/abs/2012.06154).
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY-NC-SA 4.0 License
### Citation Information
```bibtex
@article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
year={2020}
journal = {arXiv e-prints},
eprint = {2012.06154},
}
```
### Contributions
Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset.
|
persiannlp | null | @article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
year={2020}
journal = {arXiv e-prints},
eprint = {2012.06154},
} | A Persian reading comprehension task (generating an answer, given a question and a context paragraph).
The questions are mined using Google auto-complete, their answers and the corresponding evidence documents are manually annotated by native speakers. | false | 10 | false | persiannlp/parsinlu_reading_comprehension | 2022-10-25T09:54:26.000Z | null | false | 701cb4096c7e12695123c254f757ed56b12c49b8 | [] | [
"arxiv:2012.06154",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:fa",
"license:cc-by-nc-sa-4.0",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|wikipedia|google",
"task_categories:question-answering",
"task_ids:extra... | https://huggingface.co/datasets/persiannlp/parsinlu_reading_comprehension/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- fa
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|wikipedia|google
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for PersiNLU (Reading Comprehension)
## Table of Contents
- [Dataset Card for PersiNLU (Reading Comprehension)](#dataset-card-for-persi_nlu_reading_comprehension)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/persiannlp/parsinlu/)
- **Repository:** [Github](https://github.com/persiannlp/parsinlu/)
- **Paper:** [Arxiv](https://arxiv.org/abs/2012.06154)
- **Leaderboard:**
- **Point of Contact:** d.khashabi@gmail.com
### Dataset Summary
A Persian reading comprehension task (generating an answer, given a question and a context paragraph).
The questions are mined using Google auto-complete, their answers and the corresponding evidence documents are manually annotated by native speakers.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text dataset is in Persian (`fa`).
## Dataset Structure
### Data Instances
Here is an example from the dataset:
```
{
'question': 'پیامبر در چه سالی به پیامبری رسید؟',
'url': 'https://fa.wikipedia.org/wiki/%D9%85%D8%AD%D9%85%D8%AF',
'passage': 'محمد که از روش زندگی مردم مکه ناخشنود بود، گهگاه در غار حرا در یکی از کوه\u200cهای اطراف آن دیار به تفکر و عبادت می\u200cپرداخت. به باور مسلمانان، محمد در همین مکان و در حدود ۴۰ سالگی از طرف خدا به پیامبری برگزیده، و وحی بر او فروفرستاده شد. در نظر آنان، دعوت محمد همانند دعوت دیگر پیامبرانِ کیش یکتاپرستی مبنی بر این بود که خداوند (الله) یکتاست و تسلیم شدن برابر خدا راه رسیدن به اوست.',
'answers': [
{'answer_start': 160, 'answer_text': 'حدود ۴۰ سالگی'}
]
}
```
### Data Fields
- `question`: the question, mined using Google auto-complete.
- `passage`: the passage that contains the answer.
- `url`: the url from which the passage was mined.
- `answers`: a list of answers, containing the string and the index of the answer.
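These fields follow SQuAD's character-offset convention. As a minimal, hypothetical sketch (the helper name and the toy English record below are ours, not part of the dataset), a record can be validated and reshaped into SQuAD-style columns:

```python
def to_squad_format(record):
    """Convert a ParsiNLU-style record into a SQuAD-style dict,
    checking that each answer's character offset points at its text."""
    for ans in record["answers"]:
        start, text = ans["answer_start"], ans["answer_text"]
        span = record["passage"][start:start + len(text)]
        if span != text:
            raise ValueError(f"offset mismatch: {span!r} != {text!r}")
    return {
        "context": record["passage"],
        "question": record["question"],
        "answers": {
            "text": [a["answer_text"] for a in record["answers"]],
            "answer_start": [a["answer_start"] for a in record["answers"]],
        },
    }

# Toy English record, for illustration only:
toy = {
    "question": "Where is the cat?",
    "passage": "The cat sat on the mat.",
    "answers": [{"answer_start": 19, "answer_text": "mat"}],
}
print(to_squad_format(toy)["answers"]["text"])  # ['mat']
```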
### Data Splits
The train/test split contains 600/575 samples.
## Dataset Creation
### Curation Rationale
The questions were collected via Google auto-complete.
The answers were annotated by native speakers.
For more details, check [the corresponding draft](https://arxiv.org/abs/2012.06154).
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY-NC-SA 4.0 License
### Citation Information
```bibtex
@article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
year={2020}
journal = {arXiv e-prints},
eprint = {2012.06154},
}
```
### Contributions
Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset.
|
persiannlp | null | @article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
year={2020}
journal = {arXiv e-prints},
eprint = {2012.06154},
} | A Persian sentiment analysis task (deciding whether a given sentence contains a particular sentiment). | false | 22 | false | persiannlp/parsinlu_sentiment | 2022-10-22T15:13:40.000Z | null | false | abecf6a01a45174b7aa9b861fcc4a586cc4c7f9d | [] | [
"arxiv:2012.06154",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:fa",
"license:cc-by-nc-sa-4.0",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|translated|mnli",
"task_ids:sentiment-analysis"
] | https://huggingface.co/datasets/persiannlp/parsinlu_sentiment/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- fa
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|translated|mnli
task_categories:
- sentiment-analysis
task_ids:
- sentiment-analysis
---
# Dataset Card for ParsiNLU (Sentiment Analysis)
## Table of Contents
- [Dataset Card for ParsiNLU (Sentiment Analysis)](#dataset-card-for-persi_sentiment)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/persiannlp/parsinlu/)
- **Repository:** [Github](https://github.com/persiannlp/parsinlu/)
- **Paper:** [Arxiv](https://arxiv.org/abs/2012.06154)
- **Leaderboard:**
- **Point of Contact:** d.khashabi@gmail.com
### Dataset Summary
A Persian sentiment analysis dataset.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text dataset is in Persian (`fa`).
## Dataset Structure
### Data Instances
Here is an example from the dataset:
```json
{
"review": "خوب بود ولی خیلی گرون شده دیگه...فک نکنم به این قیمت ارزش خرید داشته باشد",
"review_id": "1538",
"example_id": "4",
"excel_id": "food_194",
"question": "نظر شما در مورد بسته بندی و نگهداری این حلوا شکری، ارده و کنجد چیست؟",
"category": "حلوا شکری، ارده و کنجد",
"aspect": "بسته بندی",
"label": "-3",
"guid": "food-dev-r1538-e4"
}
```
### Data Fields
- `review`: the review text.
- `review_id`: a unique id associated with the review.
- `example_id`: a unique id associated with a particular attribute being addressed about the review.
- `question`: a natural language question about a particular attribute.
- `category`: the subject discussed in the review.
- `aspect`: the aspect mentioned in the input question.
- `label`: the overall sentiment towards this particular subject, in the context of the mentioned aspect. Here are the definition of the labels:
```
'-3': 'no sentiment expressed',
'-2': 'very negative',
'-1': 'negative',
'0': 'neutral',
'1': 'positive',
'2': 'very positive',
'3': 'mixed',
```
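A small lookup decodes these codes (the label strings are copied verbatim from the list above; the helper itself is illustrative, not part of the dataset tooling):

```python
# Label codes as documented in the card (keys are strings, as in the data).
SENTIMENT_LABELS = {
    "-3": "no sentiment expressed",
    "-2": "very negative",
    "-1": "negative",
    "0": "neutral",
    "1": "positive",
    "2": "very positive",
    "3": "mixed",
}

def decode_label(example):
    """Return the human-readable sentiment for one example dict."""
    return SENTIMENT_LABELS[example["label"]]

print(decode_label({"label": "-3"}))  # no sentiment expressed
```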
### Data Splits
See the data.
## Dataset Creation
### Curation Rationale
For details, check [the corresponding draft](https://arxiv.org/abs/2012.06154).
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY-NC-SA 4.0 License
### Citation Information
```bibtex
@article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
year={2020}
journal = {arXiv e-prints},
eprint = {2012.06154},
}
```
### Contributions
Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset.
|
persiannlp | null | @article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
year={2020}
journal = {arXiv e-prints},
eprint = {2012.06154},
} | A Persian translation dataset (English -> Persian). | false | 63 | false | persiannlp/parsinlu_translation_en_fa | 2022-10-24T16:50:37.000Z | null | false | aac51e2d1d2d464c7c0a123ffbe66c43fb30c8e7 | [] | [
"arxiv:2012.06154",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:fa",
"license:cc-by-nc-sa-4.0",
"multilinguality:fa",
"multilinguality:en",
"size_categories:1K<n<10K",
"source_datasets:extended",
"task_categories:translation",
"task_ids:translation"
] | https://huggingface.co/datasets/persiannlp/parsinlu_translation_en_fa/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- fa
license:
- cc-by-nc-sa-4.0
multilinguality:
- fa
- en
size_categories:
- 1K<n<10K
source_datasets:
- extended
task_categories:
- translation
task_ids:
- translation
---
# Dataset Card for ParsiNLU (Machine Translation)
## Table of Contents
- [Dataset Card for ParsiNLU (Machine Translation)](#dataset-card-for-persi_nlu_machine_translation)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/persiannlp/parsinlu/)
- **Repository:** [Github](https://github.com/persiannlp/parsinlu/)
- **Paper:** [Arxiv](https://arxiv.org/abs/2012.06154)
- **Leaderboard:**
- **Point of Contact:** d.khashabi@gmail.com
### Dataset Summary
A Persian translation dataset (English -> Persian).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text dataset is in Persian (`fa`) and English (`en`).
## Dataset Structure
### Data Instances
Here is an example from the dataset:
```json
{
"source": "how toil to raise funds, propagate reforms, initiate institutions!",
"targets": ["چه زحمتها که بکشد تا منابع مالی را تامین کند اصطلاحات را ترویج کند نهادهایی به راه اندازد."],
"category": "mizan_dev_en_fa"
}
```
### Data Fields
- `source`: the input sentences, in English.
- `targets`: the list of gold target translations in Persian.
- `category`: the source from which the dataset is mined.
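Since `targets` is a list (one source sentence may carry several gold translations), tooling that expects one (source, reference) pair at a time must flatten the records first. A hedged sketch with made-up toy records:

```python
def flatten_references(records):
    """Yield one (source, reference) pair per gold translation."""
    for rec in records:
        for tgt in rec["targets"]:
            yield rec["source"], tgt

# Toy records, for illustration only:
records = [
    {"source": "hello", "targets": ["salam", "dorood"], "category": "toy"},
    {"source": "bye", "targets": ["khodahafez"], "category": "toy"},
]
pairs = list(flatten_references(records))
print(len(pairs))  # 3
```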
### Data Splits
The train/dev/test split contains 1,621,666/2,138/48,360 samples.
## Dataset Creation
### Curation Rationale
For details, check [the corresponding draft](https://arxiv.org/abs/2012.06154).
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY-NC-SA 4.0 License
### Citation Information
```bibtex
@article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
year={2020}
journal = {arXiv e-prints},
eprint = {2012.06154},
}
```
### Contributions
Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset.
|
persiannlp | null | @article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
year={2020}
journal = {arXiv e-prints},
eprint = {2012.06154},
} | A Persian translation dataset (Persian -> English). | false | 2 | false | persiannlp/parsinlu_translation_fa_en | 2022-10-24T17:01:27.000Z | null | false | a22208a3da5b794d4d5d472942327ca17ca0e806 | [] | [
"arxiv:2012.06154",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:fa",
"license:cc-by-nc-sa-4.0",
"multilinguality:fa",
"multilinguality:en",
"size_categories:1K<n<10K",
"source_datasets:extended",
"task_categories:translation",
"task_ids:translation"
] | https://huggingface.co/datasets/persiannlp/parsinlu_translation_fa_en/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- fa
license:
- cc-by-nc-sa-4.0
multilinguality:
- fa
- en
size_categories:
- 1K<n<10K
source_datasets:
- extended
task_categories:
- translation
task_ids:
- translation
---
# Dataset Card for ParsiNLU (Machine Translation)
## Table of Contents
- [Dataset Card for ParsiNLU (Machine Translation)](#dataset-card-for-persi_nlu_machine_translation)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/persiannlp/parsinlu/)
- **Repository:** [Github](https://github.com/persiannlp/parsinlu/)
- **Paper:** [Arxiv](https://arxiv.org/abs/2012.06154)
- **Leaderboard:**
- **Point of Contact:** d.khashabi@gmail.com
### Dataset Summary
A Persian translation dataset (Persian -> English).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text dataset is in Persian (`fa`) and English (`en`).
## Dataset Structure
### Data Instances
Here is an example from the dataset:
```json
{
"source": "چه زحمتها که بکشد تا منابع مالی را تامین کند اصطلاحات را ترویج کند نهادهایی به راه اندازد.",
"targets": ["how toil to raise funds, propagate reforms, initiate institutions!"],
"category": "mizan_dev_en_fa"
}
```
### Data Fields
- `source`: the input sentences, in Persian.
- `targets`: the list of gold target translations in English.
- `category`: the source from which the example is mined.
### Data Splits
The train/dev/test split contains 1,622,281/2,138/47,745 samples.
## Dataset Creation
### Curation Rationale
For details, check [the corresponding draft](https://arxiv.org/abs/2012.06154).
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY-NC-SA 4.0 License
### Citation Information
```bibtex
@article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
year={2020}
journal = {arXiv e-prints},
eprint = {2012.06154},
}
```
### Contributions
Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset.
|
peterhsu | null | null | null | false | 1 | false | peterhsu/github-issues | 2022-01-07T09:16:29.000Z | null | false | 4996d1a68fc5c56f6b888180d6f4a7d98a0cd5e2 | [] | [] | https://huggingface.co/datasets/peterhsu/github-issues/resolve/main/README.md | annotations_creators:
- no-annotation
language_creators:
- found
languages:
- en
licenses:
- unknown
multilinguality:
- monolingual
pretty_name: Practice
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-classification
- text-retrieval
task_ids:
- multi-class-classification
- multi-label-classification
- document-retrieval
# Dataset Card for [Needs More Information]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/peterhsu/
- **Repository:** github-issues
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
For Practice
### Supported Tasks and Leaderboards
Classification
### Languages
en
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
train
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] |
phongdtd | null | null | \ | false | 1 | false | phongdtd/VinDataVLSP | 2022-01-26T06:49:13.000Z | null | false | 8a731c1701fe9261accecdeee010c82202e7ef40 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/phongdtd/VinDataVLSP/resolve/main/README.md | ---
license: apache-2.0
---
|
phongdtd | null | null | \ | false | 1 | false | phongdtd/youtube_casual_audio | 2022-11-01T13:23:24.000Z | null | false | 22ddd6021c9c6cae167842867026230685ce3973 | [] | [
"multilinguality:190K<n<200K",
"source_datasets:extended|youtube",
"task_categories:automatic-speech-recognition",
"Pretty_name:Youtube Casual Audio",
"Annotations_creators:crowdsourced",
"Language_creators:datlq",
"Languages:vi",
"Licenses:cc0-1.0"
] | https://huggingface.co/datasets/phongdtd/youtube_casual_audio/resolve/main/README.md | ---
multilinguality:
vi:
- 190K<n<200K
source_datasets:
- extended|youtube
task_categories:
- automatic-speech-recognition
task_ids: []
Pretty_name: Youtube Casual Audio
Annotations_creators:
- crowdsourced
Language_creators:
- datlq
Languages:
- vi
Licenses:
- cc0-1.0
---
# Dataset Card for Youtube Casual Audio
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
[Needs More Information]
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Vietnamese
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file (`file_path`), its transcript (`script`), and the decoded audio (`audio`).
```
{
    'file_path': 'audio/_1OsFqkFI38_34.304_39.424.wav',
    'script': 'Ik vind dat een dubieuze procedure.',
    'audio': {'path': 'audio/_1OsFqkFI38_34.304_39.424.wav',
              'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
              'sampling_rate': 16000}
}
```
### Data Fields
- `file_path`: The path to the audio file.
- `audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- `script`: The sentence the user was prompted to speak.
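The `sampling_rate` of 16000 shown above can be illustrated without the dataset itself: the sketch below (standard-library `wave` only; the file name is arbitrary) writes a short 16 kHz WAV file and reads its header back:

```python
import struct
import wave

# Write a 0.1-second silent 16 kHz mono 16-bit WAV, then inspect its header.
with wave.open("toy.wav", "wb") as w:
    w.setnchannels(1)        # mono
    w.setsampwidth(2)        # 16-bit samples
    w.setframerate(16000)    # 16 kHz, as stated in the card
    w.writeframes(struct.pack("<h", 0) * 1600)  # 1600 zero-valued frames

with wave.open("toy.wav", "rb") as r:
    rate = r.getframerate()
    n_frames = r.getnframes()

print(rate, n_frames)  # 16000 1600
```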
### Data Splits
The speech material has been subdivided into train, validation, and test portions.
All three splits contain only data that has been reviewed and deemed of high quality.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information]
### Contributions
Thanks to [@datlq](https://github.com/datlq98) for adding this dataset.
|
pierreguillou | null | null | null | false | 17 | false | pierreguillou/lener_br_finetuning_language_model | 2022-10-25T09:54:32.000Z | lener-br | false | 59d44d489b64b128c388a5f27c4fa66dd6c3a080 | [] | [
"language:pt",
"multilinguality:monolingual",
"task_ids:language-modeling",
"datasets:lener_br",
"tags:lener_br"
] | https://huggingface.co/datasets/pierreguillou/lener_br_finetuning_language_model/resolve/main/README.md | ---
language:
- pt
multilinguality:
- monolingual
task_ids:
- language-modeling
paperswithcode_id: lener-br
pretty_name: LeNER-Br language modeling
datasets:
- lener_br
tags:
- lener_br
---
# Dataset Card for "LeNER-Br language modeling"
## Dataset Summary
The LeNER-Br language modeling dataset is a collection of legal texts in Portuguese from the [LeNER-Br](https://huggingface.co/datasets/lener_br) dataset ([official site](https://cic.unb.br/~teodecampos/LeNER-Br/)).
The legal texts were downloaded from this [link](https://cic.unb.br/~teodecampos/LeNER-Br/LeNER-Br.zip) (93.6MB) and processed to create a `DatasetDict` with train and validation dataset (20%).
The LeNER-Br language modeling dataset allows the finetuning of language models as BERTimbau [base](https://huggingface.co/neuralmind/bert-base-portuguese-cased) and [large](https://huggingface.co/neuralmind/bert-large-portuguese-cased).
## Language
Portuguese from Brazil.
## Blog post
[NLP | Modelos e Web App para Reconhecimento de Entidade Nomeada (NER) no domínio jurídico brasileiro](https://medium.com/@pierre_guillou/nlp-modelos-e-web-app-para-reconhecimento-de-entidade-nomeada-ner-no-dom%C3%ADnio-jur%C3%ADdico-b658db55edfb) (29/12/2021)
## Dataset structure
```
DatasetDict({
validation: Dataset({
features: ['text'],
num_rows: 3813
})
train: Dataset({
features: ['text'],
num_rows: 15252
})
})
```
## Use
```
!pip install datasets
from datasets import load_dataset
dataset = load_dataset("pierreguillou/lener_br_finetuning_language_model")
``` |
pierresi | null | @article{park2019cord,
title={CORD: A Consolidated Receipt Dataset for Post-OCR Parsing},
author={Park, Seunghyun and Shin, Seung and Lee, Bado and Lee, Junyeop and Surh, Jaeheung and Seo, Minjoon and Lee, Hwalsuk}
booktitle={Document Intelligence Workshop at Neural Information Processing Systems}
year={2019}
} | https://github.com/clovaai/cord/ | false | 1 | false | pierresi/cord | 2021-10-13T16:47:07.000Z | null | false | d15dadc66c01f73d66f8b9947ebfc7db06cbb38e | [] | [] | https://huggingface.co/datasets/pierresi/cord/resolve/main/README.md | CORD: A Consolidated Receipt Dataset for Post-OCR Parsing. |
pietrolesci | null | @inproceedings{Zhang2015CharacterlevelCN,
title={Character-level Convolutional Networks for Text Classification},
author={Xiang Zhang and Junbo Jake Zhao and Yann LeCun},
booktitle={NIPS},
year={2015}
} | AG is a collection of more than 1 million news articles. News articles have been
gathered from more than 2000 news sources by ComeToMyHead in more than 1 year of
activity. ComeToMyHead is an academic news search engine which has been running
since July, 2004. The dataset is provided by the academic comunity for research
purposes in data mining (clustering, classification, etc), information retrieval
(ranking, search, etc), xml, data compression, data streaming, and any other
non-commercial activity. For more information, please refer to the link
http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html .
The AG's news topic classification dataset is constructed by Xiang Zhang
(xiang.zhang@nyu.edu) from the dataset above. It is used as a text
classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann
LeCun. Character-level Convolutional Networks for Text Classification. Advances
in Neural Information Processing Systems 28 (NIPS 2015). | false | 101 | false | pietrolesci/ag_news | 2022-10-08T13:03:42.000Z | ag-news | false | 00422e4eaf5c0265df0a3b5cbf9ebbac364958e7 | [] | [
"annotations_creators:found",
"language_creators:found",
"language:en",
"license:unknown",
"multilingualism:monolingual",
"size_categories:100K<n<1M",
"source_datasets:ag_news",
"task_categories:text-classification",
"task_ids:topic-classification"
] | https://huggingface.co/datasets/pietrolesci/ag_news/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilingualism:
- monolingual
paperswithcode_id: ag-news
pretty_name: ag_news
size_categories:
- 100K<n<1M
source_datasets:
- ag_news
task_categories:
- text-classification
task_ids:
- topic-classification
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
AG is a collection of more than 1 million news articles. News articles have been
gathered from more than 2000 news sources by ComeToMyHead in more than 1 year of
activity. ComeToMyHead is an academic news search engine which has been running
since July 2004. The dataset is provided by the academic community for research
purposes in data mining (clustering, classification, etc), information retrieval
(ranking, search, etc), xml, data compression, data streaming, and any other
non-commercial activity. For more information, please refer to the link
http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html .
The AG's news topic classification dataset is constructed by Xiang Zhang
(xiang.zhang@nyu.edu) from the dataset above. It is used as a text
classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann
LeCun. Character-level Convolutional Networks for Text Classification. Advances
in Neural Information Processing Systems 28 (NIPS 2015).
### Supported Tasks and Leaderboards
Text classification.
### Languages
English.
## Dataset Structure
We show detailed information for the two available configurations of the dataset.
### Data Instances
The two configurations share the following information:
- **Size of downloaded dataset files**: 29.88 MB
- **Size of the generated dataset**: 30.23 MB
- **Total amount of disk used**: 60.10 MB
### Data Fields
**concat**
- text: a string feature.
- label: a classification label, with possible values including World (0), Sports (1), Business (2), Sci/Tech (3).
```
{
'text': 'Putin Heads for Turkey in Landmark Visit Between Former Foes Russian President Vladimir Putin is making a two-day official visit to Turkey, the first by any Russian leader in 32 years. Mr. Putin is expected to sign several economic cooperation agreements ',
'label': 0
}
```
**original**
- title: a string feature.
- description: a string feature.
- label: an integer label as in the original source (not zero-indexed).
- class: a string feature, with possible values including World, Sports, Business, Sci/Tech.
```
{
'description': 'Russian President Vladimir Putin is making a two-day official visit to Turkey, the first by any Russian leader in 32 years. Mr. Putin is expected to sign several economic cooperation agreements ',
'label': 1,
'class': 'World',
'title': 'Putin Heads for Turkey in Landmark Visit Between Former Foes'
}
```
### Data Splits
The two configurations share the following information:
- **train**: 120000
- **test**: 7600
## Dataset Creation
### Curation Rationale
This specific instance of the dataset contains two configurations:
1. `concat`, which can be loaded using `load_dataset("pietrolesci/ag_news", name="concat")` and contains the original dataset as available [here](https://huggingface.co/datasets/ag_news). This configuration concatenates the text fields ("title" and "description") and encodes the four classes numerically with the following mapping: `{0: "World", 1: "Sports", 2: "Business", 3: "Sci/Tech"}`. Notably, in this specific instance, the `text` field is preprocessed by replacing `\\` with `" "` and stripping whitespace.
2. `original`, which can be loaded using `load_dataset("pietrolesci/ag_news", name="original")` and contains the dataset as available from the [source](https://drive.google.com/drive/u/0/folders/0Bz8a_Dbh9Qhbfll6bVpmNUtUcFdjYmF2SEpmZUZUcVNiMUw1TWN6RDV3a0JHT3kxLVhVR2M?resourcekey=0-TLwzfR2O-D2aPitmn5o9VQ). It matches the version [here](https://huggingface.co/datasets/ag_news) except that the text fields are not concatenated, the `label` field is not zero-indexed (it follows the original), and there is an additional `class` field reporting the original label. The text fields are likewise preprocessed by replacing `\\` with `" "` and stripping whitespace.
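The two configurations above can be loaded as follows. This is a minimal sketch, assuming the `datasets` library is installed; the label mapping is the one stated in the card for the `concat` configuration.

```python
# Label encoding used by the "concat" configuration, as documented above.
ID2LABEL = {0: "World", 1: "Sports", 2: "Business", 3: "Sci/Tech"}


def label_name(label_id: int) -> str:
    # Map the zero-indexed integer label of "concat" back to its class name.
    return ID2LABEL[label_id]


def load_config(name: str = "concat", split: str = "train"):
    # Imported lazily so the helpers above work without the dependency installed.
    from datasets import load_dataset

    # Downloads the requested configuration on first use.
    return load_dataset("pietrolesci/ag_news", name=name, split=split)
```

For example, `label_name(3)` maps back to `"Sci/Tech"`.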
### Source Data
The original data source for this dataset can be found [here](https://drive.google.com/drive/u/0/folders/0Bz8a_Dbh9Qhbfll6bVpmNUtUcFdjYmF2SEpmZUZUcVNiMUw1TWN6RDV3a0JHT3kxLVhVR2M?resourcekey=0-TLwzfR2O-D2aPitmn5o9VQ).
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Original split curated by [Pietro Lesci](https://github.com/pietrolesci).
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{Zhang2015CharacterlevelCN,
title={Character-level Convolutional Networks for Text Classification},
author={Xiang Zhang and Junbo Jake Zhao and Yann LeCun},
booktitle={NIPS},
year={2015}
}
```
|
pile-of-law | null | @misc{hendersonkrass2022pileoflaw,
url = {https://arxiv.org/abs/2207.00220},
author = {Henderson, Peter and Krass, Mark S. and Zheng, Lucia and Guha, Neel and Manning, Christopher D. and Jurafsky, Dan and Ho, Daniel E.},
title = {Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset},
publisher = {arXiv},
year = {2022}
} | We curate a large corpus of legal and administrative data. The utility of this data is twofold: (1) to aggregate legal and administrative data sources that demonstrate different norms and legal standards for data filtering; (2) to collect a dataset that can be used in the future for pretraining legal-domain language models, a key direction in access-to-justice initiatives. | false | 1,485 | false | pile-of-law/pile-of-law | 2022-07-14T06:13:16.000Z | null | false | 4232278db7c57ef28ad2ee1a667e646b0f308fb5 | [] | [
"arxiv:2207.00220",
"annotations_creators:no-annotation",
"language_creators:found",
"language:en",
"license:cc-by-nc-sa-4.0",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"task_categories:fill-mask",
"task_ids:masked-language-modeling"
] | https://huggingface.co/datasets/pile-of-law/pile-of-law/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
pretty_name: pile-of-law
size_categories:
- 10M<n<100M
source_datasets: []
task_categories:
- fill-mask
task_ids:
- masked-language-modeling
viewer: false
---
# Dataset Card for Pile of Law
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/pile-of-law/pile-of-law
- **Repository:** https://huggingface.co/datasets/pile-of-law/pile-of-law
- **Paper:** https://arxiv.org/abs/2207.00220
### Dataset Summary
We curate a large corpus of legal and administrative data. The utility of this data is twofold: (1) to aggregate legal and administrative data sources that demonstrate different norms and legal standards for data filtering; (2) to collect a dataset that can be used in the future for pretraining legal-domain language models, a key direction in access-to-justice initiatives.
### Supported Tasks and Leaderboards
See paper for details.
### Languages
English
## Dataset Structure
### Data Instances
**courtListener_docket_entry_documents**: Docket entries in U.S. federal courts, including filed briefs from the CourtListener RECAP archive.
**courtListener_opinions**: U.S. court opinions from CourtListener.
**atticus_contracts**: Unannotated contracts from the Atticus Project.
**federal_register**: The U.S. federal register where agencies file draft rulemaking.
**bva_opinions**: Board of Veterans' Appeals opinions.
**us_bills**: Draft Bills from the United States Congress.
**cc_casebooks**: Educational Casebooks released under open CC licenses.
**tos**: Unannotated Terms of Service contracts.
**euro_parl**: European parliamentary debates.
**nlrb_decisions**: Decisions from the U.S. National Labor Relations Board.
**scotus_oral_arguments**: U.S. Supreme Court Oral Arguments.
**cfr**: U.S. Code of Federal Regulations.
**state_codes**: U.S. State Codes.
**scotus_filings**: Briefs and filings with the U.S. Supreme Court.
**bar_exam_outlines**: Bar exam outlines available openly on the web.
**edgar**: Contracts filed with the SEC and made available on the SEC's EDGAR tool.
**cfpb_creditcard_contracts**: Credit card contracts compiled by the U.S. Consumer Financial Protection Bureau.
**constitutions**: The world's constitutions.
**congressional_hearings**: U.S. Congressional hearing transcripts and statements.
**oig**: U.S. Office of Inspector General reports.
**olc_memos**: U.S. Office of Legal Counsel memos.
**uscode**: The United States Code (laws).
**founding_docs**: Letters from U.S. founders.
**ftc_advisory_opinions**: Advisory opinions by the Federal Trade Commission.
**echr**: European Court of Human Rights opinions.
**eurlex**: European laws.
**tax_rulings**: Rulings from the U.S. Tax Court.
**un_debates**: U.N. General Debates.
**fre**: U.S. Federal Rules of Evidence.
**frcp**: U.S. Federal Rules of Civil Procedure.
**canadian_decisions**: Canadian court opinions from ON and BC.
**eoir**: U.S. Executive Office for Immigration Review Immigration and Nationality precedential decisions.
**dol_ecab**: Department of Labor Employees' Compensation Appeals Board decisions after 2006.
**r_legaladvice**: Filtered data from the r/legaladvice and r/legaladviceofftopic subreddits, in the following format:
Title: [Post Title]
Question: [Post Content]
Topic: [Post Flair]
Answer \#[N]: [Top Answers]...
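The r_legaladvice layout shown above can be assembled with a small helper. This is a sketch only; `format_post` is a hypothetical function, not part of the dataset tooling.

```python
def format_post(title: str, question: str, topic: str, answers: list) -> str:
    # Assemble the r_legaladvice text layout shown above:
    # Title / Question / Topic, then one numbered block per top answer.
    lines = [f"Title: {title}", f"Question: {question}", f"Topic: {topic}"]
    for n, answer in enumerate(answers, start=1):
        lines.append(f"Answer #{n}: {answer}")
    return "\n".join(lines)
```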
### Data Fields
- text: the document text
- created_timestamp: If the original source provided a timestamp when the document was created we provide this as well. Note, these may be inaccurate. For example CourtListener case opinions provide the timestamp of when it was uploaded to CourtListener not when the opinion was published. We welcome pull requests to correct this field if such inaccuracies are discovered.
- downloaded_timestamp: When the document was scraped.
- url: the source url
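A loading sketch under the assumption (consistent with the sources listed above) that each source name doubles as a configuration name, plus a check for the four documented fields. This is not a definitive recipe for the dataset's tooling.

```python
# The four per-record fields documented above.
EXPECTED_FIELDS = {"text", "created_timestamp", "downloaded_timestamp", "url"}


def has_expected_fields(example: dict) -> bool:
    # True when a record carries every documented field.
    return EXPECTED_FIELDS.issubset(example)


def load_subset(name: str = "r_legaladvice", split: str = "train"):
    # Imported lazily so the helper above works without the dependency installed.
    from datasets import load_dataset

    # Each source listed above is assumed to be a configuration name.
    return load_dataset("pile-of-law/pile-of-law", name, split=split)
```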
### Data Splits
Each subset of the data has a 75%/25% train/validation split.
## Dataset Creation
### Curation Rationale
We curate a large corpus of legal and administrative data. The utility of this data is twofold: (1) to aggregate legal and administrative data sources that demonstrate different norms and legal standards for data filtering; (2) to collect a dataset that can be used in the future for pretraining legal-domain language models, a key direction in access-to-justice initiatives. As such, data sources are curated to inform: (1) legal analysis, knowledge, or understanding; (2) argument formation; (3) privacy filtering standards. Sources like codes and laws tend to inform (1). Transcripts and court filings tend to inform (2). Opinions tend to inform (1) and (3).
### Source Data
#### Initial Data Collection and Normalization
We do not normalize the data, but we provide dataset creation code and relevant urls in https://github.com/Breakend/PileOfLaw
#### Who are the source language producers?
Varied (see sources above).
### Personal and Sensitive Information
This dataset may contain personal and sensitive information. However, this has been previously filtered by the relevant government and federal agencies that weigh the harms of revealing this information against the benefits of transparency. If you encounter something particularly harmful, please file a takedown request with the upstream source and notify us in the communities tab. We will then remove the content. We cannot enable more restrictive licensing because upstream sources may restrict using a more restrictive license. However, we ask that all users of this data respect the upstream licenses and restrictions. Per the standards of CourtListener, we do not allow indexing of this data by search engines, and we ask that others refrain from doing so as well. Please do not turn on anything that allows the data to be easily indexed.
## Considerations for Using the Data
### Social Impact of Dataset
We hope that this dataset will provide more mechanisms for doing data work. As we describe in the paper, the internal variation allows contextual privacy rules to be learned. If robust mechanisms for this are developed, they can be applied more broadly. This dataset can also potentially be used for legal language model pretraining. As discussed in ``On the Opportunities and Risks of Foundation Models'', legal language models can help improve access to justice in various ways. But they can also be used in potentially harmful ways. While such models are not ready for most production environments and are the subject of significant research, we ask that model creators using this data, particularly when creating generative models, consider the impacts of their model and make a good faith effort to weigh the benefits against the harms of their method. Our license and many of the sub-licenses also restrict commercial usage.
### Discussion of Biases
The data reflects the biases of governments and courts. As we discuss in our work, these can be significant, though more recent text will likely be less overtly toxic. Please see the above statement and embark on any model uses responsibly.
### Other Known Limitations
We mainly focus on U.S. and English-speaking legal sources, though we include some European and Canadian resources.
## Additional Information
### Licensing Information
CreativeCommons Attribution-NonCommercial-ShareAlike 4.0 International. But individual sources may have other licenses. See paper for details. Some upstream data sources request that indexing be disabled. As such please **do not re-host any data in a way that can be indexed by search engines.**
### No Representations
We do not make any representation that the legal information provided here is accurate. It is meant for research purposes only. For the authoritative and updated source of information please refer directly to the governing body which provides the latest laws, rules, and regulations relevant to you.
### DMCA Takedown Requests
Pile of Law follows the notice and takedown procedures in the Digital Millennium Copyright Act (DMCA), 17 U.S.C. Section 512.
If you believe content on Pile of Law violates your copyright, please immediately notify its operators by sending a message with the information described below. Please use the subject "Copyright" in your message. If Pile of Law's operators act in response to an infringement notice, they will make a good-faith attempt to contact the person who contributed the content using the most recent email address that person provided to Pile of Law.
Under the DMCA, you may be held liable for damages based on material misrepresentations in your infringement notice. You must also make a good-faith evaluation of whether the use of your content is a fair use, because fair uses are not infringing. See 17 U.S.C. Section 107 and Lenz v. Universal Music Corp., No. 13-16106 (9th Cir. Sep. 14, 2015). If you are not sure if the content you want to report infringes your copyright, you should first contact a lawyer.
The DMCA requires that all infringement notices must include all of the following:
+ A signature of the copyright owner or a person authorized to act on the copyright owner's behalf
+ An identification of the copyright claimed to have been infringed
+ A description of the nature and location of the material that you claim to infringe your copyright, in sufficient detail to allow Pile of Law to find and positively identify that material
+ Your name, address, telephone number, and email address
+ A statement that you believe in good faith that the use of the material that you claim to infringe your copyright is not authorized by law, or by the copyright owner or such owner's agent
+ A statement, under penalty of perjury, that all of the information contained in your infringement notice is accurate
+ A statement, under penalty of perjury, that you are either the copyright owner or a person authorized to act on their behalf.
Pile of Law will respond to all DMCA-compliant infringement notices, including, as required or appropriate, by removing the offending material or disabling all links to it.
All received infringement notices may be posted in full to the Lumen database (previously known as the Chilling Effects Clearinghouse).
All takedown requests with the above information should be sent via email to pileoflaw@gmail.com.
This removal notice has been modified from the [CourtListener DMCA takedown notice](https://www.courtlistener.com/terms/).
### Citation Information
For a citation to this work:
```
@misc{hendersonkrass2022pileoflaw,
url = {https://arxiv.org/abs/2207.00220},
author = {Henderson*, Peter and Krass*, Mark S. and Zheng, Lucia and Guha, Neel and Manning, Christopher D. and Jurafsky, Dan and Ho, Daniel E.},
title = {Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset},
publisher = {arXiv},
year = {2022}
}
```
Since this dataset also includes several other data sources with citations, please refer to our paper and cite the additional relevant work in addition to our own work. |
pmc | null | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | The PMC Open Access Subset includes more than 3.4 million journal articles and preprints that are made available under
license terms that allow reuse.
Not all articles in PMC are available for text mining and other reuse; many have copyright protection. However, articles
in the PMC Open Access Subset are made available under Creative Commons or similar licenses that generally allow more
liberal redistribution and reuse than a traditional copyrighted work.
The PMC Open Access Subset is one part of the PMC Article Datasets | false | 3 | false | pmc/open_access | 2022-10-25T14:32:29.000Z | null | false | d8916155e305a52ac9de76681a09fbab29bc45d5 | [] | [
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"language:en",
"license:cc0-1.0",
"license:cc-by-4.0",
"license:cc-by-sa-4.0",
"license:cc-by-nd-4.0",
"license:cc-by-nc-4.0",
"license:cc-by-nc-sa-4.0",
"license:cc-by-nc-nd-4.0",
"license:other",
"license:unknown",
... | https://huggingface.co/datasets/pmc/open_access/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- en
license:
- cc0-1.0
- cc-by-4.0
- cc-by-sa-4.0
- cc-by-nd-4.0
- cc-by-nc-4.0
- cc-by-nc-sa-4.0
- cc-by-nc-nd-4.0
- other
- unknown
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: PMC Open Access
---
# Dataset Card for PMC Open Access Subset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [PubMed Central](mailto:pubmedcentral@ncbi.nlm.nih.gov)
### Dataset Summary
The PMC Open Access Subset includes more than 3.4 million journal articles and preprints that are made available under
license terms that allow reuse. Not all articles in PMC are available for text mining and other reuse; many have
copyright protection. However, articles in the PMC Open Access Subset are made available under Creative Commons or
similar licenses that generally allow more liberal redistribution and reuse than a traditional copyrighted work. The
PMC Open Access Subset is one part of the PMC Article Datasets.
Within the PMC Open Access Subset, there are three groupings:
- Commercial Use Allowed - CC0, CC BY, CC BY-SA, CC BY-ND licenses;
- Non-Commercial Use Only - CC BY-NC, CC BY-NC-SA, CC BY-NC-ND licenses; and
- Other - no machine-readable Creative Commons license, no license, or a custom license.
### Supported Tasks and Leaderboards
- Language modeling
### Languages
English (`en`).
## Dataset Structure
### Data Instances
```
{
'text': "==== Front\nPLoS BiolPLoS BiolpbioplosbiolPLoS Biology1544-91731545-7885Public Library of Science San Francisco, USA 10.1371/journal.pbio.0000005Research ArticleGenetics/Genomics/Gene TherapyInfectious DiseasesMicrobiologyPlasmodiumThe Transcriptome of the Intraerythrocytic Developmental Cycle of Plasmodium falciparum\n P. falciparum IDC TranscriptomeBozdech Zbynek \n1\nLlinás Manuel \n1\nPulliam Brian Lee \n1\nWong Edith D \n1\nZhu Jingchun \n2\nDeRisi Joseph L joe@derisilab.ucsf.edu\n1\n1Department of Biochemistry and Biophysics, University of California, San FranciscoSan Francisco, CaliforniaUnited States of America2Department of Biological and Medical Informatics, University of California, San FranciscoSan Francisco, CaliforniaUnited States of America10 2003 18 8 2003 18 8 2003 1 1 e512 6 2003 25 7 2003 Copyright: ©2003 Bozdech et al.2003This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited.\nMicroarray Analysis: Genome-Scale Hypothesis Scanning \n\nMonitoring Malaria: Genomic Activity of the Parasite in Human Blood Cells \n\nPlasmodium falciparum is the causative agent of the most burdensome form of human malaria, affecting 200–300 million individuals per year worldwide. The recently sequenced genome of P. falciparum revealed over 5,400 genes, of which 60% encode proteins of unknown function. Insights into the biochemical function and regulation of these genes will provide the foundation for future drug and vaccine development efforts toward eradication of this disease. By analyzing the complete asexual intraerythrocytic developmental cycle (IDC) transcriptome of the HB3 strain of P. falciparum, we demonstrate that at least 60% of the genome is transcriptionally active during this stage. 
Our data demonstrate that this parasite has evolved an extremely specialized mode of transcriptional regulation that produces a continuous cascade of gene expression, beginning with genes corresponding to general cellular processes, such as protein synthesis, and ending with Plasmodium-specific functionalities, such as genes involved in erythrocyte invasion. The data reveal that genes contiguous along the chromosomes are rarely coregulated, while transcription from the plastid genome is highly coregulated and likely polycistronic. Comparative genomic hybridization between HB3 and the reference genome strain (3D7) was used to distinguish between genes not expressed during the IDC and genes not detected because of possible sequence variations...
'pmid': '12929205',
'accession_id': 'PMC176545',
'license': 'CC BY',
'last_updated': '2021-01-05 08:21:03',
'retracted': 'no',
'citation': 'PLoS Biol. 2003 Oct 18; 1(1):e5'
}
```
### Data Fields
- `text`: Text content.
- `pmid`: PubMed ID.
- `accession_id`: Unique identifier of the article record in PMC (PMCID).
- `license`: License type.
- `last_updated`: Date of last update.
- `retracted`: Whether retracted or not.
- `citation`: Citation reference.
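The summary above groups licenses into three buckets; the sketch below filters records down to the "Commercial Use Allowed" group by matching on the `license` field (string values such as `"CC BY"`, as in the example instance). The bucket contents follow the card; the helper itself is hypothetical.

```python
# "Commercial Use Allowed" license grouping from the dataset summary.
COMMERCIAL_USE = {"CC0", "CC BY", "CC BY-SA", "CC BY-ND"}


def commercial_use_allowed(example: dict) -> bool:
    # True when the record's license falls in the commercial-use grouping.
    return example.get("license") in COMMERCIAL_USE


# Usage sketch (downloads the dataset):
#   from datasets import load_dataset
#   ds = load_dataset("pmc/open_access", split="train")
#   ds = ds.filter(commercial_use_allowed)
```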
### Data Splits
The dataset is not split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
License terms vary. Please refer to the license statement in each article for specific terms of use.
Within the PMC Open Access Subset, there are three groupings based on available license terms:
- Commercial Use Allowed - CC0, CC BY, CC BY-SA, CC BY-ND licenses;
- Non-Commercial Use Only - CC BY-NC, CC BY-NC-SA, CC BY-NC-ND licenses; and
- Other - no machine-readable Creative Commons license, no license, or a custom license.
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
|
MLCommons | null | @inproceedings{mazumder2021multilingual,
title={Multilingual Spoken Words Corpus},
author={Mazumder, Mark and Chitlangia, Sharad and Banbury, Colby and Kang, Yiping and Ciro, Juan Manuel and Achorn, Keith and Galvez, Daniel and Sabini, Mark and Mattson, Peter and Kanter, David and others},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021}
} | Multilingual Spoken Words Corpus is a large and growing audio dataset of spoken
words in 50 languages collectively spoken by over 5 billion people, for academic
research and commercial applications in keyword spotting and spoken term search,
licensed under CC-BY 4.0. The dataset contains more than 340,000 keywords,
totaling 23.4 million 1-second spoken examples (over 6,000 hours). The dataset
has many use cases, ranging from voice-enabled consumer devices to call center
automation. This dataset is generated by applying forced alignment on crowd-sourced sentence-level
audio to produce per-word timing estimates for extraction.
All alignments are included in the dataset. | false | 28 | false | MLCommons/ml_spoken_words | 2022-10-24T08:45:19.000Z | null | false | 273f583343799b0a55f39e9f3bc8fdc3c01f44a8 | [] | [
"annotations_creators:machine-generated",
"language_creators:other",
"language:ar",
"language:as",
"language:br",
"language:ca",
"language:cnh",
"language:cs",
"language:cv",
"language:cy",
"language:de",
"language:dv",
"language:el",
"language:en",
"language:eo",
"language:es",
"lan... | https://huggingface.co/datasets/MLCommons/ml_spoken_words/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language_creators:
- other
language:
- ar
- as
- br
- ca
- cnh
- cs
- cv
- cy
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fr
- fy
- ga
- gn
- ha
- ia
- id
- it
- ka
- ky
- lt
- lv
- mn
- mt
- nl
- or
- pl
- pt
- rm
- ro
- ru
- rw
- sah
- sk
- sl
- sv
- ta
- tr
- tt
- uk
- vi
- zh
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 10M<n<100M
source_datasets:
- extended|common_voice
task_categories:
- speech-processing
task_ids: []
pretty_name: Multilingual Spoken Words
language_bcp47:
- fy-NL
- ga-IE
- rm-sursilv
- rm-vallader
- sv-SE
- zh-CN
tags:
- other-keyword-spotting
---
# Dataset Card for Multilingual Spoken Words
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://mlcommons.org/en/multilingual-spoken-words/
- **Repository:** https://github.com/harvard-edge/multilingual_kws
- **Paper:** https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/fe131d7f5a6b38b23cc967316c13dae2-Paper-round2.pdf
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Multilingual Spoken Words Corpus is a large and growing audio dataset of spoken
words in 50 languages collectively spoken by over 5 billion people, for academic
research and commercial applications in keyword spotting and spoken term search,
licensed under CC-BY 4.0. The dataset contains more than 340,000 keywords,
totaling 23.4 million 1-second spoken examples (over 6,000 hours). The dataset
has many use cases, ranging from voice-enabled consumer devices to call center
automation. This dataset is generated by applying forced alignment on crowd-sourced sentence-level
audio to produce per-word timing estimates for extraction.
All alignments are included in the dataset.
Data is provided in two formats: `wav` (16KHz) and `opus` (48KHz). Default configurations look like
`"{lang}_{format}"`, so to load, for example, Tatar in wav format do:
```python
ds = load_dataset("MLCommons/ml_spoken_words", "tt_wav")
```
To download multiple languages in a single dataset, pass a list of languages to the `languages` argument:
```python
ds = load_dataset("MLCommons/ml_spoken_words", languages=["ar", "tt", "br"])
```
To download a specific format, pass it to the `format` argument (the default format is `wav`):
```python
ds = load_dataset("MLCommons/ml_spoken_words", languages=["ar", "tt", "br"], format="opus")
```
Note that a custom configuration is created for each distinct set of languages you request, so examples are
generated from scratch whenever that set changes, even if you already requested one or several of those languages
before (the underlying data is **not** redownloaded, though).
### Supported Tasks and Leaderboards
Keyword spotting, Spoken term search
### Languages
The dataset is multilingual. To specify several languages to download pass a list of them to the
`languages` argument:
```python
ds = load_dataset("MLCommons/ml_spoken_words", languages=["ar", "tt", "br"])
```
The dataset contains data for the following languages:
Low-resourced (<10 hours):
* Arabic (0.1G, 7.6h)
* Assamese (0.9M, 0.1h)
* Breton (69M, 5.6h)
* Chuvash (28M, 2.1h)
* Chinese (zh-CN) (42M, 3.1h)
* Dhivehi (0.7M, 0.04h)
* Frisian (0.1G, 9.6h)
* Georgian (20M, 1.4h)
* Guarani (0.7M, 1.3h)
* Greek (84M, 6.7h)
* Hakha Chin (26M, 0.1h)
* Hausa (90M, 1.0h)
* Interlingua (58M, 4.0h)
* Irish (38M, 3.2h)
* Latvian (51M, 4.2h)
* Lithuanian (21M, 0.46h)
* Maltese (88M, 7.3h)
* Oriya (0.7M, 0.1h)
* Romanian (59M, 4.5h)
* Sakha (42M, 3.3h)
* Slovenian (43M, 3.0h)
* Slovak (31M, 1.9h)
* Sursilvan (61M, 4.8h)
* Tamil (8.8M, 0.6h)
* Vallader (14M, 1.2h)
* Vietnamese (1.2M, 0.1h)
Medium-resourced (>10 & <100 hours):
* Czech (0.3G, 24h)
* Dutch (0.8G, 70h)
* Estonian (0.2G, 19h)
* Esperanto (1.3G, 77h)
* Indonesian (0.1G, 11h)
* Kyrgyz (0.1G, 12h)
* Mongolian (0.1G, 12h)
* Portuguese (0.7G, 58h)
* Swedish (0.1G, 12h)
* Tatar (4G, 30h)
* Turkish (1.3G, 29h)
* Ukrainian (0.2G, 18h)
High-resourced (>100 hours):
* Basque (1.7G, 118h)
* Catalan (8.7G, 615h)
* English (26G, 1957h)
* French (9.3G, 754h)
* German (14G, 1083h)
* Italian (2.2G, 155h)
* Kinyarwanda (6.1G, 422h)
* Persian (4.5G, 327h)
* Polish (1.8G, 130h)
* Russian (2.1G, 137h)
* Spanish (4.9G, 349h)
* Welsh (4.5G, 108h)
## Dataset Structure
### Data Instances
```python
{'file': 'абзар_common_voice_tt_17737010.opus',
'is_valid': True,
'language': 0,
'speaker_id': '687025afd5ce033048472754c8d2cb1cf8a617e469866bbdb3746e2bb2194202094a715906f91feb1c546893a5d835347f4869e7def2e360ace6616fb4340e38',
'gender': 0,
'keyword': 'абзар',
'audio': {'path': 'абзар_common_voice_tt_17737010.opus',
'array': array([2.03458695e-34, 2.03458695e-34, 2.03458695e-34, ...,
2.03458695e-34, 2.03458695e-34, 2.03458695e-34]),
'sampling_rate': 48000}}
```
### Data Fields
* file (str): relative audio path inside the archive
* is_valid (bool): whether the sample is valid
* language: language of an instance. Makes sense only when providing multiple languages to the
dataset loader (for example, `load_dataset("ml_spoken_words", languages=["ar", "tt"])`)
* speaker_id: unique id of a speaker. Can be "NA" if an instance is invalid
* gender: speaker gender. Can be one of `["MALE", "FEMALE", "OTHER", "NAN"]`
* keyword: word spoken in a current sample
* audio: a dictionary containing the relative path to the audio file,
the decoded audio array, and the sampling rate.
Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically
decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a
large number of audio files can take a significant amount of time, so it is important to query the
sample index before the `"audio"` column: `dataset[0]["audio"]` should always be preferred over
`dataset["audio"][0]`.
### Data Splits
The data for each language is split into train / validation / test parts.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The data comes from the Common Voice dataset.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online.
You agree to not attempt to determine the identity of speakers.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is licensed under [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/) and can be used for academic
research and commercial applications in keyword spotting and spoken term search.
### Citation Information
```
@inproceedings{mazumder2021multilingual,
title={Multilingual Spoken Words Corpus},
author={Mazumder, Mark and Chitlangia, Sharad and Banbury, Colby and Kang, Yiping and Ciro, Juan Manuel and Achorn, Keith and Galvez, Daniel and Sabini, Mark and Mattson, Peter and Kanter, David and others},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021}
}
```
### Contributions
Thanks to [@polinaeterna](https://github.com/polinaeterna) for adding this dataset.
|
pritamdeka | null | null | null | false | 2 | false | pritamdeka/cord-19-abstract | 2022-02-01T23:58:54.000Z | null | false | c2280ffaf80629ba1b1be5dad6b08b93cd395371 | [] | [] | https://huggingface.co/datasets/pritamdeka/cord-19-abstract/resolve/main/README.md | # Dataset Card for [pritamdeka/cord-19-abstract]
## Dataset Description
### Dataset Summary
This is a modified [cord19](https://huggingface.co/datasets/cord19) dataset which contains only the abstract field. This can be used directly for language modelling tasks.
### Languages
English
### Citation Information
```
@article{Wang2020CORD19TC,
title={CORD-19: The Covid-19 Open Research Dataset},
author={Lucy Lu Wang and Kyle Lo and Yoganand Chandrasekhar and Russell Reas and Jiangjiang Yang and Darrin Eide and
K. Funk and Rodney Michael Kinney and Ziyang Liu and W. Merrill and P. Mooney and D. Murdick and Devvret Rishi and
Jerry Sheehan and Zhihong Shen and B. Stilson and A. Wade and K. Wang and Christopher Wilhelm and Boya Xie and
D. Raymond and Daniel S. Weld and Oren Etzioni and Sebastian Kohlmeier},
journal={ArXiv},
year={2020}
}
```
|
pritamdeka | null | null | null | false | 1 | false | pritamdeka/cord-19-fulltext | 2022-02-05T02:29:13.000Z | null | false | 53f4c17ae8f06d9aabeae7230194e1528f9cd7aa | [] | [] | https://huggingface.co/datasets/pritamdeka/cord-19-fulltext/resolve/main/README.md | # Dataset Card for [pritamdeka/cord-19-fulltext]
## Dataset Description
### Dataset Summary
This is a modified [cord19](https://huggingface.co/datasets/cord19) dataset which contains only the fulltext field. This can be used directly for language modelling tasks.
### Languages
English
### Citation Information
```
@article{Wang2020CORD19TC,
title={CORD-19: The Covid-19 Open Research Dataset},
author={Lucy Lu Wang and Kyle Lo and Yoganand Chandrasekhar and Russell Reas and Jiangjiang Yang and Darrin Eide and
K. Funk and Rodney Michael Kinney and Ziyang Liu and W. Merrill and P. Mooney and D. Murdick and Devvret Rishi and
Jerry Sheehan and Zhihong Shen and B. Stilson and A. Wade and K. Wang and Christopher Wilhelm and Boya Xie and
D. Raymond and Daniel S. Weld and Oren Etzioni and Sebastian Kohlmeier},
journal={ArXiv},
year={2020}
}
```
|
priya3301 | null | null | null | false | 1 | false | priya3301/Graduation_admission | 2021-05-14T15:42:30.000Z | null | false | d1d759e8c2ab06e5958a2054d1987ea046f261c8 | [] | [] | https://huggingface.co/datasets/priya3301/Graduation_admission/resolve/main/README.md | |
prk | null | \
@article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
} | \
combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers
to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but
also determine when no answer is supported by the paragraph and abstain from answering. | false | 1 | false | prk/testsq | 2022-02-25T13:52:13.000Z | null | false | be260110deb051db63b66038bbb00a5ebfe996c6 | [] | [] | https://huggingface.co/datasets/prk/testsq/resolve/main/README.md | |
project2you | null | null | null | false | 1 | false | project2you/asr | 2021-12-02T08:08:08.000Z | null | false | d4ab94cacce28137358c0ad2de765e2fafa98653 | [] | [] | https://huggingface.co/datasets/project2you/asr/resolve/main/README.md | Common Voice 7
Date: 2021-07-21
Size: 5 GB
Version: th_255h_2021-07-21
Validated hours: 133
Total hours: 255
License: CC-0
Number of voices: 7,212
Audio format: MP3
|
projecte-aina | null | AnCora Catalan NER.
This is a dataset for Named Entity Recognition (NER) from the AnCora corpus, adapted for
Machine Learning and Language Model evaluation purposes.
Since multiwords (including Named Entities) in the original AnCora corpus are aggregated as
a single lexical item using underscores (e.g. "Ajuntament_de_Barcelona"),
we split them to align with the word-per-line format and added conventional Begin-Inside-Outside (IOB)
tags to mark and classify Named Entities.
We did not filter out the different categories of NEs from AnCora (weak and strong).
We did 6 minor edits by hand.
The AnCora corpus is used under the [CC-by](https://creativecommons.org/licenses/by/4.0/) licence.
This dataset was developed by BSC TeMU as part of the AINA project, and to enrich the Catalan Language Understanding Benchmark (CLUB). | false | 1 | false | projecte-aina/ancora-ca-ner | 2022-11-16T15:27:39.000Z | null | false | 4b11fdf4b81e7c03b2dfa465a283a990e4cfaa51 | [] | [
"arxiv:2107.07903",
"annotations_creators:expert-generated",
"language_creators:found",
"language:ca",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:unknown"
] | https://huggingface.co/datasets/projecte-aina/ancora-ca-ner/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ca
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: ancora-ca-ner
size_categories:
- unknown
source_datasets: []
task_categories: []
task_ids: []
---
# Dataset Card for AnCora-Ca-NER
## Dataset Description
- **Website:** https://zenodo.org/record/5036651
- **Paper:** [Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan](https://arxiv.org/abs/2107.07903)
- **Paper:** [AnCora: Multilevel Annotated Corpora for Catalan and Spanish](http://www.lrec-conf.org/proceedings/lrec2008/pdf/35_paper.pdf)
- **Point of Contact:** [Carlos Rodríguez-Penagos](carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](carme.armentano@bsc.es)
### Dataset Summary
This is a dataset for Named Entity Recognition (NER) in Catalan. It adapts <a href="http://clic.ub.edu/corpus/">AnCora corpus</a> for Machine Learning and Language Model evaluation purposes.
[AnCora corpus](http://clic.ub.edu/corpus/) is used under [CC-by](https://creativecommons.org/licenses/by/4.0/) licence.
This dataset was developed by [BSC TeMU](https://temu.bsc.es/) as part of the [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/), to enrich the [Catalan Language Understanding Benchmark (CLUB)](https://club.aina.bsc.es/).
### Supported Tasks and Leaderboards
Named Entities Recognition, Language Model
### Languages
The dataset is in Catalan (`ca-CA`).
## Dataset Structure
### Data Instances
Three two-column files, one for each split.
<pre>
Fundació B-ORG
Privada I-ORG
Fira I-ORG
de I-ORG
Manresa I-ORG
ha O
fet O
un O
balanç O
de O
l' O
activitat O
del O
Palau B-LOC
Firal I-LOC
</pre>
### Data Fields
Every file has two columns, with the word form or punctuation symbol in the first one and the corresponding IOB tag in the second one.
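For quick inspection outside of any loader, the two-column IOB files can be parsed with standard Python. This is a sketch (not part of any official tooling) that reads token/tag pairs and groups `B-`/`I-` runs into entity spans:

```python
def read_conll(lines):
    """Parse two-column token/IOB lines into parallel (tokens, tags) lists."""
    tokens, tags = [], []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # blank lines separate sentences in CoNLL-style files
        token, tag = line.split()
        tokens.append(token)
        tags.append(tag)
    return tokens, tags

def extract_entities(tokens, tags):
    """Group B-/I- runs into (entity_text, label) spans."""
    entities, current, label = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                entities.append((" ".join(current), label))
            current, label = [token], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(token)
        else:  # "O" tag, or a stray I- with no open entity
            if current:
                entities.append((" ".join(current), label))
            current, label = [], None
    if current:
        entities.append((" ".join(current), label))
    return entities

sample = [
    "Fundació B-ORG", "Privada I-ORG", "Fira I-ORG", "de I-ORG", "Manresa I-ORG",
    "ha O", "fet O", "un O", "balanç O", "de O", "l' O", "activitat O", "del O",
    "Palau B-LOC", "Firal I-LOC",
]
tokens, tags = read_conll(sample)
print(extract_entities(tokens, tags))
```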
### Data Splits
We took the original train, dev and test splits from the [UD version of the corpus](https://huggingface.co/datasets/universal_dependencies)
- train: 10,630 examples
- validation: 1,429 examples
- test: 1,528 examples
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
#### Initial Data Collection and Normalization
[AnCora](http://clic.ub.edu/corpus/) consists of a Catalan corpus (AnCora-CA) and a Spanish corpus (AnCora-ES), each of them of 500,000 tokens (some multi-word). The corpora are annotated for linguistic phenomena at different levels.
AnCora corpus is mainly based on newswire texts. For more information, refer to Taulé, M., M.A. Martí, M. Recasens (2009): <a href="http://www.lrec-conf.org/proceedings/lrec2008/pdf/35_paper.pdf">"AnCora: Multilevel Annotated Corpora for Catalan and Spanish”</a>, Proceedings of 6th International Conference on language Resources and Evaluation.
#### Who are the source language producers?
Catalan [AnCora corpus](http://clic.ub.edu/corpus/) is compiled from articles from the following news outlets: <a href="https://www.efe.com">EFE</a>, <a href="https://www.acn.cat">ACN</a>, <a href="https://www.elperiodico.cat/ca/">El Periodico</a>.
### Annotations
#### Annotation process
We adapted the NER labels from [AnCora corpus](http://clic.ub.edu/corpus/) to a token-per-line, multi-column format.
#### Who are the annotators?
Original annotators from [AnCora corpus](http://clic.ub.edu/corpus/).
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/en/inici/index.html) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/).
### Licensing information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by/4.0/">Attribution 4.0 International License</a>.
### Citation Information
```
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
[DOI](https://doi.org/10.5281/zenodo.4529299)
### Contributions
[N/A] | |
projecte-aina | null | @misc{degibert2022sequencetosequence,
title={Sequence-to-Sequence Resources for Catalan},
author={Ona de Gibert and Ksenia Kharitonova and Blanca Calvo Figueras and Jordi Armengol-Estapé and Maite Melero},
year={2022},
eprint={2202.06871},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | CaSum is a summarization dataset. It is extracted from a newswire corpus crawled from the Catalan News Agency. The corpus consists of 217,735 instances that are composed by the headline and the body. | false | 1 | false | projecte-aina/casum | 2022-11-10T12:54:49.000Z | null | false | c3cc8d5903cee19458e6410833fb04bbe3f589de | [] | [
"arxiv:2202.06871",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"language:ca",
"license:cc-by-nc-4.0",
"multilinguality:monolingual",
"size_categories:unknown",
"task_categories:summarization"
] | https://huggingface.co/datasets/projecte-aina/casum/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language_creators:
- expert-generated
language:
- ca
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets: []
task_categories:
- summarization
task_ids: []
pretty_name: casum
---
# Dataset Card for CaSum
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Paper:** [Sequence to Sequence Resources for Catalan](https://arxiv.org/pdf/2202.06871.pdf)
- **Point of Contact:** [Ona de Gibert Bonet](mailto:ona.degibert@bsc.es)
### Dataset Summary
CaSum is a summarization dataset extracted from a newswire corpus crawled from the Catalan News Agency ([Agència Catalana de Notícies; ACN](https://www.acn.cat/)). The corpus consists of 217,735 instances, each composed of a headline and the article body.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for abstractive summarization. Success on this task is typically measured by achieving a high ROUGE score. The [mbart-base-ca-casum](https://huggingface.co/projecte-aina/bart-base-ca-casum) model currently achieves a ROUGE score of 41.39.
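ROUGE measures n-gram overlap between a generated summary and a reference. Published scores are computed with dedicated packages (with stemming and the ROUGE-1/2/L variants); the following is only a minimal ROUGE-1 F1 sketch to illustrate the idea:

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Minimal unigram-overlap F1; real ROUGE adds stemming, ROUGE-2/L, etc."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f1("mapfre preveu ingressar 31.000 milions",
                      "mapfre preveu assolir uns ingressos de 31.000 milions"), 3))  # → 0.615
```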
### Languages
The dataset is in Catalan (`ca-CA`).
## Dataset Structure
### Data Instances
```
{
'summary': 'Mapfre preveu ingressar 31.000 milions d’euros al tancament de 2018',
'text': 'L’asseguradora llançarà la seva filial Verti al mercat dels EUA a partir de 2017 ACN Madrid.-Mapfre preveu assolir uns ingressos de 31.000 milions d'euros al tancament de 2018 i destinarà a retribuir els seus accionistes com a mínim el 50% dels beneficis del grup durant el període 2016-2018, amb una rendibilitat mitjana a l’entorn del 5%, segons ha anunciat la companyia asseguradora durant la celebració aquest divendres de la seva junta general d’accionistes. La firma asseguradora també ha avançat que llançarà la seva filial d’automoció i llar al mercat dels EUA a partir de 2017. Mapfre ha recordat durant la junta que va pagar més de 540 milions d'euros en impostos el 2015, amb una taxa impositiva efectiva del 30,4 per cent. La companyia també ha posat en marxa el Pla de Sostenibilitat 2016-2018 i el Pla de Transparència Activa, “que han de contribuir a afermar la visió de Mapfre com a asseguradora global de confiança”, segons ha informat en un comunicat.'
}
```
### Data Fields
- `summary` (str): Summary of the piece of news
- `text` (str): The text of the piece of news
### Data Splits
We split our dataset into train, dev and test splits
- train: 197,735 examples
- validation: 10,000 examples
- test: 10,000 examples
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language. There exist few resources for summarization in Catalan.
### Source Data
#### Initial Data Collection and Normalization
We obtained each headline and its corresponding body of each news piece on the Catalan News Agency ([Agència Catalana de Notícies; ACN](https://www.acn.cat/)) website and applied the following cleaning pipeline: deduplicating the documents, removing the documents with empty attributes, and deleting some boilerplate sentences.
#### Who are the source language producers?
The news portal Catalan News Agency ([Agència Catalana de Notícies; ACN](https://www.acn.cat/)).
### Annotations
The dataset is unannotated.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
Since all data comes from public websites, no anonymization process was performed.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of summarization models in Catalan, a low-resource language.
### Discussion of Biases
We are aware that since the data comes from unreliable web pages, some biases may be present in the dataset. Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by MT4All CEF project and [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing information
[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/).
### BibTeX citation
If you use any of these resources (datasets or models) in your work, please cite our latest preprint:
```bibtex
@misc{degibert2022sequencetosequence,
title={Sequence-to-Sequence Resources for Catalan},
author={Ona de Gibert and Ksenia Kharitonova and Blanca Calvo Figueras and Jordi Armengol-Estapé and Maite Melero},
year={2022},
eprint={2202.06871},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
[N/A] |
projecte-aina | null | @inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
eprint={2107.07903},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | The Catalan General Crawling Corpus is a 435-million-token web corpus of Catalan built from the web. It has been obtained by crawling the 500 most popular .cat and .ad domains during July 2020. It consists of 434.817.705 tokens, 19.451.691 sentences and 1.016.114 documents. Documents are separated by single new lines. It is a subcorpus of the Catalan Textual Corpus. | false | 1 | false | projecte-aina/catalan_general_crawling | 2022-11-10T13:03:33.000Z | null | false | af5d51b39545ee0cb2a88a64ec67931ec738bf38 | [] | [
"arxiv:2107.07903",
"annotations_creators:no-annotation",
"language_creators:found",
"language:ca",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"task_categories:fill-mask"
] | https://huggingface.co/datasets/projecte-aina/catalan_general_crawling/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- ca
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Catalan General Crawling
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- fill-mask
task_ids: []
---
# Dataset Card for Catalan General Crawling
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://zenodo.org/record/5483031
- **Paper:** [Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan](https://arxiv.org/abs/2107.07903)
- **Point of Contact:** [ona.degibert@bsc.es](ona.degibert@bsc.es)
### Dataset Summary
The Catalan General Crawling Corpus is a 435-million-token web corpus of Catalan built from the web. It has been obtained by crawling the 500 most popular .cat and .ad domains during July 2020. It consists of 434,817,705 tokens, 19,451,691 sentences and 1,016,114 documents. Documents are separated by single new lines. It is a subcorpus of the Catalan Textual Corpus.
### Supported Tasks and Leaderboards
This corpus is mainly intended to pretrain language models and word representations.
### Languages
The dataset is in Catalan (`ca-CA`).
## Dataset Structure
### Data Instances
```
{
'text': 'Reduïu els costos dels processos administratius al vostre organisme públic\nEviteu els desplaçaments i pèrdua de temps als ciutadans en les seves gestions\nOferiu una administració més transparent a
ciutadans i empreses\nEns grans i petits experimenten aquesta transformació amb èxit, gràcies al suport de l\'AOC\nDepartament de Sistemes d\'Informació i Processos\n" Via Oberta ens ha permès fer efectiu el d
ret dels ciutadans a no aportar documents, eliminant paper i simplificant procediments"\n" e.FACT proporciona informació indispensable per a la realització de les auditories del registre comptable de factures d
e les Administracions Públiques Catalanes"\nCoordinador del departament d\'Informàtica\n"El servei VIA OBERTA és el que ha aportat majors avantatges per als ciutadans"\n"Amb l\' e-NOTUM hem escurçat els procedi
ments en 12 dies, quasi un 40% menys!"\nCoordinadora d\'organització de persones i e-administració\n" Via Oberta ofereix millores per als ciutadans al no haver d\'aportar cap document"\nResponsable d\'Informàti
ca i Administració Electrònica\n" e-TRAM ens ha permès implantar un servei de tramitació electrònica per als ciutadans de forma ràpida, senzilla i amb un cost reduït"\n"Els municipis amb pocs habitants trobem e
n els serveis de l\'AOC la gratuïtat i la comoditat necessàries per dur a terme el nostre dia a dia"\n"Les T-CAT han permès incorporar de forma segura la signatura electrònica dins dels nostres procediments afa
vorint la transformació digital de la nostra activitat"\nCap de Departament de Sistemes i Tecnologies de la Informació\n"Amb el desplegament de l\' idCAT hem apropat l\'Ajuntament a la ciutadania"\n"Mitjançant
els serveis de Govern Obert de l\'AOC hem pogut fer fàcil el que sembla difícil"\n"Al tauler electrònic pots penjar fins i tot el projecte sencer i al final et permet fer també la diligència"\nÀrea de Promoció
Econòmica, Administració i Hisenda\n"El Sobre Digital i la PSCP han aconseguit una comunió senzilla entre empreses i administració per universalitzar la compra pública electrònica"\n"L\' e-SET és la implantació
d\'un nou sistema de treball que facilita la feina del dia a dia"\nCap del servei de contractació i compres\n"El Sobre Digital, una experiència imprescindible per a la bona administració amb estalvi de recurso
s i millora de la seguretat jurídica i la transparència"\nÀrea d\'Organització i Administració Electrònica\n"El desplegament de la valisa electrònica ha estat clau en el procés de transformació digital dels nos
tres procediments interns"\n"L\' Hèstia permet el treball en temps real i des de qualsevol lloc, així com sistematitzar la pràctica professional, recollir la informació ordenadament i amb el mateix llenguatge"\
nConsulta els materials del Congrés de Govern Digital 2019\nGoverns transparents, fluids, dinàmics, líquids... un bon lema pel principal objectiu de la governança del segle XXI: democratitzar-ho tot.\nConfluènc
ies, rius, cooperació.\nCatalunya, Mediterrània, mar de drets.\nA favor: totes les Administracions movent-se per posar-se al dia i millorar, tot aprofitant la revolució digital.\nEn contra: quants cops estem re
inventant la roda i quantes quantes oportunitats perdudes de fer-ho una única vegada i de forma coordinada i col·laborativa?\n"La transparència és una oportunitat.\nHem de perdre tota por a explicar què fem": l
a conclusió de la taula d\'alcaldies de la Jornada de Govern Obert pic.twitter.com/ERbgLSIXZM\nEl director general de Participació Ciutadana ens convida a transformar les administracions públiques a partir de l
a participació ciutadana\nEns cal que allò que preocupa i ocupa els governants formi part d\'allò en què participa la ciutadania pic.twitter.com/NwQr4EZSCS: "A moltes institucions encara els sona xinés això de
les dades obertes i la transparència.\nDe que serveix que hi hagi un portal, si llavors no hi ha dades?\nLlavors l\'accés a la informació pels periodistes és molt parcial".\nOferim eines que, conjuntament amb l
a metodologia i el suport necessari, fan possible l\'assoliment d\'un govern digital\nPosem al vostre abast tot el coneixement: formació, guies, normatives, etc.\nTenim eines per gestionar àgilment part del pro
cés administratiu del vostre ens\nEl nostre equip farà tot el possible per resoldre les vostres incidències\nSabem que es tracta d\'una decisió molt important per al vostre ens i és per això que us ho volem pos
ar fàcil.\nLa selecció de l\'actualitat d\'Administració Oberta a la vostra safata.'
}
```
### Data Fields
- `text` (str): Text.
### Data Splits
The dataset contains a single split: `train`.
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
#### Initial Data Collection and Normalization
The corpus has been obtained by crawling the 500 most popular .cat and .ad domains during July 2020.
For preprocessing we used [Corpus-Cleaner](https://github.com/TeMU-BSC/corpus-cleaner-acl), a modular Python-based toolkit to clean raw text corpora through generator pipelines.
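Corpus-Cleaner's actual API lives in its repository; purely as an illustration of the generator-pipeline idea it is built on, a toy cleaning chain might look like the following (all function names here are invented for the example):

```python
# Each stage is a generator that consumes and yields documents lazily,
# so arbitrarily large corpora can be streamed through the chain.
def strip_whitespace(docs):
    for doc in docs:
        yield doc.strip()

def drop_empty(docs):
    for doc in docs:
        if doc:
            yield doc

def dedupe(docs):
    seen = set()
    for doc in docs:
        if doc not in seen:
            seen.add(doc)
            yield doc

raw = ["  Hola món  ", "", "Hola món", "Adeu"]
pipeline = dedupe(drop_empty(strip_whitespace(raw)))
print(list(pipeline))  # → ['Hola món', 'Adeu']
```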
#### Who are the source language producers?
The data comes from multiple web pages in Catalan.
### Annotations
The dataset is unannotated.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
Since all data comes from public websites, no anonymisation process was performed.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
We are aware that since the data comes from unreliable web pages, some biases may be present in the dataset. Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
```
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
eprint={2107.07903},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
|
projecte-aina | null | @inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
eprint={2107.07903},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | The Catalan Government Crawling Corpus is a 39-million-token web corpus of Catalan built from the web. It has been obtained by crawling the .gencat domain and subdomains, belonging to the Catalan Government during September and October 2020. It consists of 39.117.909 tokens, 1.565.433 sentences and 71.043 documents. Documents are separated by single new lines. It is a subcorpus of the Catalan Textual Corpus. | false | 1 | false | projecte-aina/catalan_government_crawling | 2022-11-10T13:00:45.000Z | null | false | a7246da17b5710522410cc416d071f49488155a7 | [] | [
"arxiv:2107.07903",
"annotations_creators:no-annotation",
"language_creators:found",
"language:ca",
"license:cc0-1.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:fill-mask"
] | https://huggingface.co/datasets/projecte-aina/catalan_government_crawling/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- ca
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: Catalan Government Crawling
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- fill-mask
task_ids: []
---
# Dataset Card for Catalan Government Crawling
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://zenodo.org/record/5511667
- **Paper:** [Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan](https://arxiv.org/abs/2107.07903)
- **Point of Contact:** [ona.degibert@bsc.es](ona.degibert@bsc.es)
### Dataset Summary
The Catalan Government Crawling Corpus is a 39-million-token web corpus of Catalan built from the web. It has been obtained by crawling the .gencat domain and subdomains, belonging to the Catalan Government during September and October 2020. It consists of 39,117,909 tokens, 1,565,433 sentences and 71,043 documents. Documents are separated by single new lines. It is a subcorpus of the Catalan Textual Corpus.
### Supported Tasks and Leaderboards
This corpus is mainly intended to pretrain language models and word representations.
### Languages
The dataset is in Catalan (`ca-CA`).
## Dataset Structure
### Data Instances
```
{
'text': 'Títol: Estudi de tres marededéus del bisbat de Solsona\nResponsables del projecte: Pep Paret conservador–restaurador de l\'Àrea de Pintura i Escultura sobre fusta del CRBMC\nL\'objecte d\'aquest est
udi és un millor coneixement de l\'estat de conservació del patrimoni moble català, en concret de tres escultures romàniques del bisbat de Solsona.\nEs du a terme un estudi científic de tres marededéus del bisb
at de Solsona: la Mare de Déu de Queralt, la Mare de Déu de Coaner i la Mare de Déu de la Quar.\nLes imatges originals són romàniques, però totes elles han patit modificacions estructurals...'
}
```
### Data Fields
- `text` (str): Text.
### Data Splits
The dataset contains a single split: `train`.
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
#### Initial Data Collection and Normalization
The corpus has been obtained by crawling the `.gencat.cat` domain and its subdomains during September and October 2020.
For preprocessing we used [Corpus-Cleaner](https://github.com/TeMU-BSC/corpus-cleaner-acl), a modular Python-based toolkit to clean raw text corpora through generator pipelines.
#### Who are the source language producers?
The data comes from the official Catalan Government websites.
### Annotations
The dataset is unannotated.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
Since all data comes from public websites, no anonymisation process was performed.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
We are aware that since the data comes from public web pages, some biases may be present in the dataset. Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
[Creative Commons CC0 1.0 Universal](https://creativecommons.org/publicdomain/zero/1.0/).
### Citation Information
```
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
eprint={2107.07903},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset. |
projecte-aina | null | @inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
eprint={2107.07903},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | The Catalan Textual Corpus is a 1760-million-token web corpus of Catalan built from several sources: existing corpus such as DOGC, CaWac (non-dedup version), Oscar (unshuffled version), Open Subtitles, Catalan Wikipedia; and three brand new crawlings: the Catalan General Crawling, obtained by crawling the 500 most popular .cat and .ad domains; the Catalan Government Crawling, obtained by crawling the .gencat domain and subdomains, belonging to the Catalan Government; and the ACN corpus with 220k news items from March 2015 until October 2020, crawled from the Catalan News Agency.
It consists of 1.758.388.896 tokens, 73.172.152 sentences and 12.556.365 documents. Documents are separated by single new lines. These boundaries have been preserved as long as the license allowed it. | false | 2 | false | projecte-aina/catalan_textual_corpus | 2022-11-10T12:59:31.000Z | null | false | a38fffc6ed9eab968b0f5bfc030e8f05f34509e4 | [] | [
"arxiv:2107.07903",
"annotations_creators:no-annotation",
"language_creators:found",
"language:ca",
"license:cc-by-sa-4.0",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"source_datasets:extended|opus_dogc",
"source_datasets:extended|cawac",
"source_dat... | https://huggingface.co/datasets/projecte-aina/catalan_textual_corpus/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- ca
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: Catalan Textual Corpus
size_categories:
- 10M<n<100M
source_datasets:
- original
- extended|opus_dogc
- extended|cawac
- extended|oscar
- extended|open_subtitles
- extended|wikipedia
- extended|projecte-aina/catalan_general_crawling
- extended|projecte-aina/catalan_government_crawling
task_categories:
- fill-mask
task_ids: []
---
# Dataset Card for Catalan Textual Corpus
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://zenodo.org/record/4519349
- **Paper:** [Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan](https://arxiv.org/abs/2107.07903)
- **Point of Contact:** [ona.degibert@bsc.es](ona.degibert@bsc.es)
### Dataset Summary
The Catalan Textual Corpus is a 1760-million-token web corpus of Catalan built from several sources.
It consists of 1,758,388,896 tokens, 73,172,152 sentences, and 12,556,365 documents. Documents are separated by single new lines. These boundaries have been preserved as long as the license allowed it.
### Supported Tasks and Leaderboards
This corpus is mainly intended to pretrain language models and word representations.
### Languages
The dataset is in Catalan (`ca-CA`).
## Dataset Structure
### Data Instances
```
{'text': "L'operatiu continuarà durant aquest divendres."}
```
### Data Fields
- `text` (str): Text.
### Data Splits
The dataset contains a single split: `train`.
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
#### Initial Data Collection and Normalization
The Catalan Textual Corpus is a 1760-million-token web corpus of Catalan built from several sources: existing corpora such as DOGC, CaWac (non-dedup version), Oscar (unshuffled version), Open Subtitles, Catalan Wikipedia, and three brand new crawlings: the Catalan General Crawling, obtained by crawling the 500 most popular .cat and .ad domains; the Catalan Government Crawling, obtained by crawling the .gencat domain and subdomains, belonging to the Catalan Government; and the ACN corpus with 220k news items from March 2015 until October 2020, crawled from the Catalan News Agency.
For preprocessing we used [Corpus-Cleaner](https://github.com/TeMU-BSC/corpus-cleaner-acl), a modular Python-based toolkit to clean raw text corpora through generator pipelines.
#### Who are the source language producers?
The original data comes from various sources: existing corpora and crawlings from public websites.
### Annotations
The dataset is unannotated.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
No anonymisation process was performed.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
We are aware that since the data comes from unreliable web pages and multilingual crawled corpora, some biases may be present in the dataset. Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
[Creative Commons Attribution Share Alike 4.0 International](https://creativecommons.org/licenses/by-sa/4.0/).
### Citation Information
```
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
eprint={2107.07903},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset. |
projecte-aina | null | @dataset{kulebi_baybars_2021_5541827,
author = {Külebi, Baybars},
title = {{ParlamentParla - Speech corpus of Catalan
Parliamentary sessions}},
month = oct,
year = 2021,
publisher = {Zenodo},
version = {v2.0},
doi = {10.5281/zenodo.5541827},
url = {https://doi.org/10.5281/zenodo.5541827}
} | This is the ParlamentParla speech corpus for Catalan prepared by Col·lectivaT. The audio segments were extracted from recordings the Catalan Parliament (Parlament de Catalunya) plenary sessions, which took place between 2007/07/11 - 2018/07/17. We aligned the transcriptions with the recordings and extracted the corpus. The content belongs to the Catalan Parliament and the data is released conforming their terms of use.
Preparation of this corpus was partly supported by the Department of Culture of the Catalan autonomous government, and the v2.0 was supported by the Barcelona Supercomputing Center, within the framework of the project AINA of the Departament de Polítiques Digitals.
As of v2.0 the corpus is separated into 211 hours of clean and 400 hours of other quality segments. Furthermore, each speech segment is tagged with its speaker and each speaker with their gender. The statistics are detailed in the readme file.
For more information, go to https://github.com/CollectivaT-dev/ParlamentParla or mail info@collectivat.cat. | false | 6 | false | projecte-aina/parlament_parla | 2022-11-10T12:51:41.000Z | null | false | 3b73f758a1bae3305271ebd8947b37a6647a8dbb | [] | [
"annotations_creators:found",
"language_creators:found",
"language:ca",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"size_categories:100K<n<1M",
"source_datasets:original",
"task_categories:automatic-speech-recognition",
"task_categories:text-generation",
"t... | https://huggingface.co/datasets/projecte-aina/parlament_parla/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- found
language:
- ca
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
clean:
- 10K<n<100K
other:
- 100K<n<1M
source_datasets:
- original
task_categories:
- automatic-speech-recognition
- text-generation
task_ids:
- language-modeling
- speaker-identification
pretty_name: ParlamentParla
---
# Dataset Card for ParlamentParla
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://zenodo.org/record/5541827
- **Repository:** https://github.com/CollectivaT-dev/ParlamentParla
- **Paper:** ParlamentParla: [A Speech Corpus of Catalan Parliamentary Sessions.](http://www.lrec-conf.org/proceedings/lrec2022/workshops/ParlaCLARINIII/2022.parlaclariniii-1.0.pdf#page=135)
- **Point of Contact:** [Col·lectivaT](mailto:baybars.kulebi@bsc.es)
### Dataset Summary
This is the ParlamentParla speech corpus for Catalan prepared by Col·lectivaT. The audio segments were extracted from recordings of the Catalan Parliament (Parlament de Catalunya) plenary sessions, which took place between 2007/07/11 and 2018/07/17. We aligned the transcriptions with the recordings and extracted the corpus. The content belongs to the Catalan Parliament and the data is released in conformity with its terms of use.
Preparation of this corpus was partly supported by the Department of Culture of the Catalan autonomous government, and the v2.0 was supported by the Barcelona Supercomputing Center, within the framework of Projecte AINA of the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya.
As of v2.0 the corpus is separated into 211 hours of clean and 400 hours of other quality segments. Furthermore, each speech segment is tagged with its speaker and each speaker with their gender. The statistics are detailed in the readme file.
### Supported Tasks and Leaderboards
The dataset can be used for:
- Language Modeling.
- Automatic Speech Recognition (ASR) transcribes utterances into words.
- Speaker Identification (SI) classifies each utterance for its speaker identity as a multi-class classification, where speakers are in the same predefined set for both training and testing.
### Languages
The dataset is in Catalan (`ca-CA`).
## Dataset Structure
### Data Instances
```
{
'path': 'clean_train/c/c/ccca4790a55aba3e6bcf_63.88_74.06.wav'
'audio': {
'path': 'clean_train/c/c/ccca4790a55aba3e6bcf_63.88_74.06.wav',
'array': array([-6.10351562e-05, -6.10351562e-05, -1.22070312e-04, ...,
-1.22070312e-04, 0.00000000e+00, -3.05175781e-05]),
'sampling_rate': 16000
},
'speaker_id': 167,
'sentence': "alguns d'ells avui aquí presents un agraïment a aquells que mantenen viva la memòria aquest acte de reparació i dignitat és",
'gender': 0,
'duration': 10.18
}
```
### Data Fields
- `path` (str): The path to the audio file.
- `audio` (dict): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling
rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and
resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might
take a significant amount of time. Thus, it is important to first query the sample index before the `"audio"` column,
*i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- `speaker_id` (int): The speaker ID.
- `sentence` (str): The sentence the user was prompted to speak.
- `gender` (ClassLabel): The gender of the speaker (0: 'F', 1: 'M').
- `duration` (float): Duration of the speech.
### Data Splits
The dataset is split into "train", "validation" and "test".
## Dataset Creation
The dataset is created by aligning the parliamentary session transcripts
and the audiovisual content. For more detailed information please consult
this [paper](http://www.lrec-conf.org/proceedings/lrec2022/workshops/ParlaCLARINIII/2022.parlaclariniii-1.0.pdf#page=135).
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
#### Initial Data Collection and Normalization
The audio segments were extracted from recordings of the Catalan Parliament
(Parlament de Catalunya) plenary sessions, which took place between 2007/07/11
and 2018/07/17. The cleaning procedures are in the archived repository [Long Audio
Aligner](https://github.com/gullabi/long-audio-aligner).
#### Who are the source language producers?
The members of the Catalan Parliament during the legislatures between
2007/07/11 and 2018/07/17.
### Annotations
The dataset is unannotated.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
The initial content is publicly available; furthermore, the identities of
the parliamentary members are anonymized.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in
Catalan, a low-resource language.
### Discussion of Biases
This dataset has a gender bias; however, since the speakers are tagged with
their gender, creating a balanced subcorpus is possible.
| Subcorpus | Gender | Duration (h) |
|-------------|----------|------------|
| other_test | F | 2.516 |
| other_dev | F | 2.701 |
| other_train | F | 109.68 |
| other_test | M | 2.631 |
| other_dev | M | 2.513 |
| other_train | M | 280.196 |
|*other total*| | 400.239 |
| clean_test | F | 2.707 |
| clean_dev | F | 2.576 |
| clean_train | F | 77.905 |
| clean_test | M | 2.516 |
| clean_dev | M | 2.614 |
| clean_train | M | 123.162 |
|*clean total*| | 211.48 |
|*Total* | | 611.719 |
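Because each segment carries `speaker_id` and `gender` tags, a gender-balanced subcorpus can be drawn in a few lines. The sketch below works over plain dicts with the field names from the Data Fields section; the example data are made up, and in practice the same filter would be applied to the loaded dataset splits.

```python
import random

def balance_by_gender(examples, seed=0):
    """Downsample every gender group to the size of the smallest one."""
    by_gender = {}
    for ex in examples:
        by_gender.setdefault(ex["gender"], []).append(ex)
    n = min(len(group) for group in by_gender.values())
    rng = random.Random(seed)
    balanced = []
    for group in by_gender.values():
        balanced.extend(rng.sample(group, n))
    return balanced

# Toy segments: gender 0 ('F') for every third id, gender 1 ('M') otherwise.
segments = [{"speaker_id": i, "gender": 0 if i % 3 == 0 else 1} for i in range(12)]
balanced = balance_by_gender(segments)
```

Downsampling the majority class is the simplest balancing strategy; duration-weighted sampling would be a natural refinement given the hours reported in the table above.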
### Other Known Limitations
The text corpus belongs to the domain of Catalan politics.
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
```
@dataset{kulebi_baybars_2021_5541827,
author = {Külebi, Baybars},
title = {{ParlamentParla - Speech corpus of Catalan
Parliamentary sessions}},
month = oct,
year = 2021,
publisher = {Zenodo},
version = {v2.0},
doi = {10.5281/zenodo.5541827},
url = {https://doi.org/10.5281/zenodo.5541827}
}
```
For the paper:
```
@inproceedings{kulebi2022parlamentparla,
title={ParlamentParla: A Speech Corpus of Catalan Parliamentary Sessions},
author={K{\"u}lebi, Baybars and Armentano-Oller, Carme and Rodr{\'\i}guez-Penagos, Carlos and Villegas, Marta},
booktitle={Workshop on Creating, Enriching and Using Parliamentary Corpora},
volume={125},
number={130},
pages={125},
year={2022}
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
|
projecte-aina | null | Rodriguez-Penagos, Carlos Gerardo, Armentano-Oller, Carme, Gonzalez-Agirre, Aitor, & Gibert Bonet, Ona. (2021).
Semantic Textual Similarity in Catalan (Version 1.0.1) [Data set].
Zenodo. http://doi.org/10.5281/zenodo.4761434 | Semantic Textual Similarity in Catalan.
STS corpus is a benchmark for evaluating Semantic Text Similarity in Catalan.
It consists of more than 3000 sentence pairs, annotated with the semantic similarity between them,
using a scale from 0 (no similarity at all) to 5 (semantic equivalence).
It is done manually by 4 different annotators following our guidelines based on previous work from the SemEval challenges (https://www.aclweb.org/anthology/S13-1004.pdf).
The source data are scraped sentences from the Catalan Textual Corpus (https://doi.org/10.5281/zenodo.4519349), used under CC-by-SA-4.0 licence (https://creativecommons.org/licenses/by-sa/4.0/). The dataset is released under the same licence.
This dataset was developed by BSC TeMU as part of the AINA project, and to enrich the Catalan Language Understanding Benchmark (CLUB).
This is the version 1.0.2 of the dataset with the complete human and automatic annotations and the analysis scripts. It also has a more accurate license.
This dataset can be used to build and score semantic similiarity models. | false | 1 | false | projecte-aina/sts-ca | 2022-11-16T15:24:02.000Z | null | false | c51d11e45e780bce09f4f71e1dbf4c3c1181c041 | [] | [
"arxiv:2107.07903",
"annotations_creators:expert-generated",
"language_creators:found",
"language:ca",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:unknown",
"task_categories:text-classification",
"task_ids:semantic-similarity-scoring",
"task_ids:text-scoring"
] | https://huggingface.co/datasets/projecte-aina/sts-ca/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ca
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets: []
task_categories:
- text-classification
task_ids:
- semantic-similarity-scoring
- text-scoring
pretty_name: sts-ca
---
# Dataset Card for STS-ca
## Dataset Description
- **Website:** https://zenodo.org/record/4761434
- **Paper:** [Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan](https://arxiv.org/abs/2107.07903)
- **Point of Contact:** [Carlos Rodríguez-Penagos](carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](carme.armentano@bsc.es)
### Dataset Summary
STS-ca corpus is a benchmark for evaluating Semantic Text Similarity in Catalan. This dataset was developed by [BSC TeMU](https://temu.bsc.es/) as part of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/), to enrich the [Catalan Language Understanding Benchmark (CLUB)](https://club.aina.bsc.es/).
### Supported Tasks and Leaderboards
This dataset can be used to build and score semantic similarity models in Catalan.
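STS systems are conventionally scored with the Pearson correlation between predicted and gold similarity scores. A self-contained sketch over plain Python lists (the gold scores below reuse the `avg` values from the example table; the predictions are hypothetical model outputs):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

gold = [4.0, 3.0, 3.0, 2.75, 3.0, 2.67]  # "avg" column from the examples
pred = [3.8, 2.9, 3.2, 2.5, 3.1, 2.4]    # hypothetical model outputs
score = pearson(gold, pred)
```

In practice a metrics library such as `scipy.stats.pearsonr` would be used, but the hand-rolled version makes the evaluation criterion explicit.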
### Languages
The dataset is in Catalan (`ca-CA`).
## Dataset Structure
### Data Instances
Each instance follows the format of the [SemEval challenges](https://www.aclweb.org/anthology/S13-1004.pdf):
* index (int)
* id (str): Unique ID assigned to the sentence pair.
* sentence 1 (str): First sentence of the pair.
* sentence 2 (str): Second sentence of the pair.
* avg (float): Gold-truth similarity score (average of the human annotations).
#### Example
| index | id | sentence 1 | sentence 2 | avg |
| ------- | ---- | ------------ | ------------ | ----- |
| 19 | ACN2_131 | Els manifestants ocupen l'Imperial Tarraco durant una hora fent jocs de taula | Els manifestants ocupen l'Imperial Tarraco i fan jocs de taula | 4 |
| 21 | TE2_80 | El festival comptarà amb cinc escenaris i se celebrarà entre el 7 i el 9 de juliol al Parc del Fòrum. | El festival se celebrarà el 7 i 8 de juliol al Parc del Fòrum de Barcelona | 3 |
| 23 | Oscar2_609 | Aleshores hi posarem un got de vi i continuarem amb la cocció fins que s'hagi evaporat el vi i ho salpebrarem. | Mentre, hi posarem el vi al sofregit i deixarem coure uns 7/8′, fins que el vi s'evapori. | 3 |
| 25 | Viqui2_48 | L'arboç grec (Arbutus andrachne) és un arbust o un petit arbre dins la família ericàcia. | El ginjoler ("Ziziphus jujuba") és un arbust o arbre petit de la família de les "Rhamnaceae". | 2.75 |
| 27 | ACN2_1072 | Mentre han estat davant la comandància, els manifestants han cridat consignes a favor de la independència i han cantat cançons com 'L'estaca'. | Entre les consignes que han cridat s'ha pogut escoltar càntics com 'els carrers seran sempre nostres' i contínues consignes en favor de la independència. | 3 |
| 28 | Viqui2_587 | Els cinc municipis ocupen una superfície de poc més de 100 km2 i conjuntament sumen una població total aproximada de 3.691 habitants (any 2019). | Té una població d'1.811.177 habitants (2005) repartits en 104 municipis d'una superfície total de 14.001 km2. | 2.67 |
### Data Fields
This dataset follows the formats and conventions of the [SemEval](https://www.aclweb.org/anthology/S13-1004.pdf) challenges.
### Data Splits
- sts_cat_dev_v1.tsv (500 annotated pairs)
- sts_cat_train_v1.tsv (2073 annotated pairs)
- sts_cat_test_v1.tsv (500 annotated pairs)
## Dataset Creation
### Curation Rationale
We created this dataset to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
#### Initial Data Collection and Normalization
Random sentences were extracted from 3 Catalan subcorpus from the [Catalan Textual Corpus](https://zenodo.org/record/4519349#.Ys_0PexBzOs): [ACN](https://www.acn.cat/), [Oscar](https://oscar-corpus.com/) and [Wikipedia](https://ca.wikipedia.org/wiki/Portada).
We generated candidate pairs using a combination of metrics from Doc2Vec, Jaccard and a BERT-like model (“[distiluse-base-multilingual-cased-v2](https://huggingface.co/distilbert-base-multilingual-cased)”). Finally, we manually reviewed the generated pairs to reject non-relevant pairs (identical or ungrammatical sentences, etc.) before providing them to the annotation team.
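One of these metrics, token-level Jaccard similarity, is simple to illustrate. The following is a minimal sketch, not the project's actual code; the tokenization details and the example pair are assumptions:

```python
def jaccard_similarity(sentence_a: str, sentence_b: str) -> float:
    """Token-level Jaccard similarity: |intersection| / |union| of the token sets."""
    tokens_a = set(sentence_a.lower().split())
    tokens_b = set(sentence_b.lower().split())
    if not tokens_a and not tokens_b:
        return 1.0
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)

# Candidate pairs in a mid-similarity band are the interesting ones:
# near-duplicates and fully unrelated sentences both make poor STS items.
score = jaccard_similarity(
    "Els manifestants ocupen la plaça",
    "Els manifestants ocupen el carrer",
)
print(round(score, 3))  # 3 shared tokens out of 7 distinct ones
```

In practice this score would be combined with the Doc2Vec and BERT-based similarities before filtering.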
The average of the four annotations was selected as a “ground truth” for each sentence pair, except when an annotator diverged in more than one unit from the average. In these cases, we discarded the divergent annotation and recalculated the average without it. We also discarded 45 sentence pairs because the annotators disagreed too much.
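This aggregation rule can be sketched as follows (an illustrative reconstruction, not the project's actual scripts):

```python
def gold_score(annotations, max_divergence=1.0):
    """Average the annotations, discarding any that diverges from the
    initial average by more than `max_divergence`; return None when all
    annotations are divergent (the pair would be discarded)."""
    initial_avg = sum(annotations) / len(annotations)
    kept = [a for a in annotations if abs(a - initial_avg) <= max_divergence]
    if not kept:
        return None
    return sum(kept) / len(kept)

print(gold_score([3, 3, 3, 0]))  # the outlier 0 is dropped: prints 3.0
print(gold_score([0, 5]))        # total disagreement: prints None
```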
For compatibility with similar datasets in other languages, we followed existing curation guidelines as closely as possible.
#### Who are the source language producers?
The [Catalan Textual Corpus](https://zenodo.org/record/4519349#.Ys_0PexBzOs) is a 1760-million-token web corpus of Catalan built from several sources: existing corpus such as DOGC, CaWac (non-deduplicated version), Oscar (unshuffled version), Open Subtitles, Catalan Wikipedia; and three brand new crawlings: the Catalan General Crawling, obtained by crawling the 500 most popular .cat and .ad domains; the Catalan Government Crawling, obtained by crawling the .gencat domain and subdomains, belonging to the Catalan Government; and the ACN corpus with 220k news items from March 2015 until October 2020, crawled from the Catalan News Agency.
### Annotations
#### Annotation process
We commissioned the manual annotation of the similarity of the sentences in each pair, following the provided guidelines.
#### Who are the annotators?
A team of native speakers from two different companies, working independently.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this dataset contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/en/inici/index.html) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/).
### Licensing Information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International License</a>.
### Citation Information
```
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
[DOI](https://doi.org/10.5281/zenodo.4529183)
### Contributions
[N/A]
|
projecte-aina | null | @inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
} | TECA consists of two subsets of textual entailment in Catalan, *catalan_TE1* and *vilaweb_TE*, which contain 14997 and 6166 pairs of premises and hypotheses, annotated according to the inference relation they have (implication, contradiction or neutral). This dataset was developed by BSC TeMU as part of the AINA project and intended as part of the Catalan Language Understanding Benchmark (CLUB). | false | 3 | false | projecte-aina/teca | 2022-11-10T12:58:08.000Z | null | false | c7cdb32a1ade077ad15ddd65279fecf8d4728368 | [] | [
"arxiv:2107.07903",
"annotations_creators:expert-generated",
"language_creators:found",
"language:ca",
"license:cc-by-nc-nd-4.0",
"multilinguality:monolingual",
"size_categories:unknown",
"task_categories:text-classification",
"task_ids:natural-language-inference"
] | https://huggingface.co/datasets/projecte-aina/teca/resolve/main/README.md | ---
YAML tags:
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ca
license:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
pretty_name: teca
size_categories:
- unknown
source_datasets: []
task_categories:
- text-classification
task_ids:
- natural-language-inference
---
# Dataset Card for TE-ca
## Dataset Description
- **Website:** https://zenodo.org/record/4761458
- **Paper:** [Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan](https://arxiv.org/abs/2107.07903)
- **Point of Contact:** [Carlos Rodríguez-Penagos](carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](carme.armentano@bsc.es)
### Dataset Summary
TE-ca is a dataset of textual entailment in Catalan, which contains 21,163 pairs of premises and hypotheses, annotated according to the inference relation they have (implication, contradiction or neutral).
This dataset was developed by [BSC TeMU](https://temu.bsc.es/) as part of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/), to enrich the [Catalan Language Understanding Benchmark (CLUB)](https://club.aina.bsc.es/).
### Supported Tasks and Leaderboards
Textual entailment, Text classification, Language Model
### Languages
The dataset is in Catalan (`ca-CA`).
## Dataset Structure
### Data Instances
Three JSON files, one for each split.
### Example:
<pre>
{
"id": 3247,
"premise": "L'ONU adopta a Marràqueix un pacte no vinculant per les migracions",
"hypothesis": "S'acorden unes recomanacions per les persones migrades a Marràqueix",
"label": "0"
},
{
"id": 2825,
"premise": "L'ONU adopta a Marràqueix un pacte no vinculant per les migracions",
"hypothesis": "Les persones migrades seran acollides a Marràqueix",
"label": "1"
},
{
"id": 2431,
"premise": "L'ONU adopta a Marràqueix un pacte no vinculant per les migracions",
"hypothesis": "L'acord impulsat per l'ONU lluny de tancar-se",
"label": "2"
},
</pre>
### Data Fields
- premise: text
- hypothesis: text related to the premise
- label: relation between premise and hypothesis:
* 0: entailment
* 1: neutral
* 2: contradiction
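Note that, as in the example records above, labels are stored as strings in the JSON files. A minimal sketch of mapping them to the names listed above:

```python
# Mapping from TE-ca's numeric labels to their names, as listed above.
TECA_LABELS = {0: "entailment", 1: "neutral", 2: "contradiction"}

example = {
    "id": 3247,
    "premise": "L'ONU adopta a Marràqueix un pacte no vinculant per les migracions",
    "hypothesis": "S'acorden unes recomanacions per les persones migrades a Marràqueix",
    "label": "0",
}

# Labels come as strings ("0", "1", "2"), so cast to int before the lookup.
label_name = TECA_LABELS[int(example["label"])]
print(label_name)  # prints "entailment"
```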
### Data Splits
* dev.json: 2116 examples
* test.json: 2117 examples
* train.json: 16930 examples
## Dataset Creation
### Curation Rationale
We created this dataset to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
Source sentences are extracted from the [Catalan Textual Corpus](https://doi.org/10.5281/zenodo.4519349) and from [VilaWeb](https://www.vilaweb.cat) newswire.
#### Initial Data Collection and Normalization
12,000 sentences from the BSC [Catalan Textual Corpus](https://doi.org/10.5281/zenodo.4519349), together with 6,200 headlines from the Catalan news site [VilaWeb](https://www.vilaweb.cat), were chosen at random. We filtered them by different criteria, such as length and stand-alone intelligibility. For each selected text, we commissioned 3 hypotheses (one for each entailment category) to be written by a team of native annotators.
Some sentence pairs were excluded because of inconsistencies.
#### Who are the source language producers?
The Catalan Textual Corpus consists of several corpora gathered from web crawling and from public corpora. More information can be found [here](https://doi.org/10.5281/zenodo.4519349).
[VilaWeb](https://www.vilaweb.cat) is a Catalan newswire.
### Annotations
#### Annotation process
We commissioned 3 hypotheses (one for each entailment category) to be written by a team of annotators.
#### Who are the annotators?
Annotators are a team of native language collaborators from two independent companies.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this dataset contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
This work is licensed under an <a rel="license" href="https://creativecommons.org/licenses/by-nc-nd/4.0/">Attribution-NonCommercial-NoDerivatives 4.0 International License</a>.
### Citation Information
```
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
[DOI](https://doi.org/10.5281/zenodo.4529183)
|
projecte-aina | null | Carrino, Casimiro Pio, Rodriguez-Penagos, Carlos Gerardo, & Armentano-Oller, Carme. (2021).
TeCla: Text Classification Catalan dataset (Version 1.0) [Data set].
Zenodo. http://doi.org/10.5281/zenodo.4627198 | TeCla: Text Classification Catalan dataset
Catalan News corpus for Text classification, crawled from ACN (Catalan News Agency) site: www.acn.cat
Corpus de notícies en català per a classificació textual, extret del web de l'Agència Catalana de Notícies - www.acn.cat | false | 4 | false | projecte-aina/tecla | 2022-11-16T15:26:50.000Z | null | false | 654ddfc2364c5439797259e06d0357df16cd39a1 | [] | [
"arxiv:2107.07903",
"annotations_creators:expert-generated",
"language_creators:found",
"language:ca",
"license:cc-by-nc-nd-4.0",
"multilinguality:monolingual",
"size_categories:unknown",
"task_categories:text-classification",
"task_ids:multi-class-classification"
] | https://huggingface.co/datasets/projecte-aina/tecla/resolve/main/README.md | ---
YAML tags:
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ca
license:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
pretty_name: tecla
size_categories:
- unknown
source_datasets: []
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for TeCla
## Dataset Description
- **Website:** https://zenodo.org/record/4761505
- **Paper:** [Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan](https://arxiv.org/abs/2107.07903)
- **Point of Contact:** [Carlos Rodríguez-Penagos](carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](carme.armentano@bsc.es)
### Dataset Summary
TeCla (Text Classification) is a Catalan news corpus for thematic text classification tasks. It contains 153,265 articles classified under 30 different categories.
This dataset was developed by [BSC TeMU](https://temu.bsc.es/) as part of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/), to enrich the [Catalan Language Understanding Benchmark (CLUB)](https://club.aina.bsc.es/).
### Supported Tasks and Leaderboards
Text classification, Language Model
### Languages
The dataset is in Catalan (`ca-CA`).
## Dataset Structure
### Data Instances
Three JSON files, one for each split.
### Data Fields
We used a simple schema with the article text and its associated label, without further metadata.
#### Example:
<pre>
{"version": "1.0",
"data":
[
{
'sentence': 'L\'editorial valenciana Media Vaca, Premi Nacional a la Millor Tasca Editorial Cultural del 2018. El jurat en destaca la cura "exquisida" del catàleg, la qualitat dels llibres i el "respecte" pels lectors. ACN Madrid.-L\'editorial valenciana Media Vaca ha obtingut el Premi Nacional a la Millor Labor Editorial Cultural corresponent a l\'any 2018 que atorga el Ministeri de Cultura i Esports. El guardó pretén distingir la tasca editorial d\'una persona física o jurídica que hagi destacat per l\'aportació a la vida cultural espanyola. El premi és de caràcter honorífic i no té dotació econòmica. En el cas de Media Vaca, fundada pel valencià Vicente Ferrer i la bilbaïna Begoña Lobo, el jurat n\'ha destacat la cura "exquisida" del catàleg, la qualitat dels llibres i el "respecte" pels lectors i per la resta d\'agents de la cadena del llibre. Media Vaca va publicar els primers llibres el desembre del 1998. El catàleg actual el componen 64 títols dividits en sis col·leccions, que barregen ficció i no ficció. Des del Ministeri de Cultura es destaca que la il·lustració té un pes "fonamental" als productes de l\'editorial i que la majoria de projectes solen partir de propostes literàries i textos preexistents. L\'editorial ha rebut quatre vegades el Bologna Ragazzi Award. És l\'única editorial estatal que ha aconseguit el guardó que atorga la Fira del Llibre per a Nens de Bolonya, la més important del sector.',
'label': 'Lletres'
},
...
]
}
</pre>
#### Labels
'Societat', 'Política', 'Turisme', 'Salut', 'Economia', 'Successos', 'Partits', 'Educació', 'Policial', 'Medi ambient', 'Parlament', 'Empresa', 'Judicial', 'Unió Europea', 'Comerç', 'Cultura', 'Cinema', 'Govern', 'Lletres', 'Infraestructures', 'Música', 'Festa i cultura popular', 'Teatre', 'Mobilitat', 'Govern espanyol', 'Equipaments i patrimoni', 'Meteorologia', 'Treball', 'Trànsit', 'Món'
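For classifier training, these 30 categories are typically mapped to integer ids. A minimal sketch (the particular ordering below is an assumption; any stable ordering works):

```python
# The 30 TeCla categories, in the order listed above (ordering is arbitrary).
TECLA_LABELS = [
    "Societat", "Política", "Turisme", "Salut", "Economia", "Successos",
    "Partits", "Educació", "Policial", "Medi ambient", "Parlament", "Empresa",
    "Judicial", "Unió Europea", "Comerç", "Cultura", "Cinema", "Govern",
    "Lletres", "Infraestructures", "Música", "Festa i cultura popular",
    "Teatre", "Mobilitat", "Govern espanyol", "Equipaments i patrimoni",
    "Meteorologia", "Treball", "Trànsit", "Món",
]

label2id = {label: i for i, label in enumerate(TECLA_LABELS)}
id2label = {i: label for label, i in label2id.items()}

assert len(label2id) == 30  # one id per distinct category
print(label2id["Lletres"])  # the label of the example instance above
```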
### Data Splits
* train.json: 110203 article-label pairs
* dev.json: 13786 article-label pairs
* test.json: 13786 article-label pairs
## Dataset Creation
### Curation Rationale
We created this dataset to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
#### Initial Data Collection and Normalization
The source data are articles crawled from the Catalan News Agency ([Agència Catalana de Notícies, ACN](https://www.acn.cat/)) site.
We crawled 219,586 articles from its newswire archive, the latest from October 11, 2020.
We used the "subsection" category as a classification label, after excluding territorial labels (see [territorial_labels.txt](https://huggingface.co/datasets/projecte-aina/tecla/blob/main/territorial_labels.txt) file) and labels with less than 2000 occurrences.
Applying these criteria, we compiled a total of 153,265 articles for this text classification dataset.
#### Who are the source language producers?
The Catalan News Agency ([Agència Catalana de Notícies; ACN](https://www.acn.cat/)) is a news agency owned by the Catalan government via the public corporation Intracatalònia, SA. It is one of the first digital news agencies created in Europe and has been operating since 1999 (source: [wikipedia](https://en.wikipedia.org/wiki/Catalan_News_Agency)).
### Annotations
#### Annotation process
We used the "subsection" category as a classification label, after excluding territorial labels (see [territorial_labels.txt](https://huggingface.co/datasets/projecte-aina/tecla/blob/main/territorial_labels.txt) file) and labels with less than 2000 occurrences.
#### Who are the annotators?
Editorial staff classified the articles under the different thematic sections, and we extracted these from metadata.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this dataset contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-nc-nd/4.0/">Attribution-NonCommercial-NoDerivatives 4.0 International License</a>.
### Citation Information
```
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
[DOI](https://doi.org/10.5281/zenodo.4529183)
### Contributions
[N/A] |
projecte-aina | null | Rodriguez-Penagos, Carlos Gerardo, & Armentano-Oller, Carme. (2021).
VilaQuAD: an extractive QA dataset for catalan, from Vilaweb newswire text
[Data set]. Zenodo. https://doi.org/10.5281/zenodo.4562337 | This dataset contains 2,095 Catalan-language news articles along with 1 to 5 questions referring to each fragment (or context).
VilaQuAD articles are extracted from the daily Vilaweb (www.vilaweb.cat) and used under a CC BY-NC-ND (https://creativecommons.org/licenses/by-nc-nd/3.0/deed.ca) licence.
This dataset can be used to build extractive-QA and Language Models.
Funded by the Generalitat de Catalunya, Departament de Polítiques Digitals i Administració Pública (AINA),
MT4ALL and Plan de Impulso de las Tecnologías del Lenguaje (Plan TL). | false | 1 | false | projecte-aina/vilaquad | 2022-11-16T15:28:53.000Z | null | false | 0e042fcd5f1c7ff593ce465f3f8877ee985914b2 | [] | [
"arxiv:2107.07903",
"arxiv:1606.05250",
"annotations_creators:expert-generated",
"language_creators:found",
"language:ca",
"license:cc-by-sa-4.0",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:question-answering",
"task_ids:extractive-qa"... | https://huggingface.co/datasets/projecte-aina/vilaquad/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ca
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: VilaQuAD
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for VilaQuAD
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://doi.org/10.5281/zenodo.4562337
- **Paper:** [Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan](https://arxiv.org/abs/2107.07903)
- **Point of Contact:** [Carlos Rodríguez-Penagos](mailto:carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](mailto:carme.armentano@bsc.es)
### Dataset Summary
VilaQuAD is an extractive QA dataset for Catalan, built from [VilaWeb](https://www.vilaweb.cat/) newswire text.
It contains 2,095 Catalan-language news articles along with 1 to 5 questions referring to each fragment (or context).
The articles are extracted from the daily [VilaWeb](https://www.vilaweb.cat/) and used under a [CC BY-NC-ND](https://creativecommons.org/licenses/by-nc-nd/3.0/deed.ca) licence.
This dataset can be used to build extractive-QA and Language Models.
### Supported Tasks and Leaderboards
Extractive-QA, Language Model.
### Languages
The dataset is in Catalan (`ca-CA`).
## Dataset Structure
### Data Instances
```
{
'id': 'P_556_C_556_Q1',
'title': "El Macba posa en qüestió l'eufòria amnèsica dels anys vuitanta a l'estat espanyol",
'context': "El Macba ha obert una nova exposició, 'Gelatina dura. Històries escamotejades dels 80', dedicada a revisar el discurs hegemònic que es va instaurar en aquella dècada a l'estat espanyol, concretament des del començament de la transició, el 1977, fins a la fita de Barcelona 92. És una mirada en clau espanyola, però també centralista, perquè més enllà dels esdeveniments ocorreguts a Catalunya i els artistes que els van combatre, pràcticament només s'hi mostren fets polítics i culturals generats des de Madrid. No es parla del País Basc, per exemple. Però, dit això, l'exposició revisa aquesta dècada de la història recent tot qüestionant un triomfalisme homogeneïtzador, que ja se sap que va arrasar una gran quantitat de sectors crítics i radicals de l'àmbit social, polític i cultural. Com diu la comissària, Teresa Grandas, de l'equip del Macba: 'El relat oficial dels anys vuitanta a l'estat espanyol va prioritzar la necessitat per damunt de la raó i va consolidar una mirada que privilegiava el futur abans que l'anàlisi del passat recent, obviant qualsevol consideració crítica respecte de la filiació amb el poder franquista.",
'question': 'Com es diu la nova exposició que ha obert el Macba?',
'answers': [
{
'text': "'Gelatina dura. Històries escamotejades dels 80'",
'answer_start': 38
}
]
}
```
### Data Fields
Follows [Rajpurkar, Pranav et al., (2016)](http://arxiv.org/abs/1606.05250) for SQuAD v1 datasets.
- `id` (str): Unique ID assigned to the question.
- `title` (str): Title of the VilaWeb article.
- `context` (str): VilaWeb section text.
- `question` (str): Question.
- `answers` (list): List of answers to the question, each containing:
  - `text` (str): Span of text in the context answering the question.
  - `answer_start` (int): Starting character offset of the answer span in the context.
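A standard sanity check for SQuAD-style records is that each answer can be recovered from the context at its offset. A minimal sketch, using a truncated version of the example instance above:

```python
def check_answer_offsets(record):
    """Return True if every answer's text equals the context slice
    starting at its answer_start offset."""
    context = record["context"]
    return all(
        context[a["answer_start"] : a["answer_start"] + len(a["text"])] == a["text"]
        for a in record["answers"]
    )

record = {
    "context": "El Macba ha obert una nova exposició, 'Gelatina dura. "
               "Històries escamotejades dels 80', dedicada a revisar el "
               "discurs hegemònic que es va instaurar en aquella dècada.",
    "answers": [
        {"text": "'Gelatina dura. Històries escamotejades dels 80'",
         "answer_start": 38},
    ],
}
print(check_answer_offsets(record))  # prints True
```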
### Data Splits
- train.json: 1295 contexts, 3882 questions
- dev.json: 400 contexts, 1200 questions
- test.json: 400 contexts, 1200 questions
## Dataset Creation
### Curation Rationale
We created this dataset to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
- [VilaWeb site](https://www.vilaweb.cat/)
#### Initial Data Collection and Normalization
The source data are articles scraped from the archives of the Catalan newspaper website [VilaWeb](https://www.vilaweb.cat).
From the online edition of the newspaper [VilaWeb](https://www.vilaweb.cat), 2,095 articles were randomly selected. Their headlines were also used to create a textual entailment dataset. For the extractive QA dataset, the creation of between 1 and 5 questions for each news context was commissioned, following an adaptation of the guidelines from SQuAD 1.0 ([Rajpurkar, Pranav et al. (2016)](http://arxiv.org/abs/1606.05250)). In total, 6,282 pairs of a question and an extracted fragment containing the answer were created.
For compatibility with similar datasets in other languages, we followed existing curation guidelines as closely as possible. We also created [another QA dataset from Wikipedia](https://huggingface.co/datasets/projecte-aina/viquiquad) to ensure thematic and stylistic variety.
#### Who are the source language producers?
Professional journalists from the Catalan newspaper [VilaWeb](https://www.vilaweb.cat/).
### Annotations
#### Annotation process
We comissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 ([Rajpurkar, Pranav et al. (2016)](http://arxiv.org/abs/1606.05250)).
#### Who are the annotators?
Annotation was commissioned to a specialized company that hired a team of native speakers.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this dataset contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/en/inici/index.html) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/).
### Licensing Information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International License</a>.
### Citation Information
```
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
[DOI](https://doi.org/10.5281/zenodo.4562337)
### Contributions
[N/A] |
projecte-aina | null | @misc{degibert2022sequencetosequence,
title={Sequence-to-Sequence Resources for Catalan},
author={Ona de Gibert and Ksenia Kharitonova and Blanca Calvo Figueras and Jordi Armengol-Estapé and Maite Melero},
year={2022},
eprint={2202.06871},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | VilaSum is a summarization dataset for evaluation. It is extracted from a newswire corpus crawled from Vilaweb. The corpus consists of 13,843 instances that are composed by the headline and the body. | false | 1 | false | projecte-aina/vilasum | 2022-11-10T12:50:36.000Z | null | false | 42c59625386b88e5e004dec1f85cba37d69b07fb | [] | [
"arxiv:2202.06871",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"language:ca",
"license:cc-by-nc-4.0",
"multilinguality:monolingual",
"size_categories:unknown",
"task_categories:summarization"
] | https://huggingface.co/datasets/projecte-aina/vilasum/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language_creators:
- expert-generated
language:
- ca
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets: []
task_categories:
- summarization
task_ids: []
pretty_name: vilasum
---
# Dataset Card for VilaSum
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Paper:**[Sequence to Sequence Resources for Catalan](https://arxiv.org/pdf/2202.06871.pdf)
- **Point of Contact:** [Ona de Gibert Bonet](mailto:ona.degibert@bsc.es)
### Dataset Summary
VilaSum is a summarization dataset for evaluation. It is extracted from a newswire corpus crawled from the Catalan news portal [VilaWeb](https://www.vilaweb.cat/). The corpus consists of 13,843 instances, each composed of a headline and an article body.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for abstractive summarization. Success on this task is typically measured by achieving a high Rouge score. The [bart-base-ca-casum](https://huggingface.co/projecte-aina/bart-base-ca-casum) model currently achieves a score of 35.04.
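Rouge is normally computed with an off-the-shelf package; purely to illustrate what Rouge-L measures, here is a minimal pure-Python sketch of its LCS-based F1 (not the evaluation code behind the reported figure):

```python
def rouge_l_f1(reference, candidate):
    """Rouge-L F1: harmonic mean of precision and recall derived from the
    longest common subsequence (LCS) of two token sequences."""
    m, n = len(reference), len(candidate)
    lcs = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if reference[i] == candidate[j]:
                lcs[i + 1][j + 1] = lcs[i][j] + 1
            else:
                lcs[i + 1][j + 1] = max(lcs[i][j + 1], lcs[i + 1][j])
    length = lcs[m][n]
    if length == 0:
        return 0.0
    precision, recall = length / n, length / m
    return 2 * precision * recall / (precision + recall)

reference = "Un vídeo corrobora les agressions a dues animalistes".split()
candidate = "Un vídeo mostra les agressions a dues activistes".split()
print(rouge_l_f1(reference, candidate))  # prints 0.75
```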
### Languages
The dataset is in Catalan (`ca-CA`).
## Dataset Structure
### Data Instances
```
{
    "summary": "Un vídeo corrobora les agressions a dues animalistes en un correbou del Mas de Barberans",
    "text": "Noves imatges, a les quals ha tingut accés l'ACN, certifiquen les agressions i la destrucció del material d'enregistrament que han denunciat dues activistes d'AnimaNaturalis en la celebració d'un acte de bous a la plaça al Mas de Barberans (Montsià). En el vídeo es veu com unes quantes persones s'abalancen sobre les noies que reben estirades i cops mentre els intenten prendre les càmeres. Membres de la comissió taurina intervenen per aturar els presumptes agressors però es pot escoltar com part del públic victoreja la situació. Els Mossos d'Esquadra presentaran aquest dilluns al migdia l'atestat dels fets al Jutjat d'Amposta. Dissabte ja es van detenir quatre persones que van quedar en llibertat a l'espera de ser cridats pel jutge. Es tracta de tres homes i una dona de Sant Carles de la Ràpita, tots ells membres de la mateixa família."
}
```
### Data Fields
- `summary` (str): Summary of the piece of news
- `text` (str): The text of the piece of news
### Data Splits
Due to the reduced size of the dataset, we use it only for evaluation as a test set.
- test: 13,843 examples
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language. There exist few resources for summarization in Catalan.
### Source Data
#### Initial Data Collection and Normalization
We obtained the headline and the corresponding body of each news piece on [VilaWeb](https://www.vilaweb.cat/) and applied the following cleaning pipeline: deduplicating the documents, removing documents with empty attributes, and deleting some boilerplate sentences.
#### Who are the source language producers?
The news portal [VilaWeb](https://www.vilaweb.cat/).
### Annotations
The dataset is unannotated.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
Since all data comes from public websites, no anonymization process was performed.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of summarization models in Catalan, a low-resource language.
### Discussion of Biases
We are aware that since the data comes from unreliable web pages, some biases may be present in the dataset. Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the MT4All CEF project and the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
If you use any of these resources (datasets or models) in your work, please cite our latest preprint:
```bibtex
@misc{degibert2022sequencetosequence,
title={Sequence-to-Sequence Resources for Catalan},
author={Ona de Gibert and Ksenia Kharitonova and Blanca Calvo Figueras and Jordi Armengol-Estapé and Maite Melero},
year={2022},
eprint={2202.06871},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
[N/A]
|
projecte-aina | null | Rodriguez-Penagos, Carlos Gerardo, & Armentano-Oller, Carme. (2021).
ViquiQuAD: an extractive QA dataset from Catalan Wikipedia (Version ViquiQuad_v.1.0.1)
[Data set]. Zenodo. http://doi.org/10.5281/zenodo.4761412 | ViquiQuAD: an extractive QA dataset from Catalan Wikipedia.
This dataset contains 3111 contexts extracted from a set of 597 high quality original (no translations)
articles in the Catalan Wikipedia "Viquipèdia" (ca.wikipedia.org), and 1 to 5 questions with their
answer for each fragment. Viquipedia articles are used under CC-by-sa licence.
This dataset can be used to build extractive-QA and Language Models.
Funded by the Generalitat de Catalunya, Departament de Polítiques Digitals i Administració Pública (AINA),
MT4ALL and Plan de Impulso de las Tecnologías del Lenguaje (Plan TL). | false | 1 | false | projecte-aina/viquiquad | 2022-11-16T15:28:11.000Z | null | false | df5fe3916d430e37aecc1446f1b995c8f7136d8c | [] | [
"arxiv:2107.07903",
"arxiv:1606.05250",
"annotations_creators:expert-generated",
"language_creators:found",
"language:ca",
"license:cc-by-sa-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:question-answering",
"task_ids:extractive-q... | https://huggingface.co/datasets/projecte-aina/viquiquad/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ca
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: ViquiQuAD
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# ViquiQuAD, An extractive QA dataset for Catalan, from the Wikipedia
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://zenodo.org/record/4562345#.YK41aqGxWUk
- **Paper:** [Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan](https://arxiv.org/abs/2107.07903)
- **Point of Contact:** [Carlos Rodríguez-Penagos](mailto:carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](mailto:carme.armentano@bsc.es)
### Dataset Summary
ViquiQuAD, An extractive QA dataset for Catalan, from the Wikipedia.
This dataset contains 3111 contexts extracted from a set of 597 high-quality, original (non-translated) articles in the Catalan Wikipedia "[Viquipèdia](https://ca.wikipedia.org/wiki/Portada)", and 1 to 5 questions with their answers for each fragment.
Viquipedia articles are used under [CC-by-sa](https://creativecommons.org/licenses/by-sa/3.0/legalcode) licence.
This dataset can be used to fine-tune and evaluate extractive-QA and Language Models.
### Supported Tasks and Leaderboards
Extractive-QA, Language Model
### Languages
The dataset is in Catalan (`ca-CA`).
## Dataset Structure
### Data Instances
```
{
'id': 'P_66_C_391_Q1',
'title': 'Xavier Miserachs i Ribalta',
'context': "En aquesta època es va consolidar el concepte modern del reportatge fotogràfic, diferenciat del fotoperiodisme[n. 2] i de la fotografia documental,[n. 3] pel que fa a l'abast i el concepte. El reportatge fotogràfic implica més la idea de relat: un treball que vol més dedicació de temps, un esforç d'interpretació d'una situació i que culmina en un conjunt d'imatges. Això implica, d'una banda, la reivindicació del fotògraf per opinar, fet que li atorgarà estatus d'autor; l'autor proposa, doncs, una interpretació pròpia de la realitat. D'altra banda, el consens que s'estableix entre la majoria de fotògrafs és que el vehicle natural de la imatge fotogràfica és la pàgina impresa. Això suposà que revistes com Life, Paris-Match, Stern o Época assolissin la màxima esplendor en aquest període.",
'question': 'De què es diferenciava el reportatge fotogràfic?',
'answers': [{
'text': 'del fotoperiodisme[n. 2] i de la fotografia documental',
'answer_start': 92
}]
}
```
### Data Fields
Follows [Rajpurkar, Pranav et al. (2016)](http://arxiv.org/abs/1606.05250) for SQuAD v1 datasets.
- `id` (str): Unique ID assigned to the question.
- `title` (str): Title of the Wikipedia article.
- `context` (str): Wikipedia section text.
- `question` (str): Question.
- `answers` (list): List of answers to the question, each containing:
- `text` (str): Text of the answer span.
- `answer_start` (int): Starting character offset of the answer span within the context.
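The `answer_start` value is a 0-based character offset into `context`, so the answer can be recovered by slicing. A small sketch using the instance shown above (the context is truncated here for brevity; only the prefix covering the answer is needed):

```python
# Recover the answer span from `answer_start`, a 0-based character offset
# into `context`, using the ViquiQuAD instance shown above.
# Context truncated for brevity.
context = (
    "En aquesta època es va consolidar el concepte modern del reportatge "
    "fotogràfic, diferenciat del fotoperiodisme[n. 2] i de la fotografia "
    "documental,[n. 3] pel que fa a l'abast i el concepte."
)
answer = {
    "text": "del fotoperiodisme[n. 2] i de la fotografia documental",
    "answer_start": 92,
}
span = context[answer["answer_start"] : answer["answer_start"] + len(answer["text"])]
assert span == answer["text"]
```

The same slicing check is a useful validation step when preprocessing any SQuAD-style dataset.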
### Data Splits
- train: 11259 examples
- development: 1493 examples
- test: 1428 examples
## Dataset Creation
### Curation Rationale
We hope this dataset contributes to the development of language models in Catalan, a low-resource language.
### Source Data
- [Catalan Wikipedia](https://ca.wikipedia.org)
#### Initial Data Collection and Normalization
The source data are scraped articles from the [Catalan Wikipedia](https://ca.wikipedia.org) site.
From a set of high-quality, original (non-translated) articles in the Catalan Wikipedia, 597 were randomly chosen, and from them 3111 contexts of 5-8 sentences were extracted. We commissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 ([Rajpurkar, Pranav et al. (2016)](http://arxiv.org/abs/1606.05250)). In total, 15153 pairs of a question and a fragment containing the answer were created.
For compatibility with similar datasets in other languages, we followed as close as possible existing curation guidelines.
#### Who are the source language producers?
Volunteers who collaborate with Catalan Wikipedia.
### Annotations
#### Annotation process
We commissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 ([Rajpurkar, Pranav et al. (2016)](http://arxiv.org/abs/1606.05250)).
#### Who are the annotators?
Annotation was commissioned to a specialized company that hired a team of native language speakers.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this dataset contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International License</a>.
### Citation Information
```
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
[DOI](https://doi.org/10.5281/zenodo.4562344)
### Contributions
[N/A] |
projecte-aina | null | ADD CITATION | professional translation into Catalan of Winograd NLI dataset as published in GLUE Benchmark.
The Winograd NLI dataset presents 855 sentence pairs,
in which the first sentence contains an ambiguity and the second one a possible interpretation of it.
The label indicates if the interpretation is correct (1) or not (0). | false | 1 | false | projecte-aina/wnli-ca | 2022-11-16T15:25:12.000Z | null | false | 23761fd18c32531ab63271267bf1fb99e3ad909a | [] | [
"annotations_creators:expert-generated",
"language_creators:found",
"language:ca",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|glue",
"task_categories:text-classification",
"task_ids:natural-language-inference"
] | https://huggingface.co/datasets/projecte-aina/wnli-ca/resolve/main/README.md | ---
YAML tags:
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ca
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: wnli-ca
size_categories:
- unknown
source_datasets:
- extended|glue
task_categories:
- text-classification
task_ids:
- natural-language-inference
---
# WNLI-ca
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Website:** https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html
- **Point of Contact:** [Carlos Rodríguez-Penagos](carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](carme.armentano@bsc.es)
### Dataset Summary
"A Winograd schema is a pair of sentences that differ in only one or two words and that contain an ambiguity that is resolved in opposite ways in the two sentences and requires the use of world knowledge and reasoning for its resolution. The schema takes its name from Terry Winograd." Source: [The Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html).
The [Winograd NLI dataset](https://dl.fbaipublicfiles.com/glue/data/WNLI.zip) presents 855 sentence pairs, in which the first sentence contains an ambiguity and the second one a possible interpretation of it. The label indicates if the interpretation is correct (1) or not (0).
This dataset is a professional translation into Catalan of [Winograd NLI dataset](https://dl.fbaipublicfiles.com/glue/data/WNLI.zip) as published in [GLUE Benchmark](https://gluebenchmark.com/tasks).
Both the original dataset and this translation are licenced under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/).
### Supported Tasks and Leaderboards
Textual entailment, Text classification, Language Model.
### Languages
The dataset is in Catalan (`ca-CA`)
## Dataset Structure
### Data Instances
Three tsv files.
### Data Fields
- index
- sentence 1: first sentence of the pair
- sentence 2: second sentence of the pair
- label: relation between the two sentences:
* 0: the second sentence does not entail a correct interpretation of the first one (neutral)
* 1: the second sentence entails a correct interpretation of the first one (entailment)
### Example
| index | sentence 1 | sentence 2 | label |
| ------- |----------- | --------- | ----- |
| 0 | Vaig clavar una agulla en una pastanaga. Quan la vaig treure, tenia un forat. | La pastanaga tenia un forat. | 1 |
| 1 | En Joan no podia veure l’escenari amb en Guillem davant seu perquè és molt baix. | En Joan és molt baix. | 1 |
| 2 | Els policies van arrestar tots els membres de la banda. Volien aturar el tràfic de drogues del barri. | Els policies volien aturar el tràfic de drogues del barri. | 1 |
| 3 | L’Esteve segueix els passos d’en Frederic en tot. L’influencia moltíssim. | L’Esteve l’influencia moltíssim. | 0 |
### Data Splits
- wnli-train-ca.csv: 636
- wnli-dev-ca.csv: 72
- wnli-test-shuffled-ca.csv: 147
## Dataset Creation
### Curation Rationale
We translated this dataset to contribute to the development of language models in Catalan, a low-resource language, and to allow inter-lingual comparisons.
### Source Data
- [GLUE Benchmark site](https://gluebenchmark.com)
#### Initial Data Collection and Normalization
This is a professional translation of [WNLI dataset](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html) into Catalan, commissioned by BSC TeMU within the [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/).
For more information on how the Winograd NLI dataset was created, visit the webpage [The Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html).
#### Who are the source language producers?
For more information on how the Winograd NLI dataset was created, visit the webpage [The Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html).
### Annotations
#### Annotation process
We commissioned a professional translation of the [WNLI dataset](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html) into Catalan.
#### Who are the annotators?
The translation was commissioned to a professional translator.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by/4.0/">CC Attribution 4.0 International License</a>.
### Contributions
[N/A]
|
projecte-aina | null | Carlos Gerardo Rodriguez-Penagos, & Carme Armentano-Oller. (2021). XQuAD-ca [Data set].
Zenodo. http://doi.org/10.5281/zenodo.4757559 | Professional translation into Catalan of XQuAD dataset (https://github.com/deepmind/xquad).
XQuAD (Cross-lingual Question Answering Dataset) is a benchmark dataset for evaluating
cross-lingual question answering performance.
The dataset consists of a subset of 240 paragraphs and 1190 question-answer pairs from
the development set of SQuAD v1.1 (Rajpurkar et al., 2016) together with
their professional translations into ten languages:
Spanish, German, Greek, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, and Hindi.
Romanian was added later.
We added the 13th language, Catalan, to the corpus, also using professional native translators.
XQuAD and XQuAD-Ca datasets are released under CC-by-sa licence. | false | 2 | false | projecte-aina/xquad-ca | 2022-11-16T15:29:39.000Z | null | false | 6286f625078d7059f9f6a1086811edf06f8d1a79 | [] | [
"arxiv:2107.07903",
"arxiv:1606.05250",
"arxiv:1910.11856",
"annotations_creators:expert-generated",
"language_creators:found",
"language:ca",
"license:cc-by-sa-4.0",
"multilinguality:monolingual",
"size_categories:unknown",
"task_categories:question-answering",
"task_ids:extractive-qa"
] | https://huggingface.co/datasets/projecte-aina/xquad-ca/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ca
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: xquad-ca
size_categories:
- unknown
source_datasets: []
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for XQuAD-Ca
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://zenodo.org/record/6669801
- **Paper:** [Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan](https://arxiv.org/abs/2107.07903)
- **Point of Contact:** [Carlos Rodríguez-Penagos](carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](carme.armentano@bsc.es)
### Dataset Summary
Professional translation into Catalan of [XQuAD dataset](https://github.com/deepmind/xquad).
XQuAD (Cross-lingual Question Answering Dataset) is a benchmark dataset for evaluating cross-lingual question answering performance. The dataset consists of a subset of 240 paragraphs and 1190 question-answer pairs from the development set of SQuAD v1.1 ([Rajpurkar, Pranav et al., 2016](http://arxiv.org/abs/1606.05250)) together with their professional translations into ten languages: Spanish, German, Greek, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, and Hindi. Romanian was added later. We added the 13th language, Catalan, to the corpus, also using professional native translators.
XQuAD and XQuAD-Ca datasets are released under [CC-by-sa](https://creativecommons.org/licenses/by-sa/3.0/legalcode) licence.
### Supported Tasks and Leaderboards
Cross-lingual-QA, Extractive-QA, Language Model
### Languages
The dataset is in Catalan (`ca-CA`)
## Dataset Structure
### Data Instances
One JSON file with 1189 examples.
<pre>
{
"data": [
{
"context": "Al llarg de la seva existència, Varsòvia ha estat una ciutat multicultural. Segons el cens del 1901, de 711.988 habitants, el 56,2 % eren catòlics, el 35,7 % jueus, el 5 % cristians ortodoxos grecs i el 2,8 % protestants. Vuit anys després, el 1909, hi havia 281.754 jueus (36,9 %), 18.189 protestants (2,4 %) i 2.818 mariavites (0,4 %). Això va provocar que es construïssin centenars de llocs de culte religiós a totes les parts de la ciutat. La majoria d’ells es van destruir després de la insurrecció de Varsòvia del 1944. Després de la guerra, les noves autoritats comunistes de Polònia van apocar la construcció d’esglésies i només se’n va construir un petit nombre.",
"qas": [
{
"answers": [
{
"text": "711.988",
"answer_start": 104
}
],
"id": "57338007d058e614000b5bdb",
"question": "Quina era la població de Varsòvia l’any 1901?"
},
{
"answers": [
{
"text": "56,2 %",
"answer_start": 126
}
],
"id": "57338007d058e614000b5bdc",
"question": "Dels habitants de Varsòvia l’any 1901, quin percentatge era catòlic?"
},
...
]
}
]
},
...
]
}
</pre>
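A file with this layout can be flattened into question-answer records with the standard library alone. Below is a minimal sketch; the nesting follows the excerpt above, and note that the canonical SQuAD v1 format wraps each context in an additional `paragraphs` list, so adapt accordingly:

```python
import json

# Sketch: flatten the layout shown above into
# (question, answer_text, answer_start, context) records.
def iter_qa(squad: dict):
    for entry in squad["data"]:
        for qa in entry["qas"]:
            for ans in qa["answers"]:
                yield qa["question"], ans["text"], ans["answer_start"], entry["context"]

# Tiny inline sample mirroring the excerpt (context truncated).
doc = json.loads("""
{"data": [{"context": "Al llarg de la seva existència, Varsòvia ...",
           "qas": [{"id": "57338007d058e614000b5bdb",
                    "question": "Quina era la població de Varsòvia l'any 1901?",
                    "answers": [{"text": "711.988", "answer_start": 104}]}]}]}
""")
records = list(iter_qa(doc))
```

Reading the real `test.json` with `json.load` and passing the result to `iter_qa` yields all 1189 question-answer records.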
### Data Fields
Follows [Rajpurkar, Pranav et al., 2016](http://arxiv.org/abs/1606.05250) for SQuAD v1 datasets.
- `id` (str): Unique ID assigned to the question.
- `title` (str): Title of the Wikipedia article.
- `context` (str): Wikipedia section text.
- `question` (str): Question.
- `answers` (list): List of answers to the question, each containing:
- `text` (str): Text of the answer span.
- `answer_start` (int): Starting character offset of the answer span within the context.
### Data Splits
- test.json: 1189 examples.
## Dataset Creation
### Curation Rationale
We created this dataset to contribute to the development of language models in Catalan, a low-resource language, to maintain compatibility with similar datasets in other languages, and to allow inter-lingual comparisons.
### Source Data
- [XQuAD's webpage](https://github.com/deepmind/xquad).
#### Initial Data Collection and Normalization
This dataset is a professional translation of [XQuAD](https://github.com/deepmind/xquad) into Catalan, commissioned by [BSC TeMU](https://temu.bsc.es/) within [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/).
For more information on how XQuAD was created, refer to the paper [On the Cross-lingual Transferability of Monolingual Representations](https://arxiv.org/abs/1910.11856), or visit the [XQuAD webpage](https://github.com/deepmind/xquad).
#### Who are the source language producers?
For more information on how XQuAD was created, refer to the paper [On the Cross-lingual Transferability of Monolingual Representations](https://arxiv.org/abs/1910.11856), or visit the [XQuAD webpage](https://github.com/deepmind/xquad).
### Annotations
This is a professional translation of the XQuAD corpus and its annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
Translation was commissioned to a professional translation company.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Carlos Rodríguez-Penagos (carlos.rodriguez1@bsc.es) and Carme Armentano-Oller (carme.armentano@bsc.es) from [BSC-CNS](https://www.bsc.es/).
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International License</a>.
### Citation Information
```
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
[DOI](https://doi.org/10.5281/zenodo.4526223)
### Contributions
[N/A] |
pstroe | null | null | null | false | 8 | false | pstroe/cc100-latin | 2022-11-02T14:28:12.000Z | null | false | 1df6ec7dce31491d28f5af112c6ad3a70716a159 | [] | [] | https://huggingface.co/datasets/pstroe/cc100-latin/resolve/main/README.md | ## Latin part of cc100 corpus
This dataset contains parts of the Latin part of the [cc100](http://data.statmt.org/cc-100/) dataset. It was used to train a [RoBERTa-based LM model](https://huggingface.co/pstroe/roberta-base-latin-cased) with huggingface.
### Preprocessing
I undertook the following preprocessing steps:
- Removal of all "pseudo-Latin" text ("Lorem ipsum ...").
- Use of [CLTK](http://www.cltk.org) for sentence splitting and normalisation.
- Retaining only lines containing letters of the Latin alphabet, numerals, and certain punctuation (via `grep -P '^[A-z0-9ÄÖÜäöüÆæŒœᵫĀāūōŌ.,;:?!\- Ęę]+$' la.nolorem.tok.txt`).
- Deduplication of the corpus.
The result is a corpus of ~390 million tokens.
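The character filter above can be reproduced in Python. A sketch, with one caveat: the range `A-z` in the original `grep` class also matches a few ASCII punctuation characters that sit between `Z` and `a`, so `A-Za-z` below is the stricter, letters-only reading:

```python
import re

# Python sketch of the grep filter above. Note: the original class uses
# `A-z`, which also matches the ASCII characters between 'Z' and 'a'
# ([, \, ], ^, _, `); `A-Za-z` is the stricter letters-only reading.
KEEP = re.compile(r"^[A-Za-z0-9ÄÖÜäöüÆæŒœᵫĀāūōŌ.,;:?!\- Ęę]+$")

lines = ["Errare humanum est .", "¡Lorem ipsum!", "Alea iacta est ."]
kept = [line for line in lines if KEEP.match(line)]
```

Applying this predicate line by line over the tokenized corpus reproduces the retained subset described above.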
### Structure
The dataset is structured the following way:
```
{
"train": {
"text": "Solventibus autem illis pullum , dixerunt domini ejus ad illos : Quid solvitis pullum ?",
"text": "Errare humanum est ."
...
}
"test": {
"text": "Alia iacta est ."
...
}
}
```
### Contact
For contact, reach out to Phillip Ströbel [via mail](mailto:pstroebel@cl.uzh.ch) or [via Twitter](https://twitter.com/CLingophil). |
puffy310 | null | null | null | false | 2 | false | puffy310/yandset | 2022-03-01T06:18:16.000Z | null | false | 1e7db1d2e6e0984ff24efa50133d3adc90205429 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/puffy310/yandset/resolve/main/README.md | ---
license: apache-2.0
---
|
pulmo | null | null | null | false | 1 | false | pulmo/chest_xray | 2021-07-25T15:10:08.000Z | null | false | c8cfe7c55b5245ef9b48edd2ca37e1a1df6a04ff | [] | [] | https://huggingface.co/datasets/pulmo/chest_xray/resolve/main/README.md | COVID-19 image data collection |
qa4pc | null | null | null | false | 1 | false | qa4pc/QA4PC | 2021-11-23T11:22:13.000Z | null | false | f05fdf09596bd7fcf8b2c14cb93305e7b7a7fa54 | [] | [] | https://huggingface.co/datasets/qa4pc/QA4PC/resolve/main/README.md | ## QA4PC Dataset (paper: Cross-Policy Compliance Detection via Question Answering)
### Train Sets
To create the training sets for the entailment and QA tasks, download and convert the ShARC data using the following commands:
```
wget https://sharc-data.github.io/data/sharc1-official.zip
unzip sharc1-official.zip
python create_train_from_sharc.py -sharc_dev_path sharc1-official/json/sharc_dev.json -sharc_train_path sharc1-official/json/sharc_train.json
```
### Evaluation Sets
#### Entailment Data
The following files contain the data for the entailment task. Each example includes the policy plus questions, a scenario, and an answer (_Yes_, _No_, _Maybe_). Each datapoint also contains information from the ShARC dataset, such as the tree_id and source_url.
- __dev_entailment_qa4pc.json__
- __test_entailment_qa4pc.json__
#### QA Data
The following files contain the data for the QA task.
- __dev_sc_qa4pc.json__
- __test_sc_qa4pc.json__
The following file contains the expression tree data for the dev and test sets. Each tree includes a policy, a set of questions and a logical expression.
- __trees_dev_test_qa4pc.json__
|
qanastek | null | @misc{
universaldependencies,
title={UniversalDependencies/UD_French-GSD},
url={https://github.com/UniversalDependencies/UD_French-GSD}, journal={GitHub},
author={UniversalDependencies}
}
@inproceedings{mcdonald-etal-2013-universal,
title = {{U}niversal {D}ependency Annotation for Multilingual Parsing},
author = {
McDonald, Ryan and
Nivre, Joakim and
Quirmbach-Brundage, Yvonne and
Goldberg, Yoav and
Das, Dipanjan and
Ganchev, Kuzman and
Hall, Keith and
Petrov, Slav and
Zhang, Hao and
Tackstrom, Oscar and
Bedini, Claudia and
Bertomeu Castello, Nuria and
Lee, Jungmee
},
booktitle = {Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)},
month = aug,
year = {2013},
address = {Sofia, Bulgaria},
publisher = {Association for Computational Linguistics},
url = {https://aclanthology.org/P13-2017},
pages = {92--97",
}
@techreport{
LIA_TAGG,
author = {Frédéric Béchet},
title = {LIA_TAGG: a statistical POS tagger + syntactic bracketer},
institution = {Aix-Marseille University & CNRS},
year = {2001}
} | null | false | 4 | false | qanastek/ANTILLES | 2022-10-24T17:13:19.000Z | null | false | 9a6b58c803ec27ad00117420022761b2a69cf526 | [] | [
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"language:fr",
"language_bcp47:fr-FR",
"size_categories:100K<n<1M",
"source_datasets:original",
"task_categories:token-classification",
"task_ids:part-of-speech-tagging"
] | https://huggingface.co/datasets/qanastek/ANTILLES/resolve/main/README.md | ---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- found
language:
- fr
language_bcp47:
- fr-FR
pretty_name: ANTILLES
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- part-of-speech-tagging
---
# ANTILLES : An Open French Linguistically Enriched Part-of-Speech Corpus
## Table of Contents
- [Dataset Card for [Needs More Information]](#dataset-card-for-needs-more-information)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [sent_id = fr-ud-dev_00005](#sent_id--fr-ud-dev_00005)
- [text = Travail de trés grande qualité exécuté par un imprimeur artisan passionné.](#text--travail-de-trs-grande-qualit-excut-par-un-imprimeur-artisan-passionn)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://qanastek.github.io/ANTILLES/
- **Repository:** https://github.com/qanastek/ANTILLES
- **Paper:** https://hal.archives-ouvertes.fr/hal-03696042/document
- **Leaderboard:** https://paperswithcode.com/dataset/antilles
- **Point of Contact:** [Yanis Labrak](mailto:yanis.labrak@univ-avignon.fr)
### Dataset Summary
`ANTILLES` is a part-of-speech tagging corpus based on [UD_French-GSD](https://universaldependencies.org/treebanks/fr_gsd/index.html), which was originally created in 2015 and is itself based on the [universal dependency treebank v2.0](https://github.com/ryanmcd/uni-dep-tb).
Originally, the corpus consisted of 400,399 words (16,341 sentences) annotated with 17 different classes. After applying our tag-augmentation script `transform.py`, we obtain 60 different classes that add semantic information such as the gender, number, mood, person, tense or verb form given in the CoNLL-U fields of the original corpus.
We based our tags on the level of detail provided by the [LIA_TAGG](http://pageperso.lif.univ-mrs.fr/frederic.bechet/download.html) statistical POS tagger written by [Frédéric Béchet](http://pageperso.lif.univ-mrs.fr/frederic.bechet/index-english.html) in 2001.
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>.
### Supported Tasks and Leaderboards
`part-of-speech-tagging`: The dataset can be used to train a model for part-of-speech tagging; performance is measured by the F1 score. A Flair `SequenceTagger` model trained to tag tokens from Wikipedia passages achieves a micro F1 score of 0.952.
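As context for the metric: since the tagger emits exactly one tag per token, micro-averaged F1 reduces to token-level accuracy. A minimal sketch under that assumption (`micro_f1` is an illustrative helper, not part of the dataset tooling):

```python
# Illustrative helper: with exactly one predicted tag per token,
# micro-averaged F1 equals token-level accuracy.
def micro_f1(gold, pred):
    """Micro-averaged F1 over aligned gold/predicted tag sequences."""
    tp = sum(g == p for g, p in zip(gold, pred))
    if tp == 0:
        return 0.0
    precision = recall = tp / len(pred)
    return 2 * precision * recall / (precision + recall)

print(micro_f1(["NMS", "PREP", "ADV"], ["NMS", "PREP", "ADJ"]))  # ≈ 0.667 (2/3)
```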
### Languages
The text in the dataset is in French, as spoken by [Wikipedia](https://en.wikipedia.org/wiki/Main_Page) users. The associated [BCP-47](https://tools.ietf.org/html/bcp47) code is `fr`.
## Load the dataset
### HuggingFace
```python
from datasets import load_dataset
dataset = load_dataset("qanastek/ANTILLES")
print(dataset)
```
### FlairNLP
```python
from flair.datasets import UniversalDependenciesCorpus
corpus: Corpus = UniversalDependenciesCorpus(
data_folder='ANTILLES',
train_file="train.conllu",
test_file="test.conllu",
dev_file="dev.conllu"
)
```
## Load the model
### Flair ([model](https://huggingface.co/qanastek/pos-french))
```python
from flair.models import SequenceTagger
tagger = SequenceTagger.load("qanastek/pos-french")
```
## HuggingFace Spaces
<table style="width: fit-content;">
<thead>
<tr>
<td>
<a href="https://huggingface.co/spaces/qanastek/French-Part-Of-Speech-Tagging">
<img src="https://huggingface.co/datasets/qanastek/ANTILLES/raw/main/imgs/en.png" width="160">
</a>
</td>
<td>
<a href="https://huggingface.co/spaces/qanastek/Etiqueteur-Morphosyntaxique-Etendu">
<img src="https://huggingface.co/datasets/qanastek/ANTILLES/raw/main/imgs/fr.png" width="160">
</a>
</td>
</tr>
</thead>
</table>
## Dataset Structure
### Data Instances
```plain
# sent_id = fr-ud-dev_00005
# text = Travail de trés grande qualité exécuté par un imprimeur artisan passionné.
1 Travail travail NMS _ Gender=Masc|Number=Sing 0 root _ wordform=travail
2 de de PREP _ _ 5 case _ _
3 trés trés ADV _ _ 4 advmod _ _
4 grande grand ADJFS _ Gender=Fem|Number=Sing 5 amod _ _
5 qualité qualité NFS _ Gender=Fem|Number=Sing 1 nmod _ _
6 exécuté exécuter VPPMS _ Gender=Masc|Number=Sing|Tense=Past|VerbForm=Part 1 acl _ _
7 par par PREP _ _ 9 case _ _
8 un un DINTMS _ Definite=Ind|Gender=Masc|Number=Sing|PronType=Art 9 det _ _
9 imprimeur imprimeur NMS _ Gender=Masc|Number=Sing 6 obl:agent _ _
10 artisan artisan NMS _ Gender=Masc|Number=Sing 9 nmod _ _
11 passionné passionné ADJMS _ Gender=Masc|Number=Sing 9 amod _ SpaceAfter=No
12 . . YPFOR _ _ 1 punct _ _
```
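A sentence block like the one above follows the 10-column CoNLL-U layout (ID, FORM, LEMMA, tag, ...). A minimal parsing sketch, assuming tab-separated columns as in the distributed `.conllu` files (`parse_conllu_sentence` is an illustrative helper, not part of the dataset tooling):

```python
# Minimal sketch: extract (FORM, tag) pairs from a CoNLL-U sentence block,
# assuming tab-separated 10-column lines as in standard .conllu files.
def parse_conllu_sentence(block: str):
    """Return (form, tag) pairs, skipping '#' comment lines and blank lines."""
    pairs = []
    for line in block.strip().splitlines():
        if not line.strip() or line.startswith("#"):
            continue
        cols = line.split("\t")
        if len(cols) >= 4:
            pairs.append((cols[1], cols[3]))  # FORM (col 2), tag (col 4)
    return pairs

sentence = (
    "# sent_id = fr-ud-dev_00005\n"
    "1\tTravail\ttravail\tNMS\t_\tGender=Masc|Number=Sing\t0\troot\t_\t_\n"
    "2\tde\tde\tPREP\t_\t_\t5\tcase\t_\t_\n"
)
print(parse_conllu_sentence(sentence))  # [('Travail', 'NMS'), ('de', 'PREP')]
```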
### Data Fields
| Abbreviation | Description | Examples | # tokens |
|:--------:|:--------:|:--------:|:--------:|
| PREP | Preposition | de | 63 738 |
| AUX | Auxiliary Verb | est | 12 886 |
| ADV | Adverb | toujours | 14 969 |
| COSUB | Subordinating conjunction | que | 3 007 |
| COCO | Coordinating Conjunction | et | 10 102 |
| PART | Demonstrative particle | -t | 93 |
| PRON | Pronoun | qui ce quoi | 667 |
| PDEMMS | Singular Masculine Demonstrative Pronoun | ce | 1 950 |
| PDEMMP | Plural Masculine Demonstrative Pronoun | ceux | 108 |
| PDEMFS | Singular Feminine Demonstrative Pronoun | cette | 1 004 |
| PDEMFP | Plural Feminine Demonstrative Pronoun | celles | 53 |
| PINDMS | Singular Masculine Indefinite Pronoun | tout | 961 |
| PINDMP | Plural Masculine Indefinite Pronoun | autres | 89 |
| PINDFS | Singular Feminine Indefinite Pronoun | chacune | 136 |
| PINDFP | Plural Feminine Indefinite Pronoun | certaines | 31 |
| PROPN | Proper noun | houston | 22 135 |
| XFAMIL | Last name | levy | 6 449 |
| NUM | Numerical Adjectives | trentaine vingtaine | 67 |
| DINTMS | Masculine Numerical Adjectives | un | 4 254 |
| DINTFS | Feminine Numerical Adjectives | une | 3 543 |
| PPOBJMS | Singular Masculine Pronoun complements of objects | le lui | 1 425 |
| PPOBJMP | Plural Masculine Pronoun complements of objects | eux y | 212 |
| PPOBJFS | Singular Feminine Pronoun complements of objects | moi la | 358 |
| PPOBJFP | Plural Feminine Pronoun complements of objects | en y | 70 |
| PPER1S | Personal Pronoun First Person Singular | je | 571 |
| PPER2S | Personal Pronoun Second Person Singular | tu | 19 |
| PPER3MS | Personal Pronoun Third Person Masculine Singular | il | 3 938 |
| PPER3MP | Personal Pronoun Third Person Masculine Plural | ils | 513 |
| PPER3FS | Personal Pronoun Third Person Feminine Singular | elle | 992 |
| PPER3FP | Personal Pronoun Third Person Feminine Plural | elles | 121 |
| PREFS | Reflexive Pronouns First Person of Singular | me m' | 120 |
| PREF | Reflexive Pronouns Third Person of Singular | se s' | 2 337 |
| PREFP | Reflexive Pronouns First / Second Person of Plural | nous vous | 686 |
| VERB | Verb | obtient | 21 131 |
| VPPMS | Singular Masculine Past Participle Verb | formulé | 6 275 |
| VPPMP | Plural Masculine Past Participle Verb | classés | 1 352 |
| VPPFS | Singular Feminine Past Participle Verb | appelée | 2 434 |
| VPPFP | Plural Feminine Past Participle Verb | sanctionnées | 813 |
| VPPRE | Present participle | étant | 2 |
| DET | Determiner | les l' | 25 206 |
| DETMS | Singular Masculine Determiner | les | 15 444 |
| DETFS | Singular Feminine Determiner | la | 10 978 |
| ADJ | Adjective | capable sérieux | 1 075 |
| ADJMS | Singular Masculine Adjective | grand important | 8 338 |
| ADJMP | Plural Masculine Adjective | grands petits | 3 274 |
| ADJFS | Singular Feminine Adjective | française petite | 8 004 |
| ADJFP | Plural Feminine Adjective | légères petites | 3 041 |
| NOUN | Noun | temps | 1 389 |
| NMS | Singular Masculine Noun | drapeau | 29 698 |
| NMP | Plural Masculine Noun | journalistes | 10 882 |
| NFS | Singular Feminine Noun | tête | 25 414 |
| NFP | Plural Feminine Noun | ondes | 7 448 |
| PREL | Relative Pronoun | qui dont | 2 976 |
| PRELMS | Singular Masculine Relative Pronoun | lequel | 94 |
| PRELMP | Plural Masculine Relative Pronoun | lesquels | 29 |
| PRELFS | Singular Feminine Relative Pronoun | laquelle | 70 |
| PRELFP | Plural Feminine Relative Pronoun | lesquelles | 25 |
| PINTFS | Singular Feminine Interrogative Pronoun | laquelle | 3 |
| INTJ | Interjection | merci bref | 75 |
| CHIF | Numbers | 1979 10 | 10 417 |
| SYM | Symbol | € % | 705 |
| YPFOR | Endpoint | . | 15 088 |
| PUNCT | Punctuation | : , | 28 918 |
| MOTINC | Unknown words | Technology Lady | 2 022 |
| X | Typos & others | sfeir 3D statu | 175 |
### Data Splits
| | Train | Dev | Test |
|:------------------:|:------:|:------:|:-----:|
| # Docs | 14 449 | 1 476 | 416 |
| Avg # Tokens / Doc | 24.54 | 24.19 | 24.08 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The corpus is free of personal or sensitive information since it is based on the content of `Wikipedia` articles.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
The nature of the corpus introduces various biases. Street names, for instance, are time-dependent and can carry named entities such as author or event names: streets like `Rue Victor-Hugo` or `Rue Pasteur` did not exist in France before the 20th century.
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
__ANTILLES__: Labrak Yanis, Dufour Richard
__UD_FRENCH-GSD__: de Marneffe Marie-Catherine, Guillaume Bruno, McDonald Ryan, Suhr Alane, Nivre Joakim, Grioni Matias, Dickerson Carly, Perrier Guy
__Universal Dependency__: Ryan McDonald, Joakim Nivre, Yvonne Quirmbach-Brundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar Tackstrom, Claudia Bedini, Nuria Bertomeu Castello and Jungmee Lee
### Licensing Information
```plain
For the following languages
German, Spanish, French, Indonesian, Italian, Japanese, Korean and Brazilian
Portuguese
we will distinguish between two portions of the data.
1. The underlying text for sentences that were annotated. This data Google
asserts no ownership over and no copyright over. Some or all of these
sentences may be copyrighted in some jurisdictions. Where copyrighted,
Google collected these sentences under exceptions to copyright or implied
license rights. GOOGLE MAKES THEM AVAILABLE TO YOU 'AS IS', WITHOUT ANY
WARRANTY OF ANY KIND, WHETHER EXPRESS OR IMPLIED.
2. The annotations -- part-of-speech tags and dependency annotations. These are
made available under a CC BY-SA 4.0. GOOGLE MAKES
THEM AVAILABLE TO YOU 'AS IS', WITHOUT ANY WARRANTY OF ANY KIND, WHETHER
EXPRESS OR IMPLIED. See attached LICENSE file for the text of CC BY-NC-SA.
Portions of the German data were sampled from the CoNLL 2006 Tiger Treebank
data. Hans Uszkoreit graciously gave permission to use the underlying
sentences in this data as part of this release.
Any use of the data should reference the above plus:
Universal Dependency Annotation for Multilingual Parsing
Ryan McDonald, Joakim Nivre, Yvonne Quirmbach-Brundage, Yoav Goldberg,
Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang,
Oscar Tackstrom, Claudia Bedini, Nuria Bertomeu Castello and Jungmee Lee
Proceedings of ACL 2013
```
### Citation Information
Please cite the following paper when using this model.
ANTILLES extended corpus:
```latex
@inproceedings{labrak:hal-03696042,
TITLE = {{ANTILLES: An Open French Linguistically Enriched Part-of-Speech Corpus}},
AUTHOR = {Labrak, Yanis and Dufour, Richard},
URL = {https://hal.archives-ouvertes.fr/hal-03696042},
BOOKTITLE = {{25th International Conference on Text, Speech and Dialogue (TSD)}},
ADDRESS = {Brno, Czech Republic},
PUBLISHER = {{Springer}},
YEAR = {2022},
MONTH = Sep,
KEYWORDS = {Part-of-speech corpus ; POS tagging ; Open tools ; Word embeddings ; Bi-LSTM ; CRF ; Transformers},
PDF = {https://hal.archives-ouvertes.fr/hal-03696042/file/ANTILLES_A_freNch_linguisTIcaLLy_Enriched_part_of_Speech_corpus.pdf},
HAL_ID = {hal-03696042},
HAL_VERSION = {v1},
}
```
UD_French-GSD corpora:
```latex
@misc{
universaldependencies,
title={UniversalDependencies/UD_French-GSD},
url={https://github.com/UniversalDependencies/UD_French-GSD}, journal={GitHub},
author={UniversalDependencies}
}
```
{U}niversal {D}ependency Annotation for Multilingual Parsing:
```latex
@inproceedings{mcdonald-etal-2013-universal,
title = "{U}niversal {D}ependency Annotation for Multilingual Parsing",
author = {McDonald, Ryan and
Nivre, Joakim and
Quirmbach-Brundage, Yvonne and
Goldberg, Yoav and
Das, Dipanjan and
Ganchev, Kuzman and
Hall, Keith and
Petrov, Slav and
Zhang, Hao and
T{\"a}ckstr{\"o}m, Oscar and
Bedini, Claudia and
Bertomeu Castell{\'o}, N{\'u}ria and
Lee, Jungmee},
booktitle = "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2013",
address = "Sofia, Bulgaria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P13-2017",
pages = "92--97",
}
```
LIA TAGG:
```latex
@techreport{LIA_TAGG,
author = {Frédéric Béchet},
title = {LIA_TAGG: a statistical POS tagger + syntactic bracketer},
institution = {Aix-Marseille University \& CNRS},
year = {2001}
}
```
|
qanastek | null | @article{10.1007/s10579-014-9277-0,
author = {Steinberger, Ralf and Ebrahim, Mohamed and Poulis, Alexandros and Carrasco-Benitez, Manuel and Schl\"{u}ter, Patrick and Przybyszewski, Marek and Gilbro, Signe},
title = {An Overview of the European Union's Highly Multilingual Parallel Corpora},
year = {2014},
issue_date = {December 2014},
publisher = {Springer-Verlag},
address = {Berlin, Heidelberg},
volume = {48},
number = {4},
issn = {1574-020X},
url = {https://doi.org/10.1007/s10579-014-9277-0},
doi = {10.1007/s10579-014-9277-0},
abstract = {Starting in 2006, the European Commission's Joint Research Centre and other European Union organisations have made available a number of large-scale highly-multilingual parallel language resources. In this article, we give a comparative overview of these resources and we explain the specific nature of each of them. This article provides answers to a number of question, including: What are these linguistic resources? What is the difference between them? Why were they originally created and why was the data released publicly? What can they be used for and what are the limitations of their usability? What are the text types, subject domains and languages covered? How to avoid overlapping document sets? How do they compare regarding the formatting and the translation alignment? What are their usage conditions? What other types of multilingual linguistic resources does the EU have? This article thus aims to clarify what the similarities and differences between the various resources are and what they can be used for. It will also serve as a reference publication for those resources, for which a more detailed description has been lacking so far (EAC-TM, ECDC-TM and DGT-Acquis).},
journal = {Lang. Resour. Eval.},
month = {dec},
pages = {679–707},
numpages = {29},
keywords = {DCEP, EAC-TM, EuroVoc, JRC EuroVoc Indexer JEX, Parallel corpora, DGT-TM, Eur-Lex, Highly multilingual, Linguistic resources, DGT-Acquis, European Union, ECDC-TM, JRC-Acquis, Translation memory}
} | null | false | 1 | false | qanastek/ECDC | 2022-10-23T04:59:32.000Z | null | false | 30a7e525efbb3094204e7e9a49bc46fd0ec7afb6 | [] | [
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"language:en",
"license:other",
"multilinguality:en-sv",
"multilinguality:en-pl",
"multilinguality:en-hu",
"multilinguality:en-lt",
"multilinguality:en-sk",
"multilinguality:en-ga",
"m... | https://huggingface.co/datasets/qanastek/ECDC/resolve/main/README.md | ---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- found
language:
- en
license:
- other
multilinguality:
- en-sv
- en-pl
- en-hu
- en-lt
- en-sk
- en-ga
- en-fr
- en-cs
- en-el
- en-it
- en-lv
- en-da
- en-nl
- en-bg
- en-is
- en-ro
- en-no
- en-pt
- en-es
- en-et
- en-mt
- en-sl
- en-fi
- en-de
pretty_name: ECDC
size_categories:
- 100K<n<1M
source_datasets:
- extended
task_categories:
- translation
- machine-translation
task_ids:
- translation
- machine-translation
---
# ECDC : An overview of the European Union's highly multilingual parallel corpora
## Table of Contents
- [Dataset Card for [Needs More Information]](#dataset-card-for-needs-more-information)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [No Warranty](#no-warranty)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://joint-research-centre.ec.europa.eu/language-technology-resources/ecdc-translation-memory_en#Introduction
- **Repository:** https://joint-research-centre.ec.europa.eu/language-technology-resources/ecdc-translation-memory_en#Introduction
- **Paper:** https://dl.acm.org/doi/10.1007/s10579-014-9277-0
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Yanis Labrak](mailto:yanis.labrak@univ-avignon.fr)
### Dataset Summary
In October 2012, the European Union (EU) agency 'European Centre for Disease Prevention and Control' (ECDC) released a translation memory (TM), i.e. a collection of sentences and their professionally produced translations, in twenty-five languages. The data is distributed via the [web pages of the EC's Joint Research Centre (JRC)](https://joint-research-centre.ec.europa.eu/language-technology-resources/ecdc-translation-memory_en#Introduction).
### Supported Tasks and Leaderboards
`translation`: The dataset can be used to train a model for translation.
### Languages
In our case, the corpus consists of pairs of source and target sentences, with English as the source and 24 target languages: the European Union (EU) languages covered by the release, plus Icelandic and Norwegian.
**List of languages:** `English (en)`, `Swedish (sv)`, `Polish (pl)`, `Hungarian (hu)`, `Lithuanian (lt)`, `Latvian (lv)`, `German (de)`, `Finnish (fi)`, `Slovak (sk)`, `Slovenian (sl)`, `French (fr)`, `Czech (cs)`, `Danish (da)`, `Italian (it)`, `Maltese (mt)`, `Dutch (nl)`, `Portuguese (pt)`, `Romanian (ro)`, `Spanish (es)`, `Estonian (et)`, `Bulgarian (bg)`, `Greek (el)`, `Irish (ga)`, `Icelandic (is)` and `Norwegian (no)`.
## Load the dataset with HuggingFace
```python
from datasets import load_dataset
dataset = load_dataset("qanastek/ECDC", "en-it", split='train', download_mode='force_redownload')
print(dataset)
print(dataset[0])
```
## Dataset Structure
### Data Instances
```plain
key,lang,source_text,target_text
doc_0,en-bg,Vaccination against hepatitis C is not yet available.,Засега няма ваксина срещу хепатит С.
doc_1355,en-bg,Varicella infection,Инфекция с варицела
doc_2349,en-bg,"If you have any questions about the processing of your e-mail and related personal data, do not hesitate to include them in your message.","Ако имате въпроси относно обработката на вашия адрес на електронна поща и свързаните лични данни, не се колебайте да ги включите в съобщението си."
doc_192,en-bg,Transmission can be reduced especially by improving hygiene in food production handling.,Предаването на инфекцията може да бъде ограничено особено чрез подобряване на хигиената при манипулациите в хранителната индустрия.
```
### Data Fields
**key** : The document identifier `String`.
**lang** : The pair of source and target language of type `String`.
**source_text** : The source text of type `String`.
**target_text** : The target text of type `String`.
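A minimal sketch of reading rows with these fields into (source, target) pairs, using only the Python standard library (the inline `raw` sample reuses the instance shown above):

```python
import csv
import io

# Minimal sketch: parse ECDC-style CSV rows into (source, target) pairs;
# `raw` reuses the sample from the Data Instances section.
raw = """key,lang,source_text,target_text
doc_0,en-bg,Vaccination against hepatitis C is not yet available.,Засега няма ваксина срещу хепатит С.
"""

pairs = [
    (row["source_text"], row["target_text"])
    for row in csv.DictReader(io.StringIO(raw))
]
print(pairs[0][0])  # Vaccination against hepatitis C is not yet available.
```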
### Data Splits
|lang | key |
|-----|-----|
|en-bg|2567 |
|en-cs|2562 |
|en-da|2577 |
|en-de|2560 |
|en-el|2530 |
|en-es|2564 |
|en-et|2581 |
|en-fi|2617 |
|en-fr|2561 |
|en-ga|1356 |
|en-hu|2571 |
|en-is|2511 |
|en-it|2534 |
|en-lt|2545 |
|en-lv|2542 |
|en-mt|2539 |
|en-nl|2510 |
|en-no|2537 |
|en-pl|2546 |
|en-pt|2531 |
|en-ro|2555 |
|en-sk|2525 |
|en-sl|2545 |
|en-sv|2527 |
## Dataset Creation
### Curation Rationale
For details, check the corresponding [pages](https://joint-research-centre.ec.europa.eu/language-technology-resources/ecdc-translation-memory_en#Introduction).
### Source Data
<!-- #### Initial Data Collection and Normalization
ddd -->
#### Who are the source language producers?
All of the data in this corpus has been made available on [JRC](https://joint-research-centre.ec.europa.eu/language-technology-resources/ecdc-translation-memory_en#Introduction).
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Considerations for Using the Data
### Other Known Limitations
The nature of the task introduces variability in the quality of the target translations.
## Additional Information
### Dataset Curators
__Hugging Face ECDC__: Labrak Yanis, Dufour Richard (Not affiliated with the original corpus)
__An overview of the European Union's highly multilingual parallel corpora__: Steinberger Ralf, Mohamed Ebrahim, Alexandros Poulis, Manuel Carrasco-Benitez, Patrick Schlüter, Marek Przybyszewski & Signe Gilbro.
### Licensing Information
By downloading or using the ECDC-Translation Memory, you are bound by the [ECDC-TM usage conditions (PDF)](https://wt-public.emm4u.eu/Resources/ECDC-TM/2012_10_Terms-of-Use_ECDC-TM.pdf).
### No Warranty
Each Work is provided ‘as is’ without, to the full extent permitted by law, representations,
warranties, obligations and liabilities of any kind, either express or implied, including, but
not limited to, any implied warranty of merchantability, integration, satisfactory quality and
fitness for a particular purpose.
Except in the cases of wilful misconduct or damages directly caused to natural persons, the
Owner will not be liable for any incidental, consequential, direct or indirect damages,
including, but not limited to, the loss of data, lost profits or any other financial loss arising
from the use of, or inability to use, the Work even if the Owner has been notified of the
possibility of such loss, damages, claims or costs, or for any claim by any third party. The
Owner may be liable under national statutory product liability laws as far as such laws apply
to the Work.
### Citation Information
Please cite the following paper when using this dataset.
```latex
@article{10.1007/s10579-014-9277-0,
author = {Steinberger, Ralf and Ebrahim, Mohamed and Poulis, Alexandros and Carrasco-Benitez, Manuel and Schl\"{u}ter, Patrick and Przybyszewski, Marek and Gilbro, Signe},
title = {An Overview of the European Union's Highly Multilingual Parallel Corpora},
year = {2014},
issue_date = {December 2014},
publisher = {Springer-Verlag},
address = {Berlin, Heidelberg},
volume = {48},
number = {4},
issn = {1574-020X},
url = {https://doi.org/10.1007/s10579-014-9277-0},
doi = {10.1007/s10579-014-9277-0},
abstract = {Starting in 2006, the European Commission's Joint Research Centre and other European Union organisations have made available a number of large-scale highly-multilingual parallel language resources. In this article, we give a comparative overview of these resources and we explain the specific nature of each of them. This article provides answers to a number of question, including: What are these linguistic resources? What is the difference between them? Why were they originally created and why was the data released publicly? What can they be used for and what are the limitations of their usability? What are the text types, subject domains and languages covered? How to avoid overlapping document sets? How do they compare regarding the formatting and the translation alignment? What are their usage conditions? What other types of multilingual linguistic resources does the EU have? This article thus aims to clarify what the similarities and differences between the various resources are and what they can be used for. It will also serve as a reference publication for those resources, for which a more detailed description has been lacking so far (EAC-TM, ECDC-TM and DGT-Acquis).},
journal = {Lang. Resour. Eval.},
month = {dec},
pages = {679–707},
numpages = {29},
keywords = {DCEP, EAC-TM, EuroVoc, JRC EuroVoc Indexer JEX, Parallel corpora, DGT-TM, Eur-Lex, Highly multilingual, Linguistic resources, DGT-Acquis, European Union, ECDC-TM, JRC-Acquis, Translation memory}
}
```
|
qanastek | null | @inproceedings{losch-etal-2018-european,
title = "European Language Resource Coordination: Collecting Language Resources for Public Sector Multilingual Information Management",
author = {L{\"o}sch, Andrea and
Mapelli, Val{\'e}rie and
Piperidis, Stelios and
Vasi{\c{l}}jevs, Andrejs and
Smal, Lilli and
Declerck, Thierry and
Schnur, Eileen and
Choukri, Khalid and
van Genabith, Josef},
booktitle = "Proceedings of the Eleventh International Conference on Language Resources and Evaluation ({LREC} 2018)",
month = may,
year = "2018",
address = "Miyazaki, Japan",
publisher = "European Language Resources Association (ELRA)",
url = "https://aclanthology.org/L18-1213",
} | null | false | 4 | false | qanastek/ELRC-Medical-V2 | 2022-10-24T17:15:17.000Z | null | false | 7f5633e7f9903947a9e51ab0e12ff483574aeebf | [] | [
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"language:en",
"language:bg",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:es",
"language:et",
"language:fi",
"language:fr",
"language:ga",
"language:hr"... | https://huggingface.co/datasets/qanastek/ELRC-Medical-V2/resolve/main/README.md | ---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- found
language:
- en
- bg
- cs
- da
- de
- el
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
multilinguality:
- multilingual
pretty_name: ELRC-Medical-V2
size_categories:
- 100K<n<1M
source_datasets:
- extended
task_categories:
- translation
task_ids:
- translation
---
# ELRC-Medical-V2 : European parallel corpus for healthcare machine translation
## Table of Contents
- [Dataset Card for [Needs More Information]](#dataset-card-for-needs-more-information)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://live.european-language-grid.eu/catalogue/project/2209
- **Repository:** https://github.com/qanastek/ELRC-Medical-V2/
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Yanis Labrak](mailto:yanis.labrak@univ-avignon.fr)
### Dataset Summary
`ELRC-Medical-V2` is a parallel corpus for neural machine translation funded by the [European Commission](http://www.lr-coordination.eu/) and coordinated by the [German Research Center for Artificial Intelligence](https://www.dfki.de/web).
### Supported Tasks and Leaderboards
`translation`: The dataset can be used to train a model for translation.
### Languages
In our case, the corpus consists of pairs of source and target sentences for 23 different languages of the European Union (EU), with English (EN) as the source language in every case.
**List of languages :** `Bulgarian (bg)`,`Czech (cs)`,`Danish (da)`,`German (de)`,`Greek (el)`,`Spanish (es)`,`Estonian (et)`,`Finnish (fi)`,`French (fr)`,`Irish (ga)`,`Croatian (hr)`,`Hungarian (hu)`,`Italian (it)`,`Lithuanian (lt)`,`Latvian (lv)`,`Maltese (mt)`,`Dutch (nl)`,`Polish (pl)`,`Portuguese (pt)`,`Romanian (ro)`,`Slovak (sk)`,`Slovenian (sl)`,`Swedish (sv)`.
## Load the dataset with HuggingFace
```python
from datasets import load_dataset
NAME = "qanastek/ELRC-Medical-V2"
dataset = load_dataset(NAME, use_auth_token=True)
print(dataset)
dataset_train = load_dataset(NAME, "en-es", split='train[:90%]')
dataset_test = load_dataset(NAME, "en-es", split='train[90%:]')
print(dataset_train)
print(dataset_train[0])
print(dataset_test)
```
## Dataset Structure
### Data Instances
```plain
id,lang,source_text,target_text
1,en-bg,"TOC \o ""1-3"" \h \z \u Introduction 3","TOC \o ""1-3"" \h \z \u Въведение 3"
2,en-bg,The international humanitarian law and its principles are often not respected.,Международното хуманитарно право и неговите принципи често не се зачитат.
3,en-bg,"At policy level, progress was made on several important initiatives.",На равнище политики напредък е постигнат по няколко важни инициативи.
```
### Data Fields
**id** : The document identifier of type `Integer`.
**lang** : The pair of source and target language of type `String`.
**source_text** : The source text of type `String`.
**target_text** : The target text of type `String`.
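A minimal sketch of grouping rows with these fields by language pair, using only the Python standard library (the inline `raw` sample reuses the instances shown above):

```python
import csv
import io
from collections import defaultdict

# Minimal sketch: group ELRC-style rows by language pair, assuming the
# id/lang/source_text/target_text columns described above.
raw = """id,lang,source_text,target_text
2,en-bg,The international humanitarian law and its principles are often not respected.,Международното хуманитарно право и неговите принципи често не се зачитат.
3,en-bg,"At policy level, progress was made on several important initiatives.",На равнище политики напредък е постигнат по няколко важни инициативи.
"""

by_lang = defaultdict(list)
for row in csv.DictReader(io.StringIO(raw)):
    by_lang[row["lang"]].append((row["source_text"], row["target_text"]))

print(len(by_lang["en-bg"]))  # 2
```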
### Data Splits
| Lang | # Docs | Avg. # Source Tokens | Avg. # Target Tokens |
|--------|-----------|------------------------|------------------------|
| bg | 13 149 | 23 | 24 |
| cs | 13 160 | 23 | 21 |
| da | 13 242 | 23 | 22 |
| de | 13 291 | 23 | 22 |
| el | 13 091 | 23 | 26 |
| es | 13 195 | 23 | 28 |
| et | 13 016 | 23 | 17 |
| fi | 12 942 | 23 | 16 |
| fr | 13 149 | 23 | 28 |
| ga | 412 | 12 | 12 |
| hr | 12 836 | 23 | 21 |
| hu | 13 025 | 23 | 21 |
| it | 13 059 | 23 | 25 |
| lt | 12 580 | 23 | 18 |
| lv | 13 044 | 23 | 19 |
| mt | 3 093 | 16 | 14 |
| nl | 13 191 | 23 | 25 |
| pl | 12 761 | 23 | 22 |
| pt | 13 148 | 23 | 26 |
| ro | 13 163 | 23 | 25 |
| sk | 12 926 | 23 | 20 |
| sl | 13 208 | 23 | 21 |
| sv | 13 099 | 23 | 21 |
|||||
| Total | 277 780 | 22.21 | 21.47 |
## Dataset Creation
### Curation Rationale
For details, check the corresponding [pages](https://elrc-share.eu/repository/search/?q=mfsp%3A87ef9e5e8ac411ea913100155d026706e19a1a9f908b463c944490c36ba2f454&page=3).
### Source Data
#### Initial Data Collection and Normalization
The acquisition of bilingual data (from multilingual websites), normalization, cleaning, deduplication and identification of parallel documents have been done by [ILSP-FC tool](http://nlp.ilsp.gr/redmine/projects/ilsp-fc/wiki/Introduction). [Maligna aligner](https://github.com/loomchild/maligna) was used for alignment of segments. Merging/filtering of segment pairs has also been applied.
#### Who are the source language producers?
All of the data in this corpus was uploaded by [Vassilis Papavassiliou](mailto:vpapa@ilsp.gr) to [ELRC-Share](https://elrc-share.eu/repository/browse/bilingual-corpus-from-the-publications-office-of-the-eu-on-the-medical-domain-v2-en-fr/6b31b32e8ac411ea913100155d0267061547d9b3ec284584af19a2953baa8937/).
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Considerations for Using the Data
### Other Known Limitations
The nature of the task introduces variability in the quality of the target translations.
## Additional Information
### Dataset Curators
__ELRC-Medical-V2__: Labrak Yanis, Dufour Richard
__Bilingual corpus from the Publications Office of the EU on the medical domain v.2 (EN-XX) Corpus__: [Vassilis Papavassiliou](mailto:vpapa@ilsp.gr) and [others](https://live.european-language-grid.eu/catalogue/project/2209).
### Licensing Information
<a rel="license" href="https://elrc-share.eu/static/metashare/licences/CC-BY-4.0.pdf"><img alt="Attribution 4.0 International (CC BY 4.0) License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="https://elrc-share.eu/static/metashare/licences/CC-BY-4.0.pdf">Attribution 4.0 International (CC BY 4.0) License</a>.
### Citation Information
Please cite the following paper when using this model.
```latex
@inproceedings{losch-etal-2018-european,
    title = "European Language Resource Coordination: Collecting Language Resources for Public Sector Multilingual Information Management",
    author = {L{\"o}sch, Andrea and
      Mapelli, Val{\'e}rie and
      Piperidis, Stelios and
      Vasiljevs, Andrejs and
      Smal, Lilli and
      Declerck, Thierry and
      Schnur, Eileen and
      Choukri, Khalid and
      van Genabith, Josef},
    booktitle = "Proceedings of the Eleventh International Conference on Language Resources and Evaluation ({LREC} 2018)",
    month = may,
    year = "2018",
    address = "Miyazaki, Japan",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L18-1213",
}
```
|
qanastek | null | @inproceedings{tiedemann-2012-parallel,
title = Parallel Data, Tools and Interfaces in OPUS,
author = {
Tiedemann, Jorg
},
booktitle = "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)",
month = may,
year = 2012,
address = Istanbul, Turkey,
publisher = European Language Resources Association (ELRA),
url = http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf,
pages = 2214--2218,
abstract = This paper presents the current status of OPUS, a growing language resource of parallel corpora and related tools. The focus in OPUS is to provide freely available data sets in various formats together with basic annotation to be useful for applications in computational linguistics, translation studies and cross-linguistic corpus studies. In this paper, we report about new data sets and their features, additional annotation tools and models provided from the website and essential interfaces and on-line services included in the project.,
} | null | false | 18 | false | qanastek/EMEA-V3 | 2022-10-22T15:18:02.000Z | null | false | 783edb3e7341c61ec455b253654550c6bdbdfa89 | [] | [
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"language:bg",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:et",
"language:fi",
"language:fr",
"language:hu",
"language:it"... | https://huggingface.co/datasets/qanastek/EMEA-V3/resolve/main/README.md | ---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- found
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
multilinguality:
- translation
pretty_name: EMEA-V3
size_categories:
- 100K<n<1M
source_datasets:
- extended
task_categories:
- translation
- machine-translation
task_ids:
- translation
- machine-translation
---
# EMEA-V3 : European parallel translation corpus from the European Medicines Agency
## Table of Contents
- [Dataset Card for [Needs More Information]](#dataset-card-for-needs-more-information)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://opus.nlpl.eu/EMEA.php
- **Repository:** https://github.com/qanastek/EMEA-V3/
- **Paper:** https://aclanthology.org/L12-1246/
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Yanis Labrak](mailto:yanis.labrak@univ-avignon.fr)
### Dataset Summary
`EMEA-V3` is a parallel corpus for neural machine translation collected and aligned by [Tiedemann, Jorg](mailto:jorg.tiedemann@lingfil.uu.se) during the [OPUS project](https://opus.nlpl.eu/).
### Supported Tasks and Leaderboards
`translation`: The dataset can be used to train a model for translation.
### Languages
The corpus consists of pairs of source and target sentences covering 22 different languages of the European Union (EU).
**List of languages :** `Bulgarian (bg)`,`Czech (cs)`,`Danish (da)`,`German (de)`,`Greek (el)`,`English (en)`,`Spanish (es)`,`Estonian (et)`,`Finnish (fi)`,`French (fr)`,`Hungarian (hu)`,`Italian (it)`,`Lithuanian (lt)`,`Latvian (lv)`,`Maltese (mt)`,`Dutch (nl)`,`Polish (pl)`,`Portuguese (pt)`,`Romanian (ro)`,`Slovak (sk)`,`Slovenian (sl)`,`Swedish (sv)`.
## Load the dataset with HuggingFace
```python
from datasets import load_dataset
dataset = load_dataset("qanastek/EMEA-V3", split='train', download_mode='force_redownload')
print(dataset)
print(dataset[0])
```
## Dataset Structure
### Data Instances
```plain
lang,source_text,target_text
bg-cs,EMEA/ H/ C/ 471,EMEA/ H/ C/ 471
bg-cs,ABILIFY,ABILIFY
bg-cs,Какво представлява Abilify?,Co je Abilify?
bg-cs,"Abilify е лекарство, съдържащо активното вещество арипипразол.","Abilify je léčivý přípravek, který obsahuje účinnou látku aripiprazol."
bg-cs,"Предлага се под формата на таблетки от 5 mg, 10 mg, 15 mg и 30 mg, като диспергиращи се таблетки (таблетки, които се разтварят в устата) от 10 mg, 15 mg и 30 mg, като перорален разтвор (1 mg/ ml) и като инжекционен разтвор (7, 5 mg/ ml).","Je dostupný ve formě tablet s obsahem 5 mg, 10 mg, 15 mg a 30 mg, ve formě tablet dispergovatelných v ústech (tablet, které se rozpustí v ústech) s obsahem 10 mg, 15 mg a 30 mg, jako perorální roztok (1 mg/ ml) nebo jako injekční roztok (7, 5 mg/ ml)."
bg-cs,За какво се използва Abilify?,Na co se přípravek Abilify používá?
```
### Data Fields
**lang** : The pair of source and target language of type `String`.
**source_text** : The source text of type `String`.
**target_text** : The target text of type `String`.
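Because the language pair is encoded as a single `lang` string, downstream code often needs the individual codes. A minimal helper sketch (the function name is ours, not part of the dataset):

```python
def split_lang_pair(lang: str):
    """Split a pair code such as 'bg-cs' into (source, target) language codes."""
    source, target = lang.split("-")
    return source, target

# Pair codes as they appear in the sample rows above.
assert split_lang_pair("bg-cs") == ("bg", "cs")
assert split_lang_pair("en-sv") == ("en", "sv")
```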
### Data Splits
| | bg | cs | da | de | el | en | es | et | fi | fr | hu | it | lt | lv | mt | nl | pl | pt | ro | sk | sl | sv |
|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| **bg** | 0 | 342378 | 349675 | 348061 | 355696 | 333066 | 349936 | 336142 | 341732 | 358045 | 352763 | 351669 | 348679 | 342721 | 351097 | 353942 | 355005 | 347925 | 351099 | 345572 | 346954 | 342927 |
| **cs** | 342378 | 0 | 354824 | 353397 | 364609 | 335716 | 356506 | 340309 | 349040 | 363614 | 358353 | 357578 | 353232 | 347807 | 334353 | 355192 | 358357 | 351244 | 330447 | 346835 | 348411 | 346894 |
| **da** | 349675 | 354824 | 0 | 387202 | 397654 | 360186 | 387329 | 347391 | 379830 | 396294 | 367091 | 388495 | 360572 | 353801 | 342263 | 388250 | 368779 | 382576 | 340508 | 356890 | 357694 | 373510 |
| **de** | 348061 | 353397 | 387202 | 0 | 390281 | 364005 | 386335 | 346166 | 378626 | 393468 | 366828 | 381396 | 360907 | 353151 | 340294 | 377770 | 367080 | 381365 | 337562 | 355805 | 358700 | 376925 |
| **el** | 355696 | 364609 | 397654 | 390281 | 0 | 372824 | 393051 | 354874 | 384889 | 403248 | 373706 | 391389 | 368576 | 360047 | 348221 | 396284 | 372486 | 387170 | 342655 | 364959 | 363778 | 384569 |
| **en** | 333066 | 335716 | 360186 | 364005 | 372824 | 0 | 366769 | 333667 | 357177 | 373152 | 349176 | 361089 | 339899 | 336306 | 324695 | 360418 | 348450 | 361393 | 321233 | 338649 | 338195 | 352587 |
| **es** | 349936 | 356506 | 387329 | 386335 | 393051 | 366769 | 0 | 348454 | 378158 | 394253 | 368203 | 378076 | 360645 | 354126 | 340297 | 381188 | 367091 | 376443 | 337302 | 358745 | 357961 | 379462 |
| **et** | 336142 | 340309 | 347391 | 346166 | 354874 | 333667 | 348454 | 0 | 341694 | 358012 | 352099 | 351747 | 345417 | 339042 | 337302 | 350911 | 354329 | 345856 | 325992 | 343950 | 342787 | 340761 |
| **fi** | 341732 | 349040 | 379830 | 378626 | 384889 | 357177 | 378158 | 341694 | 0 | 387478 | 358869 | 379862 | 352968 | 346820 | 334275 | 379729 | 358760 | 374737 | 331135 | 348559 | 348680 | 368528 |
| **fr** | 358045 | 363614 | 396294 | 393468 | 403248 | 373152 | 394253 | 358012 | 387478 | 0 | 373625 | 385869 | 368817 | 361137 | 347699 | 388607 | 372387 | 388658 | 344139 | 363249 | 366474 | 383274 |
| **hu** | 352763 | 358353 | 367091 | 366828 | 373706 | 349176 | 368203 | 352099 | 358869 | 373625 | 0 | 367937 | 361015 | 354872 | 343831 | 368387 | 369040 | 361652 | 340410 | 357466 | 361157 | 356426 |
| **it** | 351669 | 357578 | 388495 | 381396 | 391389 | 361089 | 378076 | 351747 | 379862 | 385869 | 367937 | 0 | 360783 | 356001 | 341552 | 384018 | 365159 | 378841 | 337354 | 357562 | 358969 | 377635 |
| **lt** | 348679 | 353232 | 360572 | 360907 | 368576 | 339899 | 360645 | 345417 | 352968 | 368817 | 361015 | 360783 | 0 | 350576 | 337339 | 362096 | 361497 | 357070 | 335581 | 351639 | 350916 | 349636 |
| **lv** | 342721 | 347807 | 353801 | 353151 | 360047 | 336306 | 354126 | 339042 | 346820 | 361137 | 354872 | 356001 | 350576 | 0 | 336157 | 355791 | 358607 | 349590 | 329581 | 348689 | 346862 | 345016 |
| **mt** | 351097 | 334353 | 342263 | 340294 | 348221 | 324695 | 340297 | 337302 | 334275 | 347699 | 343831 | 341552 | 337339 | 336157 | 0 | 341111 | 344764 | 335553 | 338137 | 335930 | 334491 | 335353 |
| **nl** | 353942 | 355192 | 388250 | 377770 | 396284 | 360418 | 381188 | 350911 | 379729 | 388607 | 368387 | 384018 | 362096 | 355791 | 341111 | 0 | 369694 | 383913 | 339047 | 359126 | 360054 | 379771 |
| **pl** | 355005 | 358357 | 368779 | 367080 | 372486 | 348450 | 367091 | 354329 | 358760 | 372387 | 369040 | 365159 | 361497 | 358607 | 344764 | 369694 | 0 | 357426 | 335243 | 352527 | 355534 | 353214 |
| **pt** | 347925 | 351244 | 382576 | 381365 | 387170 | 361393 | 376443 | 345856 | 374737 | 388658 | 361652 | 378841 | 357070 | 349590 | 335553 | 383913 | 357426 | 0 | 333365 | 354784 | 352673 | 373392 |
| **ro** | 351099 | 330447 | 340508 | 337562 | 342655 | 321233 | 337302 | 325992 | 331135 | 344139 | 340410 | 337354 | 335581 | 329581 | 338137 | 339047 | 335243 | 333365 | 0 | 332373 | 330329 | 331268 |
| **sk** | 345572 | 346835 | 356890 | 355805 | 364959 | 338649 | 358745 | 343950 | 348559 | 363249 | 357466 | 357562 | 351639 | 348689 | 335930 | 359126 | 352527 | 354784 | 332373 | 0 | 348396 | 346855 |
| **sl** | 346954 | 348411 | 357694 | 358700 | 363778 | 338195 | 357961 | 342787 | 348680 | 366474 | 361157 | 358969 | 350916 | 346862 | 334491 | 360054 | 355534 | 352673 | 330329 | 348396 | 0 | 347727 |
| **sv** | 342927 | 346894 | 373510 | 376925 | 384569 | 352587 | 379462 | 340761 | 368528 | 383274 | 356426 | 377635 | 349636 | 345016 | 335353 | 379771 | 353214 | 373392 | 331268 | 346855 | 347727 | 0 |
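Since each cell counts aligned sentence pairs for an unordered language pair, the matrix above is symmetric with a zero diagonal. A quick sanity check on a few cells copied from the table:

```python
# A handful of cells copied from the table above: counts[(row, col)] = sentence pairs.
counts = {
    ("bg", "cs"): 342378, ("cs", "bg"): 342378,
    ("en", "fr"): 373152, ("fr", "en"): 373152,
    ("mt", "sv"): 335353, ("sv", "mt"): 335353,
}

# Symmetry: the count for (a, b) equals the count for (b, a).
assert all(counts[(a, b)] == counts[(b, a)] for (a, b) in counts)
```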
## Dataset Creation
### Curation Rationale
For details, check the corresponding [pages](https://opus.nlpl.eu/EMEA.php).
### Source Data
<!-- #### Initial Data Collection and Normalization
ddd -->
#### Who are the source language producers?
All of the data in this corpus was uploaded by [Tiedemann, Jorg](mailto:jorg.tiedemann@lingfil.uu.se) to [OPUS](https://opus.nlpl.eu/EMEA.php).
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Considerations for Using the Data
### Other Known Limitations
The nature of the task introduces variability in the quality of the target translations.
## Additional Information
### Dataset Curators
__Hugging Face EMEA-V3__: Labrak Yanis, Dufour Richard (Not affiliated with the original corpus)
__OPUS : Parallel Data, Tools and Interfaces in OPUS__: [Tiedemann, Jorg](mailto:jorg.tiedemann@lingfil.uu.se).
<!-- ### Licensing Information
ddd -->
### Citation Information
Please cite the following paper when using this dataset.
```latex
@inproceedings{tiedemann-2012-parallel,
    title = "Parallel Data, Tools and Interfaces in {OPUS}",
    author = {Tiedemann, J{\"o}rg},
    booktitle = "Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}'12)",
    month = may,
    year = "2012",
    address = "Istanbul, Turkey",
    publisher = "European Language Resources Association (ELRA)",
    url = "http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf",
    pages = "2214--2218",
    abstract = "This paper presents the current status of OPUS, a growing language resource of parallel corpora and related tools. The focus in OPUS is to provide freely available data sets in various formats together with basic annotation to be useful for applications in computational linguistics, translation studies and cross-linguistic corpus studies. In this paper, we report about new data sets and their features, additional annotation tools and models provided from the website and essential interfaces and on-line services included in the project.",
}
```
|
qanastek | null | @inproceedings{bojar-etal-2016-findings,
title = Findings of the 2016 Conference on Machine Translation,
author = {
Bojar, Ondrej and
Chatterjee, Rajen and
Federmann, Christian and
Graham, Yvette and
Haddow, Barry and
Huck, Matthias and
Jimeno Yepes, Antonio and
Koehn, Philipp and
Logacheva, Varvara and
Monz, Christof and
Negri, Matteo and
Neveol, Aurelie and
Neves, Mariana and
Popel, Martin and
Post, Matt and
Rubino, Raphael and
Scarton, Carolina and
Specia, Lucia and
Turchi, Marco and
Verspoor, Karin and
Zampieri, Marcos
},
booktitle = Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers,
month = aug,
year = 2016,
address = Berlin, Germany,
publisher = Association for Computational Linguistics,
url = https://aclanthology.org/W16-2301,
doi = 10.18653/v1/W16-2301,
pages = 131--198,
} | WMT'16 Biomedical Translation Task - PubMed parallel datasets
http://www.statmt.org/wmt16/biomedical-translation-task.html | false | 1 | false | qanastek/WMT-16-PubMed | 2022-10-22T15:20:12.000Z | null | false | d74986fdd2f8aa542ca4b875d9fd37979518a027 | [] | [
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"language:bg",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:et",
"language:fi",
"language:fr",
"language:hu",
"language:it"... | https://huggingface.co/datasets/qanastek/WMT-16-PubMed/resolve/main/README.md | ---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- found
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
multilinguality:
- multilingual
pretty_name: WMT-16-PubMed
size_categories:
- 100K<n<1M
source_datasets:
- extended
task_categories:
- translation
- machine-translation
task_ids:
- translation
- machine-translation
---
# WMT-16-PubMed : Biomedical parallel translation corpus from the WMT'16 Biomedical Translation Task
## Table of Contents
- [Dataset Card for [Needs More Information]](#dataset-card-for-needs-more-information)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.statmt.org/wmt16/biomedical-translation-task.html
- **Repository:** https://github.com/biomedical-translation-corpora/corpora
- **Paper:** https://aclanthology.org/W16-2301/
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Yanis Labrak](mailto:yanis.labrak@univ-avignon.fr)
### Dataset Summary
`WMT-16-PubMed` is a parallel corpus for neural machine translation collected and aligned for the [WMT'16 Shared Task: Biomedical Translation Task](https://www.statmt.org/wmt16/biomedical-translation-task.html), held at ACL 2016.
### Supported Tasks and Leaderboards
`translation`: The dataset can be used to train a model for translation.
### Languages
The corpus consists of pairs of source and target sentences covering 4 different languages:
**List of languages :** `English (en)`,`Spanish (es)`,`French (fr)`,`Portuguese (pt)`.
## Load the dataset with HuggingFace
```python
from datasets import load_dataset
dataset = load_dataset("qanastek/WMT-16-PubMed", split='train', download_mode='force_redownload')
print(dataset)
print(dataset[0])
```
## Dataset Structure
### Data Instances
```plain
lang doc_id workshop publisher source_text target_text
0 en-fr 26839447 WMT'16 Biomedical Translation Task - PubMed pubmed Global Health: Where Do Physiotherapy and Reha... La place des cheveux et des poils dans les rit...
1 en-fr 26837117 WMT'16 Biomedical Translation Task - PubMed pubmed Carabin Les Carabins
2 en-fr 26837116 WMT'16 Biomedical Translation Task - PubMed pubmed In Process Citation Le laboratoire d'Anatomie, Biomécanique et Org...
3 en-fr 26837115 WMT'16 Biomedical Translation Task - PubMed pubmed Comment on the misappropriation of bibliograph... Du détournement des références bibliographique...
4 en-fr 26837114 WMT'16 Biomedical Translation Task - PubMed pubmed Anti-aging medicine, a science-based, essentia... La médecine anti-âge, une médecine scientifiqu...
... ... ... ... ... ... ...
973972 en-pt 20274330 WMT'16 Biomedical Translation Task - PubMed pubmed Myocardial infarction, diagnosis and treatment Infarto do miocárdio; diagnóstico e tratamento
973973 en-pt 20274329 WMT'16 Biomedical Translation Task - PubMed pubmed The health areas politics A política dos campos de saúde
973974 en-pt 20274328 WMT'16 Biomedical Translation Task - PubMed pubmed The role in tissue edema and liquid exchanges ... O papel dos tecidos nos edemas e nas trocas lí...
973975 en-pt 20274327 WMT'16 Biomedical Translation Task - PubMed pubmed About suppuration of the wound after thoracopl... Sôbre as supurações da ferida operatória após ...
973976 en-pt 20274326 WMT'16 Biomedical Translation Task - PubMed pubmed Experimental study of liver lesions in the tre... Estudo experimental das lesões hepáticas no tr...
```
### Data Fields
**lang** : The pair of source and target language of type `String`.
**doc_id** : The PubMed identifier of the document of type `String`.
**workshop** : The shared task name (`WMT'16 Biomedical Translation Task - PubMed`) of type `String`.
**publisher** : The source collection of the document (`pubmed`) of type `String`.
**source_text** : The source text of type `String`.
**target_text** : The target text of type `String`.
### Data Splits
`en-es` : 285,584
`en-fr` : 614,093
`en-pt` : 74,300
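Summing the three subsets gives 973,977 sentence pairs in total, which matches the last zero-based row index (973976) shown in the data instances above. A one-line consistency check:

```python
# Split sizes as listed above.
splits = {"en-es": 285_584, "en-fr": 614_093, "en-pt": 74_300}

total = sum(splits.values())
assert total == 973_977  # last zero-based index in the sample output is 973976
```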
## Dataset Creation
### Curation Rationale
For details, check the corresponding [pages](https://www.statmt.org/wmt16/biomedical-translation-task.html).
### Source Data
<!-- #### Initial Data Collection and Normalization
ddd -->
#### Who are the source language producers?
The shared task was organized by:
* Antonio Jimeno Yepes (IBM Research Australia)
* Aurélie Névéol (LIMSI, CNRS, France)
* Mariana Neves (Hasso-Plattner Institute, Germany)
* Karin Verspoor (University of Melbourne, Australia)
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Considerations for Using the Data
### Other Known Limitations
The nature of the task introduces variability in the quality of the target translations.
## Additional Information
### Dataset Curators
__Hugging Face WMT-16-PubMed__: Labrak Yanis, Dufour Richard (Not affiliated with the original corpus)
__WMT'16 Shared Task: Biomedical Translation Task__:
* Antonio Jimeno Yepes (IBM Research Australia)
* Aurélie Névéol (LIMSI, CNRS, France)
* Mariana Neves (Hasso-Plattner Institute, Germany)
* Karin Verspoor (University of Melbourne, Australia)
<!-- ### Licensing Information
ddd -->
### Citation Information
Please cite the following paper when using this dataset.
```latex
@inproceedings{bojar-etal-2016-findings,
    title = "Findings of the 2016 Conference on Machine Translation",
    author = {Bojar, Ond{\v{r}}ej and
      Chatterjee, Rajen and
      Federmann, Christian and
      Graham, Yvette and
      Haddow, Barry and
      Huck, Matthias and
      Jimeno Yepes, Antonio and
      Koehn, Philipp and
      Logacheva, Varvara and
      Monz, Christof and
      Negri, Matteo and
      N{\'e}v{\'e}ol, Aur{\'e}lie and
      Neves, Mariana and
      Popel, Martin and
      Post, Matt and
      Rubino, Raphael and
      Scarton, Carolina and
      Specia, Lucia and
      Turchi, Marco and
      Verspoor, Karin and
      Zampieri, Marcos},
    booktitle = "Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers",
    month = aug,
    year = "2016",
    address = "Berlin, Germany",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/W16-2301",
    doi = "10.18653/v1/W16-2301",
    pages = "131--198",
}
```
|
qwant | null | @inproceedings{cattan:hal-03336060,
TITLE = {{On the Usability of Transformers-based models for a French Question-Answering task}},
AUTHOR = {Cattan, Oralie and Servan, Christophe and Rosset, Sophie},
URL = {https://hal.archives-ouvertes.fr/hal-03336060},
BOOKTITLE = {{Recent Advances in Natural Language Processing (RANLP)}},
ADDRESS = {Varna, Bulgaria},
YEAR = {2021},
MONTH = Sep,
PDF = {https://hal.archives-ouvertes.fr/hal-03336060/file/RANLP_2021_transformers_usability.pdf},
HAL_ID = {hal-03336060},
HAL_VERSION = {v1},
} | SQuAD-fr is a French translated version of the Stanford Question Answering Dataset (SQuAD), the reference corpus to evaluate question answering models' performances in English.
It consists of 100K question-answer pairs on 500+ articles derived from the original English dataset and represents a large-scale dataset for closed-domain question answering on factoid questions in French.
SQuAD-fr serves as a means of data augmentation on FQuAD and PIAF benchmarks, with 90K+ translated training pairs. | false | 4 | false | qwant/squad_fr | 2022-10-25T09:54:34.000Z | squad | false | 184a0dba68c92beb3c91a816042f1fe0479e3845 | [] | [
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"language:fr-FR",
"license:cc-by-4.0",
"multilinguality:monolingual",
"multilinguality:translation",
"size_categories:10K<n<100K",
"source_datasets:extended|squad",
"task_categories:question-answering",
"task_ids:extr... | https://huggingface.co/datasets/qwant/squad_fr/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- fr-FR
license:
- cc-by-4.0
multilinguality:
- monolingual
- translation
paperswithcode_id: squad
pretty_name: SQuAD-fr
size_categories:
- 10K<n<100K
source_datasets:
- extended|squad
task_categories:
- question-answering
task_ids:
- extractive-qa
- closed-domain-qa
---
# Dataset Card for "squad_fr"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Paper:** [On the Usability of Transformers-based models for a French Question-Answering task](https://hal.archives-ouvertes.fr/hal-03336060)
- **Size of downloaded dataset files:** 10 MB
- **Size of the generated dataset:** 73 MB
- **Total amount of disk used:** 83 MB
### Dataset Summary
SQuAD-fr:
- a translated version of the Stanford Question Answering Dataset (SQuAD) into French
- obtained through automatic translation of the English dataset
- a reading comprehension dataset, consisting of approximately 90K factoid questions on Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage
- serves as a means of data augmentation on FQuAD and PIAF benchmarks
### Supported Tasks and Leaderboards
- `closed-domain-qa`, `text-retrieval`: This dataset is intended to be used for `closed-domain-qa`, but can also be used for information retrieval tasks.
### Languages
This dataset is exclusively in French.
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 10 MB
- **Size of the generated dataset:** 73 MB
- **Total amount of disk used:** 83 MB
An example of 'train' looks as follows.
```
{
"answers": {
"answer_start": [1],
"text": ["This is a test text"]
},
"context": "This is a test context.",
"id": "1",
"question": "Is this a test?",
"title": "train test"
}
```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: an `int32` feature.
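The `answer_start` offset is a character index into `context`: slicing the context at that offset, for the length of the answer string, should recover the answer exactly. A sketch on a hypothetical (made-up) French example, not an actual dataset row:

```python
# Hypothetical record in the SQuAD format (not taken from the dataset).
example = {
    "context": "La tour Eiffel a été achevée en 1889 à Paris.",
    "question": "En quelle année la tour Eiffel a-t-elle été achevée ?",
    "answers": {"text": ["1889"], "answer_start": [32]},
}

start = example["answers"]["answer_start"][0]
text = example["answers"]["text"][0]

# The character span starting at `answer_start` must equal the annotated answer.
assert example["context"][start:start + len(text)] == text
```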
### Data Splits
| name |train|validation|
|----------|----:|---------:|
|1.1.0|87514| 17492|
## Dataset Creation
### Curation Rationale
The dataset was created to study the usability of Transformer-based models for a question-answering task in French: instability related to data scarcity, and the impact of data augmentation, hyperparameter optimization, and cross-lingual transfer on task performance.
### Source Data
#### Initial Data Collection and Normalization
Validation: translations were checked against manually collected gold standards and evaluated with chrF and BLEU scores.
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
Attribution-NonCommercial-ShareAlike 3.0 Unported (CC BY-NC-SA 3.0)
### Citation Information
```
@inproceedings{cattan:hal-03336060,
TITLE = {{On the Usability of Transformers-based models for a French Question-Answering task}},
AUTHOR = {Cattan, Oralie and Servan, Christophe and Rosset, Sophie},
URL = {https://hal.archives-ouvertes.fr/hal-03336060},
BOOKTITLE = {{Recent Advances in Natural Language Processing (RANLP)}},
ADDRESS = {Varna, Bulgaria},
YEAR = {2021},
MONTH = Sep,
PDF = {https://hal.archives-ouvertes.fr/hal-03336060/file/RANLP_2021_transformers_usability.pdf},
HAL_ID = {hal-03336060},
HAL_VERSION = {v1},
}
``` |
rahular | null | @inproceedings{aralikatte-etal-2021-itihasa,
title = "Itihasa: A large-scale corpus for {S}anskrit to {E}nglish translation",
author = "Aralikatte, Rahul and
de Lhoneux, Miryam and
Kunchukuttan, Anoop and
S{\o}gaard, Anders",
booktitle = "Proceedings of the 8th Workshop on Asian Translation (WAT2021)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.wat-1.22",
pages = "191--197",
abstract = "This work introduces Itihasa, a large-scale translation dataset containing 93,000 pairs of Sanskrit shlokas and their English translations. The shlokas are extracted from two Indian epics viz., The Ramayana and The Mahabharata. We first describe the motivation behind the curation of such a dataset and follow up with empirical analysis to bring out its nuances. We then benchmark the performance of standard translation models on this corpus and show that even state-of-the-art transformer architectures perform poorly, emphasizing the complexity of the dataset.",
} | A Sanskrit-English machine translation dataset. | false | 194 | false | rahular/itihasa | 2022-10-24T18:06:01.000Z | null | false | 56645be151b61e1143597f922ccf666b43a5c02b | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:sa",
"language:en",
"license:apache-2.0",
"multilinguality:translation",
"size_categories:unknown",
"source_datasets:original",
"task_categories:text2text-generation",
"metrics:bleu",
"metrics:sacrebleu",
... | https://huggingface.co/datasets/rahular/itihasa/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- sa
- en
license:
- apache-2.0
multilinguality:
- translation
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
pretty_name: Itihasa
metrics:
- bleu
- sacrebleu
- rouge
- ter
- chrF
tags:
- conditional-text-generation
---
# Itihāsa
Itihāsa is a Sanskrit-English translation corpus containing 93,000 Sanskrit shlokas and their English translations extracted from M. N. Dutt's seminal works on The Rāmāyana and The Mahābhārata. The paper which introduced this dataset can be found [here](https://aclanthology.org/2021.wat-1.22/).
This repository contains the randomized train, development, and test sets. The original extracted data can be found [here](https://github.com/rahular/itihasa/tree/gh-pages/res) in JSON format. If you just want to browse the data, you can go [here](http://rahular.com/itihasa/).
## Usage
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("rahular/itihasa")
>>> dataset
DatasetDict({
train: Dataset({
features: ['translation'],
num_rows: 75162
})
validation: Dataset({
features: ['translation'],
num_rows: 6149
})
test: Dataset({
features: ['translation'],
num_rows: 11722
})
})
>>> dataset['train'][0]
{'translation': {'en': 'The ascetic Vālmīki asked Nārada, the best of sages and foremost of those conversant with words, ever engaged in austerities and Vedic studies.',
'sn': 'ॐ तपः स्वाध्यायनिरतं तपस्वी वाग्विदां वरम्। नारदं परिपप्रच्छ वाल्मीकिर्मुनिपुङ्गवम्॥'}}
```
## Citation
If you found this dataset to be useful, please consider citing the paper as follows:
```
@inproceedings{aralikatte-etal-2021-itihasa,
title = "Itihasa: A large-scale corpus for {S}anskrit to {E}nglish translation",
author = "Aralikatte, Rahul and
de Lhoneux, Miryam and
Kunchukuttan, Anoop and
S{\o}gaard, Anders",
booktitle = "Proceedings of the 8th Workshop on Asian Translation (WAT2021)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.wat-1.22",
pages = "191--197",
abstract = "This work introduces Itihasa, a large-scale translation dataset containing 93,000 pairs of Sanskrit shlokas and their English translations. The shlokas are extracted from two Indian epics viz., The Ramayana and The Mahabharata. We first describe the motivation behind the curation of such a dataset and follow up with empirical analysis to bring out its nuances. We then benchmark the performance of standard translation models on this corpus and show that even state-of-the-art transformer architectures perform poorly, emphasizing the complexity of the dataset.",
}
``` |
rajeshradhakrishnan | null | null | null | false | 1 | false | rajeshradhakrishnan/malayalam_2020_wiki | 2022-07-04T11:01:57.000Z | null | false | 2790b7b6c85e85b97f1b8eda171ba3369cc134b1 | [] | [] | https://huggingface.co/datasets/rajeshradhakrishnan/malayalam_2020_wiki/resolve/main/README.md | This dataset is from the common-crawl-malayalam repo: https://github.com/qburst/common-crawl-malayalam |
rajeshradhakrishnan | null | @article{kunchukuttan2020indicnlpcorpus,
title={AI4Bharat-IndicNLP Corpus: Monolingual Corpora and Word Embeddings for Indic Languages},
author={Anoop Kunchukuttan and Divyanshu Kakwani and Satish Golla and Gokul N.C. and Avik Bhattacharyya and Mitesh M. Khapra and Pratyush Kumar},
year={2020},
journal={arXiv preprint arXiv:2005.00085},
} | The AI4Bharat-IndicNLP dataset is an ongoing effort to create a collection of large-scale,
general-domain corpora for Indian languages. Currently, it contains 2.7 billion words for 10 Indian languages from two language families.
We share pre-trained word embeddings trained on these corpora.
We create news article category classification datasets for 9 languages to evaluate the embeddings.
We evaluate the IndicNLP embeddings on multiple evaluation tasks. | false | 1 | false | rajeshradhakrishnan/malayalam_news | 2022-07-04T05:57:19.000Z | null | false | 7aa5ac224f3acc8600c6c8c648c18b5dd6d3cf41 | [] | [] | https://huggingface.co/datasets/rajeshradhakrishnan/malayalam_news/resolve/main/README.md | ## IndicNLP News Article Classification Dataset
We used the IndicNLP text corpora to create classification datasets comprising news articles and their categories for 9 languages. The dataset is balanced across classes. The following table contains the statistics of our dataset:
| Language | Classes | Articles per Class |
| --------- | ------------------------------------------- | ------------------ |
| Bengali | entertainment, sports | 7K |
| Gujarati | business, entertainment, sports | 680 |
| Kannada | entertainment, lifestyle, sports | 10K |
| Malayalam | business, entertainment, sports, technology | 1.5K |
| Marathi | entertainment, lifestyle, sports | 1.5K |
| Oriya | business, crime, entertainment, sports | 7.5K |
| Punjabi | business, entertainment, sports, politics | 780 |
| Tamil | entertainment, politics, sport | 3.9K |
| Telugu | entertainment, business, sports | 8K |
## Citing
If you are using any of the resources, please cite the following article:
```
@article{kunchukuttan2020indicnlpcorpus,
title={AI4Bharat-IndicNLP Corpus: Monolingual Corpora and Word Embeddings for Indic Languages},
author={Anoop Kunchukuttan and Divyanshu Kakwani and Satish Golla and Gokul N.C. and Avik Bhattacharyya and Mitesh M. Khapra and Pratyush Kumar},
year={2020},
journal={arXiv preprint arXiv:2005.00085},
}
```
|
rajeshradhakrishnan | null | @article{qburst,
title={Common Crawl - Malayalam},
author={n.d},
year={2020},
journal={n.d},
} | Common Crawl - Malayalam. | false | 1 | false | rajeshradhakrishnan/malayalam_wiki | 2022-07-04T12:21:06.000Z | wikitext-2 | false | e4fbbe300e28a65c40334241aa4e9f1c4e155852 | [] | [
"annotations_creators:no-annotation",
"language:en",
"language_creators:crowdsourced",
"license:cc-by-sa-3.0",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"tas... | https://huggingface.co/datasets/rajeshradhakrishnan/malayalam_wiki/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- crowdsourced
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
paperswithcode_id: wikitext-2
pretty_name: rajeshradhakrishnan/malayalam_wiki
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
---
# Dataset Card for [Malayalam Wiki - common crawl malayalam]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository: https://github.com/qburst/common-crawl-malayalam**
- **Paper: None**
- **Leaderboard:**
- **Point of Contact: [@RRaajjesshh](https://twitter.com/RRaajjesshh)**
### Dataset Summary
Created from the files extracted using tools for extracting Malayalam text from the Common Crawl dataset:
https://github.com/qburst/common-crawl-malayalam
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[qburst](https://github.com/qburst) have run scripts on some months of the Common Crawl archives and made the output publicly available. This dataset is the cleaned-up corpus from [QBurst common-crawl-malayalam](https://github.com/qburst/common-crawl-malayalam)
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The [common-crawl-malayalam](https://github.com/qburst/common-crawl-malayalam) repository contains the tools used to extract Malayalam text from the Common Crawl datasets.
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{qburst,
    title={Common Crawl - Malayalam},
    journal={arXiv preprint arXiv:2005.00085},
    year={2020}
}
```
### Contributions
Thanks to [rajeshradhakrishnanmvk](https://github.com/rajeshradhakrishnanmvk) for adding this dataset.
|
ranim | null | null | null | false | 1 | false | ranim/Algerian-Arabic | 2022-11-04T18:17:42.000Z | null | false | 036d6b4f0077262f485de3d16085244408af2430 | [] | [] | https://huggingface.co/datasets/ranim/Algerian-Arabic/resolve/main/README.md |
***This dataset contains 1.5k Algerian Arabic sentiment comments classified into two classes:
subjective positive and subjective negative.
***This dataset is collected and annotated by RANIM for Arabic NLP Solutions, feel free to use it.
***We appreciate citing our company name "RANIM for Arabic NLP Solutions" when using this dataset.
***For more data/information visit our website: https://ranim-for-nlp.web.app
or contact us: ranim.for.nlp@gmail.com
*******************************"RANIM for Arabic NLP Solutions"********************************** |
rays2pix | null | null | null | false | 1 | false | rays2pix/example | 2021-07-05T11:29:59.000Z | null | false | 584b85c66dda5e43f64964267554329ec0675694 | [] | [] | https://huggingface.co/datasets/rays2pix/example/resolve/main/README.md | |
rbawden | null | null | null | false | 249 | false | rbawden/DiaBLa | 2022-10-25T14:21:10.000Z | null | false | 5345895c56a601afe1a98519ce3199be60a27dba | [] | [
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language:en",
"language:fr",
"license:cc-by-sa-4.0",
"multilinguality:translation",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:translation",
"language_bcp47:en-UK",
"language_bcp47:fr-FR"
] | https://huggingface.co/datasets/rbawden/DiaBLa/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
- fr
license:
- cc-by-sa-4.0
multilinguality:
- translation
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- translation
task_ids: []
pretty_name: DiaBLa
language_bcp47:
- en-UK
- fr-FR
---
# Dataset Card for DiaBLa: Bilingual dialogue parallel evaluation set
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [almanach.inria.fr/software_and_resources/custom/DiaBLa-en.html](http://almanach.inria.fr/software_and_resources/custom/DiaBLa-en.html)
- **Repository:** [github.com/rbawden/DiaBLa-dataset](https://github.com/rbawden/DiaBLa-dataset)
- **Paper:** [Bawden et al. (2021). DiaBLa: A Corpus of Bilingual Spontaneous Written Dialogues for Machine Translation. Language Resources and Evaluation(55). Pages 635–660. Springer Verlag. 10.1007/s10579-020-09514-4.](https://hal.inria.fr/hal-03021633)
- **Point of contact:** rachel.bawden[at]inria.fr
### Dataset Summary
DiaBLa is an English-French dataset for the evaluation of Machine Translation (MT) on informal, written bilingual dialogue.
The dataset contains 144 spontaneous dialogues (5,700+ sentences) between native English and French speakers, mediated by one of two neural MT systems in a range of role-play settings. See below for some basic statistics. The dialogues are accompanied by fine-grained sentence-level judgments of MT quality, produced by the dialogue participants themselves, as well as by manually normalised versions and reference translations produced a posteriori. See the dataset repository for information about evaluation.
The motivation for the corpus is two-fold, to provide:
- a unique resource for evaluating MT models for dialogue (i.e. in context)
- a corpus for the analysis of MT-mediated communication
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English (mainly UK) and French
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 37 MB
- **Number of parallel utterances:** 5748
Each example is highly annotated and is associated with dialogue context. An example from the test set looks as follows (only the first and last utterances of the dialogue history are shown for readability purposes):
```
{
"id": "dialogue-2018-04-25T16-20-36.087170_french_english_1_2_25",
"mt": "Tu m'en veux pour \u00e7a ?",
"norm": "",
"orig": "Are you blaming me for this?",
"ref": "C'est moi que vous critiquez pour \u00e7a\u00a0?",
"utterance_meta": {
"eval_judgment": "medium",
"eval_verbatim": "",
"eval_problems": [
"coherence"
],
"lang": "english"
},
"dialogue_meta": {
"start_time": "2018-04-25T16:20:36.087170",
"end_time": "",
"translation_model": "baseline",
"final_evaluation_user1": {
"style": "average",
"coherence": "average",
"grammaticality": "good",
"meaning": "average",
"word_choice": "average"
},
"final_evaluation_user2": {
"style": "",
"coherence": "",
"grammaticality": "",
"meaning": "",
"word_choice": ""
},
"scenario": [
[
"You are both stuck in a lift at work.",
"Vous \u00eates tous les deux bloqu\u00e9(e)s dans un ascenseur au travail."
],
[
"You are an employee and you are with your boss.",
"Vous \u00eates un(e) employ\u00e9(e) et vous \u00eates avez votre patron(ne)"
],
[
"You are the boss and are with an employee.",
"Vous \u00eates le ou la patron(ne) et vous \u00eates avec un(e) employ\u00e9(e)"
]
],
"user1": {
"role_num": 1,
"role": [
"You are an employee and you are with your boss.",
"Vous \u00eates un(e) employ\u00e9(e) et vous \u00eates avez votre patron(ne)"
],
"initiated_dialogue": true,
"turn_number": 2,
"lang": "french"
},
"user2": {
"role_num": 2,
"role": [
"You are the boss and are with an employee.",
"Vous \u00eates le ou la patron(ne) et vous \u00eates avec un(e) employ\u00e9(e)"
],
"initiated_dialogue": false,
"turn_number": 1,
"lang": "english"
}
},
"dialogue_history": [
{
"id": "dialogue-2018-04-25T16-20-36.087170_french_english_1_2_0",
"orig": "We appear to have stopped moving.",
"norm": "",
"mt": "On semble avoir arr\u00eat\u00e9 de bouger.",
"ref": "J'ai l'impression qu'on s'est arr\u00eat\u00e9s.",
"utterance_meta": {
"eval_judgment": "medium",
"eval_verbatim": "",
"eval_problems": [
"style"
],
"lang": "english"
}
},
[...]
{
"id": "dialogue-2018-04-25T16-20-36.087170_french_english_1_2_24",
"orig": "La sonnerie s'est arr\u00eat\u00e9, je pense que personne ne va nous r\u00e9pondre.",
"norm": "",
"mt": "The ringing stopped, and I don't think anyone's gonna answer us.",
"ref": "It stopped ringing. I don't think anybody's going to reply.",
"utterance_meta": {
"eval_judgment": "perfect",
"eval_verbatim": "",
"eval_problems": [],
"lang": "french"
}
}
]
}
```
### Data Fields
#### plain_text
- `id`: a `string` feature.
- `orig`: a `string` feature.
- `norm`: a `string` feature.
- `mt`: a `string` feature.
- `ref`: a `string` feature.
- `utterance_meta`: a dictionary feature containing:
- `eval_judgment`: a `string` feature.
- `eval_verbatim`: a `string` feature.
- `eval_problems`: a list feature containing:
- up to 5 `string` features.
- `lang`: a `string` feature.
- `dialogue_meta`: a dictionary feature containing:
- `start_time` : a `string` feature.
- `end_time`: a `string` feature.
- `translation_model`: a `string` feature.
- `final_evaluation_user1`: a dictionary feature containing:
- `style`: a `string` feature.
- `coherence`: a `string` feature.
- `grammaticality`: a `string` feature.
- `meaning`: a `string` feature.
- `word_choice`: a `string` feature.
- `final_evaluation_user2`: a dictionary feature containing:
- `style`: a `string` feature.
- `coherence`: a `string` feature.
- `grammaticality`: a `string` feature.
- `meaning`: a `string` feature.
- `word_choice`: a `string` feature.
- `scenario`: a list feature containing
- 3 lists each containing 2 `string` features.
- `user1`: a dictionary feature containing:
- `role_num`: an `int` feature.
- `role`: a list feature containing:
- 2 `string` features.
- `initiated_dialogue`: a `bool` feature.
- `turn_number`: an `int` value.
- `lang`: a `string` value.
- `user2`: a dictionary feature containing:
- `role_num`: an `int` feature.
- `role`: a list feature containing:
- 2 `string` features.
- `initiated_dialogue`: a `bool` feature.
- `turn_number`: an `int` value.
- `lang`: a `string` value.
- `dialogue_history`: a list feature containing:
- dictionary features containing:
- `id`: a `string` feature.
- `orig`: a `string` feature.
- `norm`: a `string` feature.
- `mt`: a `string` feature.
- `ref`: a `string` feature.
- `utterance_meta`: a dictionary feature containing:
- `eval_judgment`: a `string` feature.
- `eval_verbatim`: a `string` feature.
- `eval_problems`: a list feature containing:
- up to 5 `string` features.
- `lang`: a `string` feature.
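As a quick illustration of the nested schema above, the sketch below walks a hand-written example dict that mirrors the documented fields (the values are illustrative, not taken from the actual corpus):

```python
# Minimal sketch of accessing the nested DiaBLa fields documented above.
# The example dict is hand-written to mirror the documented schema;
# it is illustrative only, not an actual corpus entry.
example = {
    "id": "dialogue-0000_0",
    "orig": "Are you blaming me for this?",
    "mt": "Tu m'en veux pour ça ?",
    "ref": "C'est moi que vous critiquez pour ça ?",
    "utterance_meta": {
        "eval_judgment": "medium",
        "eval_verbatim": "",
        "eval_problems": ["coherence"],
        "lang": "english",
    },
    "dialogue_history": [],
}

def flagged_problems(utterance: dict) -> list:
    """Return the MT-quality problems the participant flagged for this utterance."""
    return utterance["utterance_meta"]["eval_problems"]

print(flagged_problems(example))  # ['coherence']
```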
### Data Splits
DiaBLa is a test set only.
| name |test |
|----------|------:|
|plain_text| 5748|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Original data was collected through a [dedicated online chat platform](https://github.com/rbawden/diabla-chat-interface) and involved native speakers of English and of French. As well as producing the original text, participants also annotated the quality of the machine-translated outputs of their partners' utterances (which they saw instead of their partners' original text) based on their monolingual intuitions and the dialogue context.
Each dialogue is assigned one of 12 role-play scenarios and, where appropriate, each participant is assigned a role to play in the dialogue.
#### Who are the source language producers?
The source text producers were native French and native English volunteers (mainly British English). See the paper for very basic information concerning their backgrounds (age categories and experience in NLP).
### Annotations
#### Annotation process
On top of the original dialogue text (a mixture of utterances in English and in French), the following "annotations" are provided:
- machine translated version of the original text (produced in real time and presented during the dialogue), produced by one of two MT systems, both trained using [Marian](https://github.com/marian-nmt/marian).
- judgments of MT quality by participants (overall quality, particular problems, verbatim comments)
- manually produced normalised version of the original text (for spelling mistakes, grammatical errors, missing punctuation, etc.)
- manually produced reference translations
#### Who are the annotators?
The judgments of MT quality were produced by the dialogue participants themselves in real time. The normalised version of the text and the reference translations were manually produced by the authors of the paper. Translations were always done into the translator's native language and all translations were verified and post-edited by a bilingual English-French speaker.
### Personal and Sensitive Information
A priori the dataset does not contain personal and sensitive information. Participants were instructed not to give any personal information and to assume the roles assigned in the role play scenario. Usernames were anonymised prior to distribution and any mention of either usernames or real names in the dialogues were replaced by generic names of the same gender as the participant. Only basic user information was collected to get an idea of the distribution of participants and to potentially see how multilingual ability influences quality judgments (rough age categories, experience in NLP or research, native languages, familiarity with the other language (either English or French), other languages spoken and gender). Gender was included because it is an important factor in translation (particularly for the direction English-to-French), and this was explained in advance to the participants in the FAQs.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was collected by Rachel Bawden, Eric Bilinski, Thomas Lavergne and Sophie Rosset (see citation below).
### Licensing Information
The dataset is available under a CC BY-SA 4.0 licence.
### Citation Information
If you use or are inspired by this dataset, please cite:
```
@article{bawden_DiaBLa:-A-Corpus-of_2021,
author = {Bawden, Rachel and Bilinski, Eric and Lavergne, Thomas and Rosset, Sophie},
doi = {10.1007/s10579-020-09514-4},
title = {DiaBLa: A Corpus of Bilingual Spontaneous Written Dialogues for Machine Translation},
year = {2021},
journal = {Language Resources and Evaluation},
publisher = {Springer Verlag},
volume = {55},
pages = {635--660},
url = {https://hal.inria.fr/hal-03021633},
pdf = {https://hal.inria.fr/hal-03021633/file/diabla-lre-personal-formatting.pdf},
}
```
### Contributions
This dataset was added by Rachel Bawden [@rbawden](https://github.com/rbawden). |
rewardsignal | null | null | null | false | 4 | false | rewardsignal/reddit_writing_prompts | 2021-06-03T15:22:49.000Z | null | false | 806ef7fb97f3bc4cb4c1ce8af3dc16502aa65dc6 | [] | [] | https://huggingface.co/datasets/rewardsignal/reddit_writing_prompts/resolve/main/README.md | # This repo consists of data downloaded from reddit.com/r/writingprompts
## prompt_responses_full.csv
* There are 193842 prompt responses in the file, which together represent the ten years of submissions prior to March 13th, 2020.
I gather the following metadata for each top-level comment response to a submission story prompt:
* prompt_id: int
* The id of the Reddit submission that the writing prompt is from according to the Reddit API.
* prompt: str
* The text of the writing prompt.
* prompt_score: int
* The total karma score of the Reddit submission that the writing prompt is from.
* prompt_created_utc: int
* The prompt creation time in Unix epoch seconds
* response_id: int
* The id of the Reddit comment containing the response to the writing prompt.
* response: str
* The text of the response.
* response_score: int
* The total karma score of the Reddit comment that the given response is from.
* response_created_utc: int
* The response creation time in Unix epoch seconds.
* response_rank: int
* The index of the response in the list of responses for the given prompt sorted according to response_score from highest score to lowest.
* num_responses: int
* The total number of responses to the given prompt.
* response_children: List[str]
* The subcomments on the comment containing the given response to the given prompt.
## comparisons_train.csv and comparisons_test.csv
The comparison data is extracted from pairs of responses for a given prompt.
* There are 35200 comparisons in comparisons_test.csv and 334704 comparisons in comparisons_train.csv
* The comparisons dataset is filtered to remove comparisons between responses with an absolute difference in score less than 100.
* This is to ensure that comparisons are only made between responses that have a significant quality difference.
In particular, each row in the comparisons dataframe consists of the following:
* comparison: str
* The comparison string consists of the writing prompt, the first response, and the second response, separated with labels, and padded on the left to 1023 tokens.
* truth: int
* 0 if the first response has a higher score, 1 if the second response has a higher score (note that there are no ties because of the minimum score gap constraint)
* prompt_id: int
* The id of the Reddit submission that the writing prompt is from according to the Reddit API.
* prompt: str
* The text of the writing prompt.
* zero_id: int
* The id of the Reddit comment containing the first response in the comparison.
* one_id: int
* The id of the Reddit comment containing the second response in the comparison.
* zero_response: str
* The text of the first response in the comparison.
* one_response: str
* The text of the second response in the comparison.
* score_gap: int
* The absolute difference between the score of the first response and the score of the second response.
* zero_score: int
* The score of the first response.
* one_score: int
* The score of the second response.
* tokens_gap: int
* The absolute difference between the number of tokens in the first response and the number of tokens in the second response.
* zero_tokens: int
* The number of tokens in the first response as measured by the gpt2 tokenizer
* one_tokens: int
* The number of tokens in the second response as measured by the gpt2 tokenizer
* zero_delay: int
* The number of hours elapsed between the Reddit submission containing the prompt and the Reddit comment containing the first response.
* one_delay: int
* The number of hours elapsed between the Reddit submission containing the prompt and the Reddit comment containing the second response.
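As an illustration of how such comparison rows relate to the response metadata, here is a hedged sketch (not the authors' actual preprocessing code) that derives comparison pairs with the minimum score-gap filter described above from a toy responses table; column names follow the field list for `prompt_responses_full.csv`:

```python
import itertools
import pandas as pd

# Toy stand-in for prompt_responses_full.csv (same column names as above).
responses = pd.DataFrame({
    "prompt_id": [1, 1, 1],
    "response_id": [10, 11, 12],
    "response": ["story A", "story B", "story C"],
    "response_score": [500, 350, 90],
})

MIN_SCORE_GAP = 100  # comparisons with a smaller gap are dropped

pairs = []
for (_, a), (_, b) in itertools.combinations(responses.iterrows(), 2):
    gap = abs(a["response_score"] - b["response_score"])
    if gap < MIN_SCORE_GAP:
        continue  # gap too small to signal a quality difference
    pairs.append({
        "prompt_id": a["prompt_id"],
        "zero_id": a["response_id"],
        "one_id": b["response_id"],
        # truth is 0 if the first response scored higher, 1 otherwise
        "truth": 0 if a["response_score"] > b["response_score"] else 1,
        "score_gap": gap,
    })

comparisons = pd.DataFrame(pairs)
print(len(comparisons))  # 3: gaps are 150, 410 and 260, all >= 100
```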
|
robz | null | null | null | false | 1 | false | robz/test | 2022-02-17T13:54:07.000Z | null | false | 6a4e89d29202fab0ded138253c6193f1ebd98c45 | [] | [] | https://huggingface.co/datasets/robz/test/resolve/main/README.md | # Test Dataset
This is a test dataset |
rocca | null | null | null | false | 1 | false | rocca/sims4-faces | 2022-03-12T06:58:39.000Z | null | false | d4431eb9768d77852272755a3679b1fc28a45062 | [] | [] | https://huggingface.co/datasets/rocca/sims4-faces/resolve/main/README.md | A collection of >200k screenshots from the Sims 4 character creator (face and upper-torso only), using the randomize button.
* There are ~100k masculine faces (`masc` folder), ~100k feminine faces (`fem` folder), ~12k faces with a masculine physical frame and feminine attire/makeup (`masc2fem` folder).
* All images are 917x917.
* Each image is about 40kb.
* The examples below are cropped slightly off-center, but in the actual data the characters are more centered.
* The files are named from `1.jpg` through to `N.jpg` (no zero-padding). For `fem`, `N=101499`. For `masc`, `N=103615`. For `masc2fem`, `N=12123`.
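Given the naming scheme above, enumerating the expected file paths is straightforward; the sketch below assumes the three folders sit under a local root directory named `sims4-faces` (a hypothetical path):

```python
from pathlib import Path

# Enumerate the dataset files from the naming scheme described above
# (1.jpg .. N.jpg, no zero-padding); per-folder counts are as listed.
FOLDER_COUNTS = {"fem": 101499, "masc": 103615, "masc2fem": 12123}

def image_paths(root: str, folder: str) -> list:
    """Expected paths for one folder, in numeric order."""
    n = FOLDER_COUNTS[folder]
    return [Path(root) / folder / f"{i}.jpg" for i in range(1, n + 1)]

paths = image_paths("sims4-faces", "masc2fem")
print(paths[0].name, paths[-1].name)  # 1.jpg 12123.jpg
```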
## fem examples:

## masc examples:

## masc2fem examples:

|
ronaldvanos | null | null | null | false | 1 | false | ronaldvanos/testdata | 2021-11-09T12:56:07.000Z | null | false | 1bc98b7baa0108710ff2c0cca45bdf13451fb492 | [] | [] | https://huggingface.co/datasets/ronaldvanos/testdata/resolve/main/README.md | #this is a test dataset and should not be used by anyone
#i am not the owner of the data
|
rookieguy12 | null | null | null | false | 1 | false | rookieguy12/dataset | 2021-11-23T09:00:07.000Z | null | false | 40779edab2b798158e00080373e75c506e7da8c5 | [] | [] | https://huggingface.co/datasets/rookieguy12/dataset/resolve/main/README.md | |
rosettarandd | null | null | null | false | 1 | false | rosettarandd/rosetta_balcanica | 2021-11-14T17:45:31.000Z | null | false | 1ad7203a10de7e474ea9d3f8030207ee46b19c5a | [] | [] | https://huggingface.co/datasets/rosettarandd/rosetta_balcanica/resolve/main/README.md | # Dataset Summary
We present *rosetta-balcanica*, a manually extracted multilingual machine translation dataset for low-resource
western Balkan languages. The documents were sourced from the Organization for Security and Co-operation in Europe (OSCE)
website by applying appropriate language filters. The filtered list of documents can be found [here](https://www.osce.org/resources/documents?filters=%20sm_translations%3A%28sq%29&solrsort=score%20desc&rows=10).
# Languages Supported
Currently, our dataset has documents sourced from [Macedonian](https://github.com/ebegoli/rosetta-balcanica) and [Albanian](https://en.wikipedia.org/wiki/Albanian_language) (also known as Shqip).
|
roskoN | null | @inproceedings{li2017dailydialog,
title={DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset},
author={Li, Yanran and Su, Hui and Shen, Xiaoyu and Li, Wenjie and Cao, Ziqiang and Niu, Shuzi},
booktitle={Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)},
pages={986--995},
year={2017}
} | The DailyDialog dataset as provided in the original form with a bit of preprocessing applied to enable dast prototyping.
The splits are as in the original distribution. | false | 32 | false | roskoN/dailydialog | 2021-08-06T14:14:18.000Z | null | false | 5214b2a66405abf87fd229e5c1007985501ffe3e | [] | [] | https://huggingface.co/datasets/roskoN/dailydialog/resolve/main/README.md | # DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset
The data is based on the original distribution ([link to original website](http://yanran.li/dailydialog)) ([link to paper](https://aclanthology.org/I17-1099/)).
It is created as a convenience to enable faster prototyping.
# License
DailyDialog dataset is licensed under CC BY-NC-SA 4.0.
If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original. Any third party annotation is welcome. Note the dataset may not be adopted for commercial use. |
roskoN | null | @article{lee2019multi,
title={Multi-domain task-completion dialog challenge},
author={Lee, S and Schulz, H and Atkinson, A and Gao, J and Suleman, K and El Asri, L and Adada, M and Huang, M and Sharma, S and Tay, W and others},
journal={Dialog system technology challenges},
volume={8},
pages={9},
year={2019}
} | The DSTC8 dataset as provided in the original form.
The only difference is that the splits are in separate zip files.
In the original distribution it is one big archive containing all splits. | false | 13 | false | roskoN/dstc8-reddit-corpus | 2021-04-23T00:19:35.000Z | null | false | b43be7fa91a2d03f72682cca175ec5271d89b880 | [] | [] | https://huggingface.co/datasets/roskoN/dstc8-reddit-corpus/resolve/main/README.md | # DSTC8 Reddit Corpus
The data is based on the following repository:
> [https://github.com/microsoft/dstc8-reddit-corpus](https://github.com/microsoft/dstc8-reddit-corpus)
The dataset is created as a convenience to enable skipping the lengthy extraction process.
s-myk | null | null | null | false | 1 | false | s-myk/test | 2021-09-27T09:55:17.000Z | null | false | 07ad52a2252150dda5dda2ab234915574d6c46b6 | [] | [] | https://huggingface.co/datasets/s-myk/test/resolve/main/README.md | |
s50227harry | null | null | null | false | 2 | false | s50227harry/test1 | 2022-03-01T13:15:42.000Z | null | false | e52b561f896d97568d9c10ecae2816729b2a6036 | [] | [] | https://huggingface.co/datasets/s50227harry/test1/resolve/main/README.md | |
sagnikrayc | null | @inproceedings{richardson-etal-2013-mctest,
title = "{MCT}est: A Challenge Dataset for the Open-Domain Machine Comprehension of Text",
author = "Richardson, Matthew and
Burges, Christopher J.C. and
Renshaw, Erin",
booktitle = "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
month = oct,
year = "2013",
address = "Seattle, Washington, USA",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D13-1020",
pages = "193--203",
} | MCTest requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension. | false | 672 | false | sagnikrayc/mctest | 2022-10-25T00:16:37.000Z | mctest | false | 00355bee8104a40d80665be0e4570f4a8b2c96f7 | [] | [
"annotations_creators:expert-generated",
"language_creators:found",
"language:en",
"license:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"language_bcp47:en-US",
"tags:explanations-in-question-answering"
] | https://huggingface.co/datasets/sagnikrayc/mctest/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets: []
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
paperswithcode_id: mctest
language_bcp47:
- en-US
tags:
- explanations-in-question-answering
---
# Dataset Card Creation Guide
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** N/A
- **Repository:** [GitHub](https://github.com/mcobzarenco/mctest/)
- **Paper:** [MCTest: A Challenge Dataset for the Open-Domain Machine Comprehension of Text](https://www.aclweb.org/anthology/D13-1020.pdf)
- **Leaderboard:** N/A
- **Point of Contact:** -
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Microsoft Research License Agreement.
### Citation Information
[More Information Needed]
### Contributions
|
sagnikrayc | null | @article{dhingra2017quasar,
title={Quasar: Datasets for Question Answering by Search and Reading},
author={Dhingra, Bhuwan and Mazaitis, Kathryn and Cohen, William W},
journal={arXiv preprint arXiv:1707.03904},
year={2017}
} | We present two new large-scale datasets aimed at evaluating systems designed to comprehend a natural language query and extract its answer from a large corpus of text. The Quasar-S dataset consists of 37000 cloze-style (fill-in-the-gap) queries constructed from definitions of software entity tags on the popular website Stack Overflow. The posts and comments on the website serve as the background corpus for answering the cloze questions. The Quasar-T dataset consists of 43000 open-domain trivia questions and their answers obtained from various internet sources. ClueWeb09 serves as the background corpus for extracting these answers. We pose these datasets as a challenge for two related subtasks of factoid Question Answering: (1) searching for relevant pieces of text that include the correct answer to a query, and (2) reading the retrieved text to answer the query. | false | 18 | false | sagnikrayc/quasar | 2022-10-25T09:54:36.000Z | quasar-1 | false | ef167bca1e2bd18115fb6b6d58e5c888b30f7fde | [] | [
"arxiv:1707.03904",
"annotations_creators:expert-generated",
"language_creators:found",
"language:en-US",
"license:bsd-3-clause",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"task_categories:question-answering",
"task_ids:open-domain-qa",
"task_ids:extractive-qa"
] | https://huggingface.co/datasets/sagnikrayc/quasar/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en-US
license:
- bsd-3-clause
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
-
task_categories:
- question-answering
task_ids:
- open-domain-qa
- extractive-qa
paperswithcode_id: quasar-1
---
# Dataset Card Creation Guide
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** N/A
- **Repository:** [GitHub](https://github.com/bdhingra/quasar)
- **Paper:** [Quasar: Datasets for Question Answering by Search and Reading](https://arxiv.org/abs/1707.03904)
- **Leaderboard:** N/A
- **Point of Contact:** -
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
|
sagteam | null | \ | The corpus for the author profiling analysis contains Russian-language texts labeled for 5 tasks:
1) gender -- 13530 texts labeled with who wrote the text: a female or a male;
2) age -- 13530 texts labeled with the age of the person who wrote the text. This is a number from 12 to 80. In addition, for the classification task we added 5 age groups: 1-19; 20-29; 30-39; 40-49; 50+;
3) age imitation -- 7574 texts, where crowdsourced authors are asked to write three texts:
a) in their natural manner,
b) imitating the style of someone younger,
c) imitating the style of someone older;
4) gender imitation -- 5956 texts, where crowdsourced authors are asked to write texts in their own gender and pretending to be the opposite gender;
5) style imitation -- 5956 texts, where crowdsourced authors are asked to write a text on behalf of another person of their own gender, with a distortion of the author's usual style. | false | 10 | false | sagteam/author_profiling | 2022-08-09T12:33:07.000Z | null | false | 71a7c86c0432a0320f2b825c4064d00e79c4705b | [] | [
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language:ru",
"license:apache-2.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:multi-label-c... | https://huggingface.co/datasets/sagteam/author_profiling/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ru
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: The Corpus for the analysis of author profiling in Russian-language texts.
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
- multi-label-classification
---
# Dataset Card for [author_profiling]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/sag111/Author-Profiling
- **Repository:** https://github.com/sag111/Author-Profiling
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Sboev Alexander](mailto:sag111@mail.ru)
### Dataset Summary
The corpus for the author profiling analysis contains Russian-language texts labeled for 5 tasks:
1) gender -- 13448 texts labeled with who wrote the text: a female or a male;
2) age -- 13448 texts labeled with the age of the person who wrote the text. This is a number from 12 to 80. In addition, for the classification task we added 5 age groups: 0-19; 20-29; 30-39; 40-49; 50+;
3) age imitation -- 8460 texts, where crowdsourced authors are asked to write three texts:
a) in their natural manner,
b) imitating the style of someone younger,
c) imitating the style of someone older;
4) gender imitation -- 4988 texts, where crowdsourced authors are asked to write texts in their own gender and pretending to be the opposite gender;
5) style imitation -- 4988 texts, where crowdsourced authors are asked to write a text on behalf of another person of their own gender, with a distortion of the author's usual style.
The dataset is collected using the Yandex.Toloka service [link](https://toloka.yandex.ru/en).
You can read the data using the following python code:
```
import json

def load_jsonl(input_path: str) -> list:
    """
    Read a list of objects from a JSON lines file.
    """
    data = []
    with open(input_path, 'r', encoding='utf-8') as f:
        for line in f:
            data.append(json.loads(line.rstrip('\n|\r')))
    print('Loaded {} records from {}\n'.format(len(data), input_path))
    return data

path_to_file = "./data/train.jsonl"
data = load_jsonl(path_to_file)
```
or you can use HuggingFace style:
```
from datasets import load_dataset
train_df = load_dataset('sagteam/author_profiling', split='train')
valid_df = load_dataset('sagteam/author_profiling', split='validation')
test_df = load_dataset('sagteam/author_profiling', split='test')
```
#### Here are some statistics:
1. For Train file:
- No. of documents -- 9564;
- No. of unique texts -- 9553;
- Text length in characters -- min: 197, max: 2984, mean: 500.5;
- No. of documents written -- by men: 4704, by women: 4860;
- No. of unique authors -- 2344; men: 1172, women: 1172;
- Age of the authors -- min: 13, max: 80, mean: 31.2;
- No. of documents by age group -- 0-19: 813, 20-29: 4188, 30-39: 2697, 40-49: 1194, 50+: 672;
- No. of documents with gender imitation: 1215; without gender imitation: 2430; not applicable: 5919;
- No. of documents with age imitation -- younger: 1973; older: 1973; without age imitation: 1973; not applicable: 3645;
- No. of documents with style imitation: 1215; without style imitation: 2430; not applicable: 5919.
2. For Valid file:
- No. of documents -- 1320;
- No. of unique texts -- 1316;
- Text length in characters -- min: 200, max: 2809, mean: 520.8;
- No. of documents written -- by men: 633, by women: 687;
- No. of unique authors -- 336; men: 168, women: 168;
- Age of the authors -- min: 15, max: 79, mean: 32.2;
- No. of documents by age group -- 1-19: 117, 20-29: 570, 30-39: 339, 40-49: 362, 50+: 132;
- No. of documents with gender imitation: 156; without gender imitation: 312; not applicable: 852;
- No. of documents with age imitation -- younger: 284; older: 284; without age imitation: 284; not applicable: 468;
- No. of documents with style imitation: 156; without style imitation: 312; not applicable: 852.
3. For Test file:
- No. of documents -- 2564;
- No. of unique texts -- 2561;
- Text length in characters -- min: 199, max: 3981, mean: 515.6;
- No. of documents written -- by men: 1290, by women: 1274;
- No. of unique authors -- 672; men: 336, women: 336;
- Age of the authors -- min: 12, max: 67, mean: 31.8;
- No. of documents by age group -- 1-19: 195, 20-29: 1131, 30-39: 683, 40-49: 351, 50+: 204;
- No. of documents with gender imitation: 292; without gender imitation: 583; not applicable: 1689;
- No. of documents with age imitation -- younger: 563; older: 563; without age imitation: 563; not applicable: 875;
- No. of documents with style imitation: 292; without style imitation: 583; not applicable: 1689.
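Per-split statistics like the ones above can be recomputed directly from the loaded records. A minimal sketch over a toy list of records with the same field names as the dataset instances (the toy values are illustrative, not real data):

```python
from statistics import mean

# Toy records reusing the dataset's field names; values are made up.
records = [
    {"text": "a" * 250, "gender": "male", "age": 25},
    {"text": "b" * 400, "gender": "female", "age": 41},
    {"text": "c" * 320, "gender": "female", "age": 19},
]

lengths = [len(r["text"]) for r in records]
print("No. of documents:", len(records))
print("Text length min/max/mean:", min(lengths), max(lengths), round(mean(lengths), 1))
print("by men:", sum(r["gender"] == "male" for r in records),
      "by women:", sum(r["gender"] == "female" for r in records))
```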
### Supported Tasks and Leaderboards
This dataset is intended for multi-class and multi-label text classification.
The baseline models currently achieve the following weighted F1 scores:
| Model name | gender | age_group | gender_imitation | age_imitation | style_imitation | no_imitation | average |
| ------------------- | ------ | --------- | ---------------- | ------------- | --------------- | ------------ | ------- |
| Dummy-stratified | 0.49 | 0.29 | 0.56 | 0.32 | 0.57 | 0.55 | 0.46 |
| Dummy-uniform | 0.49 | 0.23 | 0.51 | 0.32 | 0.51 | 0.51 | 0.43 |
| Dummy-most_frequent | 0.34 | 0.27 | 0.53 | 0.17 | 0.53 | 0.53 | 0.40 |
| LinearSVC + TF-IDF | 0.67 | 0.37 | 0.62 | 0.72 | 0.71 | 0.71 | 0.63 |
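The LinearSVC + TF-IDF row can be reproduced in outline with scikit-learn. This is only a sketch on placeholder texts and age-group labels, not the exact configuration behind the scores above:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder data; in practice the texts and labels come from the train/test splits.
train_texts = ["привет, как дела", "добрый день, коллеги", "ура, каникулы!", "отчёт готов к сдаче"]
train_labels = ["0-19", "50+", "0-19", "50+"]
test_texts = ["ура, скоро лето", "добрый день"]
test_labels = ["0-19", "50+"]

# TF-IDF features feeding a linear SVM, as in the baseline row above.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(train_texts, train_labels)
pred = clf.predict(test_texts)
print("F1-weighted:", f1_score(test_labels, pred, average="weighted"))
```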
### Languages
The text in the dataset is in Russian.
## Dataset Structure
### Data Instances
Each instance is a text in Russian with some author profiling annotations.
An example for an instance from the dataset is shown below:
```
{
'id': 'crowdsource_4916',
'text': 'Ты очень симпатичный, Я давно не с кем не встречалась. Ты мне сильно понравился, ты умный интересный и удивительный, приходи ко мне в гости , у меня есть вкусное вино , и приготовлю вкусный ужин, посидим пообщаемся, узнаем друг друга поближе.',
'account_id': 'account_#1239',
'author_id': 411,
'age': 22,
'age_group': '20-29',
'gender': 'male',
'no_imitation': 'with_any_imitation',
'age_imitation': 'None',
'gender_imitation': 'with_gender_imitation',
'style_imitation': 'no_style_imitation'
}
```
### Data Fields
Data Fields includes:
- id -- unique identifier of the sample;
- text -- the author's text, written by a crowdsourcing user;
- author_id -- unique identifier of the user;
- account_id -- unique identifier of the crowdsource account;
- age -- age annotations;
- age_group -- age group annotations;
- no_imitation -- imitation annotations.
Label codes:
- 'with_any_imitation' -- there is some imitation in the text;
- 'no_any_imitation' -- the text is written without any imitation
- age_imitation -- age imitation annotations.
Label codes:
- 'younger' -- someone younger than the author is imitated in the text;
- 'older' -- someone older than the author is imitated in the text;
- 'no_age_imitation' -- the text is written without age imitation;
- 'None' -- not supported (the text was not written for this task)
- gender_imitation -- gender imitation annotations.
Label codes:
- 'no_gender_imitation' -- the text is written without gender imitation;
- 'with_gender_imitation' -- the text is written with a gender imitation;
- 'None' -- not supported (the text was not written for this task)
- style_imitation -- style imitation annotations.
Label codes:
- 'no_style_imitation' -- the text is written without style imitation;
- 'with_style_imitation' -- the text is written with a style imitation;
- 'None' -- not supported (the text was not written for this task).
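Because a text written for one imitation task carries 'None' in the other tasks' fields, building a per-task subset amounts to dropping those records. A sketch over a toy list of instance-shaped dicts (with the `datasets` library, the equivalent filter is `ds.filter(lambda ex: ex["gender_imitation"] != "None")`):

```python
# Toy instance-shaped records; only the fields needed here are shown.
records = [
    {"id": "a", "gender_imitation": "with_gender_imitation", "age_imitation": "None"},
    {"id": "b", "gender_imitation": "None", "age_imitation": "younger"},
    {"id": "c", "gender_imitation": "no_gender_imitation", "age_imitation": "None"},
]

def task_subset(records, field):
    """Keep only the records actually annotated for the given task."""
    return [r for r in records if r[field] != "None"]

gender_task = task_subset(records, "gender_imitation")
print([r["id"] for r in gender_task])  # → ['a', 'c']
```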
### Data Splits
The dataset includes a set of train/valid/test splits with 9564, 1320 and 2564 texts respectively.
The unique authors do not overlap between the splits.
## Dataset Creation
### Curation Rationale
The dataset consists of Russian texts collected via a crowdsourcing platform. It can be used to improve the accuracy of supervised classifiers in author profiling tasks.
### Source Data
#### Initial Data Collection and Normalization
Data was collected from a crowdsourcing platform. Each text was written by the author specifically for the task provided.
#### Who are the source language producers?
Russian-speaking Yandex.Toloka users.
### Annotations
#### Annotation process
We used a crowdsourcing platform to collect texts. Each respondent is asked to fill a questionnaire including their gender, age and native language.
For the age imitation task, the respondents are to choose a topic out of a few suggested, and write three texts on it:
1) Text in their natural manner;
2) Text imitating the style of someone younger;
3) Text imitating the style of someone older.
For the gender and style imitation tasks, each author wrote three texts in different styles:
1) Text in the author's natural style;
2) Text imitating other gender style;
3) Text in a different style but without gender imitation.
The topics to choose from are the following.
- An attempt to persuade some arbitrary listener to meet the respondent at their place;
- A story about some memorable event/acquisition/rumour or whatever else the imaginary listener is supposed to enjoy;
- A story about oneself or about someone else, aiming to please the listener and win their favour;
- A description of oneself and one’s potential partner for a dating site;
- An attempt to persuade an unfamiliar person to come;
- A negative tour review.
A submission does not pass checking and is considered improper work if it contains:
- Irrelevant answers to the questionnaire;
- Incoherent jumble of words;
- Chunks of text borrowed from somewhere else;
- Texts not conforming to the above list of topics.
Text checking is performed first by an automated search for borrowings (via an anti-plagiarism website), and then by a manual review of compliance with the task.
#### Who are the annotators?
Russian-speaking Yandex.Toloka users.
### Personal and Sensitive Information
All personal data was anonymized. Each author has been assigned an impersonal, unique identifier.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
Researchers at the AI technology lab at NRC "Kurchatov Institute". See the [website](https://sagteam.ru/).
### Licensing Information
Apache License 2.0.
### Citation Information
If you have found our results helpful in your work, feel free to cite our publication.
```
@article{сбоев2022сравнение,
title={СРАВНЕНИЕ ТОЧНОСТЕЙ МЕТОДОВ НА ОСНОВЕ ЯЗЫКОВЫХ И ГРАФОВЫХ НЕЙРОСЕТЕВЫХ МОДЕЛЕЙ ДЛЯ ОПРЕДЕЛЕНИЯ ПРИЗНАКОВ АВТОРСКОГО ПРОФИЛЯ ПО ТЕКСТАМ НА РУССКОМ ЯЗЫКЕ},
author={Сбоев, АГ and Молошников, ИА and Рыбка, РБ and Наумов, АВ and Селиванов, АА},
journal={Вестник Национального исследовательского ядерного университета МИФИ},
volume={10},
number={6},
pages={529--539},
year={2021},
publisher={Общество с ограниченной ответственностью МАИК "Наука/Интерпериодика"}
}
```
### Contributions
Thanks to [@naumov-al](https://github.com/naumov-al) for adding this dataset.
|
sc2qa | null | @article{zhou2021generating,
author = {Li Zhou, Kevin Small, Yong Zhang, Sandeep Atluri},
title = "{Generating Self-Contained and Summary-Centric Question Answer Pairs via Differentiable Reward Imitation Learning}",
conference = {The 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021)},
year = 2021,
} | \ | false | 1 | false | sc2qa/sc2q_commoncrawl | 2022-03-30T18:34:35.000Z | null | false | 38bfb57d96df0df3b254f0dcde663b6e8d7e4b5a | [] | [
"arxiv:2109.04689"
] | https://huggingface.co/datasets/sc2qa/sc2q_commoncrawl/resolve/main/README.md | For details, please refer to the following links.
Github repo: https://github.com/amazon-research/SC2QA-DRIL
Paper: [Generating Self-Contained and Summary-Centric Question Answer Pairs via Differentiable Reward Imitation Learning](https://arxiv.org/pdf/2109.04689.pdf) |