id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
NYTK/HuCOLA | 2022-10-21T16:08:35.000Z | [
"task_ids:text-simplification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:hu",
"license:cc-by-sa-4.0",
"region:us"
] | NYTK | null | null | 0 | 82 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- hu
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: HuCOLA
size_categories:
- unknown
source_datasets:
- original
task_categories:
- conditional-text-generation
task_ids:
- machine-translation
- summarization
- text-simplification
---
# Dataset Card for HuCOLA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
[HuCOLA dataset](https://github.com/nytud/HuCOLA)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
[lnnoemi](mailto:ligeti-nagy.noemi@nytud.hu)
### Dataset Summary
This is the dataset card for the Hungarian Corpus of Linguistic Acceptability (HuCOLA), which is also part of the Hungarian Language Understanding Evaluation Benchmark Kit [HuLU](https://hulu.nlp.nytud.hu).
### Supported Tasks and Leaderboards
### Languages
The BCP-47 code for Hungarian, the only represented language in this dataset, is hu-HU.
## Dataset Structure
### Data Instances
For each instance, there is an id, a sentence and a label.
An example:
```
{"Sent_id": "dev_0",
"Sent": "A földek eláradtak.",
"Label": "0"}
```
### Data Fields
- Sent_id: unique id of the instances, an integer between 1 and 1000;
- Sent: a Hungarian sentence;
- Label: '0' for unacceptable (wrong) sentences, '1' for acceptable (good) sentences.
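A minimal loading sketch with the Hugging Face `datasets` library (the repository id, split names and field names are taken from this card; everything else is an assumption):
```python
from datasets import load_dataset
# Load HuCOLA from the Hub; split names follow the table in the next section
hucola = load_dataset("NYTK/HuCOLA")
# Field names follow the example instance above
example = hucola["validation"][0]
print(example["Sent_id"], example["Label"], example["Sent"])
```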
### Data Splits
HuCOLA has 3 splits: *train*, *validation* and *test*.
| Dataset split | Number of sentences in the split | Proportion of the split |
|---------------|----------------------------------|-------------------------|
| train         | 7276                             | 80%                     |
| validation    | 900                              | 10%                     |
| test          | 900                              | 10%                     |
The test data is distributed without the labels. To evaluate your model, please [contact us](mailto:ligeti-nagy.noemi@nytud.hu), or check [HuLU's website](https://hulu.nlp.nytud.hu) for an automatic evaluation (this feature is under construction at the moment). The evaluation metric is the Matthews correlation coefficient.
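As an illustration only (not an official evaluation script), a sketch of scoring predictions with scikit-learn's implementation of the metric:
```python
from sklearn.metrics import matthews_corrcoef
# Illustrative gold labels and predictions ('0'/'1' as in the Label field, cast to int)
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]
print(matthews_corrcoef(y_true, y_pred))  # ranges from -1 to 1; 1 means perfect agreement
```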
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The data was collected by two human annotators from three major linguistics books on the Hungarian language:
- Kiefer Ferenc (ed.) (1992), Strukturális magyar nyelvtan 1. Mondattan. Budapest, Akadémiai Kiadó.
- Alberti, Gábor and Laczkó, Tibor (eds) (2018), Syntax of Hungarian Nouns and Noun Phrases. I., II. Comprehensive grammar resources. Amsterdam University Press, Amsterdam.
- Katalin É. Kiss and Veronika Hegedűs (eds) (2021), Postpositions and Postpositional Phrases. Amsterdam: Amsterdam University Press.
The process of collecting sentences partly followed the one described in Warstadt et al. (2018). The guidelines for our process are available in the repository of [HuCOLA](https://github.com/nytud/HuCOLA).
### Annotations
#### Annotation process
Each instance was annotated by 4 human annotators for its acceptability (see the annotation guidelines in the repository of [HuCOLA](https://github.com/nytud/HuCOLA)).
#### Who are the annotators?
The annotators were native Hungarian speakers (of various ages, from 20 to 67) without any linguistic background.
## Additional Information
### Licensing Information
HuCOLA is released under the CC-BY-SA 4.0 licence.
### Citation Information
If you use this resource or any part of its documentation, please refer to:
Ligeti-Nagy, N., Ferenczi, G., Héja, E., Jelencsik-Mátyus, K., Laki, L. J., Vadász, N., Yang, Z. Gy. and Váradi, T. (2022) HuLU: magyar nyelvű benchmark adatbázis
kiépítése a neurális nyelvmodellek kiértékelése céljából [HuLU: Hungarian benchmark dataset to evaluate neural language models]. XVIII. Magyar Számítógépes Nyelvészeti Konferencia. (in press)
```
@inproceedings{ligetinagy2022hulu,
title={HuLU: magyar nyelvű benchmark adatbázis kiépítése a neurális nyelvmodellek kiértékelése céljából},
author={Ligeti-Nagy, N. and Ferenczi, G. and Héja, E. and Jelencsik-Mátyus, K. and Laki, L. J. and Vadász, N. and Yang, Z. Gy. and Váradi, T.},
booktitle={XVIII. Magyar Számítógépes Nyelvészeti Konferencia},
year={2022}
}
```
### Contributions
Thanks to [lnnoemi](https://github.com/lnnoemi) for adding this dataset. | 5,304 | [
[
-0.0257568359375,
-0.07147216796875,
0.0042572021484375,
0.03070068359375,
-0.00750732421875,
-0.0263671875,
-0.04547119140625,
-0.0262298583984375,
0.0179443359375,
0.027801513671875,
-0.0299835205078125,
-0.0633544921875,
-0.04364013671875,
0.0213470458984... |
Narsil/test_data | 2022-02-14T11:24:59.000Z | [
"region:us"
] | Narsil | null | null | 0 | 82 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
OmarN121/train | 2022-01-03T18:49:44.000Z | [
"region:us"
] | OmarN121 | null | null | 0 | 82 | 2022-03-02T23:29:22 | ---
YAML tags:
- copy-paste the tags obtained with the online tagging app: https://huggingface.co/spaces/huggingface/datasets-tagging
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | 2,621 | [
[
-0.03265380859375,
-0.034698486328125,
0.00994873046875,
0.01904296875,
-0.01483917236328125,
0.016937255859375,
-0.022979736328125,
-0.0256500244140625,
0.045867919921875,
0.04412841796875,
-0.0626220703125,
-0.083251953125,
-0.051544189453125,
0.0049934387... |
RollingMuffin/test_scripts | 2022-02-18T13:58:25.000Z | [
"region:us"
] | RollingMuffin | This dataset is designed to generate lyrics with HuggingArtists. | @InProceedings{huggingartists:dataset,
title = {Lyrics dataset},
author={Aleksey Korshuk
},
year={2021}
} | 0 | 82 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
ShreyaR/DepressionDetection | 2022-03-24T11:31:29.000Z | [
"region:us"
] | ShreyaR | null | null | 2 | 82 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
Tevatron/scifact-corpus | 2021-09-13T23:32:11.000Z | [
"region:us"
] | Tevatron | null | @inproceedings{Wadden2020FactOF,
title={Fact or Fiction: Verifying Scientific Claims},
author={David Wadden and Shanchuan Lin and Kyle Lo and Lucy Lu Wang and Madeleine van Zuylen and Arman Cohan and Hannaneh Hajishirzi},
booktitle={EMNLP},
year={2020},
} | 0 | 82 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01496124267578125,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.005069732666015625,
0.051361083984375,
0.0169830322265625,
-0.05206298828125,
-0.01497650146484375,
-0.060302734375,
0.03790283... |
Tevatron/wikipedia-curated-corpus | 2021-09-23T01:58:40.000Z | [
"region:us"
] | Tevatron | null | @inproceedings{karpukhin-etal-2020-dense,
title = "Dense Passage Retrieval for Open-Domain Question Answering",
author = "Karpukhin, Vladimir and Oguz, Barlas and Min, Sewon and Lewis, Patrick and Wu, Ledell and Edunov, Sergey and Chen, Danqi and Yih, Wen-tau",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.550",
doi = "10.18653/v1/2020.emnlp-main.550",
pages = "6769--6781",
} | 0 | 82 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Tevatron/wikipedia-curated | 2021-09-23T02:09:24.000Z | [
"region:us"
] | Tevatron | null | @inproceedings{karpukhin-etal-2020-dense,
title = "Dense Passage Retrieval for Open-Domain Question Answering",
author = "Karpukhin, Vladimir and Oguz, Barlas and Min, Sewon and Lewis, Patrick and Wu, Ledell and Edunov,
Sergey and Chen, Danqi and Yih, Wen-tau",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.550",
doi = "10.18653/v1/2020.emnlp-main.550",
pages = "6769--6781",
} | 0 | 82 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01496124267578125,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.005069732666015625,
0.051361083984375,
0.0169830322265625,
-0.05206298828125,
-0.01497650146484375,
-0.060302734375,
0.03790283... |
TurkuNLP/turku_hockey_data2text | 2021-11-30T13:41:47.000Z | [
"region:us"
] | TurkuNLP | The Turku Hockey Data2Text corpus was developed as a benchmark for evaluating template-free, machine learning methods on Finnish news generation in the area of ice hockey reporting. This dataset is a collection of 3,454 ice hockey games, each including game statistics and a news article describing the game. Each game includes manual alignment of events (such as goals or penalties) and sentences describing the specific event in natural language extracted from the news article. The corpus includes 12,827 annotated events. The natural language passages are manually curated not to include any information not derivable from the input data or world knowledge. | @inproceedings{kanerva2019newsgen,
Title = {Template-free Data-to-Text Generation of Finnish Sports News},
Author = {Jenna Kanerva and Samuel R{\"o}nnqvist and Riina Kekki and Tapio Salakoski and Filip Ginter},
booktitle = {Proceedings of the 22nd Nordic Conference on Computational Linguistics (NoDaLiDa’19)},
year={2019}
} | 0 | 82 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01496124267578125,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.005069732666015625,
0.051361083984375,
0.0169830322265625,
-0.05206298828125,
-0.01497650146484375,
-0.060302734375,
0.03790283... |
Valahaar/wsdmt | 2021-05-15T12:42:12.000Z | [
"region:us"
] | Valahaar | null | null | 0 | 82 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01496124267578125,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.005069732666015625,
0.051361083984375,
0.0169830322265625,
-0.05206298828125,
-0.01497650146484375,
-0.060302734375,
0.03790283... |
XiangPan/snli_break | 2021-09-20T05:45:54.000Z | [
"region:us"
] | XiangPan | The SNLI corpus (version 1.0) is a collection of 570k human-written English
sentence pairs manually labeled for balanced classification with the labels
entailment, contradiction, and neutral, supporting the task of natural language
inference (NLI), also known as recognizing textual entailment (RTE). | null | 0 | 82 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01496124267578125,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.005069732666015625,
0.051361083984375,
0.0169830322265625,
-0.05206298828125,
-0.01497650146484375,
-0.060302734375,
0.03790283... |
YuAnthony/tnews | 2022-01-19T09:48:58.000Z | [
"region:us"
] | YuAnthony | null | null | 0 | 82 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01496124267578125,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.005069732666015625,
0.051361083984375,
0.0169830322265625,
-0.05206298828125,
-0.01497650146484375,
-0.060302734375,
0.03790283... |
adalbertojunior/punctuation-ptbr-light | 2022-01-26T19:01:29.000Z | [
"region:us"
] | adalbertojunior | null | null | 0 | 82 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01496124267578125,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.005069732666015625,
0.051361083984375,
0.0169830322265625,
-0.05206298828125,
-0.01497650146484375,
-0.060302734375,
0.03790283... |
adalbertojunior/punctuation-ptbr | 2021-12-19T15:02:20.000Z | [
"region:us"
] | adalbertojunior | null | null | 0 | 82 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
albertvillanova/carbon_24 | 2022-10-24T15:25:03.000Z | [
"task_categories:other",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:other-crystallography",
"size_categories:unknown",
"language:cif",
"license:mit",
"material-property-optimization",
"material-reconstruction",
"material-generation",
"arxiv:... | albertvillanova | null | null | 0 | 82 | 2022-03-02T23:29:22 | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- cif
license:
- mit
multilinguality:
- other-crystallography
size_categories:
- unknown
source_datasets: []
task_categories:
- other
task_ids: []
pretty_name: Carbon-24
tags:
- material-property-optimization
- material-reconstruction
- material-generation
---
# Dataset Card for Carbon-24
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/txie-93/cdvae/tree/main/data/carbon_24
- **Paper:** [Crystal Diffusion Variational Autoencoder for Periodic Material Generation](https://arxiv.org/abs/2110.06197)
- **Leaderboard:**
- **Point of Contact:** [Tian Xie](mailto:txie@csail.mit.edu)
### Dataset Summary
Carbon-24 contains 10k carbon materials, which share the same composition, but have different structures. There is 1 element and the materials have 6 - 24 atoms in the unit cells.
Carbon-24 includes various carbon structures obtained via ab initio random structure searching (AIRSS) (Pickard & Needs, 2006; 2011) performed at 10 GPa.
The original dataset includes 101529 carbon structures, and we selected the 10% of the carbon structures with the lowest energy per atom to create Carbon-24. All 10153 structures are at a local energy minimum after DFT relaxation. The most stable structure is diamond at 10 GPa. All remaining structures are thermodynamically unstable but may be kinetically stable.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
Please consider citing the following papers:
```
@article{xie2021crystal,
title={Crystal Diffusion Variational Autoencoder for Periodic Material Generation},
author={Tian Xie and Xiang Fu and Octavian-Eugen Ganea and Regina Barzilay and Tommi Jaakkola},
year={2021},
eprint={2110.06197},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
and
```
@misc{carbon2020data,
doi = {10.24435/MATERIALSCLOUD:2020.0026/V1},
url = {https://archive.materialscloud.org/record/2020.0026/v1},
author = {Pickard, Chris J.},
keywords = {DFT, ab initio random structure searching, carbon},
language = {en},
title = {AIRSS data for carbon at 10GPa and the C+N+H+O system at 1GPa},
publisher = {Materials Cloud},
year = {2020},
copyright = {info:eu-repo/semantics/openAccess}
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
| 4,486 | [
[
-0.03289794921875,
-0.0382080078125,
0.0226898193359375,
0.00791168212890625,
-0.0154571533203125,
0.0330810546875,
-0.009185791015625,
-0.01511383056640625,
0.0257415771484375,
0.0177001953125,
-0.05609130859375,
-0.06781005859375,
-0.02630615234375,
0.0195... |
allegro/polish-question-passage-pairs | 2021-09-23T19:55:15.000Z | [
"region:us"
] | allegro | null | null | 4 | 82 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
anton-l/common_language | 2022-10-21T16:20:41.000Z | [
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:extended|common_voice",
"language:ar",
"language:br",
"language:ca",
"language:cnh",
"language:cs",
"language:cv",
"language:cy",
"language:de"... | anton-l | This dataset is composed of speech recordings from languages that were carefully selected from the CommonVoice database.
The total duration of audio recordings is 45.1 hours (i.e., 1 hour of material for each language).
The dataset has been extracted from CommonVoice to train language-id systems. | @dataset{ganesh_sinisetty_2021_5036977,
author = {Ganesh Sinisetty and
Pavlo Ruban and
Oleksandr Dymov and
Mirco Ravanelli},
title = {CommonLanguage},
month = jun,
year = 2021,
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5036977},
url = {https://doi.org/10.5281/zenodo.5036977}
} | 0 | 82 | 2022-03-02T23:29:22 | ---
pretty_name: Common Language
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ar
- br
- ca
- cnh
- cs
- cv
- cy
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fr
- fy
- ia
- id
- it
- ja
- ka
- kab
- ky
- lv
- mn
- mt
- nl
- pl
- pt
- rm
- ro
- ru
- rw
- sah
- sl
- sv
- ta
- tr
- tt
- uk
- zh
language_bcp47:
- ar
- br
- ca
- cnh
- cs
- cv
- cy
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fr
- fy-NL
- ia
- id
- it
- ja
- ka
- kab
- ky
- lv
- mn
- mt
- nl
- pl
- pt
- rm-sursilv
- ro
- ru
- rw
- sah
- sl
- sv-SE
- ta
- tr
- tt
- uk
- zh-CN
- zh-HK
- zh-TW
license:
- cc-by-nc-4.0
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|common_voice
task_categories:
- speech-processing
task_ids:
- speech-classification
---
# Dataset Card for common_language
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://zenodo.org/record/5036977
- **Repository:** https://github.com/speechbrain/speechbrain/tree/develop/recipes/CommonLanguage
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Leaderboard:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
This dataset is composed of speech recordings from languages that were carefully selected from the CommonVoice database. The total duration of audio recordings is 45.1 hours (i.e., 1 hour of material for each language). The dataset has been extracted from CommonVoice to train language-id systems.
### Supported Tasks and Leaderboards
The baselines for language-id are available in the SpeechBrain toolkit (see recipes/CommonLanguage):
https://github.com/speechbrain/speechbrain
### Languages
List of included languages:
```
Arabic, Basque, Breton, Catalan, Chinese_China, Chinese_Hongkong, Chinese_Taiwan, Chuvash, Czech, Dhivehi, Dutch, English, Esperanto, Estonian, French, Frisian, Georgian, German, Greek, Hakha_Chin, Indonesian, Interlingua, Italian, Japanese, Kabyle, Kinyarwanda, Kyrgyz, Latvian, Maltese, Mongolian, Persian, Polish, Portuguese, Romanian, Romansh_Sursilvan, Russian, Sakha, Slovenian, Spanish, Swedish, Tamil, Tatar, Turkish, Ukranian, Welsh
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file, and its label `language`. Additional fields include `age`, `client_id`, `gender` and `sentence`.
```python
{
'client_id': 'itln_trn_sp_175',
'path': '/path/common_voice_kpd/Italian/train/itln_trn_sp_175/common_voice_it_18279446.wav',
'sentence': 'Con gli studenti è leggermente simile.',
'age': 'not_defined',
'gender': 'not_defined',
'language': 22
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`language` (`ClassLabel`): The language of the recording (see the `Languages` section above)
`sentence` (`string`): The sentence the user was prompted to speak
`age` (`string`): The age of the speaker.
`gender` (`string`): The gender of the speaker
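A minimal access sketch (field names are taken from the instance above; if the loader defines a named configuration, it would also need to be passed to `load_dataset`):
```python
from datasets import load_dataset
# Load the training split (repository id of this dataset; split names from the table below)
ds = load_dataset("anton-l/common_language", split="train")
sample = ds[0]
# `language` is a ClassLabel, so int2str maps the integer back to the language name
print(ds.features["language"].int2str(sample["language"]), "-", sample["sentence"])
```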
### Data Splits
The dataset is already balanced and split into train, dev (validation) and test sets.
| Name | Train | Dev | Test |
|:---------------------------------:|:------:|:------:|:-----:|
| **# of utterances** | 177552 | 47104 | 47704 |
| **# unique speakers** | 11189 | 1297 | 1322 |
| **Total duration, hr** | 30.04 | 7.53 | 7.53 |
| **Min duration, sec** | 0.86 | 0.98 | 0.89 |
| **Mean duration, sec** | 4.87 | 4.61 | 4.55 |
| **Max duration, sec** | 21.72 | 105.67 | 29.83 |
| **Duration per language, min** | ~40 | ~10 | ~10 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
The Mongolian and Ukrainian languages are spelled as "Mangolian" and "Ukranian" in this version of the dataset.
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@dataset{ganesh_sinisetty_2021_5036977,
author = {Ganesh Sinisetty and
Pavlo Ruban and
Oleksandr Dymov and
Mirco Ravanelli},
title = {CommonLanguage},
month = jun,
year = 2021,
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5036977},
url = {https://doi.org/10.5281/zenodo.5036977}
}
```
### Contributions
Thanks to [@anton-l](https://github.com/anton-l) for adding this dataset.
| 7,729 | [
[
-0.034149169921875,
-0.049957275390625,
0.0039215087890625,
0.0251312255859375,
-0.01479339599609375,
-0.00164794921875,
-0.04180908203125,
-0.032806396484375,
0.042022705078125,
0.031982421875,
-0.0511474609375,
-0.0767822265625,
-0.034454345703125,
0.00724... |
anuragshas/bg_opus100_processed | 2022-01-30T14:36:57.000Z | [
"region:us"
] | anuragshas | null | null | 0 | 82 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
birgermoell/sv_corpora_parliament_processed | 2022-01-17T15:48:56.000Z | [
"region:us"
] | birgermoell | null | null | 0 | 82 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
biu-nlp/qa_discourse | 2022-05-16T08:37:44.000Z | [
"region:us"
] | biu-nlp | The dataset contains question-answer pairs to model discourse relations.
While answers roughly correspond to spans of the sentence, these spans could have been freely adjusted by annotators to grammaticaly fit the question;
Therefore, answers are given just as text and not as identified spans of the original sentence.
See the paper for details: QADiscourse - Discourse Relations as QA Pairs: Representation, Crowdsourcing and Baselines, Pyatkin et. al., 2020 | @inproceedings{pyatkin2020qadiscourse,
title={QADiscourse-Discourse Relations as QA Pairs: Representation, Crowdsourcing and Baselines},
author={Pyatkin, Valentina and Klein, Ayal and Tsarfaty, Reut and Dagan, Ido},
booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
pages={2804--2819},
year={2020}
} | 0 | 82 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
bs-modeling-metadata/OSCAR_Entity_13_000 | 2021-09-15T14:20:53.000Z | [
"region:us"
] | bs-modeling-metadata | null | null | 0 | 82 | 2022-03-02T23:29:22 | The dataset is in the form of a JSON Lines file with 10,657 examples, where each example consists of text (extracted from the first 13,000 rows of the OSCAR unshuffled English dataset) and metadata fields (entities).
Structure of an example:
```
{
"text": "This is exactly the sort of article to raise the profile of the club around the Midlands. Very positive and really focusses on how the club has improved over a short period of time and the bright prospects for the future \n\"Oxford Town\" - professional as always at the Birmingham Mail. Not only is Oxford a city, but Oxford United are pretty recognisable name to anyone who has ever taken even a vague interest in English football.",
"metadata": [
{
"key": "entity",
"type": "local",
"char_start_idx": 80,
"char_end_idx": 88,
"value": "Midlands"
},
{
"key": "entity",
"type": "local",
"char_start_idx": 225,
"char_end_idx": 236,
"value": "Oxford Town"
},
{
"key": "entity",
"type": "local",
"char_start_idx": 270,
"char_end_idx": 285,
"value": "Birmingham_Mail"
},
{
"key": "entity",
"type": "local",
"char_start_idx": 299,
"char_end_idx": 305,
"value": "Oxford"
},
{
"key": "entity",
"type": "local",
"char_start_idx": 318,
"char_end_idx": 331,
"value": "Oxford_United_Stars_F.C."
},
{
"key": "entity",
"type": "local",
"char_start_idx": 415,
"char_end_idx": 422,
"value": "England"
}
]
}
```
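A small sketch of consuming such a file, assuming a local copy named `data.jsonl` (a hypothetical file name); the character offsets index into `text` and recover each entity's surface form:
```python
import json
with open("data.jsonl", encoding="utf-8") as f:  # hypothetical local copy of the JSON lines file
    for line in f:
        example = json.loads(line)
        for m in example["metadata"]:
            surface = example["text"][m["char_start_idx"]:m["char_end_idx"]]
            print(m["value"], "->", surface)
```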
| 1,677 | [
[
-0.025970458984375,
-0.01543426513671875,
0.01666259765625,
-0.00862884521484375,
-0.0193634033203125,
0.01360321044921875,
0.01242828369140625,
-0.01593017578125,
0.039520263671875,
0.04345703125,
-0.049591064453125,
-0.06842041015625,
-0.027862548828125,
0... |
bstad/github-issues | 2022-01-17T18:07:38.000Z | [
"region:us"
] | bstad | null | null | 1 | 82 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
calvpang/github-issues | 2022-01-27T04:11:18.000Z | [
"region:us"
] | calvpang | null | null | 0 | 82 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
castorini/msmarco_v1_passage_doc2query-t5_expansions | 2022-06-21T17:45:43.000Z | [
"language:English",
"license:Apache License 2.0",
"region:us"
] | castorini | null | null | 0 | 82 | 2022-03-02T23:29:22 | ---
language:
- English
license: "Apache License 2.0"
---
# Dataset Summary
The repo provides queries generated for the MS MARCO V1 passage corpus with docTTTTTquery (sometimes written as docT5query or doc2query-T5), the latest version of the doc2query family of document expansion models. The basic idea is to train a model that, when given an input document, generates questions that the document might answer (or more broadly, queries for which the document might be relevant). These predicted questions (or queries) are then appended to the original documents, which are then indexed as before. The docTTTTTquery model gets its name from the use of T5 as the expansion model.
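To make the expansion step concrete, here is a small illustrative sketch (an assumed usage pattern, not an official pipeline): the predicted queries for a passage are appended to its text before indexing.
```python
# A row from this dataset (abridged) and a stand-in passage corpus keyed by the same "id"
entry = {
    "id": "0",
    "predicted_queries": [
        "what was important to the success of the manhattan project",
        "why was the manhattan project important?",
    ],
}
corpus = {"0": "original MS MARCO passage text for id 0"}  # placeholder, not real corpus content
# docTTTTTquery-style expansion: concatenate the passage with its predicted queries,
# then index `expanded` with the retrieval system of your choice
expanded = corpus[entry["id"]] + " " + " ".join(entry["predicted_queries"])
print(expanded)
```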
# Dataset Structure
All three folds (train, dev and test) share the same corpus. The queries are generated from this corpus.
An example data entry looks as follows:
```
{
"id": "0",
"predicted_queries": ["what was important to the success of the manhattan project", "why was the manhattan project important?", "what was important about the manhattan project", "why was the success of the manhattan project so important?", "who was the manhattan project a scientific project for", "what was the manhattan project important for", "why was the manhattan project a success", "how was the success of the manhattan project", "why was the manhattan project important to the success of the project?", "what is the importance of communication amongst scientific minds", "what was the importance of scientific communication for the success of the manhattan project", "what was the purpose of the manhattan project", "why was the manhattan project significant?", "why was the manhattan project important", "why did scientists believe in atomic power", "why did scientists and engineers have to communicate?", "why was the manhattan project a success", "what was the purpose of the manhattan project", "why did scientists and engineers want to be involved in the manhattan project", "why are the scientists so valuable", "which of the following was an important outcome of the manhattan project?", "why was the manhattan project successful", "why was the manhattan project an important scientific achievement", "what was the success of manhattan", "what was the result of the manhattan project", "why was communications important to the success of the manhattan project?", "why the manhattan project was important", "why is it important to know who is the manhattan project", "what was the most important accomplishment to the success of the manhattan project?", "why was the manhattan project an important achievement?", "why was the manhattan project important to the success of the atomic bomb", "how did the manhattan project impact scientists?", "what were the effects of the manhattan project", "what were the results of the manhattan project and how did they affect the public", "what was the manhattan project", "why did scientists contribute to the success of the manhattan project", "why was communication important in the manhattan project", "what was the effect of the manhattan project on the world", "what was the importance of communication in the success of the manhattan project?", "why was communications important to the success of the manhattan project?", "why was the manhattan project important", "what was the manhattan project", "why was the success of the manhattan project important", "why was manhattan project a success", "what was important about the manhattan project", "what benefited from the success of the new york nuclear bomb", "what was the significance to the success of the manhattan project?", "why is communication important", "why was the manhattan project an important achievement", "why did the manhattan project work", "what was the manhattan project's success", "what was the significance of the manhattan experiment", "how important was communication to the success of the manhattan project", "why is communication important to the success of the manhattan project?", "what was the importance of the manhattan project", "why did scientists believe the manhattan project had the greatest impact on science?", "what was a critical effect of the manhattan project?", "why did the manhattan project succeed", "what was the importance of the manhattan project", "why was the manhattan project important", "why was the manhattan project a success?", "what was the importance of communication and 
communication during the manhattan project", "why was the manhattan project significant?", "what was the importance of communication in the manhattan project?", "why was communication important to the success of the manhattan project?", "why was the manhattan project an important achievement", "what was important about the manhattan project", "why was the manhattan project a success", "why were the scientists at the manhattan project so successful?", "why did the manhattan project really work", "what was the success of the manhattan project", "what is the importance of communication during the manhattan project", "why was the manhattan project important", "why was communication important?", "what was the importance of communication in the success of the manhattan project?", "why was the manhattan project successful?", "which statement reflects the success of the manhattan project?", "why did the manhattan project succeed", "why was the manhattan project a great success", "why was the manhattan project important"]
}
```
# Load Dataset
An example to load the dataset:
```
dataset = load_dataset('castorini/msmarco_v1_passage_doc2query-t5_expansions', data_files='d2q.jsonl.gz')
```
# Citation Information
```
@article{docTTTTTquery,
title={From doc2query to {docTTTTTquery}},
author={Nogueira, Rodrigo and Lin, Jimmy},
year={2019}
}
@article{emdt5,
author={Ronak Pradeep and Rodrigo Nogueira and Jimmy Lin},
title={The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models},
journal={arXiv:2101.05667},
year={2021},
}
```
| 6,013 | [
[
-0.0377197265625,
-0.060943603515625,
0.05560302734375,
0.01549530029296875,
-0.016387939453125,
0.0304107666015625,
0.004978179931640625,
-0.022064208984375,
0.033416748046875,
0.0305633544921875,
-0.031707763671875,
-0.0380859375,
-0.033935546875,
0.014945... |
classla/reldi_sr | 2022-10-25T07:30:33.000Z | [
"task_categories:other",
"task_ids:lemmatization",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"language:sr",
"license:cc-by-sa-4.0",
"structure-prediction",
"normalization",
"tokenization",
"region:us"
] | classla | The dataset contains 5462 training samples, 711 validation samples and 725 test samples.
Each sample represents a sentence and includes the following features: sentence ID ('sent_id'),
list of tokens ('tokens'), list of lemmas ('lemmas'), list of UPOS tags ('upos_tags'),
list of Multext-East tags ('xpos_tags), list of morphological features ('feats'),
and list of IOB tags ('iob_tags'), which are encoded as class labels. | null | 0 | 82 | 2022-03-02T23:29:22 | ---
language:
- sr
license:
- cc-by-sa-4.0
task_categories:
- other
task_ids:
- lemmatization
- named-entity-recognition
- part-of-speech
tags:
- structure-prediction
- normalization
- tokenization
---
This dataset is based on 3,748 Serbian tweets that were segmented into sentences and tokens, and annotated with normalized forms, lemmas, MULTEXT-East tags (XPOS), UPOS tags and morphological features, and named entities.
The dataset contains 5462 training samples (sentences), 711 validation samples and 725 test samples.
Each sample represents a sentence and includes the following features: sentence ID ('sent\_id'),
list of tokens ('tokens'), list of normalised tokens ('norms'), list of lemmas ('lemmas'), list of UPOS tags ('upos\_tags'),
list of MULTEXT-East tags ('xpos\_tags), list of morphological features ('feats'),
and list of named entity IOB tags ('iob\_tags'), which are encoded as class labels.
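A minimal loading sketch (split and feature names are taken from the description above; nothing else is assumed):
```python
from datasets import load_dataset
# Load the training split of the dataset (repository id of this dataset card)
ds = load_dataset("classla/reldi_sr", split="train")
sample = ds[0]
print(sample["sent_id"])
print(list(zip(sample["tokens"], sample["lemmas"])))  # token/lemma pairs for one tweet sentence
```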
If you are using this dataset in your research, please cite the following paper:
```
@article{Miličević_Ljubešić_2016,
title={Tviterasi, tviteraši or twitteraši? Producing and analysing a normalised dataset of Croatian and Serbian tweets},
volume={4},
url={https://revije.ff.uni-lj.si/slovenscina2/article/view/7007},
DOI={10.4312/slo2.0.2016.2.156-188},
number={2},
journal={Slovenščina 2.0: empirical, applied and interdisciplinary research},
author={Miličević, Maja and Ljubešić, Nikola},
year={2016},
month={Sep.},
pages={156–188} }
``` | 1,467 | [
[
-0.01319122314453125,
-0.032684326171875,
0.01727294921875,
0.01416015625,
-0.034576416015625,
0.00946807861328125,
-0.0310821533203125,
-0.02398681640625,
0.040283203125,
0.042755126953125,
-0.0474853515625,
-0.065673828125,
-0.043060302734375,
0.0225372314... |
collectivat/tv3_parla | 2022-12-12T09:01:48.000Z | [
"task_categories:automatic-speech-recognition",
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:ca",
"license:cc-by-nc-4.0",
... | collectivat | This corpus includes 240 hours of Catalan speech from broadcast material.
The details of segmentation, data processing and model training are explained in Külebi & Öktem (2018).
The content is owned by Corporació Catalana de Mitjans Audiovisuals, SA (CCMA);
we processed their material and are hereby making it available under their terms of use.
This project was supported by the Softcatalà Association. | @inproceedings{kulebi18_iberspeech,
author={Baybars Külebi and Alp Öktem},
title={{Building an Open Source Automatic Speech Recognition System for Catalan}},
year=2018,
booktitle={Proc. IberSPEECH 2018},
pages={25--29},
doi={10.21437/IberSPEECH.2018-6}
} | 3 | 82 | 2022-03-02T23:29:22 | ---
annotations_creators:
- found
language_creators:
- found
language:
- ca
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- automatic-speech-recognition
- text-generation
task_ids:
- language-modeling
pretty_name: TV3Parla
---
# Dataset Card for TV3Parla
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://collectivat.cat/asr#tv3parla
- **Repository:**
- **Paper:** [Building an Open Source Automatic Speech Recognition System for Catalan](https://www.isca-speech.org/archive/iberspeech_2018/kulebi18_iberspeech.html)
- **Point of Contact:** [Col·lectivaT](mailto:info@collectivat.cat)
### Dataset Summary
This corpus includes 240 hours of Catalan speech from broadcast material.
The details of segmentation, data processing and model training are explained in Külebi & Öktem (2018).
The content is owned by Corporació Catalana de Mitjans Audiovisuals, SA (CCMA);
we processed their material and are hereby making it available under their terms of use.
This project was supported by the Softcatalà Association.
### Supported Tasks and Leaderboards
The dataset can be used for:
- Language Modeling.
- Automatic Speech Recognition (ASR), which transcribes utterances into words.
### Languages
The dataset is in Catalan (`ca`).
## Dataset Structure
### Data Instances
```
{
'path': 'tv3_0.3/wav/train/5662515_1492531876710/5662515_1492531876710_120.180_139.020.wav',
'audio': {'path': 'tv3_0.3/wav/train/5662515_1492531876710/5662515_1492531876710_120.180_139.020.wav',
'array': array([-0.01168823, 0.01229858, 0.02819824, ..., 0.015625 ,
0.01525879, 0.0145874 ]),
'sampling_rate': 16000},
'text': 'algunes montoneres que que et feien anar ben col·locat i el vent també hi jugava una mica de paper bufava vent de cantó alguns cops o de cul i el pelotón el vent el porta molt malament hi havia molts nervis'
}
```
### Data Fields
- `path` (str): Path to the audio file.
- `audio` (dict): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling
rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and
resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might
take a significant amount of time. Thus, it is important to first query the sample index before the `"audio"` column,
*i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- `text` (str): Transcription of the audio file.
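A short access sketch following the recommendation above to index the row before the `audio` column (the split name is taken from the table below; any loader configuration is omitted):
```python
from datasets import load_dataset
ds = load_dataset("collectivat/tv3_parla", split="test")
# Indexing the row first means only this single audio file is decoded and resampled
sample = ds[0]
print(sample["text"])
print(sample["audio"]["sampling_rate"], len(sample["audio"]["array"]))
```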
### Data Splits
The dataset is split into "train" and "test".
| | train | test |
|:-------------------|-------:|-----:|
| Number of examples | 159242 | 2220 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[Creative Commons Attribution-NonCommercial 4.0 International](https://creativecommons.org/licenses/by-nc/4.0/).
### Citation Information
```
@inproceedings{kulebi18_iberspeech,
author={Baybars Külebi and Alp Öktem},
title={{Building an Open Source Automatic Speech Recognition System for Catalan}},
year=2018,
booktitle={Proc. IberSPEECH 2018},
pages={25--29},
doi={10.21437/IberSPEECH.2018-6}
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
| 5,137 | [
[
-0.03814697265625,
-0.0325927734375,
-0.0109100341796875,
0.03192138671875,
-0.0261688232421875,
0.0050506591796875,
-0.028045654296875,
-0.0186767578125,
0.0300750732421875,
0.03814697265625,
-0.040130615234375,
-0.0689697265625,
-0.044708251953125,
0.01094... |
damlab/HIV_V3_bodysite | 2022-02-08T21:12:25.000Z | [
"region:us"
] | damlab | null | null | 0 | 82 | 2022-03-02T23:29:22 | # Dataset Description
## Dataset Summary
This dataset was derived from the Los Alamos National Laboratory (LANL) HIV sequence database.
It contains 5,510 unique V3 sequences, each annotated with the body site it was associated with.
Supported Tasks and Leaderboards: None
Languages: English
## Dataset Structure
### Data Instances
Data Instances: Each column represents the protein amino acid sequence of the HIV V3 loop.
The ID field indicates the Genbank reference ID for future cross-referencing.
There are 2,935 total V3 sequences, with 91% being CCR5 tropic and 23% CXCR4 tropic.
Data Fields: ID, sequence, fold, periphery-tcell, periphery-monocyte, CNS, lung, breast-milk, gastric, male-genitals, female-genitals, umbilical-cord, organ
Data Splits: None
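A minimal loading sketch (the repository id comes from this card; using "train" as the split name is an assumption, since no official splits are listed):
```python
from datasets import load_dataset
ds = load_dataset("damlab/HIV_V3_bodysite", split="train")
print(ds.column_names)  # expected to include the ID, sequence, fold and body-site columns listed above
print(ds[0])
```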
## Dataset Creation
Curation Rationale:
Initial Data Collection and Normalization: Dataset was downloaded and curated on 12/20/2021.
## Considerations for Using the Data
Social Impact of Dataset: This dataset can be used to study how HIV V3 loop sequences relate to HIV compartmentalization across body sites.
Discussion of Biases: Due to the sampling nature of this database, it is predominantly composed of subtype B sequences from North America and Europe, with only minor contributions of subtypes C, A, and D.
Currently, no effort has been made to balance performance across these classes.
As such, one should consider refinement with additional sequences to perform well on non-B sequences.
Additionally, this dataset is heavily biased toward peripheral T-cell samples.
## Additional Information:
- Dataset Curators: Will Dampier
- Citation Information: TBA
---
license: mit
--- | 1,693 | [
[
-0.00969696044921875,
-0.020538330078125,
0.0153961181640625,
0.00682830810546875,
-0.0133819580078125,
0.01116180419921875,
0.02099609375,
-0.0193023681640625,
0.01474761962890625,
0.031036376953125,
-0.040252685546875,
-0.056976318359375,
-0.028778076171875,
... |
dram-conflict/horror-scripts | 2022-02-21T16:26:48.000Z | [
"region:us"
] | dram-conflict | This dataset is designed to generate scripts. | null | 0 | 82 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
dvilasuero/ag_news_training_set_losses | 2021-09-21T10:10:25.000Z | [
"region:us"
] | dvilasuero | null | null | 0 | 82 | 2022-03-02T23:29:22 | # AG News train losses
This dataset is part of an experiment using [Rubrix](https://github.com/recognai/rubrix), an open-source Python framework for human-in-the loop NLP data annotation and management. | 203 | [
[
-0.0193328857421875,
-0.039825439453125,
0.0166778564453125,
0.00823211669921875,
0.002613067626953125,
0.01116943359375,
-0.0024204254150390625,
0.01824951171875,
0.0078125,
0.02166748046875,
-0.047332763671875,
-0.0253753662109375,
-0.03271484375,
-0.01933... |
edge2992/rri-short | 2021-12-10T13:23:02.000Z | [
"region:us"
] | edge2992 | null | null | 0 | 82 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
eugenesiow/PIRM | 2022-10-21T04:01:16.000Z | [
"task_categories:other",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"license:cc-by-nc-sa-4.0",
"other-image-super-resolution",
"arxiv:1809.07517",
"region:us"
] | eugenesiow | The PIRM dataset consists of 200 images, which are divided into two equal sets for validation and testing.
These images cover diverse contents, including people, objects, environments, flora, natural scenery, etc.
Images vary in size, and are typically ~300K pixels in resolution.
This dataset was first used for evaluating the perceptual quality of super-resolution algorithms in The 2018 PIRM
challenge on Perceptual Super-resolution, in conjunction with ECCV 2018. | @misc{shoeiby2019pirm2018,
title={PIRM2018 Challenge on Spectral Image Super-Resolution: Dataset and Study},
author={Mehrdad Shoeiby and Antonio Robles-Kelly and Ran Wei and Radu Timofte},
year={2019},
eprint={1904.00540},
archivePrefix={arXiv},
primaryClass={cs.CV}
} | 0 | 82 | 2022-03-02T23:29:22 | ---
annotations_creators:
- machine-generated
language_creators:
- found
language: []
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- other
task_ids: []
pretty_name: PIRM
tags:
- other-image-super-resolution
---
# Dataset Card for PIRM
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage**: https://github.com/roimehrez/PIRM2018
- **Repository**: https://huggingface.co/datasets/eugenesiow/PIRM
- **Paper**: https://arxiv.org/abs/1809.07517
- **Leaderboard**: https://github.com/eugenesiow/super-image#scale-x2
### Dataset Summary
The PIRM dataset consists of 200 images, which are divided into two equal sets for validation and testing.
These images cover diverse contents, including people, objects, environments, flora, natural scenery, etc.
Images vary in size, and are typically ~300K pixels in resolution.
This dataset was first used for evaluating the perceptual quality of super-resolution algorithms in The 2018 PIRM
challenge on Perceptual Super-resolution, in conjunction with ECCV 2018.
Install with `pip`:
```bash
pip install datasets super-image
```
Evaluate a model with the [`super-image`](https://github.com/eugenesiow/super-image) library:
```python
from datasets import load_dataset
from super_image import EdsrModel
from super_image.data import EvalDataset, EvalMetrics
dataset = load_dataset('eugenesiow/PIRM', 'bicubic_x2', split='validation')
eval_dataset = EvalDataset(dataset)
model = EdsrModel.from_pretrained('eugenesiow/edsr-base', scale=2)
EvalMetrics().evaluate(model, eval_dataset)
```
### Supported Tasks and Leaderboards
The dataset is commonly used for evaluation of the `image-super-resolution` task.
Unofficial [`super-image`](https://github.com/eugenesiow/super-image) leaderboard for:
- [Scale 2](https://github.com/eugenesiow/super-image#scale-x2)
- [Scale 3](https://github.com/eugenesiow/super-image#scale-x3)
- [Scale 4](https://github.com/eugenesiow/super-image#scale-x4)
- [Scale 8](https://github.com/eugenesiow/super-image#scale-x8)
### Languages
Not applicable.
## Dataset Structure
### Data Instances
An example of `validation` for `bicubic_x2` looks as follows.
```
{
"hr": "/.cache/huggingface/datasets/downloads/extracted/PIRM_valid_HR/1.png",
"lr": "/.cache/huggingface/datasets/downloads/extracted/PIRM_valid_LR_x2/1.png"
}
```
### Data Fields
The data fields are the same among all splits (see the sketch after this list).
- `hr`: a `string` to the path of the High Resolution (HR) `.png` image.
- `lr`: a `string` to the path of the Low Resolution (LR) `.png` image.
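A minimal field-access sketch (not part of the original card; the `PIL` usage is an assumption, since the fields above are plain paths to `.png` files):
```python
from datasets import load_dataset
from PIL import Image

# Hypothetical sketch: open one HR/LR pair from the path fields described above.
dataset = load_dataset('eugenesiow/PIRM', 'bicubic_x2', split='validation')
hr = Image.open(dataset[0]['hr'])
lr = Image.open(dataset[0]['lr'])
print(hr.size, lr.size)  # for the x2 configuration the LR image is downscaled by 2
```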
### Data Splits
| name |validation|test|
|-------|---:|---:|
|bicubic_x2|100|100|
|bicubic_x3|100|100|
|bicubic_x4|100|100|
|unknown_x4|100|100|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
No annotations.
#### Who are the annotators?
No annotators.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- **Original Authors**: [Blau et al. (2018)](https://arxiv.org/abs/1809.07517)
### Licensing Information
This dataset is published under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Citation Information
```bibtex
@misc{blau20192018,
title={The 2018 PIRM Challenge on Perceptual Image Super-resolution},
author={Yochai Blau and Roey Mechrez and Radu Timofte and Tomer Michaeli and Lihi Zelnik-Manor},
year={2019},
eprint={1809.07517},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
### Contributions
Thanks to [@eugenesiow](https://github.com/eugenesiow) for adding this dataset.
| 5,203 | [
[
-0.056182861328125,
-0.03497314453125,
0.009063720703125,
-0.0061798095703125,
-0.024017333984375,
-0.018341064453125,
-0.019683837890625,
-0.017364501953125,
0.0193328857421875,
0.0211029052734375,
-0.047393798828125,
-0.054351806640625,
-0.041351318359375,
... |
fatvvs/autonlp-data-entity_model_conll2003 | 2021-06-08T10:15:34.000Z | [
"region:us"
] | fatvvs | null | null | 0 | 82 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
flax-sentence-embeddings/stackexchange_title_body_jsonl | 2021-07-02T08:03:58.000Z | [
"region:us"
] | flax-sentence-embeddings | null | null | 0 | 82 | 2022-03-02T23:29:22 | jsonl.gz format from https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml
Each line contains a dict in the format: \
{"text": ["title", "body"], "tags": ["tag1", "tag2"]}
The following parameters have been used for filtering: \
min_title_len = 20 \
min_body_len = 20 \
max_body_len = 4096 \
min_score = 0
If a stackexchange contains fewer than 10k questions (after filtering), it is written to the `small_stackexchanges.jsonl.gz` file.
This is a dump of the files from https://archive.org/details/stackexchange
downloaded via torrent on 2021-07-01.
Publication date 2021-06-07 \
Usage Attribution-ShareAlike 4.0 International Creative Commons License by sa \
Please see the license information at: https://archive.org/details/stackexchange
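A minimal reading sketch (not part of the original description; the chosen file name and the `gzip`/`json` handling are assumptions based on the format and filters described above):
```python
import gzip
import json

# Hypothetical sketch: read one of the jsonl.gz files listed below and
# re-apply the documented length filters to the (title, body) pairs.
min_title_len, min_body_len, max_body_len = 20, 20, 4096

with gzip.open("small_stackexchanges.jsonl.gz", "rt", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)              # {"text": ["title", "body"], "tags": [...]}
        title, body = record["text"]
        if len(title) >= min_title_len and min_body_len <= len(body) <= max_body_len:
            pass  # e.g. collect (title, body) pairs for sentence-embedding training
```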
## Examples (lines) per file:
stackoverflow.com-Posts.jsonl.gz: 18,562,443\
math.stackexchange.com.jsonl.gz: 1,338,443\
small_stackexchanges.jsonl.gz: 448,146\
superuser.com.jsonl.gz: 435,463\
askubuntu.com.jsonl.gz: 347,925\
serverfault.com.jsonl.gz: 270,904\
tex.stackexchange.com.jsonl.gz: 202,954\
unix.stackexchange.com.jsonl.gz: 185,997\
stats.stackexchange.com.jsonl.gz: 173,466\
physics.stackexchange.com.jsonl.gz: 173,307\
electronics.stackexchange.com.jsonl.gz: 143,582\
gis.stackexchange.com.jsonl.gz: 131,000\
mathoverflow.net.jsonl.gz: 120,851\
apple.stackexchange.com.jsonl.gz: 110,622\
english.stackexchange.com.jsonl.gz: 109,522\
salesforce.stackexchange.com.jsonl.gz: 105,260\
wordpress.stackexchange.com.jsonl.gz: 100,474\
magento.stackexchange.com.jsonl.gz: 99991\
sharepoint.stackexchange.com.jsonl.gz: 94011\
gaming.stackexchange.com.jsonl.gz: 88912\
meta.stackexchange.com.jsonl.gz: 83510\
ell.stackexchange.com.jsonl.gz: 83271\
dba.stackexchange.com.jsonl.gz: 81871\
blender.stackexchange.com.jsonl.gz: 80766\
drupal.stackexchange.com.jsonl.gz: 79717\
mathematica.stackexchange.com.jsonl.gz: 73131\
scifi.stackexchange.com.jsonl.gz: 61528\
diy.stackexchange.com.jsonl.gz: 60083\
security.stackexchange.com.jsonl.gz: 58000\
softwareengineering.stackexchange.com.jsonl.gz: 53942\
android.stackexchange.com.jsonl.gz: 51608\
gamedev.stackexchange.com.jsonl.gz: 46485\
codereview.stackexchange.com.jsonl.gz: 45765\
rpg.stackexchange.com.jsonl.gz: 42303\
travel.stackexchange.com.jsonl.gz: 41227\
cs.stackexchange.com.jsonl.gz: 38314\
meta.stackoverflow.com.jsonl.gz: 36456\
webmasters.stackexchange.com.jsonl.gz: 34559\
chemistry.stackexchange.com.jsonl.gz: 34506\
academia.stackexchange.com.jsonl.gz: 34331\
ethereum.stackexchange.com.jsonl.gz: 32760\
judaism.stackexchange.com.jsonl.gz: 32028\
money.stackexchange.com.jsonl.gz: 32021\
raspberrypi.stackexchange.com.jsonl.gz: 30625\
graphicdesign.stackexchange.com.jsonl.gz: 30233\
webapps.stackexchange.com.jsonl.gz: 29697\
ux.stackexchange.com.jsonl.gz: 29403\
datascience.stackexchange.com.jsonl.gz: 27397\
worldbuilding.stackexchange.com.jsonl.gz: 26763\
bitcoin.stackexchange.com.jsonl.gz: 25374\
biology.stackexchange.com.jsonl.gz: 24447\
workplace.stackexchange.com.jsonl.gz: 24189\
photo.stackexchange.com.jsonl.gz: 23753\
cooking.stackexchange.com.jsonl.gz: 23705\
crypto.stackexchange.com.jsonl.gz: 23231\
mechanics.stackexchange.com.jsonl.gz: 22868\
japanese.stackexchange.com.jsonl.gz: 22056\
dsp.stackexchange.com.jsonl.gz: 21252\
emacs.stackexchange.com.jsonl.gz: 21055\
music.stackexchange.com.jsonl.gz: 20636\
movies.stackexchange.com.jsonl.gz: 20181\
softwarerecs.stackexchange.com.jsonl.gz: 20142\
aviation.stackexchange.com.jsonl.gz: 20139\
arduino.stackexchange.com.jsonl.gz: 19553\
law.stackexchange.com.jsonl.gz: 17941\
puzzling.stackexchange.com.jsonl.gz: 17851\
quant.stackexchange.com.jsonl.gz: 17261\
rus.stackexchange.com.jsonl.gz: 16871\
bicycles.stackexchange.com.jsonl.gz: 16353\
space.stackexchange.com.jsonl.gz: 15142\
gardening.stackexchange.com.jsonl.gz: 15136\
philosophy.stackexchange.com.jsonl.gz: 14829\
german.stackexchange.com.jsonl.gz: 13950\
networkengineering.stackexchange.com.jsonl.gz: 13454\
hinduism.stackexchange.com.jsonl.gz: 13450\
craftcms.stackexchange.com.jsonl.gz: 12574\
civicrm.stackexchange.com.jsonl.gz: 12543\
boardgames.stackexchange.com.jsonl.gz: 12149\
christianity.stackexchange.com.jsonl.gz: 12108\
history.stackexchange.com.jsonl.gz: 12021\
politics.stackexchange.com.jsonl.gz: 11894\
expressionengine.stackexchange.com.jsonl.gz: 11866\
islam.stackexchange.com.jsonl.gz: 11853\
anime.stackexchange.com.jsonl.gz: 11444\
economics.stackexchange.com.jsonl.gz: 11115\
french.stackexchange.com.jsonl.gz: 10794\
engineering.stackexchange.com.jsonl.gz: 10753\
cstheory.stackexchange.com.jsonl.gz: 10642\
vi.stackexchange.com.jsonl.gz: 10551\
astronomy.stackexchange.com.jsonl.gz: 10462\
writers.stackexchange.com.jsonl.gz: 10157\
skeptics.stackexchange.com.jsonl.gz: 10009\
**Total: 25,333,327**
| 4,801 | [
[
-0.052001953125,
-0.03424072265625,
0.040130615234375,
0.027069091796875,
-0.00933837890625,
0.0267486572265625,
0.003978729248046875,
-0.032989501953125,
0.0228271484375,
0.0153045654296875,
-0.03656005859375,
-0.05108642578125,
-0.05035400390625,
0.0138931... |
frtna/deneme | 2022-01-04T06:32:37.000Z | [
"region:us"
] | frtna | null | null | 0 | 82 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
fuyun1107/clip-for-vlp | 2021-09-07T09:52:10.000Z | [
"region:us"
] | fuyun1107 | null | null | 0 | 82 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
gabella/demo_data_raw | 2022-01-05T19:48:23.000Z | [
"region:us"
] | gabella | null | null | 0 | 82 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
gar1t/test | 2021-09-15T17:55:27.000Z | [
"region:us"
] | gar1t | null | null | 0 | 82 | 2022-03-02T23:29:22 | # Test Dataset
Just a test - nothing to see here!
| 51 | [
[
-0.04071044921875,
-0.03631591796875,
0.002277374267578125,
0.0307464599609375,
-0.04205322265625,
-0.0067138671875,
0.019775390625,
0.016937255859375,
0.0295562744140625,
0.046722412109375,
-0.054840087890625,
-0.0513916015625,
-0.021942138671875,
-0.010734... |
ghomasHudson/hotpotExtended | 2022-01-13T21:45:03.000Z | [
"region:us"
] | ghomasHudson | null | null | 0 | 82 | 2022-03-02T23:29:22 | # HotpotQA-extended
> Version of the HotpotQA dataset with full Wikipedia articles.
The HotpotQA dataset consists of questions from crowd workers which require information from multiple Wikipedia articles in order to answer, thus testing the ability of models to perform multi-hop question answering. The data is commonly presented as a list of paragraphs containing relevant information, plus a setting where the addition of ’distractor paragraphs’ fully tests the ability of the model to comprehend which information is relevant to the question asked.
In this dataset, we increase the length of the inputs by expanding each paragraph with its full Wikipedia page as well as adding additional distractor articles from similar topics in order to meet the 10,000 token minimum length requirement for this benchmark. | 825 | [
[
-0.065185546875,
-0.062744140625,
0.01514434814453125,
0.021270751953125,
-0.01514434814453125,
-0.0249786376953125,
-0.00794219970703125,
-0.017578125,
0.01812744140625,
0.046722412109375,
-0.039276123046875,
-0.006671905517578125,
-0.038360595703125,
0.027... |
ghomasHudson/vlsp | 2022-10-25T09:20:37.000Z | [
"language:en",
"region:us"
] | ghomasHudson | Very Long version of the scientific papers summarization dataset. Only includes theses over 10,000 tokens long. | null | 0 | 82 | 2022-03-02T23:29:22 | ---
language:
- en
---
# Dataset Card for vlsp
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/ghomasHudson/very_long_scientific_papers
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Dataset following the methodology of the scientific_papers dataset, but specifically designed for very long documents (>10,000 words). This is gathered from arxiv.org by searching for theses.
The dataset has 2 features (see the sketch after this list):
- article: the body of the document.
- abstract: the abstract of the document.
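A minimal loading sketch (not part of the original card; it assumes the repository exposes a standard `datasets` loading script with the two fields above):
```python
from datasets import load_dataset

# Hypothetical sketch: only a test split is provided (see Data Splits below).
dataset = load_dataset("ghomasHudson/vlsp", split="test")

example = dataset[0]
print(example["abstract"][:200])        # the abstract of the document
print(len(example["article"].split()))  # these theses exceed 10,000 words
```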
### Supported Tasks and Leaderboards
Summarization
### Languages
English
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
Only a test set is provided.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information]
| 2,689 | [
[
-0.036895751953125,
-0.0238494873046875,
0.0225830078125,
0.0229644775390625,
-0.0206756591796875,
0.0107574462890625,
-0.018951416015625,
-0.03961181640625,
0.035919189453125,
0.050933837890625,
-0.0462646484375,
-0.06610107421875,
-0.0450439453125,
0.00823... |
gigant/romanian_speech_synthesis_0_8_1 | 2022-10-24T17:38:35.000Z | [
"task_categories:automatic-speech-recognition",
"language:ro",
"license:unknown",
"region:us"
] | gigant | \
The Romanian speech synthesis (RSS) corpus was recorded in a hemianechoic chamber (anechoic walls and ceiling; floor partially anechoic) at the University of Edinburgh. We used three high quality studio microphones: a Neumann u89i (large diaphragm condenser), a Sennheiser MKH 800 (small diaphragm condenser with very wide bandwidth) and a DPA 4035 (headset-mounted condenser). Although the current release includes only speech data recorded via Sennheiser MKH 800, we may release speech data recorded via other microphones in the future. All recordings were made at 96 kHz sampling frequency and 24 bits per sample, then downsampled to 48 kHz sampling frequency. For recording, downsampling and bit rate conversion, we used ProTools HD hardware and software. We conducted 8 sessions over the course of a month, recording about 500 sentences in each session. At the start of each session, the speaker listened to a previously recorded sample, in order to attain a similar voice quality and intonation. | \
@article{Stan2011442,
author = {Adriana Stan and Junichi Yamagishi and Simon King and
Matthew Aylett},
title = {The {R}omanian speech synthesis ({RSS}) corpus:
Building a high quality {HMM}-based speech synthesis
system using a high sampling rate},
journal = {Speech Communication},
volume = {53},
number = {3},
pages = {442--450},
note = {},
abstract = {This paper first introduces a newly-recorded high
quality Romanian speech corpus designed for speech
synthesis, called ''RSS'', along with Romanian
front-end text processing modules and HMM-based
synthetic voices built from the corpus. All of these
are now freely available for academic use in order to
promote Romanian speech technology research. The RSS
corpus comprises 3500 training sentences and 500 test
sentences uttered by a female speaker and was recorded
using multiple microphones at 96 kHz sampling
frequency in a hemianechoic chamber. The details of the
new Romanian text processor we have developed are also
given. Using the database, we then revisit some basic
configuration choices of speech synthesis, such as
waveform sampling frequency and auditory frequency
warping scale, with the aim of improving speaker
similarity, which is an acknowledged weakness of
current HMM-based speech synthesisers. As we
demonstrate using perceptual tests, these configuration
choices can make substantial differences to the quality
of the synthetic speech. Contrary to common practice in
automatic speech recognition, higher waveform sampling
frequencies can offer enhanced feature extraction and
improved speaker similarity for HMM-based speech
synthesis.},
doi = {10.1016/j.specom.2010.12.002},
issn = {0167-6393},
keywords = {Speech synthesis, HTS, Romanian, HMMs, Sampling
frequency, Auditory scale},
url = {http://www.sciencedirect.com/science/article/pii/S0167639310002074},
year = 2011
} | 2 | 82 | 2022-03-02T23:29:22 | ---
language:
- ro
license:
- unknown
size_categories:
ro:
- 1K<n<10K
task_categories:
- automatic-speech-recognition
task_ids: []
pretty_name: Romanian Speech Synthesis
---
## Dataset Description
- **Homepage:** https://romaniantts.com/rssdb/
- **Paper:** https://www.sciencedirect.com/science/article/abs/pii/S0167639310002074
### Dataset Summary
The Romanian speech synthesis (RSS) corpus was recorded in a hemianechoic chamber (anechoic walls and ceiling; floor partially anechoic) at the University of Edinburgh. We used three high quality studio microphones: a Neumann u89i (large diaphragm condenser), a Sennheiser MKH 800 (small diaphragm condenser with very wide bandwidth) and a DPA 4035 (headset-mounted condenser). Although the current release includes only speech data recorded via Sennheiser MKH 800, we may release speech data recorded via other microphones in the future. All recordings were made at 96 kHz sampling frequency and 24 bits per sample, then downsampled to 48 kHz sampling frequency. For recording, downsampling and bit rate conversion, we used ProTools HD hardware and software. We conducted 8 sessions over the course of a month, recording about 500 sentences in each session. At the start of each session, the speaker listened to a previously recorded sample, in order to attain a similar voice quality and intonation.
### Languages
Romanian
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, called audio and its sentence.
### Data Fields
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- sentence: The sentence the user was prompted to speak (see the usage sketch below)
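A minimal access sketch for these fields (not part of the original card; it assumes the repository loads through a standard `datasets` loading script):
```python
from datasets import load_dataset

# Hypothetical sketch: query the row first, then the "audio" column, as advised above.
dataset = load_dataset("gigant/romanian_speech_synthesis_0_8_1", split="train")

sample = dataset[0]
audio = sample["audio"]  # {"path": ..., "array": ..., "sampling_rate": ...}
print(sample["sentence"])
print(audio["sampling_rate"], audio["array"].shape)
```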
### Data Splits
The speech material has been subdivided into portions for train and test.
The train split consists of 3180 audio clips and the related sentences.
The test split consists of 536 audio clips and the related sentences.
### Citation Information
```
@article{Stan2011442,
author = {Adriana Stan and Junichi Yamagishi and Simon King and
Matthew Aylett},
title = {The {R}omanian speech synthesis ({RSS}) corpus:
Building a high quality {HMM}-based speech synthesis
system using a high sampling rate},
journal = {Speech Communication},
volume = {53},
number = {3},
pages = {442--450},
note = {},
abstract = {This paper first introduces a newly-recorded high
quality Romanian speech corpus designed for speech
synthesis, called ''RSS'', along with Romanian
front-end text processing modules and HMM-based
synthetic voices built from the corpus. All of these
are now freely available for academic use in order to
promote Romanian speech technology research. The RSS
corpus comprises 3500 training sentences and 500 test
sentences uttered by a female speaker and was recorded
using multiple microphones at 96 kHz sampling
frequency in a hemianechoic chamber. The details of the
new Romanian text processor we have developed are also
given. Using the database, we then revisit some basic
configuration choices of speech synthesis, such as
waveform sampling frequency and auditory frequency
warping scale, with the aim of improving speaker
similarity, which is an acknowledged weakness of
current HMM-based speech synthesisers. As we
demonstrate using perceptual tests, these configuration
choices can make substantial differences to the quality
of the synthetic speech. Contrary to common practice in
automatic speech recognition, higher waveform sampling
frequencies can offer enhanced feature extraction and
improved speaker similarity for HMM-based speech
synthesis.},
doi = {10.1016/j.specom.2010.12.002},
issn = {0167-6393},
keywords = {Speech synthesis, HTS, Romanian, HMMs, Sampling
frequency, Auditory scale},
url = {http://www.sciencedirect.com/science/article/pii/S0167639310002074},
year = 2011
}
```
### Contributions
[@gigant](https://huggingface.co/gigant) added this dataset. | 4,901 | [
[
-0.05047607421875,
-0.04730224609375,
0.00025153160095214844,
0.019012451171875,
-0.01548004150390625,
-0.008148193359375,
-0.02777099609375,
-0.0272216796875,
0.0241851806640625,
0.017791748046875,
-0.04510498046875,
-0.04840087890625,
-0.01555633544921875,
... |
huggingartists/architects | 2022-10-25T09:23:24.000Z | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | huggingartists | This dataset is designed to generate lyrics with HuggingArtists. | @InProceedings{huggingartists:dataset,
title = {Lyrics dataset},
author={Aleksey Korshuk
},
year={2021}
} | 1 | 82 | 2022-03-02T23:29:22 | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/architects"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.189248 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/d2cd8787bdf913fc1518987f971c6bd3.960x960x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/architects">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Architects</div>
<a href="https://genius.com/artists/architects">
<div style="text-align: center; font-size: 14px;">@architects</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/architects).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/architects")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|134| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/architects")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk}
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| 7,178 | [
[
-0.045684814453125,
-0.038909912109375,
0.00653839111328125,
0.019989013671875,
-0.0166778564453125,
0.00029397010803222656,
-0.0206298828125,
-0.0330810546875,
0.062347412109375,
0.02471923828125,
-0.06500244140625,
-0.06524658203125,
-0.03924560546875,
0.0... |
huggingartists/booker | 2022-10-25T09:25:20.000Z | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | huggingartists | This dataset is designed to generate lyrics with HuggingArtists. | @InProceedings{huggingartists:dataset,
title = {Lyrics dataset},
author={Aleksey Korshuk
},
year={2021}
} | 0 | 82 | 2022-03-02T23:29:22 | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/booker"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.782002 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/fb0d7cebfd97c76d99f1015b6ddc0e55.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/booker">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Booker</div>
<a href="https://genius.com/artists/booker">
<div style="text-align: center; font-size: 14px;">@booker</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/booker).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/booker")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|196| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/booker")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk}
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| 7,148 | [
[
-0.0445556640625,
-0.033538818359375,
0.005001068115234375,
0.0222930908203125,
-0.0174102783203125,
-0.0024967193603515625,
-0.019683837890625,
-0.032562255859375,
0.064453125,
0.025543212890625,
-0.0703125,
-0.06121826171875,
-0.041778564453125,
0.01106262... |
huggingartists/egor-letov | 2022-10-25T09:28:41.000Z | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | huggingartists | This dataset is designed to generate lyrics with HuggingArtists. | @InProceedings{huggingartists:dataset,
title = {Lyrics dataset},
author={Aleksey Korshuk
},
year={2021}
} | 0 | 82 | 2022-03-02T23:29:22 | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/egor-letov"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.673046 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/faa3dae99bf1fe365927608fd55c745a.330x330x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/egor-letov">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Егор Летов (Egor Letov)</div>
<a href="https://genius.com/artists/egor-letov">
<div style="text-align: center; font-size: 14px;">@egor-letov</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/egor-letov).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/egor-letov")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|543| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/egor-letov")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk}
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| 7,191 | [
[
-0.04779052734375,
-0.037567138671875,
0.006816864013671875,
0.020660400390625,
-0.0175018310546875,
-0.0011167526245117188,
-0.022216796875,
-0.0321044921875,
0.06451416015625,
0.0232696533203125,
-0.06793212890625,
-0.061309814453125,
-0.041107177734375,
0... |
huggingartists/freddie-dredd | 2022-10-25T09:29:57.000Z | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | huggingartists | This dataset is designed to generate lyrics with HuggingArtists. | @InProceedings{huggingartists:dataset,
title = {Lyrics dataset},
author={Aleksey Korshuk
},
year={2021}
} | 0 | 82 | 2022-03-02T23:29:22 | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/freddie-dredd"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.261399 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/f198be5e1dfd71285efa66c8b223ae6d.400x400x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/freddie-dredd">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Freddie Dredd</div>
<a href="https://genius.com/artists/freddie-dredd">
<div style="text-align: center; font-size: 14px;">@freddie-dredd</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/freddie-dredd).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/freddie-dredd")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|212| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/freddie-dredd")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk}
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| 7,202 | [
[
-0.04962158203125,
-0.037261962890625,
0.00646209716796875,
0.0195770263671875,
-0.017822265625,
0.006710052490234375,
-0.0215301513671875,
-0.031585693359375,
0.06634521484375,
0.0258026123046875,
-0.0692138671875,
-0.06103515625,
-0.04132080078125,
0.00981... |
huggingartists/machine-gun-kelly | 2022-10-25T09:37:17.000Z | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | huggingartists | This dataset is designed to generate lyrics with HuggingArtists. | @InProceedings{huggingartists:dataset,
title = {Lyrics dataset},
author={Aleksey Korshuk
},
year={2021}
} | 0 | 82 | 2022-03-02T23:29:22 | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/machine-gun-kelly"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.826338 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/bee1868cba78bf4b170886b3368c4ae8.640x640x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/machine-gun-kelly">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Machine Gun Kelly</div>
<a href="https://genius.com/artists/machine-gun-kelly">
<div style="text-align: center; font-size: 14px;">@machine-gun-kelly</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/machine-gun-kelly).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/machine-gun-kelly")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|373| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/machine-gun-kelly")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
  author = {Aleksey Korshuk},
  year   = {2021}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| 7,234 | [
[
-0.04595947265625,
-0.042572021484375,
0.0081787109375,
0.0166015625,
-0.018798828125,
0.00408172607421875,
-0.02398681640625,
-0.032806396484375,
0.059326171875,
0.02850341796875,
-0.07037353515625,
-0.0679931640625,
-0.0489501953125,
0.008575439453125,
... |
huggingartists/the-69-eyes | 2022-10-25T09:46:18.000Z | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | huggingartists | This dataset is designed to generate lyrics with HuggingArtists. | @InProceedings{huggingartists:dataset,
title = {Lyrics dataset},
author={Aleksey Korshuk
},
year={2021}
} | 0 | 82 | 2022-03-02T23:29:22 | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/the-69-eyes"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.162381 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/9e0451fa9d3f8cf38aa11994dbd934a8.600x600x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/the-69-eyes">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">The 69 Eyes</div>
<a href="https://genius.com/artists/the-69-eyes">
<div style="text-align: center; font-size: 14px;">@the-69-eyes</div>
</a>
</div>
### Dataset Summary
A lyrics dataset parsed from Genius, designed for generating lyrics with HuggingArtists.
The corresponding model is available [here](https://huggingface.co/huggingartists/the-69-eyes).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/the-69-eyes")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|168| -| -|
The 'train' split can easily be divided into 'train', 'validation' and 'test' subsets with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset("huggingartists/the-69-eyes")

# Proportions of the resulting splits.
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03

texts = datasets['train']['text']
# Cut the text list at the 90% and 97% marks; the remaining 3% becomes the test split.
train, validation, test = np.split(
    texts,
    [int(len(texts) * train_percentage), int(len(texts) * (train_percentage + validation_percentage))],
)

datasets = DatasetDict(
    {
        'train': Dataset.from_dict({'text': list(train)}),
        'validation': Dataset.from_dict({'text': list(validation)}),
        'test': Dataset.from_dict({'text': list(test)})
    }
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
  author = {Aleksey Korshuk},
  year   = {2021}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| 7,186 | [
[
-0.0467529296875,
-0.038299560546875,
0.00984954833984375,
0.0168304443359375,
-0.0196685791015625,
-0.001651763916015625,
-0.0216064453125,
-0.036346435546875,
0.0648193359375,
0.0275115966796875,
-0.06597900390625,
-0.062164306640625,
-0.040252685546875,
0... |
huggingface/task-page-images | 2022-01-13T10:20:18.000Z | [
"region:us"
] | huggingface | null | null | 0 | 82 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
husnu/tquad-v1v2 | 2022-01-14T20:09:29.000Z | [
"region:us"
] | husnu | null | null | 0 | 82 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
hyeonduck/whiteboard_abuse_dataset | 2021-12-17T07:35:22.000Z | [
"region:us"
] | hyeonduck | null | null | 0 | 82 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
imvladikon/knesset_meetings_corpus | 2022-10-23T11:45:02.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:he",
"license:pddl",
"region:us"
] | imvladikon | null | null | 1 | 82 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- he
license:
- pddl
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: Knesset Meetings Corpus
---
# Dataset Card for the Knesset Meetings Corpus 2004-2005
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://zenodo.org/record/2707356](https://zenodo.org/record/2707356)
- **Repository:** [https://github.com/NLPH/knesset-2004-2005](https://github.com/NLPH/knesset-2004-2005)
- **Paper:**
- **Point of Contact:**
- **Size of downloaded dataset files:**
- **Size of the generated dataset:**
- **Total amount of disk used:**
### Dataset Summary
An example of a sample:
```
{
"text": <text content of given document>,
"path": <file path to docx>
}
```
Dataset usage: the available configurations are `"kneset16"`, `"kneset17"` and `"knesset_tagged"`, each providing only a `train` split.
```python
train_ds = load_dataset("imvladikon/knesset_meetings_corpus", "kneset16", split="train")
```
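The other configurations load the same way. The following is a minimal sketch (assuming the three configurations listed above, and that the `text`/`path` fields from the sample apply to each of them) that iterates over all of them:
```python
from datasets import load_dataset

# Each configuration ships a single "train" split (see above).
for config in ["kneset16", "kneset17", "knesset_tagged"]:
    ds = load_dataset("imvladikon/knesset_meetings_corpus", config, split="train")
    sample = ds[0]
    # Fields per the sample above: "text" (document content) and "path" (path to the docx file).
    print(config, len(ds), sample["path"])
```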
The Knesset Meetings Corpus 2004-2005 is made up of two components:
* Raw texts - 282 files made up of 867,725 lines together. These can be downloaded in two formats:
  * As `doc` files, encoded using `windows-1255` encoding:
    * `kneset16.zip` - Contains 164 text files made up of 543,228 lines together. [MILA host](http://yeda.cs.technion.ac.il:8088/corpus/software/corpora/knesset/txt/docs/kneset16.zip) [Github Mirror](https://github.com/NLPH/knesset-2004-2005/blob/master/kneset16.zip?raw=true)
    * `kneset17.zip` - Contains 118 text files made up of 324,497 lines together. [MILA host](http://yeda.cs.technion.ac.il:8088/corpus/software/corpora/knesset/txt/docs/kneset17.zip) [Github Mirror](https://github.com/NLPH/knesset-2004-2005/blob/master/kneset17.zip?raw=true)
  * As `txt` files, encoded using `utf8` encoding:
    * `kneset.tar.gz` - An archive of all the raw text files, divided into two folders: [Github mirror](https://github.com/NLPH/knesset-2004-2005/blob/master/kneset.tar.gz)
      * `16` - Contains 164 text files made up of 543,228 lines together.
      * `17` - Contains 118 text files made up of 324,497 lines together.
    * `knesset_txt_16.tar.gz` - Contains 164 text files made up of 543,228 lines together. [MILA host](http://yeda.cs.technion.ac.il:8088/corpus/software/corpora/knesset/txt/utf8/knesset_txt_16.tar.gz) [Github Mirror](https://github.com/NLPH/knesset-2004-2005/blob/master/knesset_txt_16.tar.gz?raw=true)
    * `knesset_txt_17.zip` - Contains 118 text files made up of 324,497 lines together. [MILA host](http://yeda.cs.technion.ac.il:8088/corpus/software/corpora/knesset/txt/utf8/knesset_txt_17.zip) [Github Mirror](https://github.com/NLPH/knesset-2004-2005/blob/master/knesset_txt_17.zip?raw=true)
* Tokenized and morphologically tagged texts - Tagged versions exist only for the files in the `16` folder. The texts are encoded using [MILA's XML schema for corpora](http://www.mila.cs.technion.ac.il/eng/resources_standards.html). These can be downloaded in two ways:
  * `knesset_tagged_16.tar.gz` - An archive of all tokenized and tagged files. [MILA host](http://yeda.cs.technion.ac.il:8088/corpus/software/corpora/knesset/tagged/knesset_tagged_16.tar.gz) [Archive.org mirror](https://archive.org/details/knesset_transcripts_2004_2005)
#### Mirrors
This repository is a mirror of this dataset [found on MILA's website](http://www.mila.cs.technion.ac.il/eng/resources_corpora_haknesset.html).
Zenodo mirror: [https://zenodo.org/record/2707356](https://zenodo.org/record/2707356)
#### License
All Knesset meeting protocols are in the [public domain](https://en.wikipedia.org/wiki/Public_domain) ([רשות הציבור](https://he.wikipedia.org/wiki/%D7%A8%D7%A9%D7%95%D7%AA_%D7%94%D7%A6%D7%99%D7%91%D7%95%D7%A8)) by law. These files are thus in the public domain and do not require any license or public domain dedication to set their status.
[](https://doi.org/10.5281/zenodo.2707356)
[](https://en.wikipedia.org/wiki/Public_domain)
[](https://en.wikipedia.org/wiki/Public_domain)
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The dataset is available under the [Open Data Commons Public Domain Dedication & License 1.0](https://opendatacommons.org/licenses/pddl/).
### Citation Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Contributions
| 7,969 | [
[
-0.049957275390625,
-0.034515380859375,
0.0254669189453125,
0.01245880126953125,
-0.03253173828125,
-0.009124755859375,
-0.0311737060546875,
-0.0298919677734375,
0.037445068359375,
0.0369873046875,
-0.039093017578125,
-0.074951171875,
-0.046142578125,
0.0193... |
jhonparra18/spanish_billion_words_clean | 2022-01-27T04:27:24.000Z | [
"region:us"
] | jhonparra18 | null | null | 4 | 82 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
mozilla-foundation/common_voice_4_0 | 2023-07-29T16:00:01.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"license:cc0-1.0",
"arxiv:1912.06670",
"region:us"
] | mozilla-foundation | null | @inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
} | 1 | 82 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
ab:
- n<1K
ar:
- 10K<n<100K
br:
- 10K<n<100K
ca:
- 100K<n<1M
cnh:
- 1K<n<10K
cv:
- 1K<n<10K
cy:
- 10K<n<100K
de:
- 100K<n<1M
dv:
- 1K<n<10K
en:
- 1M<n<10M
eo:
- 10K<n<100K
es:
- 100K<n<1M
et:
- 1K<n<10K
eu:
- 10K<n<100K
fa:
- 100K<n<1M
fr:
- 100K<n<1M
ga-IE:
- 1K<n<10K
ia:
- 1K<n<10K
id:
- 1K<n<10K
it:
- 10K<n<100K
ja:
- 1K<n<10K
kab:
- 100K<n<1M
ky:
- 10K<n<100K
lv:
- 1K<n<10K
mn:
- 1K<n<10K
nl:
- 10K<n<100K
pt:
- 10K<n<100K
rm-sursilv:
- n<1K
ru:
- 10K<n<100K
rw:
- 10K<n<100K
sah:
- 1K<n<10K
sl:
- 1K<n<10K
sv-SE:
- 1K<n<10K
ta:
- 1K<n<10K
tr:
- 10K<n<100K
tt:
- 10K<n<100K
vot:
- n<1K
zh-CN:
- 10K<n<100K
zh-HK:
- n<1K
zh-TW:
- 10K<n<100K
source_datasets:
- extended|common_voice
paperswithcode_id: common-voice
pretty_name: Common Voice Corpus 4
language_bcp47:
- ab
- ar
- br
- ca
- cnh
- cv
- cy
- de
- dv
- en
- eo
- es
- et
- eu
- fa
- fr
- ga-IE
- ia
- id
- it
- ja
- kab
- ky
- lv
- mn
- nl
- pt
- rm-sursilv
- ru
- rw
- sah
- sl
- sv-SE
- ta
- tr
- tt
- vot
- zh-CN
- zh-HK
- zh-TW
extra_gated_prompt: By clicking on “Access repository” below, you also agree to not
attempt to determine the identity of speakers in the Common Voice dataset.
task_categories:
- automatic-speech-recognition
---
# Dataset Card for Common Voice Corpus 4
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Anton Lozhkov](mailto:anton@huggingface.co)
### Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 4257 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 3401 validated hours in 40 languages, but more voices and languages are always added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
[🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
### Languages
```
Abkhaz, Arabic, Basque, Breton, Catalan, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Dhivehi, Dutch, English, Esperanto, Estonian, French, German, Hakha Chin, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kinyarwanda, Kyrgyz, Latvian, Mongolian, Persian, Portuguese, Romansh Sursilvan, Russian, Sakha, Slovenian, Spanish, Swedish, Tamil, Tatar, Turkish, Votic, Welsh
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
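As a small illustration of the access pattern described above (the `"et"` language config and the 16 kHz target rate are example choices, not part of the original card), a split can be loaded and its audio column resampled with 🤗 Datasets:
```python
from datasets import load_dataset, Audio

# Load one language config and resample the audio column to 16 kHz.
cv = load_dataset("mozilla-foundation/common_voice_4_0", "et", split="validation", use_auth_token=True)
cv = cv.cast_column("audio", Audio(sampling_rate=16_000))

sample = cv[0]  # query the row first, then the "audio" field
print(sample["sentence"])
print(sample["audio"]["array"].shape, sample["audio"]["sampling_rate"])
```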
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.
The invalidated data is data that has been invalidated by reviewers
and received downvotes indicating that the data is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice.
Many examples in this dataset have trailing quotation marks, e.g. _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset
ds = load_dataset("mozilla-foundation/common_voice_4_0", "en", use_auth_token=True)
def prepare_dataset(batch):
"""Function to preprocess the dataset with the .map method"""
transcription = batch["sentence"]
if transcription.startswith('"') and transcription.endswith('"'):
# we can remove trailing quotation marks as they do not affect the transcription
transcription = transcription[1:-1]
if transcription[-1] not in [".", "?", "!"]:
# append a full-stop to sentences that do not end in punctuation
transcription = transcription + "."
batch["sentence"] = transcription
return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
| 10,013 | [
[
-0.040283203125,
-0.0538330078125,
0.0106048583984375,
0.033721923828125,
-0.0186004638671875,
0.0032863616943359375,
-0.042724609375,
-0.0170135498046875,
0.03216552734375,
0.041900634765625,
-0.0557861328125,
-0.071044921875,
-0.0318603515625,
0.0198059082... |
m-newhauser/senator-tweets | 2022-03-07T16:37:44.000Z | [
"region:us"
] | m-newhauser | null | null | 1 | 82 | 2022-03-07T16:37:35 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
bigbio/scai_disease | 2022-12-22T15:46:35.000Z | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | bigbio | SCAI Disease is a dataset annotated in 2010 with mentions of diseases and
adverse effects. It is a corpus containing 400 randomly selected MEDLINE
abstracts generated using ‘Disease OR Adverse effect’ as a PubMed query. This
evaluation corpus was annotated by two individuals who hold a Master’s degree
in life sciences. | @inproceedings{gurulingappa:lrec-ws10,
author = {Harsha Gurulingappa and Roman Klinger and Martin Hofmann-Apitius and Juliane Fluck},
title = {An Empirical Evaluation of Resources for the Identification of Diseases and Adverse Effects in Biomedical Literature},
booktitle = {LREC Workshop on Building and Evaluating Resources for Biomedical Text Mining},
year = {2010},
} | 1 | 82 | 2022-11-13T22:12:00 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: SCAI Disease
homepage: https://www.scai.fraunhofer.de/en/business-research-areas/bioinformatics/downloads/corpus-for-disease-names-and-adverse-effects.html
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for SCAI Disease
## Dataset Description
- **Homepage:** https://www.scai.fraunhofer.de/en/business-research-areas/bioinformatics/downloads/corpus-for-disease-names-and-adverse-effects.html
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
SCAI Disease is a dataset annotated in 2010 with mentions of diseases and
adverse effects. It is a corpus containing 400 randomly selected MEDLINE
abstracts generated using ‘Disease OR Adverse effect’ as a PubMed query. This
evaluation corpus was annotated by two individuals who hold a Master’s degree
in life sciences.
## Citation Information
```
@inproceedings{gurulingappa:lrec-ws10,
author = {Harsha Gurulingappa and Roman Klinger and Martin Hofmann-Apitius and Juliane Fluck},
title = {An Empirical Evaluation of Resources for the Identification of Diseases and Adverse Effects in Biomedical Literature},
booktitle = {LREC Workshop on Building and Evaluating Resources for Biomedical Text Mining},
year = {2010},
}
```
| 1,399 | [
[
0.0169830322265625,
-0.03131103515625,
0.033416748046875,
0.0062408447265625,
-0.019378662109375,
-0.0222930908203125,
-0.0043792724609375,
-0.041778564453125,
0.055816650390625,
0.0283966064453125,
-0.0092010498046875,
-0.048248291015625,
-0.06475830078125,
... |
Joanne/Metaphors_and_Analogies | 2023-05-30T20:40:56.000Z | [
"task_categories:question-answering",
"task_categories:token-classification",
"language:en",
"region:us"
] | Joanne | [Unified Benchmark for Metaphor Identification] | null | 0 | 82 | 2023-03-10T19:57:35 | ---
task_categories:
- question-answering
- token-classification
language:
- en
---
# Metaphors and analogies datasets
These datasets contain word pairs and quadruples forming analogies, metaphoric mappings or semantically unacceptable compositions.
- Pair instances are pairs of nouns A and B in a sentence of the form "A is a B".
- Quadruple instances are of the form : < (A,B),(C,D) >
There is an analogy when A is to B what C is to D.
The analogy is also a metaphor when the (A,B) and (C,D) form a metaphoric mapping, usually when they come from different domains.
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
Language : English
### Datasets and paper links
| Name | Size | Labels | Description |
| ---------: | :----- |:-------- | :-------------------------------------------------------------------------- |
| `Cardillo` | 260 *2 | 1, 2 | Pairs of "A is-a B" sentences composed of one metaphoric and one literal sentence. The two sentences of a given pair share the same B term. |
| `Jankowiak`| 120*3 | 0, 1, 2 | Triples of "A is-a/is-like-a B" sentences with exactly one literal, one semantic abnormal and one metaphoric sentence. |
| `Green` | 40*3 | 0, 1, 2 | Triples of proportional analogies, made of 4 terms <A, B, Ci, Di> each. One stem <A,B> is composed with 3 different <Ci,Di> pairs, to form exactly one near analogy, one far analogy and one non-analogical quadruple|
| `Kmiecik` | 720 | 0, 1, 2 | Quadruples <A,B,C,D> labelled as analogy:True/False and far_analogy: True/False|
| `SAT-met` | 160?*5 | 0, 1, 2, 12 | One pair stem <A,B> to combine with 5 different pairs <Ci,Di> and attempt to form proportional analogies. Only one <Ci,Di> forms an analogy with <A,B>. We additionally labelled the analogies as **metaphoric**:True/False|
| Name | Paper Citation | Paper link | Dataset link |
| ---------: | :------- | :------------------------------ |-----------------------------------------: |
| `Cardillo` | | [Cardillo (2010)](https://link.springer.com/article/10.3758/s13428-016-0717-1) [Cardillo (2017)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2952404/ ) | |
| `Jankowiak`| | [Jankowiak (2020)]( https://link-springer-com.abc.cardiff.ac.uk/article/10.1007/s10936-020-09695-7) | |
| `Green` | Green, A. E., Kraemer, D. J. M., Fugelsang, J., Gray, J. R., & Dunbar, K. (2010). Connecting Long Distance: Semantic Distance in Analogical Reasoning Modulates Frontopolar Cortex Activity. Cerebral Cortex, 10, 70-76. | [Green (20)]() ||
| `Kmiecik` |Kmiecik, M. J., Brisson, R. J., & Morrison, R. G. (2019). The time course of semantic and relational processing during verbal analogical reasoning. Brain and Cognition, 129, 25-34. | [Kmiecik (20)]() ||
| `SAT-met` | | [Turney (2005)](https://arxiv.org/pdf/cs/0508053.pdf) | |
### Labels :
- Pairs
- **0** : anomaly
- **1** : literal
- **2** : metaphor
- Quadruples :
- **0** : not an analogy
- **1** : an analogy but not a metaphor
- **2** : an analogy and a metaphor or a far analogy
- **12** : maybe a metaphor, somewhere between 1 and 2
### Dataset Splits
- Both lexical and random splits are available for classification experiments.
- Size of the splits :
- **train** : 50 %
- **validation** : 10 %
- **test** : 40 %
- Additionally, for all datasets, the `5-folds` field gives frozen splits for a five-folds cross validation experiment with train/val/test = 70/10/20% of the sets.
# Datasets for Classification
- Task : binary classification or 3-class classification of pairs or quadruples. Each pair or quadruple is to be classified as anomaly, non-metaphoric or metaphoric.
## Pairs
### Datasets names & splits :
| Original set | Dataset name | Split |
|-------------:| :------------ | :------ |
| Cardillo | Pairs\_Cardillo\_random_split | random |
| | Pairs\_Cardillo\_lexical_split | lexical |
| Jankowiac | Pairs\_Jankowiac\_random_split | random |
| | Pairs\_Jankowiac\_lexical_split | lexical |
### Data fields :
| Field | Description | Type |
| -------------:| :------------ | ---- |
| corpus | name of the original dataset | str |
| id | instance id | str |
| set_id | id of the set containing the given instance in the multiple choice task | int |
| label | 0, 1, 2 | int |
| sentence | A is-a B sentence. | str |
| A | A expression in the sentence | str |
| B | B expression in the sentence | str |
| A\_position | position of A in the sentence | list(int) |
| B\_position | position of B in the sentence | list(int) |
| 5-folds | frozen splits for cross validation | list(str) |
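A minimal loading sketch follows; it assumes that the dataset names in the table above (e.g. `Pairs_Cardillo_random_split`) are exposed as configurations of this repository and that the fields match the table:
```python
from datasets import load_dataset

# Assumption: each dataset name above is a configuration of this repository.
ds = load_dataset("Joanne/Metaphors_and_Analogies", "Pairs_Cardillo_random_split")

example = ds["train"][0]
# Fields per the table: corpus, id, set_id, label, sentence, A, B, A_position, B_position, 5-folds
print(example["sentence"])
print(example["A"], example["B"], example["label"])
```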
### Examples :
| Name | Example | Label|
| -------: | :------------------------------------- | :-------- |
|Cardillo | | |
|Jankowiac | | |
## Quadruples
### Datasets names & splits
| Original set | dataset name | Split |
| -------: | :------------------------------------- | :-------- |
|Green | Quadruples\_Green\_random_split | random |
| | Quadruples\_Green\_lexical_split | lexical |
|Kmiecik | Quadruples\_Kmiecik\_random_split | random |
| | Quadruples\_Kmiecik\_lexical\_split\_on\_AB | lexical AB |
| | Quadruples\_Kmiecik\_lexical_split\_on\_CD | lexical CD |
|SAT | Quadruples\_SAT\_random\_split | random |
| | Quadruples\_SAT\_lexical\_split | lexical |
### Data fields :
| Field| Description | Type |
| -------------: | :------------ | :------------ |
| corpus | Name of the original dataset | str |
| id | Element id | str |
| set\_id | Id of the set containing the given instance in the multiple-choice task datasets | int |
| label | 0, 1, 2, 12 | int |
| AB | pair of terms | list(str) |
| CD | pair of terms | list(str) |
| 5-folds | frozen splits for cross validation | list(str) |
### Examples :
| Name | Example | Label|
|-------: | :------------------------------------- | :-------- |
|Green | | |
|Kmiecik | | |
| SAT | | |
# Datasets for multiple choice questions or permutation
- Task : One stem and multiple choices. The stem is combined with each of the candidate choices to form a sentence. Each resulting sentence has a label <0, 1, 2>.
## Pairs
### Datasets names & splits :
| Original set | dataset name | Split |
| -----------|------| :---- |
| Cardillo | Pairs\_Cardillo\_set | test only |
| Jankowiac | Pairs\_Jankowiac\_set |test only |
### Data fields :
| Field | Description | Type |
| -------------: | :------------ | :------------ |
| corpus | Name of the original dataset | str |
| id | Element id | str |
| pair_ids | Ids of each pair as appearing in the classification datasets. | list(str) |
| labels | 0, 1, 2 | list(int) |
| sentences | List of the sentences composing the set | list(str) |
| A\_positions | Positions of the A's in each sentence | list(list(int)) |
| B\_positions | Positions of the B's in each sentence | list(list(int)) |
| answer | Index of the metaphor | int |
| stem | Term shared between the sentences of the set. | str |
| 5-folds | frozen splits for cross validation | list(str) |
### Examples :
| Name | Stem | Sentences |Label|
|-------: |-------: | :------------------------------------- | :-------- |
|Cardillo | comet | The astronomer's obsession was a comet. | 1 |
| | | The politician's career was a comet. | 2 |
| Jankowiac | harbour | This banana is like a harbour | 0 |
| | | A house is a harbour | 2|
| | | This area is a harbour | 1 |
## Quadruples
### Datasets names & splits :
| Original set | dataset name | Split |
| ----------: | :------| :---- |
| Green | Quadruples\_Green\_set | test only |
| SAT | Quadruples\_SAT\_met_set | test only |
### Data fields :
| Field | Description | Type |
|-------------: | :------------ | :------------ |
| corpus | name of the original dataset | str |
| id | Element id | str |
| pair\_ids | Ids of the instances as appearing in the classification datasets | list(str) |
| labels | 0, 1, 2, 12 | list(int) |
| answer | temp | int |
| stem | Word pair to compose with all the other pairs of the set | list(str) |
| pairs | List of word pairs | list(list(str)) |
| 5-folds | Frozen splits for cross validation | list(str) |
### Examples :
| Name | Example | Label|
|-------: | :------------------------------------- | :-------- |
|Green | | |
| | | |
| SAT | | |
| 8,821 | [
[
-0.0256805419921875,
-0.057373046875,
0.036529541015625,
0.01551055908203125,
-0.007762908935546875,
-0.005588531494140625,
-0.01259613037109375,
-0.0209503173828125,
0.033843994140625,
0.0208892822265625,
-0.04901123046875,
-0.052734375,
-0.03533935546875,
... |
Francesco/poker-cards-cxcvz | 2023-03-30T09:14:35.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | Francesco | null | null | 2 | 82 | 2023-03-30T09:14:05 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': poker-cards
'1': 59
'2': 10 Diamonds
'3': 10 Hearts
'4': 10 Spades
'5': 10 Trefoils
'6': 2 Diamonds
'7': 2 Hearts
'8': 2 Spades
'9': 2 Trefoils
'10': 3 Diamonds
'11': 3 Hearts
'12': 3 Spades
'13': 3 Trefoils
'14': 4 Diamonds
'15': 4 Hearts
'16': 4 Spades
'17': 4 Trefoils
'18': 5 Diamonds
'19': 5 Hearts
'20': 5 Spades
'21': 5 Trefoils
'22': 6 Diamonds
'23': 6 Hearts
'24': 6 Spades
'25': 6 Trefoils
'26': 7 Diamonds
'27': 7 Hearts
'28': 7 Spades
'29': 7 Trefoils
'30': 8 Diamonds
'31': 8 Hearts
'32': 8 Spades
'33': 8 Trefoils
'34': 9 Diamonds
'35': 9 Hearts
'36': 9 Spades
'37': 9 Trefoils
'38': A Diamonds
'39': A Hearts
'40': A Spades
'41': A Trefoils
'42': J Diamonds
'43': J Hearts
'44': J Spades
'45': J Trefoils
'46': K Diamonds
'47': K Hearts
'48': K Spades
'49': K Trefoils
'50': Q Diamonds
'51': Q Hearts
'52': Q Spades
'53': Q Trefoils
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: poker-cards-cxcvz
tags:
- rf100
---
# Dataset Card for poker-cards-cxcvz
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/poker-cards-cxcvz
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
poker-cards-cxcvz
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
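As an illustrative sketch (not part of the original card; it assumes a `train` split and the field layout described above), the COCO-style `bbox` values `[x, y, width, height]` can be drawn onto an image with Pillow:
```python
from datasets import load_dataset
from PIL import ImageDraw

# Assumption: the dataset exposes a "train" split with the fields documented above.
ds = load_dataset("Francesco/poker-cards-cxcvz", split="train")
sample = ds[0]

image = sample["image"].copy()
draw = ImageDraw.Draw(image)
for (x, y, w, h) in sample["objects"]["bbox"]:
    # COCO bboxes are [top-left x, top-left y, width, height].
    draw.rectangle([x, y, x + w, y + h], outline="red", width=2)
image.save("annotated_sample.png")
```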
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/poker-cards-cxcvz
### Citation Information
```
@misc{ poker-cards-cxcvz,
title = { poker cards cxcvz Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/poker-cards-cxcvz } },
url = { https://universe.roboflow.com/object-detection/poker-cards-cxcvz },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}"
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. | 4,831 | [
[
-0.04022216796875,
-0.040740966796875,
0.00936126708984375,
-0.007762908935546875,
-0.043121337890625,
-0.0125732421875,
-0.01088714599609375,
-0.042388916015625,
0.018218994140625,
0.040130615234375,
-0.0400390625,
-0.08734130859375,
-0.03997802734375,
0.00... |
Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish | 2023-05-09T05:27:32.000Z | [
"region:us"
] | Lajonbot | null | null | 5 | 82 | 2023-05-09T05:27:30 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
ArtifactAI/arxiv-physics-instruct-tune-30k | 2023-06-20T04:35:10.000Z | [
"doi:10.57967/hf/0801",
"region:us"
] | ArtifactAI | null | null | 15 | 82 | 2023-06-20T04:25:54 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ttxy/resume_ner | 2023-08-25T11:02:49.000Z | [
"task_categories:token-classification",
"language:code",
"license:bsd",
"ner",
"region:us"
] | ttxy | null | null | 0 | 82 | 2023-06-26T04:06:00 | ---
language:
- code
pretty_name: "resume ner dataseet"
tags:
- ner
license: "bsd"
task_categories:
- token-classification
---
Chinese resume NER dataset, source: https://github.com/luopeixiang/named_entity_recognition .
The data format is shown below: each line consists of a single character and its corresponding tag, the tag set uses the BIOES scheme, and sentences are separated by a blank line.
```text
美 B-LOC
国 E-LOC
的 O
华 B-PER
莱 I-PER
士 E-PER
我 O
跟 O
他 O
谈 O
笑 O
风 O
生 O
```
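As a minimal sketch (the file name below is hypothetical), a file in this character-per-line BIOES format can be read into (characters, tags) sentence pairs like this:
```python
def read_bioes(path):
    """Read a char-per-line BIOES file; sentences are separated by blank lines."""
    sentences, chars, tags = [], [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:  # a blank line ends the current sentence
                if chars:
                    sentences.append((chars, tags))
                    chars, tags = [], []
                continue
            char, tag = line.split()
            chars.append(char)
            tags.append(tag)
    if chars:  # flush the last sentence if the file does not end with a blank line
        sentences.append((chars, tags))
    return sentences

# Example (hypothetical file name):
# data = read_bioes("train.char.bmes")
```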
# Results
## Comparison across models:
<img src="https://file.ddot.cc/imagehost/2023/8bb93212-5812-4211-91b8-7a6bda841e1b.png">
## BERT-tiny results
|model | precision | recall | f1-score | training data |
|---|---|---|---|---|
|BERT-tiny | 0.9490 | 0.9538 | 0.9447 | all training data |
|BERT-tiny | 0.9278 | 0.9251 | 0.9313 | 100 training samples |
Notes:
- In later tests, BERT-tiny (softmax) + 100 training samples has not yet reproduced the 0.9313 result; the best result so far is 0.8612
- BERT-tiny + LSTM (softmax) + 100 samples reaches a `val_f1` of 0.8737
| 779 | [
[
-0.023345947265625,
-0.040435791015625,
0.0287322998046875,
0.032562255859375,
-0.0269775390625,
-0.0247344970703125,
-0.017730712890625,
-0.034149169921875,
0.04144287109375,
0.02850341796875,
-0.0228271484375,
-0.056976318359375,
-0.0423583984375,
0.018447... |
razhan/asosoft-speech | 2023-08-30T14:40:10.000Z | [
"region:us"
] | razhan | null | null | 1 | 82 | 2023-07-15T08:49:25 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
- name: duration
dtype: float64
splits:
- name: train
num_bytes: 621160243.56
num_examples: 3240
- name: test
num_bytes: 113413557.0
num_examples: 600
download_size: 702412597
dataset_size: 734573800.56
---
# Dataset Card for "asosoft-speech"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 512 | [
[
-0.03887939453125,
-0.0302886962890625,
-0.0011148452758789062,
0.01226043701171875,
-0.01110076904296875,
0.006710052490234375,
-0.0211334228515625,
-0.0267486572265625,
0.06683349609375,
0.042327880859375,
-0.07135009765625,
-0.057586669921875,
-0.042907714843... |
abacusai/WikiQA-Free_Form_QA | 2023-07-27T14:37:54.000Z | [
"region:us"
] | abacusai | null | null | 7 | 82 | 2023-07-27T14:37:39 | ---
configs:
- config_name: default
data_files:
- split: 2k
path: data/2k-*
- split: 4k
path: data/4k-*
- split: 8k
path: data/8k-*
- split: 16k
path: data/16k-*
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: tok_len
dtype: int64
- name: value
dtype: string
splits:
- name: 2k
num_bytes: 3555934
num_examples: 600
- name: 4k
num_bytes: 6926324
num_examples: 600
- name: 8k
num_bytes: 13605196
num_examples: 600
- name: 16k
num_bytes: 24856440
num_examples: 600
download_size: 10741984
dataset_size: 48943894
---
# Dataset Card for "WikiQA-Free_Form_QA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 835 | [
[
-0.035552978515625,
-0.01136016845703125,
0.01561737060546875,
0.005336761474609375,
-0.01549530029296875,
-0.01151275634765625,
0.033905029296875,
-0.0010385513305664062,
0.0595703125,
0.02301025390625,
-0.054534912109375,
-0.049041748046875,
-0.020355224609375... |
giuid/qrecc_raw_context | 2023-08-07T10:58:53.000Z | [
"region:us"
] | giuid | null | null | 0 | 82 | 2023-08-03T15:36:03 | Entry not found | 15 | [
[
-0.0213623046875,
-0.0149383544921875,
0.05718994140625,
0.0288543701171875,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.005046844482421875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.03790283... |
imoxto/prompt_injection_cleaned_dataset-v2 | 2023-08-08T09:30:19.000Z | [
"region:us"
] | imoxto | null | null | 1 | 82 | 2023-08-08T09:30:03 | ---
dataset_info:
features:
- name: model
dtype: string
- name: text
dtype: string
- name: labels
dtype: int64
splits:
- name: train
num_bytes: 670958021
num_examples: 535105
download_size: 79246765
dataset_size: 670958021
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "prompt_injection_cleaned_dataset-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 538 | [
[
-0.01861572265625,
-0.034637451171875,
0.0240325927734375,
0.0039520263671875,
-0.020263671875,
-0.0059814453125,
0.031951904296875,
-0.006160736083984375,
0.03802490234375,
0.046417236328125,
-0.050933837890625,
-0.0465087890625,
-0.032470703125,
-0.0117492... |
OpenAssistant/OASST-DE | 2023-09-28T08:26:17.000Z | [
"size_categories:1K<n<10K",
"language:de",
"license:apache-2.0",
"arxiv:2304.07327",
"region:us"
] | OpenAssistant | null | null | 4 | 82 | 2023-08-29T12:04:11 | ---
dataset_info:
features:
- name: conversation
list:
- name: role
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8022604.792326268
num_examples: 3721
download_size: 4325950
dataset_size: 8022604.792326268
license: apache-2.0
language:
- de
size_categories:
- 1K<n<10K
---
# German OpenAssistant Conversations Dataset (OASST-DE)
With the goal of advancing open-source, German-language LLM research, we present
OASST-DE: a high-quality subset of a recent (25.08.23) dump from the [OpenAssistant website](https://www.open-assistant.io/),
translated to German using the GPT-3.5 API. More details on how the dataset was filtered and translated can be found under [dataset creation](#dataset-creation-process).
For more details on the OpenAssistant Project, look at the [first OASST dataset (OASST1)](https://huggingface.co/datasets/OpenAssistant/oasst1), [the Open-Assistant GitHub repo](https://github.com/LAION-AI/Open-Assistant)
or [our paper](https://arxiv.org/abs/2304.07327).
This dataset was created as part of LAION's LeoLM (Linguistically Enhanced Open Language Model) project led by Björn Plüster.
Check out LeoLM-Chat ([7b](https://huggingface.co/LeoLM/leo-hessianai-7b-chat), [13b](https://huggingface.co/LeoLM/leo-hessianai-13b-chat)), finetuned on OASST-DE, and read [their blog post](https://laion.ai/blog/leo-lm/) for more info on LeoLM.
## Dataset Creation Process
This dataset was created from a recent OASST dump by following these steps:
- Filter for Top1 response trees with assistant response leaves
- Filter first prompt quality >= 0.5
- Filter total conversation length < 1900 tokens to fit in GPT3.5 context length
- Filter for `'lang' == 'de'` -> add to dataset
- Filter for `'lang' == 'en'` (other languages often result in failed translations)
- Translate using GPT-3.5-turbo API (total cost ~15$).
This results in around 3.7k samples of high-quality assistant conversations.
## Dataset Structure
This dataset has only one `'conversation'` field. Each example is a list of an alternating conversation between `'prompter'` and `'assistant'`,
where each entry is a dict with `'text'` and `'role'` fields:
```json
"conversation": [
{"role": "prompter", "text": "Moin, wie geht's dir?"},
{"role": "assistant", "text": "Moin Moin! Mir geht es gut, und dir?"},
...
]
```
## Usage with 🤗Datasets:
```python
from datasets import load_dataset
ds = load_dataset("OpenAssistant/oasst_de", split="train")
print(ds[0]["conversation"])
``` | 2,541 | [
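As a small follow-up sketch (using the same loading call and the `role`/`text` fields documented above), one conversation can be rendered as alternating role-tagged lines:
```python
from datasets import load_dataset

ds = load_dataset("OpenAssistant/oasst_de", split="train")

# Print the first conversation turn by turn.
for turn in ds[0]["conversation"]:
    speaker = "User" if turn["role"] == "prompter" else "Assistant"
    print(f"{speaker}: {turn['text']}")
```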
[
-0.0304718017578125,
-0.0635986328125,
0.0294036865234375,
0.0024089813232421875,
-0.015655517578125,
-0.01544952392578125,
-0.01084136962890625,
-0.038818359375,
0.010528564453125,
0.035980224609375,
-0.04071044921875,
-0.052978515625,
-0.033538818359375,
-... |
BleachNick/MIC_full | 2023-10-28T23:49:01.000Z | [
"license:unknown",
"arxiv:2309.07915",
"region:us"
] | BleachNick | null | null | 4 | 82 | 2023-09-01T10:34:22 | ---
license: unknown
---
```bibtex
@misc{zhao2023mmicl,
title={MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning},
author={Haozhe Zhao and Zefan Cai and Shuzheng Si and Xiaojian Ma and Kaikai An and Liang Chen and Zixuan Liu and Sheng Wang and Wenjuan Han and Baobao Chang},
year={2023},
eprint={2309.07915},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 422 | [
[
-0.01403045654296875,
-0.025909423828125,
0.02349853515625,
0.002391815185546875,
-0.0074615478515625,
-0.0067596435546875,
-0.0226898193359375,
-0.043853759765625,
0.0117034912109375,
0.02203369140625,
-0.035308837890625,
-0.0283660888671875,
-0.03387451171875,... |
yzhuang/autotree_pmlb_100000_banana_sgosdt_l256_dim10_d3_sd0 | 2023-09-07T17:16:23.000Z | [
"region:us"
] | yzhuang | null | null | 0 | 82 | 2023-09-07T17:16:07 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 1545200000
num_examples: 100000
- name: validation
num_bytes: 154520000
num_examples: 10000
download_size: 281108655
dataset_size: 1699720000
---
# Dataset Card for "autotree_pmlb_100000_banana_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 846 | [
[
-0.0341796875,
-0.021942138671875,
0.01416778564453125,
0.03369140625,
-0.022857666015625,
0.0222320556640625,
0.04779052734375,
-0.0006203651428222656,
0.061614990234375,
0.035400390625,
-0.05609130859375,
-0.0335693359375,
-0.058319091796875,
0.00373840332... |
yzhuang/autotree_pmlb_100000_clean2_sgosdt_l256_dim10_d3_sd0 | 2023-09-08T01:58:08.000Z | [
"region:us"
] | yzhuang | null | null | 0 | 82 | 2023-09-08T01:57:40 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2364400000
num_examples: 100000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 622819971
dataset_size: 2600840000
---
# Dataset Card for "autotree_pmlb_100000_clean2_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 846 | [
[
-0.027435302734375,
-0.01380157470703125,
0.0086212158203125,
0.0169525146484375,
-0.018707275390625,
0.010040283203125,
0.046966552734375,
-0.002452850341796875,
0.0562744140625,
0.03204345703125,
-0.05999755859375,
-0.037994384765625,
-0.04132080078125,
0.... |
yzhuang/autotree_automl_100000_california_sgosdt_l256_dim8_d3_sd0 | 2023-09-08T03:18:22.000Z | [
"region:us"
] | yzhuang | null | null | 0 | 82 | 2023-09-08T03:17:52 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2159600000
num_examples: 100000
- name: validation
num_bytes: 215960000
num_examples: 10000
download_size: 848115506
dataset_size: 2375560000
---
# Dataset Card for "autotree_automl_100000_california_sgosdt_l256_dim8_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 851 | [
[
-0.026641845703125,
-0.0171966552734375,
0.0297698974609375,
0.0258026123046875,
-0.010589599609375,
0.01096343994140625,
0.049163818359375,
-0.0013189315795898438,
0.05401611328125,
0.033782958984375,
-0.05792236328125,
-0.047607421875,
-0.043426513671875,
... |
yzhuang/autotree_pmlb_100000_Hill_Valley_with_noise_sgosdt_l256_dim10_d3_sd0 | 2023-09-08T06:27:42.000Z | [
"region:us"
] | yzhuang | null | null | 0 | 82 | 2023-09-08T06:27:06 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2364400000
num_examples: 100000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 1008851704
dataset_size: 2600840000
---
# Dataset Card for "autotree_pmlb_100000_Hill_Valley_with_noise_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 863 | [
[
-0.023040771484375,
-0.028045654296875,
0.007709503173828125,
0.0240631103515625,
-0.0131072998046875,
0.004062652587890625,
0.051361083984375,
-0.00301361083984375,
0.041717529296875,
0.0267486572265625,
-0.07391357421875,
-0.04510498046875,
-0.046722412109375,... |
yzhuang/autotree_pmlb_100000_Hill_Valley_without_noise_sgosdt_l256_dim10_d3_sd0 | 2023-09-08T06:58:08.000Z | [
"region:us"
] | yzhuang | null | null | 0 | 82 | 2023-09-08T06:57:21 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2364400000
num_examples: 100000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 1014035692
dataset_size: 2600840000
---
# Dataset Card for "autotree_pmlb_100000_Hill_Valley_without_noise_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 866 | [
[
-0.024139404296875,
-0.02764892578125,
0.0098114013671875,
0.0227203369140625,
-0.0148468017578125,
0.0032405853271484375,
0.05206298828125,
-0.0041961669921875,
0.044189453125,
0.030670166015625,
-0.07586669921875,
-0.0484619140625,
-0.04638671875,
-0.00839... |
yzhuang/autotree_pmlb_100000_magic_sgosdt_l256_dim10_d3_sd0 | 2023-09-08T16:34:51.000Z | [
"region:us"
] | yzhuang | null | null | 0 | 82 | 2023-09-08T16:34:12 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2364400000
num_examples: 100000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 1062661836
dataset_size: 2600840000
---
# Dataset Card for "autotree_pmlb_100000_magic_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 846 | [
[
-0.028533935546875,
-0.013885498046875,
0.0024051666259765625,
0.0273284912109375,
-0.0154266357421875,
0.01282501220703125,
0.04180908203125,
0.0017538070678710938,
0.059539794921875,
0.030609130859375,
-0.06329345703125,
-0.0350341796875,
-0.0472412109375,
... |
yzhuang/autotree_pmlb_100000_twonorm_sgosdt_l256_dim10_d3_sd0 | 2023-09-08T17:38:59.000Z | [
"region:us"
] | yzhuang | null | null | 0 | 82 | 2023-09-08T17:38:26 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2364400000
num_examples: 100000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 809891483
dataset_size: 2600840000
---
# Dataset Card for "autotree_pmlb_100000_twonorm_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 847 | [
[
-0.0289306640625,
-0.01406097412109375,
0.01163482666015625,
0.031524658203125,
-0.019317626953125,
0.014251708984375,
0.049346923828125,
-0.0030612945556640625,
0.0543212890625,
0.03277587890625,
-0.059661865234375,
-0.033172607421875,
-0.049224853515625,
-... |
notrichardren/azaria-mitchell-diff-filtered | 2023-10-03T04:33:20.000Z | [
"region:us"
] | notrichardren | null | null | 0 | 82 | 2023-10-03T03:54:39 | ---
configs:
- config_name: default
data_files:
- split: cities
path: data/cities-*
- split: companies
path: data/companies-*
- split: animals
path: data/animals-*
- split: elements
path: data/elements-*
- split: inventions
path: data/inventions-*
- split: facts
path: data/facts-*
dataset_info:
features:
- name: claim
dtype: string
- name: label
dtype: int64
- name: dataset
dtype: string
- name: qa_type
dtype: int64
- name: ind
dtype: int64
splits:
- name: cities
num_bytes: 7955
num_examples: 112
- name: companies
num_bytes: 14588
num_examples: 129
- name: animals
num_bytes: 11451
num_examples: 137
- name: elements
num_bytes: 11617
num_examples: 139
- name: inventions
num_bytes: 10559
num_examples: 127
- name: facts
num_bytes: 14809
num_examples: 159
download_size: 44699
dataset_size: 70979
---
# Dataset Card for "azaria-mitchell-diff-filtered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,125 | [
[
-0.06488037109375,
-0.030242919921875,
0.023773193359375,
0.013946533203125,
-0.0281982421875,
-0.005474090576171875,
0.0270538330078125,
-0.0128326416015625,
0.047637939453125,
0.0433349609375,
-0.076171875,
-0.06268310546875,
-0.06805419921875,
0.003414154... |
minhtu0408/gdsc-model-dataset | 2023-10-07T14:04:44.000Z | [
"region:us"
] | minhtu0408 | null | null | 0 | 82 | 2023-10-05T11:49:45 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
KTH/speechdat | 2022-02-07T10:41:34.000Z | [
"region:us"
] | KTH | Speechdat dataset | null | 0 | 81 | 2022-03-02T23:29:22 | # Speechdat
Speechdat dataset
## Loading the dataset
You need to download the audio data separately. We assume the wav files are located in a `wav` folder inside the `speechdat` directory.
```
from datasets import load_dataset
speechdat = load_dataset("./speechdat", split="train", data_dir="./speechdat/wav")
```
| 310 | [
[
-0.01739501953125,
-0.01165771484375,
-0.0191802978515625,
0.0222320556640625,
-0.017303466796875,
0.0275115966796875,
-0.0178375244140625,
-0.007049560546875,
0.019805908203125,
0.01885986328125,
-0.053955078125,
-0.018463134765625,
-0.0219573974609375,
0.0... |
McGill-NLP/mlquestions | 2021-11-11T10:01:14.000Z | [
"region:us"
] | McGill-NLP | null | null | 0 | 81 | 2022-03-02T23:29:22 | # Dataset Card for mlquestions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/McGill-NLP/MLQuestions
- **Repository:** https://github.com/McGill-NLP/MLQuestions
- **Paper:** https://aclanthology.org/2021.emnlp-main.566.pdf
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Devang Kulshreshtha](mailto:devang.kulshreshtha@mila.quebec)
### Dataset Summary
The MLQuestions dataset consists of questions from Google search queries and passages from Wikipedia pages related to the machine learning domain. The dataset was created to support research in domain adaptation of question generation and passage retrieval models.
### Languages
The text in the dataset is in English
## Dataset Structure
### Data Instances
We release development and test sets in which a typical data point comprises a passage, denoted by the `input_text` label, and a question, denoted by the `target_text` label.
An example from the MLQuestions test set looks as follows:
{
'input_text': 'Bayesian learning uses Bayes' theorem to determine the conditional probability of a hypotheses given some evidence or observations.'
'target_text': 'What is Bayesian learning in machine learning'
}
We also provide unsupervised questions and passages in two separate files - 'passages_unaligned.csv' and 'questions_unaligned.csv' with labels `input_text` and `target_text` respectively.
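For example, after cloning the repository, these CSV files can be read with pandas (the directory layout below is an assumption; check the repository for the actual file locations):
```python
import pandas as pd

# Hypothetical local paths inside a clone of the MLQuestions repository.
passages = pd.read_csv("MLQuestions/passages_unaligned.csv")
questions = pd.read_csv("MLQuestions/questions_unaligned.csv")

print(passages["input_text"].iloc[0])    # an unaligned Wikipedia passage
print(questions["target_text"].iloc[0])  # an unaligned search-query question
```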
## Additional Information
### Licensing Information
https://github.com/McGill-NLP/MLQuestions/blob/main/LICENSE.md
### Citation Information
If you find this useful in your research, please consider citing:
@inproceedings{kulshreshtha-etal-2021-back,
title = "Back-Training excels Self-Training at Unsupervised Domain Adaptation of Question Generation and Passage Retrieval",
author = "Kulshreshtha, Devang and
Belfer, Robert and
Serban, Iulian Vlad and
Reddy, Siva",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.566",
pages = "7064--7078",
abstract = "In this work, we introduce back-training, an alternative to self-training for unsupervised domain adaptation (UDA). While self-training generates synthetic training data where natural inputs are aligned with noisy outputs, back-training results in natural outputs aligned with noisy inputs. This significantly reduces the gap between target domain and synthetic data distribution, and reduces model overfitting to source domain. We run UDA experiments on question generation and passage retrieval from the Natural Questions domain to machine learning and biomedical domains. We find that back-training vastly outperforms self-training by a mean improvement of 7.8 BLEU-4 points on generation, and 17.6{\%} top-20 retrieval accuracy across both domains. We further propose consistency filters to remove low-quality synthetic data before training. We also release a new domain-adaptation dataset - MLQuestions containing 35K unaligned questions, 50K unaligned passages, and 3K aligned question-passage pairs.",
} | 3,684 | [
[
-0.0222625732421875,
-0.054931640625,
0.02691650390625,
0.01296234130859375,
0.003559112548828125,
-0.003200531005859375,
-0.004810333251953125,
0.0005230903625488281,
0.006908416748046875,
0.05865478515625,
-0.061187744140625,
-0.04595947265625,
-0.01708984375,... |
PaulLerner/viquae_dataset | 2022-02-15T17:43:12.000Z | [
"region:us"
] | PaulLerner | null | null | 2 | 81 | 2022-03-02T23:29:22 | See https://github.com/PaulLerner/ViQuAE
---
license: cc-by-4.0
---
| 72 | [
[
-0.0347900390625,
-0.00936126708984375,
0.052093505859375,
0.034576416015625,
-0.038116455078125,
-0.008087158203125,
0.00572967529296875,
-0.0323486328125,
0.017791748046875,
0.049652099609375,
-0.042724609375,
-0.041015625,
-0.0229034423828125,
0.008270263... |
Recognai/news | 2022-02-17T14:33:02.000Z | [
"region:us"
] | Recognai | null | null | 0 | 81 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.057159423828125,
0.028839111328125,
-0.035064697265625,
0.04644775390625,
0.052520751953125,
0.00504302978515625,
0.051361083984375,
0.016998291015625,
-0.05206298828125,
-0.01497650146484375,
-0.0604248046875,
0.... |
RohanAiLab/persian_blog | 2022-10-25T09:06:36.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"source_datasets:original",
"language:fa",
"region:us"
] | RohanAiLab | persian_blog is a dataset consisting of 400K blog posts from various websites, covering a variety of tones.
This dataset can be used in different NLG tasks; as a showcase, it was used in training reformer-persian. | https://saied71.github.io/RohanAiLab/,
author={Saied Alimoradi},
year={2021}
} | 2 | 81 | 2022-03-02T23:29:22 | ---
language:
- fa
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: persian_blog
---
# Persian Blog
# Dataset Summary
persian_blog is a collection of 400K blog posts. These posts have been gathered from more than 10 websites. This dataset can be used in different NLP tasks like language modeling and text generation.
This effort is part of a bigger perspective to have several datasets in Persian language for different tasks that have two important factors: `free` and `easy-to-use`. Here is a quick HOW-TO for using this dataset in datasets library:[Demo-datasets](https://saied71.github.io/RohanAiLab/2021/09/03/Demo-datasets.html)
# Description
As discussed above, this dataset contains 400K blog posts. Each post has a single attribute: text. Here is a sample from the dataset:
```
text : چرا کودکان به روانشناس نیاز دارند؟ روانشناسی کودکانکودکان همچون غنچههای زیبا هستند که برای شکوفایی و به ثمر رسیدن نیاز به مراقبت و رسیدگی دارند . روانشناس کودک فردیست که از زمان بدو تولد کودک در مراحل مختلف زندگی کودک در کنار والدین وی میباشد و به چگونگی تربیت کودک کمک میکند تا به بهترین شکل رشد کند . چرا که روانشناس کودک با روحیات ، نیازها و مشکلات کودکان و همچنین چگونگی برقراری ارتباط بین کودک و والدین وی آشنایی کامل دارد .بسیاری از کودکان در سنین مختلف بخاطر شرایط زندگی ، دچار انواع ناسازگاریها و مشکلات در زندگی خود میشود از ناسازگاری کودکان میتوان به موارد زیر اشاره کرد : 1 . پرخاشگری 2 . بد دهنی 3 . اختلال در خوابیدن 4 . اختلال در غذا خوردن و کم اشتهایی 5 . حالت افسردگی و اضطراب 6 . ترس از محیط پیرامون 7 . عدم آمادگی برای ورود به جامعه 8 . وجود مشکل در محیط مدرسه 9 . عدم تمرکز 10 . جویدن ناخن ها 11 . انزوا و گوشه گیری 12 . عدم هم بازی شدن با هم سن و سال هاو .این گونه ناسازگاریها در زندگی آینده کودک نقش به سزایی دارد .روانشناس کودکیک روانشناس کودک خوب ، با دلسوزی و با تکیه بر تجربیات و تخصص خود میکوشد تا رفتارهای کودک را مورد ارزیابی و بررسی قرار دهد سپس سعی میکند تا رفتارهای بعدی کودک را پیش بینی کند و منشاء این مشکلات و سطح پیشرفت آن را بیابد. سپس او بهترین روشهای درمان برای بهبود اختلال کودک را مییابد و با کمک والدین این ناسازگاریها ، مشکلات و ناهنجاریها را حل کرده و نهایتا رابطهای دوستانه و صمیمانه بین کودک و والدین وی ایجاد مینماید تاآیندهای درخشان در انتظار کودک شما باشد .
```
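A minimal loading sketch with the 🤗 `datasets` library (loading the Hub id directly and the `train` split name are assumptions here; see the HOW-TO linked above):
```python
from datasets import load_dataset

blog = load_dataset("RohanAiLab/persian_blog", split="train")
print(blog[0]["text"][:200])
```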
# Citation
```
Contact: rohanailab@gmail.com
@misc{persian_blog,
  title={persian_blog},
  author={Saied Alimoradi},
  year={2021}
}
``` | 2,431 | [
[
-0.050567626953125,
-0.03924560546875,
0.027587890625,
0.024169921875,
-0.033905029296875,
-0.00701904296875,
-0.003719329833984375,
-0.0275115966796875,
0.042144775390625,
0.0322265625,
-0.03082275390625,
-0.0579833984375,
-0.050018310546875,
0.027923583984... |
RohanAiLab/persian_news_dataset | 2022-10-21T16:13:59.000Z | [
"task_categories:text-classification",
"task_ids:language-modeling",
"task_ids:multi-class-classification",
"source_datasets:original",
"language:fa",
"region:us"
] | RohanAiLab | persian_news_dataset is a collection of 5 million news articles.
News articles have been gathered from more than 10 news agencies for the last 12 years.
The dataset is provided by Rohan AI lab for research purposes.
For more information, refer to this link: | https://saied71.github.io/RohanAiLab/,
author={Saied Alimoradi},
year={2021}
} | 1 | 81 | 2022-03-02T23:29:22 | ---
pretty_name: persian_news_dataset
language:
- fa
source_datasets:
- original
task_categories:
- text-classification
- sequence-modeling
task_ids:
- language-modeling
- multi-class-classification
---
# Persian_News_Dataset
# Dataset Summary
persian_news_dataset is a collection of 5 million news articles. News articles have been gathered from more than 10 news agencies for the last 12 years. This dataset can be used in different NLP tasks like language modeling, classification, supervised topic modeling, and more.
This effort is part of a bigger perspective to have several datasets in Persian language for different tasks that have two important factors: `free` and `easy-to-use`. Here is a quick HOW-TO for using this dataset in datasets library:[Demo-datasets](https://saied71.github.io/RohanAiLab/2021/09/03/Demo-datasets.html)
# Description
As discussed above, this dataset contains 5M news articles. Each article has three attributes: text, title, and category. Here is a sample from the dataset:
```
text :سهشنبه شب از دور برگشت مرحله نیمهنهایی لیگ قهرمانان اروپا، منچسترسیتی در ورزشگاه «اتحاد» میزبان پاریسنژرمن بود و با ارائه نمایشی حساب شده و تحسین برانگیز به پیروزی دو بر صفر دست یافت.بازی رفت در پاریس با برتری دو بر یک سیتی به اتمام رسیده بود و با این اوصاف تیم تحت هدایت «پپ گواردیولا» در مجموع با پیروزی چهار بر یک، راهی فینال شد.بارش برف موجب سفیدپوش شدن زمین شده بود و همین امر بر عملکرد تیمها تاثیر گذاشت. دیدار در حالی آغاز به کار کرد که «امباپه» ستاره پاریسیها که به تازگی از مصدومیت رهایی پیدا کرده است، نیمکتنشین بود.بازی با حملات میهمان آغاز شد و در دقیقه هفتم داور هلندی با تصمیمی عجیب اعتقاد داشت توپ به دست «زینچنکو» مدافع سیتی برخورد کرده و نقطه پنالتی را نشان داد، اما با استفاده از سیستم کمک داور ویدئویی، پنالتی پس گرفته شد. سیتی خیلی زود به هدفش رسید و در دقیقه ۱۰ حرکت عالی او و پاس به «دیبروین» موجب شد تا توپ در یک رفت و برگشت به «ریاض محرز» رسیده و این بازیکن الجزایری گل نخست بازی را برای میزبان به ارمغان آورد.در دقیقه ۱۶ ضربه سر «مارکینیوش» مدافع پیشتاخته پاریسنژرمن با بدشانسی به تیرک دروازه سیتی برخورد کرد.در ادامه برای دقایقی، بازیکنان در میانه میدان خطاهای متعددی انجام دادند و این امر موجب ایجاد چند درگیری شد.هرچند نماینده فرانسه درپی جبران مافات بود اما برنامهای برای رسیدن به این مهم نداشت تا نیمه نخست با همین یک گل همراه شود.در نیمه دوم هم حملات پاریسیها سودی نداشت و در طرف مقابل منچسترسیتی، بازی بسیار هوشمندانهای ارائه کرد.در دقیقه ۶۲ و در ضد حملهای برق آسا، «فیل فودن» با پاسی عالی توپ را به «ریاض محرز» رساند تا این بازیکن گل دوم خود و تیمش را ثبت کرده و سند صعود سیتی به فینال را امضا کند.در دقیقه ۶۸ «آنخل دیماریا» وینگر آرژانتینی تیم پاریسنژرمن پس از درگیری با «فرناندینو» با کارت قرمز داور از زمین اخراج شد تا کار تیمش تمام شود.در این بازی پاریسنژرمن با تفکرات «پوچتینو»، طراحی حملات خود را به «نیمار» سپرده بود اما این بازیکن مطرح برزیلی با حرکات انفرادی بیش از از اندازه، عملکرد خوبی نداشت و حملات تیمش را خراب کرد.در نهایت بازی با پیروزی سیتی همراه شد و مالکان ثروتمند منچسترسیتی به آرزوی خود رسیده و پس از سالها سرمایهگذاری به دیدار نهایی رسیدند. این اولین حضور سیتی در فینال لیگ قهرمانان اروپا است.چهارشنبه شب در دیگر دیدار دور برگشت نیمهنهایی، چلسی انگلیس در ورزشگاه «استمفورد بریج» شهر لندن پذیرای رئالمادرید اسپانیا است. بازی رفت با تساوی یک بر یک به اتمام رسید
title:آرزوی سیتی برآورده شد؛ صعود شاگردان «گواردیولا» به فینال
category:ورزش
```
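Given the 5M articles, streaming may be preferable; a minimal sketch with the 🤗 `datasets` library (loading the Hub id directly and the `train` split name are assumptions here; see the HOW-TO linked above):
```python
from datasets import load_dataset

news = load_dataset("RohanAiLab/persian_news_dataset", split="train", streaming=True)
first = next(iter(news))
print(first["title"], first["category"])
```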
# Citation
```
Contact: rohanailab@gmail.com
@misc{persian_news_dataset,
  title={persian_news_dataset},
  author={Saied Alimoradi},
  year={2021}
}
``` | 3,455 | [
[
-0.044830322265625,
-0.043609619140625,
0.0275421142578125,
0.0247650146484375,
-0.0305633544921875,
-0.01149749755859375,
0.00852203369140625,
-0.031524658203125,
0.03631591796875,
0.03997802734375,
-0.026275634765625,
-0.0516357421875,
-0.05181884765625,
0... |
Sunbird/salt-dataset | 2022-03-28T13:04:56.000Z | [
"region:us"
] | Sunbird | null | null | 3 | 81 | 2022-03-02T23:29:22 | A parallel text corpus, **SALT -- (Sunbird African Language Translation Dataset)**, was created for five Ugandan languages (Luganda,
Runyankore, Acholi, Lugbara and Ateso) and various methods were explored to train and evaluate translation models. | 248 | [
[
-0.0291900634765625,
-0.01235198974609375,
0.034149169921875,
0.0261993408203125,
-0.01085662841796875,
0.0015935897827148438,
-0.0264129638671875,
-0.03070068359375,
0.021484375,
0.04632568359375,
-0.042724609375,
-0.03857421875,
-0.054412841796875,
0.04425... |
YuAnthony/chid | 2022-02-23T05:19:14.000Z | [
"region:us"
] | YuAnthony | null | null | 1 | 81 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
addy88/sanskrit-asr-84-eval | 2021-12-14T13:42:29.000Z | [
"region:us"
] | addy88 | null | null | 1 | 81 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01494598388671875,
0.057159423828125,
0.028839111328125,
-0.0350341796875,
0.04656982421875,
0.052490234375,
0.00504302978515625,
0.0513916015625,
0.016998291015625,
-0.0521240234375,
-0.0149993896484375,
-0.06036376953125,
0.03790283... |
albertvillanova/sat | 2022-10-24T15:25:54.000Z | [
"task_categories:text-generation",
"task_categories:translation",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:translation",
"size_categories:1M<n<10M",
"source_datasets:original",
"source_datasets:extended|bible_para",
"source_datasets:extended|kde4",
"source_... | albertvillanova | SAT (Style Augmented Translation) dataset contains roughly 3.3 million English-Vietnamese pairs of texts. | \ | 0 | 81 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
- vi
license:
- unknown
multilinguality:
- translation
size_categories:
- 1M<n<10M
source_datasets:
- original
- extended|bible_para
- extended|kde4
- extended|opus_gnome
- extended|open_subtitles
- extended|tatoeba
task_categories:
- text-generation
- translation
task_ids: []
pretty_name: SAT
tags:
- conditional-text-generation
---
# Dataset Card for SAT
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://blog.vietai.org/sat/
- **Repository:** https://github.com/vietai/sat
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
SAT (Style Augmented Translation) dataset contains roughly 3.3 million English-Vietnamese pairs of texts.
### Supported Tasks and Leaderboards
- Machine Translation
### Languages
The languages in the dataset are:
- Vietnamese (`vi`)
- English (`en`)
## Dataset Structure
### Data Instances
```
{
'translation': {
'en': 'Rachel Pike : The science behind a climate headline',
'vi': 'Khoa học đằng sau một tiêu đề về khí hậu'
}
}
```
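Such instances can be obtained by loading the dataset from the Hub (assuming this repository loads directly with the `datasets` library):
```python
from datasets import load_dataset

sat = load_dataset("albertvillanova/sat", split="train")
print(sat[0]["translation"]["en"])
print(sat[0]["translation"]["vi"])
```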
### Data Fields
- `translation`:
- `en`: Parallel text in English.
- `vi`: Parallel text in Vietnamese.
### Data Splits
The dataset is split into "train" and "test".
| | train | test |
|--------------------|--------:|-----:|
| Number of examples | 3359574 | 7221 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Unknown.
### Citation Information
Unknown.
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
| 3,368 | [
[
-0.018035888671875,
-0.048858642578125,
0.0254974365234375,
0.0218658447265625,
-0.0154266357421875,
0.007781982421875,
-0.0259246826171875,
-0.0182952880859375,
0.02923583984375,
0.044403076171875,
-0.04974365234375,
-0.07086181640625,
-0.037200927734375,
0... |
allegro/klej-cdsc-r | 2021-11-29T19:14:36.000Z | [
"region:us"
] | allegro | null | null | 0 | 81 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.0149688720703125,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.046539306640625,
0.052520751953125,
0.005046844482421875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.01495361328125,
-0.060333251953125,
0.03... |
allegro/summarization-allegro-articles | 2021-09-24T14:10:17.000Z | [
"region:us"
] | allegro | null | null | 2 | 81 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.0149688720703125,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.046539306640625,
0.052520751953125,
0.005046844482421875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.01495361328125,
-0.060333251953125,
0.03... |
allegro/summarization-polish-summaries-corpus | 2021-09-24T14:17:27.000Z | [
"region:us"
] | allegro | null | null | 2 | 81 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.0149688720703125,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.046539306640625,
0.052520751953125,
0.005046844482421875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.01495361328125,
-0.060333251953125,
0.03... |
aminedjebbie/Multi-Arabic-dialects | 2022-02-10T20:28:50.000Z | [
"region:us"
] | aminedjebbie | null | null | 0 | 81 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.0149688720703125,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.046539306640625,
0.052520751953125,
0.005046844482421875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.01495361328125,
-0.060333251953125,
0.03... |
aseifert/merlin | 2022-10-21T16:21:58.000Z | [
"multilinguality:translation",
"size_categories:unknown",
"language:cz",
"language:de",
"language:it",
"region:us"
] | aseifert | null | null | 1 | 81 | 2022-03-02T23:29:22 | ---
annotations_creators: []
language_creators: []
language:
- cz
- de
- it
license: []
multilinguality:
- translation
pretty_name: merlin
size_categories:
- unknown
source_datasets: []
task_categories:
- conditional-text-generation
task_ids:
- machine-translation
---
# MERLIN corpus
Project URL: https://merlin-platform.eu/C_mcorpus.php
Dataset URL: https://clarin.eurac.edu/repository/xmlui/handle/20.500.12124/6
The MERLIN corpus is a written learner corpus for Czech, German, and Italian that has been designed to illustrate the Common European Framework of Reference for Languages (CEFR) with authentic learner data. The corpus contains learner texts produced in standardized language certifications covering CEFR levels A1-C1. The MERLIN annotation scheme includes a wide range of language characteristics that provide researchers with concrete examples of learner performance and progress across multiple proficiency levels. | 936 | [
[
-0.036407470703125,
-0.0323486328125,
-0.0013360977172851562,
0.0235443115234375,
0.0182037353515625,
0.01165008544921875,
-0.013427734375,
-0.05291748046875,
0.0168304443359375,
0.031097412109375,
-0.027069091796875,
-0.045318603515625,
-0.045196533203125,
... |
bertin-project/mc4-sampling | 2022-11-07T12:40:51.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:n<1K",
"size_categories:1K<n<10K",
"size_categories:10K<n<100K",
"size_categories:100K<n<1... | bertin-project | A sampling-enabled version of mC4, the colossal, cleaned version of Common Crawl's web crawl corpus.
Based on Common Crawl dataset: "https://commoncrawl.org".
This is a version of AllenAI's processed version of Google's mC4 dataset, in which sampling methods can be applied on the fly. | @article{2019t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {arXiv e-prints},
year = {2019},
archivePrefix = {arXiv},
eprint = {1910.10683},
} | 8 | 81 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- ht
- hu
- hy
- id
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- 'no'
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
license:
- odc-by
multilinguality:
- multilingual
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
- 100K<n<1M
- 1M<n<10M
- 10M<n<100M
- 100M<n<1B
- 1B<n<10B
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
paperswithcode_id: mc4
pretty_name: mC4-sampling
language_bcp47:
- bg-Latn
- el-Latn
- hi-Latn
- ja-Latn
- ru-Latn
- zh-Latn
---
# Dataset Card for mC4-sampling
## Table of Contents
- [Dataset Card for mC4-sampling](#dataset-card-for-mc4-sampling)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Sampling](#dataset-sampling)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://huggingface.co/bertin-project/bertin-roberta-base-spanish
### Dataset Summary
This dataset builds upon the AllenAI version of the original [mC4](https://huggingface.co/datasets/allenai/c4) and adds sampling methods to perform perplexity-based filtering on the fly. Please, refer to [BERTIN Project](https://huggingface.co/bertin-project/bertin-roberta-base-spanish).
The original dataset is mC4, the multilingual colossal, cleaned version of Common Crawl's web crawl corpus. Based on Common Crawl dataset: "https://commoncrawl.org".
108 languages are available and are reported in the [`mc4` dataset](https://huggingface.co/datasets/mc4#dataset-summary).
You can load the mC4 subset of any language like this:
```python
from datasets import load_dataset
en_mc4 = load_dataset("mc4", "en")
```
You can even specify a list of languages:
```python
from datasets import load_dataset
mc4_subset_with_five_languages = load_dataset("mc4", languages=["en", "fr", "es", "de", "zh"])
```
### Dataset Sampling
There are three different ways of getting sampled versions of mC4 using this dataset.
#### Random
Arguably, the simplest of methods. It keeps a document based on a probability threshold we called `factor`. It defaults to `0.5` for random sampling:
```python
def _should_keep_doc_random(self, doc, factor=None, **kwargs):
factor = 0.5 if factor is None else factor
return self.rng.uniform() <= factor
```
The way to use this sampling method is by adding an extra parameter to the instantiation of the dataset:
```python
from datasets import load_dataset
mc4random = load_dataset(
"bertin-project/mc4-sampling", "es",
split="train",
streaming=True,
sampling_method="random",
factor=0.5,
)
for sample in mc4random:
print(sample)
break
```
#### Gaussian
This sampling method tries to adjust to the underlying distribution while oversampling the central quartiles of the perplexity distribution of the documents in mC4 for a given language. Two parameters control the shape of the approximation: `factor` (peakedness of the exponential function) and `width` (spread). Default values are selected for Spanish.
```python
def _should_keep_doc_gaussian(self, doc, factor=None, width=None, boundaries=None, **kwargs):
perplexity = self.get_perplexity(doc)
width = (9 / 2) if width is None else width
factor = 0.78 if factor is None else factor
median = 662247.50212365 if boundaries is None else boundaries[1]
exponential = np.exp((-1 / width) * ((perplexity - median) / median) ** 2)
weighted_perplexity = factor * exponential
return self.rng.uniform() < weighted_perplexity
```
In order to use this sampling method, information about the quartile boundaries of the underlying distribution needs to be calculated beforehand and passed to the instantiation of the dataset. Moreover, the path to a [KenLM model](https://github.com/kpu/kenlm/) (5-gram language model) or an object with a method `.score(text:str) -> float` also needs to be passed in for the calculation of the perplexity value of a document. KenLM can be installed with pip:
```bash
pip install https://github.com/kpu/kenlm/archive/master.zip
```
```python
from datasets import load_dataset
mc4gaussian = load_dataset(
"bertin-project/mc4-sampling",
"es",
split="train",
streaming=True,
sampling_method="gaussian",
perplexity_model="./es.arpa.bin",
boundaries=[536394.99320948, 662247.50212365, 919250.87225178],
factor=0.78,
width=9/2,
)
for sample in mc4gaussian:
print(sample)
break
```
Facebook has created and released 5-gram Kneser-Ney models for 100 languages, available to download and use within the KenLM library. To download your own Kneser-Ney language model, choose a language code from the list below:
```bash
af,ar,az,be,bg,bn,ca,cs,da,de,el,en,es,et,fa,fi,fr,gu,he,hi,hr,hu,hy,id,is,it,ja,ka,kk,km,kn,ko,lt,lv,mk,ml,mn,mr,my,ne,nl,no,pl,pt,ro,ru,uk,zh
```
Then run the following download command, replacing `lang` with your chosen language code:
```bash
wget http://dl.fbaipublicfiles.com/cc_net/lm/lang.arpa.bin
```
#### Stepwise
The stepwise sampling method uses a simple criterion, oversampling the central quartiles inversely proportionally to their range. Only `boundaries`, `factor` (strength of the oversampling), and `perplexity_model` are needed:
```python
def _should_keep_doc_step(self, doc, factor=None, boundaries=None, **kwargs):
perplexity = self.get_perplexity(doc)
factor = 1.5e5 if factor is None else factor
if boundaries is None:
boundaries = [536394.99320948, 662247.50212365, 919250.87225178]
if perplexity <= boundaries[0]:
quartile_range = boundaries[0]
elif boundaries[0] < perplexity < boundaries[1]:
quartile_range = boundaries[1] - boundaries[0]
elif boundaries[1] < perplexity < boundaries[2]:
quartile_range = boundaries[2] - boundaries[1]
elif perplexity >= boundaries[2]:
quartile_range = 10 * boundaries[2]
probability = factor / quartile_range
return self.rng.uniform() < probability
```
In order to use this sampling method, a similar invocation is needed:
```python
mc4stepwise = load_dataset(
"bertin-project/mc4-sampling",
"es",
split="train",
streaming=True,
sampling_method="stepwise",
perplexity_model="./es.arpa.bin",
boundaries=[536394.99320948, 662247.50212365, 919250.87225178],
factor=1.5e5,
)
for sample in mc4stepwise:
print(sample)
break
```
### Supported Tasks and Leaderboards
mC4-sampling is mainly intended to pretrain language models and word representations on a budget.
### Languages
The dataset supports 108 languages.
## Dataset Structure
### Data Instances
An example from the `en` config is:
```
{'timestamp': '2018-06-24T01:32:39Z',
'text': 'Farm Resources in Plumas County\
Show Beginning Farmer Organizations & Professionals (304)\
There are 304 resources serving Plumas County in the following categories:\
Map of Beginning Farmer Organizations & Professionals serving Plumas County\
Victoria Fisher - Office Manager - Loyalton, CA\
Amy Lynn Rasband - UCCE Plumas-Sierra Administrative Assistant II - Quincy , CA\
Show Farm Income Opportunities Organizations & Professionals (353)\
There are 353 resources serving Plumas County in the following categories:\
Farm Ranch And Forest Retailers (18)\
Map of Farm Income Opportunities Organizations & Professionals serving Plumas County\
Warner Valley Wildlife Area - Plumas County\
Show Farm Resources Organizations & Professionals (297)\
There are 297 resources serving Plumas County in the following categories:\
Map of Farm Resources Organizations & Professionals serving Plumas County\
There are 57 resources serving Plumas County in the following categories:\
Map of Organic Certification Organizations & Professionals serving Plumas County',
'url': 'http://www.californialandcan.org/Plumas/Farm-Resources/'}
```
### Data Fields
The data have several fields:
- `url`: url of the source as a string
- `text`: text content as a string
- `timestamp`: timestamp as a string
### Data Splits
The same splits as in [mC4 are available](https://huggingface.co/datasets/mc4#data-splits).
## Additional Information
### Licensing Information
BERTIN Project is releasing this dataset under the same terms as AllenAI released mC4, that is, those of the ODC-BY license. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.
### Citation Information
To cite this dataset:
```bibtex
@article{BERTIN,
author = {Javier De la Rosa y Eduardo G. Ponferrada y Manu Romero y Paulo Villegas y Pablo González de Prado Salas y María Grandury},
title = {{BERTIN}: Efficient Pre-Training of a Spanish Language Model using Perplexity Sampling},
journal = {Procesamiento del Lenguaje Natural},
volume = {68},
number = {0},
year = {2022},
keywords = {},
abstract = {The pre-training of large language models usually requires massive amounts of resources, both in terms of computation and data. Frequently used web sources such as Common Crawl might contain enough noise to make this pretraining sub-optimal. In this work, we experiment with different sampling methods from the Spanish version of mC4, and present a novel data-centric technique which we name perplexity sampling that enables the pre-training of language models in roughly half the amount of steps and using one fifth of the data. The resulting models are comparable to the current state-of-the-art, and even achieve better results for certain tasks. Our work is proof of the versatility of Transformers, and paves the way for small teams to train their models on a limited budget.},
issn = {1989-7553},
url = {http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6403},
pages = {13--23}
}
```
If you use this dataset, we would love to hear about it! Reach out on twitter, GitHub, Discord, or shoot us an email.
To cite the original `mc4` dataset:
```
@article{2019t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {arXiv e-prints},
year = {2019},
archivePrefix = {arXiv},
eprint = {1910.10683},
}
```
### Contributions
Dataset contributed by [@versae](https://github.com/versae).
Thanks to [@dirkgr](https://github.com/dirkgr) and [@lhoestq](https://github.com/lhoestq) for adding the original mC4 dataset.
| 11,457 | [
[
-0.0455322265625,
-0.033935546875,
0.03839111328125,
0.0266571044921875,
0.0032138824462890625,
-0.01165771484375,
-0.00687408447265625,
-0.0091400146484375,
0.0264892578125,
0.039886474609375,
-0.04718017578125,
-0.049041748046875,
-0.035614013671875,
0.038... |
cestwc/cnn_dailymail-metaeval100 | 2022-01-07T10:55:08.000Z | [
"region:us"
] | cestwc | null | null | 0 | 81 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
cheulyop/ksponspeech | 2021-10-02T04:27:13.000Z | [
"region:us"
] | cheulyop | KsponSpeech is a large-scale spontaneous speech corpus of Korean conversations. This corpus contains 969 hrs of general open-domain dialog utterances, spoken by about 2,000 native Korean speakers in a clean environment. All data were constructed by recording the dialogue of two people freely conversing on a variety of topics and manually transcribing the utterances. The transcription provides a dual transcription consisting of orthography and pronunciation, and disfluency tags for spontaneity of speech, such as filler words, repeated words, and word fragments. KsponSpeech is publicly available on an open data hub site of the Korea government. (https://aihub.or.kr/aidata/105) | @article{bang2020ksponspeech,
title={KsponSpeech: Korean spontaneous speech corpus for automatic speech recognition},
author={Bang, Jeong-Uk and Yun, Seung and Kim, Seung-Hi and Choi, Mu-Yeol and Lee, Min-Kyu and Kim, Yeo-Jeong and Kim, Dong-Hyun and Park, Jun and Lee, Young-Jik and Kim, Sang-Hun},
journal={Applied Sciences},
volume={10},
number={19},
pages={6936},
year={2020},
publisher={Multidisciplinary Digital Publishing Institute}
} | 3 | 81 | 2022-03-02T23:29:22 | ---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---
# Dataset Card for KsponSpeech
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
KsponSpeech is a large-scale spontaneous speech corpus of Korean conversations. This corpus contains 969 hrs of general open-domain dialog utterances, spoken by about 2,000 native Korean speakers in a clean environment. All data were constructed by recording the dialogue of two people freely conversing on a variety of topics and manually transcribing the utterances. The transcription provides a dual transcription consisting of orthography and pronunciation, and disfluency tags for spontaneity of speech, such as filler words, repeated words, and word fragments. KsponSpeech is publicly available on an open data hub site of the Korea government. (https://aihub.or.kr/aidata/105)
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| 3,261 | [
[
-0.0295562744140625,
-0.033447265625,
0.01861572265625,
0.025665283203125,
-0.0242767333984375,
0.0154876708984375,
-0.0347900390625,
-0.01490020751953125,
0.042938232421875,
0.053955078125,
-0.06390380859375,
-0.07635498046875,
-0.044036865234375,
0.0055046... |
csebuetnlp/xnli_bn | 2022-08-21T13:14:56.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended",
"language:bn",
"license:cc-by-nc-sa-4.0",
"arxiv:2101.00204",
... | csebuetnlp | This is a Natural Language Inference (NLI) dataset for Bengali, curated using the subset of
MNLI data used in XNLI and state-of-the-art English to Bengali translation model. | @misc{bhattacharjee2021banglabert,
title={BanglaBERT: Combating Embedding Barrier in Multilingual Models for Low-Resource Language Understanding},
author={Abhik Bhattacharjee and Tahmid Hasan and Kazi Samin and Md Saiful Islam and M. Sohel Rahman and Anindya Iqbal and Rifat Shahriyar},
year={2021},
eprint={2101.00204},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 1 | 81 | 2022-03-02T23:29:22 | ---
annotations_creators:
- machine-generated
language_creators:
- found
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended
task_categories:
- text-classification
task_ids:
- natural-language-inference
language:
- bn
license:
- cc-by-nc-sa-4.0
---
# Dataset Card for `xnli_bn`
## Table of Contents
- [Dataset Card for `xnli_bn`](#dataset-card-for-xnli_bn)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Usage](#usage)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [https://github.com/csebuetnlp/banglabert](https://github.com/csebuetnlp/banglabert)
- **Paper:** [**"BanglaBERT: Combating Embedding Barrier in Multilingual Models for Low-Resource Language Understanding"**](https://arxiv.org/abs/2101.00204)
- **Point of Contact:** [Tahmid Hasan](mailto:tahmidhasan@cse.buet.ac.bd)
### Dataset Summary
This is a Natural Language Inference (NLI) dataset for Bengali, curated using the subset of
MNLI data used in XNLI and a state-of-the-art English-to-Bengali translation model introduced **[here](https://aclanthology.org/2020.emnlp-main.207/).**
### Supported Tasks and Leaderboards
[More information needed](https://github.com/csebuetnlp/banglabert)
### Languages
* `Bengali`
### Usage
```python
from datasets import load_dataset
dataset = load_dataset("csebuetnlp/xnli_bn")
```
## Dataset Structure
### Data Instances
One example from the dataset is given below in JSON format.
```
{
"sentence1": "আসলে, আমি এমনকি এই বিষয়ে চিন্তাও করিনি, কিন্তু আমি এত হতাশ হয়ে পড়েছিলাম যে, শেষ পর্যন্ত আমি আবার তার সঙ্গে কথা বলতে শুরু করেছিলাম",
"sentence2": "আমি তার সাথে আবার কথা বলিনি।",
"label": "contradiction"
}
```
### Data Fields
The data fields are as follows:
- `sentence1`: a `string` feature indicating the premise.
- `sentence2`: a `string` feature indicating the hypothesis.
- `label`: a classification label, where possible values are `contradiction` (0), `entailment` (1), and `neutral` (2).
### Data Splits
| split | count |
|----------|--------|
|`train`| 381449 |
|`validation`| 2419 |
|`test`| 4895 |
## Dataset Creation
The dataset curation procedure was the same as for the [XNLI](https://aclanthology.org/D18-1269/) dataset: we translated the [MultiNLI](https://aclanthology.org/N18-1101/) training data using the English to Bangla translation model introduced [here](https://aclanthology.org/2020.emnlp-main.207/). Because automatic translation can introduce errors, we used the [Language-Agnostic BERT Sentence Embeddings (LaBSE)](https://arxiv.org/abs/2007.01852) of the translations and original sentences to compute their similarity. All sentences below a similarity threshold of 0.70 were discarded.
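A minimal sketch of this similarity filter (the authors' exact pipeline is not reproduced here; the `sentence-transformers` LaBSE checkpoint below is an assumption):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/LaBSE")

def keep_pair(english_sentence, bangla_translation, threshold=0.70):
    # Embed both sentences in LaBSE's shared multilingual space and keep the
    # pair only if their cosine similarity reaches the threshold.
    embeddings = model.encode(
        [english_sentence, bangla_translation], normalize_embeddings=True
    )
    similarity = float(embeddings[0] @ embeddings[1])
    return similarity >= threshold
```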
### Curation Rationale
[More information needed](https://github.com/csebuetnlp/banglabert)
### Source Data
[XNLI](https://aclanthology.org/D18-1269/)
#### Initial Data Collection and Normalization
[More information needed](https://github.com/csebuetnlp/banglabert)
#### Who are the source language producers?
[More information needed](https://github.com/csebuetnlp/banglabert)
### Annotations
[More information needed](https://github.com/csebuetnlp/banglabert)
#### Annotation process
[More information needed](https://github.com/csebuetnlp/banglabert)
#### Who are the annotators?
[More information needed](https://github.com/csebuetnlp/banglabert)
### Personal and Sensitive Information
[More information needed](https://github.com/csebuetnlp/banglabert)
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed](https://github.com/csebuetnlp/banglabert)
### Discussion of Biases
[More information needed](https://github.com/csebuetnlp/banglabert)
### Other Known Limitations
[More information needed](https://github.com/csebuetnlp/banglabert)
## Additional Information
### Dataset Curators
[More information needed](https://github.com/csebuetnlp/banglabert)
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use the dataset, please cite the following paper:
```
@misc{bhattacharjee2021banglabert,
title={BanglaBERT: Combating Embedding Barrier in Multilingual Models for Low-Resource Language Understanding},
author={Abhik Bhattacharjee and Tahmid Hasan and Kazi Samin and Md Saiful Islam and M. Sohel Rahman and Anindya Iqbal and Rifat Shahriyar},
year={2021},
eprint={2101.00204},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@abhik1505040](https://github.com/abhik1505040) and [@Tahmid](https://github.com/Tahmid04) for adding this dataset. | 6,302 | [
[
-0.0256195068359375,
-0.0474853515625,
-0.000865936279296875,
0.0299835205078125,
-0.01348114013671875,
-0.0012369155883789062,
-0.0302886962890625,
-0.0291900634765625,
0.02459716796875,
0.0193328857421875,
-0.040313720703125,
-0.054656982421875,
-0.03903198242... |
ctu-aic/csfever | 2022-11-01T05:56:15.000Z | [
"license:cc-by-sa-3.0",
"arxiv:1803.05355",
"arxiv:2201.11115",
"region:us"
] | ctu-aic | CsFEVER is a Czech localisation of the English FEVER dataset. | @article{DBLP:journals/corr/abs-2201-11115,
author = {Jan Drchal and
Herbert Ullrich and
Martin R{\'{y}}par and
Hana Vincourov{\'{a}} and
V{\'{a}}clav Moravec},
title = {CsFEVER and CTKFacts: Czech Datasets for Fact Verification},
journal = {CoRR},
volume = {abs/2201.11115},
year = {2022},
url = {https://arxiv.org/abs/2201.11115},
eprinttype = {arXiv},
eprint = {2201.11115},
timestamp = {Tue, 01 Feb 2022 14:59:01 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-11115.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | 1 | 81 | 2022-03-02T23:29:22 | ---
license: cc-by-sa-3.0
---
# CsFEVER experimental Fact-Checking dataset
Czech dataset for fact verification localized from the data points of [FEVER](https://arxiv.org/abs/1803.05355) using the localization scheme described in the [CTKFacts: Czech Datasets for Fact Verification](https://arxiv.org/abs/2201.11115) paper, which is currently being revised for publication in the LREV journal.
The version you are looking at was reformatted into *Claim*-*Evidence* string pairs for the specific task of NLI. A more general, document-retrieval-ready interpretation of our data points, which can be used for training and evaluating DR models over the June 2016 Wikipedia snapshot, can be found in the [data_dr]() folder in the JSON Lines format.
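A minimal loading sketch (the split name and column layout are assumptions; inspect the loaded dataset for the actual schema):
```python
from datasets import load_dataset

csfever = load_dataset("ctu-aic/csfever", split="train")
print(csfever[0])
```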
## Data Statement
### Curation Rationale
TODO
| 803 | [
DebateLabKIT/aaac | 2022-10-24T16:25:56.000Z | [
"task_categories:summarization",
"task_categories:text-retrieval",
"task_categories:text-generation",
"task_ids:parsing",
"task_ids:text-simplification",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolin... | DebateLabKIT | null | null | 3 | 81 | 2022-03-02T23:29:22 | ---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- machine-generated
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
- text-retrieval
- text-generation
task_ids:
- parsing
- text-simplification
paperswithcode_id: aaac
pretty_name: Artificial Argument Analysis Corpus
language_bcp47:
- en-US
tags:
- argument-mining
- conditional-text-generation
- structure-prediction
---
# Dataset Card for Artificial Argument Analysis Corpus (AAAC)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Construction of the Synthetic Data](#construction-of-the-synthetic-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://debatelab.github.io/journal/deepa2.html
- **Repository:** None
- **Paper:** G. Betz, K. Richardson. *DeepA2: A Modular Framework for Deep Argument Analysis with Pretrained Neural Text2Text Language Models*. https://arxiv.org/abs/2110.01509
- **Leaderboard:** None
### Dataset Summary
DeepA2 is a modular framework for deep argument analysis. DeepA2 datasets contain comprehensive logical reconstructions of informally presented arguments in short argumentative texts. This document describes two synthetic DeepA2 datasets for artificial argument analysis: AAAC01 and AAAC02.
```sh
# clone
git lfs clone https://huggingface.co/datasets/debatelab/aaac
```
```python
import pandas as pd
from datasets import Dataset
# loading train split as pandas df
df = pd.read_json("aaac/aaac01_train.jsonl", lines=True, orient="records")
# creating dataset from pandas df
Dataset.from_pandas(df)
```
### Supported Tasks and Leaderboards
The multi-dimensional datasets can be used to define various text-2-text tasks (see also [Betz and Richardson 2021](https://arxiv.org/abs/2110.01509)), for example:
* Premise extraction,
* Conclusion extraction,
* Logical formalization,
* Logical reconstruction.
### Languages
English.
## Dataset Structure
### Data Instances
Histograms (number of dataset records with a given property) describe and compare the two datasets AAAC01 (train split, N=16000) and AAAC02 (dev split, N=4000); the histogram images are not reproduced here.
### Data Fields
The following multi-dimensional example record (2-step argument with one implicit premise) illustrates the structure of the AAAC datasets.
#### argument_source
```
If someone was discovered in 'Moonlight', then they won't play the lead in 'Booksmart',
because being a candidate for the lead in 'Booksmart' is sufficient for not being an
Oscar-Nominee for a role in 'Eighth Grade'. Yet every BAFTA-Nominee for a role in 'The
Shape of Water' is a fan-favourite since 'Moonlight' or a supporting actor in 'Black Panther'.
And if someone is a supporting actor in 'Black Panther', then they could never become the
main actor in 'Booksmart'. Consequently, if someone is a BAFTA-Nominee for a role in
'The Shape of Water', then they are not a candidate for the lead in 'Booksmart'.
```
#### reason_statements
```json
[
{"text":"being a candidate for the lead in 'Booksmart' is sufficient for
not being an Oscar-Nominee for a role in 'Eighth Grade'","starts_at":96,
"ref_reco":2},
{"text":"every BAFTA-Nominee for a role in 'The Shape of Water' is a
fan-favourite since 'Moonlight' or a supporting actor in 'Black Panther'",
"starts_at":221,"ref_reco":4},
{"text":"if someone is a supporting actor in 'Black Panther', then they
could never become the main actor in 'Booksmart'","starts_at":359,
"ref_reco":5}
]
```
#### conclusion_statements
```json
[
{"text":"If someone was discovered in 'Moonlight', then they won't play the
lead in 'Booksmart'","starts_at":0,"ref_reco":3},
{"text":"if someone is a BAFTA-Nominee for a role in 'The Shape of Water',
then they are not a candidate for the lead in 'Booksmart'","starts_at":486,
"ref_reco":6}
]
```
#### distractors
`[]`
#### argdown_reconstruction
```
(1) If someone is a fan-favourite since 'Moonlight', then they are an Oscar-Nominee for a role in 'Eighth Grade'.
(2) If someone is a candidate for the lead in 'Booksmart', then they are not an Oscar-Nominee for a role in 'Eighth Grade'.
--
with hypothetical syllogism {variant: ["negation variant", "transposition"], uses: [1,2]}
--
(3) If someone is beloved for their role in 'Moonlight', then they don't audition in
'Booksmart'.
(4) If someone is a BAFTA-Nominee for a role in 'The Shape of Water', then they are a fan-favourite since 'Moonlight' or a supporting actor in 'Black Panther'.
(5) If someone is a supporting actor in 'Black Panther', then they don't audition in
'Booksmart'.
--
with generalized dilemma {variant: ["negation variant"], uses: [3,4,5]}
--
(6) If someone is a BAFTA-Nominee for a role in 'The Shape of Water', then they are not a
candidate for the lead in 'Booksmart'.
```
#### premises
```json
[
{"ref_reco":1,"text":"If someone is a fan-favourite since 'Moonlight', then
they are an Oscar-Nominee for a role in 'Eighth Grade'.","explicit":false},
{"ref_reco":2,"text":"If someone is a candidate for the lead in
'Booksmart', then they are not an Oscar-Nominee for a role in 'Eighth
Grade'.","explicit":true},
{"ref_reco":4,"text":"If someone is a BAFTA-Nominee for a role in 'The
Shape of Water', then they are a fan-favourite since 'Moonlight' or a
supporting actor in 'Black Panther'.","explicit":true},
{"ref_reco":5,"text":"If someone is a supporting actor in 'Black Panther',
then they don't audition in 'Booksmart'.","explicit":true}
]
```
#### premises_formalized
```json
[
{"form":"(x): ${F2}x -> ${F5}x","ref_reco":1},
{"form":"(x): ${F4}x -> ¬${F5}x","ref_reco":2},
{"form":"(x): ${F1}x -> (${F2}x v ${F3}x)","ref_reco":4},
{"form":"(x): ${F3}x -> ¬${F4}x","ref_reco":5}
]
```
#### conclusion
```json
[{"ref_reco":6,"text":"If someone is a BAFTA-Nominee for a role in 'The Shape
of Water', then they are not a candidate for the lead in 'Booksmart'.",
"explicit":true}]
```
#### conclusion_formalized
```json
[{"form":"(x): ${F1}x -> ¬${F4}x","ref_reco":6}]
```
#### intermediary_conclusions
```json
[{"ref_reco":3,"text":"If someone is beloved for their role in 'Moonlight',
then they don't audition in 'Booksmart'.","explicit":true}]
```
#### intermediary_conclusions_formalized
```json
[{"form":"(x): ${F2}x -> ¬${F4}x","ref_reco":3}]
```
#### plcd_subs
```json
{
"F1":"BAFTA-Nominee for a role in 'The Shape of Water'",
"F2":"fan-favourite since 'Moonlight'",
"F3":"supporting actor in 'Black Panther'",
"F4":"candidate for the lead in 'Booksmart'",
"F5":"Oscar-Nominee for a role in 'Eighth Grade'"
}
```
### Data Splits
Number of instances in the various splits:
| Split | AAAC01 | AAAC02 |
| :--- | :---: | :---: |
| TRAIN | 16,000 | 16,000 |
| DEV | 4,000 | 4,000 |
| TEST | 4,000 | 4,000 |
To correctly load a specific split, define `data_files` as follows:
```python
>>> data_files = {"train": "aaac01_train.jsonl", "eval": "aaac01_dev.jsonl", "test": "aaac01_test.jsonl"}
>>> dataset = load_dataset("debatelab/aaac", data_files=data_files)
```
## Dataset Creation
### Curation Rationale
Argument analysis refers to the interpretation and logical reconstruction of argumentative texts. Its goal is to make an argument transparent, so as to understand, appreciate and (possibly) criticize it. Argument analysis is a key critical thinking skill.
Here's a first example of an informally presented argument, **Descartes' Cogito**:
> I have convinced myself that there is absolutely nothing in the world, no sky, no earth, no minds, no bodies. Does it now follow that I too do not exist? No: if I convinced myself of something then I certainly existed. But there is a deceiver of supreme power and cunning who is deliberately and constantly deceiving me. In that case I too undoubtedly exist, if he is deceiving me; and let him deceive me as much as he can, he will never bring it about that I am nothing so long as I think that I am something. So after considering everything very thoroughly, I must finally conclude that this proposition, I am, I exist, is necessarily true whenever it is put forward by me or conceived in my mind. (AT 7:25, CSM 2:16f)
And here's a second example, taken from the *Debater's Handbook*, **Pro Censorship**:
> Freedom of speech is never an absolute right but an aspiration. It ceases to be a right when it causes harm to others -- we all recognise the value of, for example, legislating against incitement to racial hatred. Therefore it is not the case that censorship is wrong in principle.
Given such texts, argument analysis aims at answering the following questions:
1. Does the text present an argument?
2. If so, how many?
3. What is the argument supposed to show (conclusion)?
4. What exactly are the premises of the argument?
* Which statements, explicit in the text, are not relevant for the argument?
* Which premises are required, but not explicitly stated?
5. Is the argument deductively valid, inductively strong, or simply fallacious?
To answer these questions, argument analysts **interpret** the text by (re-)constructing its argument in a standardized way (typically as a premise-conclusion list) and by making use of logical streamlining and formalization.
A reconstruction of **Pro Censorship** which answers the above questions is:
```argdown
(1) Freedom of speech is never an absolute right but an aspiration.
(2) Censorship is wrong in principle only if freedom of speech is an
absolute right.
--with modus tollens--
(3) It is not the case that censorship is wrong in principle
```
There are typically multiple, more or less different interpretations and logical reconstructions of an argumentative text. For instance, there exists an [extensive debate](https://plato.stanford.edu/entries/descartes-epistemology/) about how to interpret **Descartes' Cogito**, and scholars have advanced rival interpretations of the argument. An alternative reconstruction of the much simpler **Pro Censorship** might read:
```argdown
(1) Legislating against incitement to racial hatred is valuable.
(2) Legislating against incitement to racial hatred is an instance of censorship.
(3) If some instance of censorship is valuable, censorship is not wrong in
principle.
-----
(4) Censorship is not wrong in principle.
(5) Censorship is wrong in principle only if and only if freedom of speech
is an absolute right.
-----
(4) Freedom of speech is not an absolute right.
(5) Freedom of speech is an absolute right or an aspiration.
--with disjunctive syllogism--
(6) Freedom of speech is an aspiration.
```
What are the main reasons for this kind of underdetermination?
* **Incompleteness.** Many relevant parts of an argument (statements, their function in the argument, inference rules, argumentative goals) are not stated in its informal presentation. The argument analyst must infer the missing parts.
* **Additional material.** Over and above what is strictly part of the argument, informal presentations typically contain further material: relevant premises are repeated in slightly different ways, further examples are added to illustrate a point, statements are contrasted with views by opponents, etc. It is up to the argument analyst to decide which parts of the presented material are really part of the argument.
* **Errors.** Authors may err in the presentation of an argument, confounding, e.g., necessary and sufficient conditions in stating a premise. Following the principle of charity, benevolent argument analysts correct such errors and have to choose one of the different ways of doing so.
* **Linguistic indeterminacy.** One and the same statement can be interpreted -- regarding its logical form -- in different ways.
* **Equivalence.** There are different natural language expressions for one and the same proposition.
AAAC datasets provide logical reconstructions of informal argumentative texts: Each record contains a source text to-be-reconstructed and further fields which describe an internally consistent interpretation of the text, notwithstanding the fact that there might be alternative interpretations of this very text.
### Construction of the Synthetic Data
Argument analysis starts with a text and reconstructs its argument (cf. [Motivation and Background](#curation-rationale)). In constructing our synthetic data, we invert this direction: we start by sampling a complete argument, construct an informal presentation, and provide further information that describes both logical reconstruction and informal presentation. More specifically, the construction of the data involves the following steps:
1. [Generation of valid symbolic inference schemes](#step-1-generation-of-symbolic-inference-schemes)
2. [Assembling complex ("multi-hop") argument schemes from symbolic inference schemes](#step-2-assembling-complex-multi-hop-argument-schemes-from-symbolic-inference-schemes)
3. [Creation of (precise and informal) natural-language argument](#step-3-creation-of-precise-and-informal-natural-language-argument-schemes)
4. [Substitution of placeholders with domain-specific predicates and names](#step-4-substitution-of-placeholders-with-domain-specific-predicates-and-names)
5. [Creation of the argdown-snippet](#step-5-creation-of-the-argdown-snippet)
6. [Paraphrasing](#step-6-paraphrasing)
7. [Construction of a storyline for the argument source text](#step-7-construction-of-a-storyline-for-the-argument-source-text)
8. [Assembling the argument source text](#step-8-assembling-the-argument-source-text)
9. [Linking the precise reconstruction and the informal argumentative text](#step-9-linking-informal-presentation-and-formal-reconstruction)
#### Step 1: Generation of symbolic inference schemes
We construct the set of available inference schemes by systematically transforming the following 12 base schemes (6 from propositional and another 6 from predicate logic):
* modus ponens: `['Fa -> Gb', 'Fa', 'Gb']`
* chain rule: `['Fa -> Gb', 'Gb -> Hc', 'Fa -> Hc']`
* adjunction: `['Fa', 'Gb', 'Fa & Gb']`
* case analysis: `['Fa v Gb', 'Fa -> Hc', 'Gb -> Hc', 'Hc']`
* disjunctive syllogism: `['Fa v Gb', '¬Fa', 'Gb']`
* biconditional elimination: `['Fa <-> Gb', 'Fa -> Gb']`
* instantiation: `['(x): Fx -> Gx', 'Fa -> Ga']`
* hypothetical syllogism: `['(x): Fx -> Gx', '(x): Gx -> Hx', '(x): Fx -> Hx']`
* generalized biconditional elimination: `['(x): Fx <-> Gx', '(x): Fx -> Gx']`
* generalized adjunction: `['(x): Fx -> Gx', '(x): Fx -> Hx', '(x): Fx -> (Gx & Hx)']`
* generalized dilemma: `['(x): Fx -> (Gx v Hx)', '(x): Gx -> Ix', '(x): Hx -> Ix', '(x): Fx -> Ix']`
* generalized disjunctive syllogism: `['(x): Fx -> (Gx v Hx)', '(x): Fx -> ¬Gx', '(x): Fx -> Hx']`
(Regarding the propositional schemes, we allow for `a`=`b`=`c`.)
Further symbolic inference schemes are generated by applying the following transformations to each of these base schemes:
* *negation*: replace all occurrences of an atomic formula by its negation (for any number of such atomic sentences)
* *transposition*: transpose exactly one (generalized) conditional
* *dna*: simplify by applying duplex negatio affirmat
* *complex predicates*: replace all occurrences of a given atomic formula by a complex formula consisting in the conjunction or disjunction of two atomic formulas
* *de morgan*: apply de Morgan's rule once
These transformations are applied to the base schemes in the following order:
> **{base_schemes}** > negation_variants > transposition_variants > dna > **{transposition_variants}** > complex_predicates > negation_variants > dna > **{complex_predicates}** > de_morgan > dna > **{de_morgan}**
All transformations, except *dna*, are monotonic, i.e. simply add further schemes to the ones generated in the previous step. Results of bold steps are added to the list of valid inference schemes. Each inference scheme is stored with information about which transformations were used to create it. All in all, this gives us 5542 schemes.
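As a rough illustration, the following sketch applies one of these transformations (transposition of a single conditional) to a base scheme; the string-level representation of formulas is a simplification, not the actual generation code.
```python
def transpose(conditional: str) -> str:
    """Transpose a conditional 'A -> B' into '¬B -> ¬A' (simplified string-level sketch)."""
    antecedent, consequent = [part.strip() for part in conditional.split("->", 1)]
    return f"¬{consequent} -> ¬{antecedent}"

# modus ponens base scheme
base_scheme = ["Fa -> Gb", "Fa", "Gb"]
# transposition variant: transpose exactly one conditional (here, the first premise)
transposition_variant = [transpose(base_scheme[0])] + base_scheme[1:]
print(transposition_variant)  # ['¬Gb -> ¬Fa', 'Fa', 'Gb']
```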
#### Step 2: Assembling complex ("multi-hop") argument schemes from symbolic inference schemes
The complex argument *scheme*, which consists in multiple inferences, is assembled recursively by adding inferences that support premises of previously added inferences, as described by the following pseudocode:
```
argument = []
intermediary_conclusion = []
inference = randomly choose from list of all schemes
add inference to argument
for i in range(number_of_sub_arguments - 1):
target = randomly choose a premise which is not an intermediary_conclusion
inference = randomly choose a scheme whose conclusion is identical with target
add inference to argument
add target to intermediary_conclusion
return argument
```
The complex arguments we create are hence trees, with a root scheme.
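A minimal runnable version of this loop, assuming a hypothetical scheme representation in which each scheme is a dict with a list of premise formulas and one conclusion formula:
```python
import random

def assemble_argument(schemes, n_inferences):
    """Recursively add sub-arguments that support premises of previously added inferences."""
    argument = [random.choice(schemes)]
    intermediary_conclusions = []
    for _ in range(n_inferences - 1):
        # premises that are not yet supported by a sub-argument
        open_premises = [p for inference in argument for p in inference["premises"]
                         if p not in intermediary_conclusions]
        target = random.choice(open_premises)
        # choose a scheme whose conclusion is identical with the target premise
        supporting = random.choice([s for s in schemes if s["conclusion"] == target])
        argument.append(supporting)
        intermediary_conclusions.append(target)
    return argument
```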
Let's walk through this algorithm by means of an illustrative example and construct a symbolic argument scheme with two sub-arguments. First, we randomly choose some inference scheme (random sampling is controlled by weights that compensate for the fact that the list of schemes mainly contains, for combinatorial reasons, complex inferences), say:
```json
{
"id": "mp",
"base_scheme_group": "modus ponens",
"scheme_variant": ["complex_variant"],
"scheme": [
["${A}${a} -> (${B}${a} & ${C}${a})",
{"A": "${F}", "B": "${G}", "C": "${H}", "a": "${a}"}],
["${A}${a}", {"A": "${F}", "a": "${a}"}],
["${A}${a} & ${B}${a}", {"A": "${G}", "B": "${H}", "a": "${a}"}]
],
"predicate-placeholders": ["F", "G", "H"],
"entity-placeholders": ["a"]
}
```
Now, the target premise (= intermediary conclusion) of the next subargument is chosen, say: premise 1 of the already added root scheme. We filter the list of schemes for schemes whose conclusion structurally matches the target, i.e. has the form `${A}${a} -> (${B}${a} & ${C}${a})`. From this filtered list of suitable schemes, we randomly choose, for example:
```json
{
"id": "bicelim",
"base_scheme_group": "biconditional elimination",
"scheme_variant": [complex_variant],
"scheme": [
["${A}${a} <-> (${B}${a} & ${C}${a})",
{"A": "${F}", "B": "${G}", "C": "${H}", "a": "${a}"}],
["${A}${a} -> (${B}${a} & ${C}${a})",
{"A": "${F}", "B": "${G}", "C": "${H}", "a": "${a}"}]
],
"predicate-placeholders": ["F", "G", "H"],
"entity-placeholders": []
}
```
So, we have generated this 2-step symbolic argument scheme with two premises, one intermediary and one final conclusion:
```
(1) Fa <-> Ga & Ha
--
with biconditional elimination (complex variant) from 1
--
(2) Fa -> Ga & Ha
(3) Fa
--
with modus ponens (complex variant) from 2,3
--
(4) Ga & Ha
```
General properties of the argument are now determined and can be stored in the dataset (its `domain` is randomly chosen):
```json
"steps":2, // number of inference steps
"n_premises":2,
"base_scheme_groups":[
"biconditional elimination",
"modus ponens"
],
"scheme_variants":[
"complex variant"
],
"domain_id":"consumers_personalcare",
"domain_type":"persons"
```
#### Step 3: Creation of (precise and informal) natural-language argument schemes
In step 3, the *symbolic and formal* complex argument scheme is transformed into a *natural language* argument scheme by replacing symbolic formulas (e.g., `${A}${a} v ${B}${a}`) with suitable natural language sentence schemes (such as, `${a} is a ${A}, and ${a} is a ${B}` or `${a} is a ${A} and a ${B}`). Natural language sentence schemes which translate symbolic formulas are classified according to whether they are precise, informal, or imprecise.
For each symbolic formula, there are many (partly automatically, partly manually generated) natural-language sentence schemes which render the formula in a more or less precise way. Each of these natural-language "translations" of a symbolic formula is labeled according to whether it presents the logical form in a "precise", "informal", or "imprecise" way, e.g.:
|type|form|
|-|-|
|symbolic|`(x): ${A}x -> ${B}x`|
|precise|`If someone is a ${A}, then they are a ${B}.`|
|informal|`Every ${A} is a ${B}.`|
|imprecise|`${A} might be a ${B}.`|
The labels "precise", "informal", "imprecise" are used to control the generation of two natural-language versions of the argument scheme, a **precise** one (for creating the argdown snippet) and an **informal** one (for creating the source text). Moreover, the natural-language "translations" are also chosen in view of the domain (see below) of the to-be-generated argument, specifically in view of whether it is quantified over persons ("everyone", "nobody") or objects ("something, nothing").
So, as a **precise** rendition of our symbolic argument scheme, we may obtain:
```
(1) If, and only if, a is a F, then a is G and a is a H.
--
with biconditional elimination (complex variant) from 1
--
(2) If a is a F, then a is a G and a is a H.
(3) a is a F.
--
with modus ponens (complex variant) from 3,2
--
(4) a is G and a is a H.
```
Likewise, an **informal** rendition may be:
```
(1) a is a F if a is both a G and a H -- and vice versa.
--
with biconditional elimination (complex variant) from 1
--
(2) a is a G and a H, provided a is a F.
(3) a is a F.
--
with modus ponens (complex variant) from 3,2
--
(4) a is both a G and a H.
```
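A minimal sketch of how such labeled sentence schemes might be stored and selected; the two templates are taken from the table above, while the registry layout itself is an assumption.
```python
import random
from string import Template

# Hypothetical registry: symbolic form -> natural-language schemes grouped by precision label.
SENTENCE_SCHEMES = {
    "(x): ${A}x -> ${B}x": {
        "precise":  ["If someone is a ${A}, then they are a ${B}."],
        "informal": ["Every ${A} is a ${B}."],
    },
}

def render(form: str, label: str, substitutions: dict) -> str:
    scheme = random.choice(SENTENCE_SCHEMES[form][label])
    return Template(scheme).substitute(substitutions)

print(render("(x): ${A}x -> ${B}x", "informal",
             {"A": "fan-favourite since 'Moonlight'",
              "B": "Oscar-Nominee for a role in 'Eighth Grade'"}))
```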
#### Step 4: Substitution of placeholders with domain-specific predicates and names
Every argument falls within a domain. A domain provides
* a list of `subject names` (e.g., Peter, Sarah)
* a list of `object names` (e.g., New York, Lille)
* a list of `binary predicates` (e.g., [subject is an] admirer of [object])
These domains are manually created.
Replacements for the placeholders are sampled from the corresponding domain. Substitutes for entity placeholders (`a`, `b` etc.) are simply chosen from the list of `subject names`. Substitutes for predicate placeholders (`F`, `G` etc.) are constructed by combining `binary predicates` with `object names`, which yields unary predicates of the form "___ stands in some relation to some object". This combinatorial construction of unary predicates drastically increases the number of replacements available and hence the variety of generated arguments.
Assuming that we sample our argument from the domain `consumers personal care`, we may choose and construct the following substitutes for placeholders in our argument scheme:
* `F`: regular consumer of Kiss My Face soap
* `G`: regular consumer of Nag Champa soap
* `H`: occasional purchaser of Shield soap
* `a`: Orlando
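The combinatorial construction of unary predicates can be sketched as follows; the domain contents are the illustrative ones above, and the exact data layout is an assumption.
```python
import random

# Illustrative domain contents (layout is an assumption).
domain = {
    "subject_names": ["Orlando", "Sarah"],
    "object_names": ["Kiss My Face soap", "Nag Champa soap", "Shield soap"],
    "binary_predicates": ["regular consumer of", "occasional purchaser of"],
}

# Unary predicates of the form "<binary predicate> <object name>".
unary_predicates = [f"{pred} {obj}"
                    for pred in domain["binary_predicates"]
                    for obj in domain["object_names"]]

# Sample substitutes for the placeholders of the running example.
F, G, H = random.sample(unary_predicates, 3)
plcd_subs = {"a": random.choice(domain["subject_names"]), "F": F, "G": G, "H": H}
print(plcd_subs)
```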
#### Step 5: Creation of the argdown-snippet
From the **precise rendition** of the natural language argument scheme ([step 3](#step-3-creation-of-precise-and-informal-natural-language-argument-schemes)) and the replacements for its placeholders ([step 4](#step-4-substitution-of-placeholders-with-domain-specific-predicates-and-names)), we construct the `argdown-snippet` by simple substitution and formatting the complex argument in accordance with [argdown syntax](https://argdown.org).
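Since the placeholders already use `${...}` syntax, the substitution itself can be sketched with Python's `string.Template` (a simplification of the actual pipeline; premise (2) of the running example is used):
```python
from string import Template

# Premise (2) of the running example, as a precise natural-language scheme.
scheme_line = "If ${a} is a ${F}, then ${a} is a ${G} and ${a} is a ${H}."
substitutions = {
    "a": "Orlando",
    "F": "regular consumer of Kiss My Face soap",
    "G": "regular consumer of Nag Champa soap",
    "H": "occasional purchaser of Shield soap",
}
print(Template(scheme_line).substitute(substitutions))
```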
This yields, for our example from above:
```argdown
(1) If, and only if, Orlando is a regular consumer of Kiss My Face soap,
then Orlando is a regular consumer of Nag Champa soap and Orlando is
a occasional purchaser of Shield soap.
--
with biconditional elimination (complex variant) from 1
--
(2) If Orlando is a regular consumer of Kiss My Face soap, then Orlando
is a regular consumer of Nag Champa soap and Orlando is a occasional
purchaser of Shield soap.
(3) Orlando is a regular consumer of Kiss My Face soap.
--
with modus ponens (complex variant) from 3,2
--
(4) Orlando is a regular consumer of Nag Champa soap and Orlando is a
occasional purchaser of Shield soap.
```
That's the `argdown_snippet`. By construction of such a synthetic argument (from formal schemes, see [step 2](#step-2-assembling-complex-multi-hop-argument-schemes-from-symbolic-inference-schemes)), we already know its conclusions and their formalization (the value of the field `explicit` will be determined later).
```json
"conclusion":[
{
"ref_reco":4,
"text":"Orlando is a regular consumer of Nag Champa
soap and Orlando is a occasional purchaser of
Shield soap.",
"explicit": TBD
}
],
"conclusion_formalized":[
{
"ref_reco":4,
"form":"(${F2}${a1} & ${F3}${a1})"
}
],
"intermediary_conclusions":[
{
"ref_reco":2,
"text":"If Orlando is a regular consumer of Kiss My
Face soap, then Orlando is a regular consumer of
Nag Champa soap and Orlando is a occasional
purchaser of Shield soap.",
"explicit": TBD
}
]
"intermediary_conclusions_formalized":[
{
"ref_reco":2,
"text":"${F1}${a1} -> (${F2}${a1} & ${F3}${a1})"
}
],
```
... and the corresponding keys (see [step 4](#step-4-substitution-of-placeholders-with-domain-specific-predicates-and-names)):
```json
"plcd_subs":{
"a1":"Orlando",
"F1":"regular consumer of Kiss My Face soap",
"F2":"regular consumer of Nag Champa soap",
"F3":"occasional purchaser of Shield soap"
}
```
#### Step 6: Paraphrasing
From the **informal rendition** of the natural language argument scheme ([step 3](#step-3-creation-of-precise-and-informal-natural-language-argument-schemes)) and the replacements for its placeholders ([step 4](#step-4-substitution-of-placeholders-with-domain-specific-predicates-and-names)), we construct an informal argument (argument tree) by substitution.
The statements (premises, conclusions) of the informal argument are individually paraphrased in two steps:
1. rule-based and in a domain-specific way,
2. automatically by means of a specifically fine-tuned T5 model.
Each domain (see [step 4](#step-4-substitution-of-placeholders-with-domain-specific-predicates-and-names)) provides rules for substituting noun constructs ("is a supporter of X", "is a product made of X") with verb constructs ("supports x", "contains X"). These rules are applied whenever possible.
Next, each sentence is -- with a probability specified by parameter `lm_paraphrasing` -- replaced with an automatically generated paraphrase, using a [T5 model fine-tuned on the Google PAWS dataset](https://huggingface.co/Vamsi/T5_Paraphrase_Paws) and filtering for paraphrases with acceptable _cola_ and sufficiently high _STSB_ value (both as predicted by T5).
| |AAAC01|AAAC02|
|-|-|-|
|`lm_paraphrasing`|0.2|0.0|
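A minimal sketch of the automatic paraphrasing step with the linked PAWS-fine-tuned T5 model; the prompt format and generation settings are assumptions, and the cola/STSB filtering is omitted.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Vamsi/T5_Paraphrase_Paws"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

sentence = "Orlando is a regular consumer of Kiss My Face soap."
# Prompt format and sampling settings assumed from typical usage of this model.
inputs = tokenizer("paraphrase: " + sentence, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, do_sample=True, top_k=120, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```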
#### Step 7: Construction of a storyline for the argument source text
The storyline determines in which order the premises, intermediary conclusions and final conclusions are to be presented in the text paragraph to be constructed (`argument_source`). The storyline is constructed from the paraphrased informal complex argument (see [step 6](#step-6-paraphrasing)).
Before determining the order of presentation (storyline), the informal argument tree is pre-processed to account for:
* implicit premises,
* implicit intermediary conclusions, and
* implicit final conclusion,
which is documented in the dataset record as
```json
"presentation_parameters":{
"resolve_steps":[1],
"implicit_conclusion":false,
"implicit_premise":true,
"...":"..."
}
```
In order to make an intermediary conclusion *C* implicit, the inference to *C* is "resolved" by re-assigning all premises *from* which *C* is directly inferred *to* the inference to the (final or intermediary) conclusion which *C* supports.
Original tree:
```
P1 ... Pn
—————————
C Q1 ... Qn
—————————————
C'
```
Tree with resolved inference and implicit intermediary conclusion:
```
P1 ... Pn Q1 ... Qn
———————————————————
C'
```
The original argument tree in our example reads:
```
(1)
———
(2) (3)
———————
(4)
```
This might be pre-processed (by resolving the first inference step and dropping the first premise) to:
```
(3)
———
(4)
```
Given such a pre-processed argument tree, a storyline, which determines the order of presentation, can be constructed by specifying the direction of presentation and a starting point. The **direction** is either
* forward (premise AND ... AND premise THEREFORE conclusion)
* backward (conclusion SINCE premise AND ... AND premise)
Any conclusion in the pre-processed argument tree may serve as a starting point. The storyline is now constructed recursively, as illustrated in Figure 1. Integer labels of the nodes represent the order of presentation, i.e. the storyline. (Note that the starting point is not necessarily the statement which is presented first according to the storyline.)

So as to introduce redundancy, the storyline may be post-processed by repeating a premise that has been stated previously. The likelihood that a single premise is repeated is controlled by the presentation parameters:
```json
"presentation_parameters":{
"redundancy_frequency":0.1,
}
```
Moreover, **distractors**, i.e. arbitrary statements sampled from the argument's very domain, may be inserted in the storyline.
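A simplified sketch of this post-processing, treating the storyline as a flat list of statements; the probabilities follow the presentation parameters, and the distractor frequency parameter name is an assumption.
```python
import random

def postprocess(storyline, domain_statements,
                redundancy_frequency=0.1, distractor_frequency=0.1):
    """Inject redundancy and distractors into a storyline (flat list of statements)."""
    result = []
    for i, statement in enumerate(storyline):
        result.append(statement)
        # repeat a previously stated statement (in the actual pipeline: a premise)
        if i > 0 and random.random() < redundancy_frequency:
            result.append(random.choice(storyline[: i + 1]))
        # insert an arbitrary statement sampled from the argument's domain
        if domain_statements and random.random() < distractor_frequency:
            result.append(random.choice(domain_statements))
    return result
```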
#### Step 8: Assembling the argument source text
The `argument-source` is constructed by concatenating the statements of the informal argument ([step 6](#step-6-paraphrasing)) according to the order of the storyline ([step 7](#step-7-construction-of-a-storyline-for-the-argument-source-text)). In principle, each statement is prepended by a conjunction. There are four types of conjunction:
* THEREFORE: left-to-right inference
* SINCE: right-to-left inference
* AND: joins premises with similar inferential role
* MOREOVER: catch all conjunction
Each statement is assigned a specific conjunction type by the storyline.
For every conjunction type, we provide multiple natural-language terms which may figure as conjunctions when concatenating the statements, e.g. "So, necessarily,", "So", "Thus,", "It follows that", "Therefore,", "Consequently,", "Hence,", "In consequence,", "All this entails that", "From this follows that", "We may conclude that" for THEREFORE. The parameter
```json
"presentation_parameters":{
"drop_conj_frequency":0.1,
"...":"..."
}
```
determines the probability that a conjunction is omitted and a statement is concatenated without prepending a conjunction.
With the parameters given above we obtain the following `argument_source` for our example:
> Orlando is a regular consumer of Nag Champa soap and Orlando is a occasional purchaser of Shield soap, since Orlando is a regular consumer of Kiss My Face soap.
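A bare-bones sketch of this concatenation step: only the THEREFORE phrases are taken from the card, the other conjunction lists are illustrative, and in the real generator the conjunction type is assigned by the storyline.
```python
import random

CONJUNCTIONS = {
    "THEREFORE": ["Therefore,", "Thus,", "Consequently,", "Hence,", "It follows that"],
    "SINCE": ["since", "because"],
    "AND": ["And", "Plus,"],
    "MOREOVER": ["Moreover,", "Besides,"],
}

def assemble(statements, drop_conj_frequency=0.1):
    """statements: list of (conjunction_type, text) pairs in storyline order."""
    parts = []
    for i, (conj_type, text) in enumerate(statements):
        if i == 0 or random.random() < drop_conj_frequency:
            parts.append(text)  # statement concatenated without a conjunction
        else:
            parts.append(f"{random.choice(CONJUNCTIONS[conj_type])} {text}")
    return " ".join(parts)
```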
#### Step 9: Linking informal presentation and formal reconstruction
We can identify all statements _in the informal presentation_ (`argument_source`), categorize them according to their argumentative function GIVEN the logical reconstruction, and link them to the corresponding statements in the `argdown_snippet`. We distinguish `reason_statements` (AKA REASONS, which correspond to premises in the reconstruction) and `conclusion_statements` (AKA CONJECTURES, which correspond to the final and intermediary conclusions in the reconstruction):
```json
"reason_statements":[ // aka reasons
{
"text":"Orlando is a regular consumer of Kiss My Face soap",
"starts_at":109,
"ref_reco":3
}
],
"conclusion_statements":[ // aka conjectures
{
"text":"Orlando is a regular consumer of Nag Champa soap and
Orlando is a occasional purchaser of Shield soap",
"starts_at":0,
"ref_reco":4
}
]
```
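The `starts_at` offsets point into `argument_source`, so the linking can be checked mechanically; a small sanity-check sketch over a loaded record:
```python
def check_alignment(record):
    """Check that each reason/conjecture occurs at its recorded offset in the source text."""
    source = record["argument_source"]
    for key in ("reason_statements", "conclusion_statements"):
        for statement in record[key]:
            start = statement["starts_at"]
            assert source[start:start + len(statement["text"])] == statement["text"]
    return True
```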
Moreover, we are now able to classify all premises in the formal reconstruction (`argdown_snippet`) according to whether they are implicit or explicit given the informal presentation:
```json
"premises":[
{
"ref_reco":1,
"text":"If, and only if, Orlando is a regular consumer of Kiss
My Face soap, then Orlando is a regular consumer of Nag
Champa soap and Orlando is a occasional purchaser of
Shield soap.",
"explicit":False
},
{
"ref_reco":3,
"text":"Orlando is a regular consumer of Kiss My Face soap. ",
"explicit":True
}
],
"premises_formalized":[
{
"ref_reco":1,
"form":"${F1}${a1} <-> (${F2}${a1} & ${F3}${a1})"
},
{
"ref_reco":3,
"form":"${F1}${a1}"
}
]
```
#### Initial Data Collection and Normalization
N.A.
#### Who are the source language producers?
N.A.
### Annotations
#### Annotation process
N.A.
#### Who are the annotators?
N.A.
### Personal and Sensitive Information
N.A.
## Considerations for Using the Data
### Social Impact of Dataset
None
### Discussion of Biases
None
### Other Known Limitations
See [Betz and Richardson 2021](https://arxiv.org/abs/2110.01509).
## Additional Information
### Dataset Curators
Gregor Betz, Kyle Richardson
### Licensing Information
Creative Commons cc-by-sa-4.0
### Citation Information
```
@misc{betz2021deepa2,
title={DeepA2: A Modular Framework for Deep Argument Analysis with Pretrained Neural Text2Text Language Models},
author={Gregor Betz and Kyle Richardson},
year={2021},
eprint={2110.01509},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
<!--Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.-->
| 36,224 | [
dweb/squad_with_cola_scores | 2021-10-02T22:50:22.000Z | [
"region:us"
] | dweb | null | null | 0 | 81 | 2022-03-02T23:29:22 | Entry not found | 15 | [
frahman/github-issues | 2021-11-14T16:45:36.000Z | [
"region:us"
] | frahman | null | null | 0 | 81 | 2022-03-02T23:29:22 | Entry not found | 15 | [
frtna/es_it_Results-base-OPUS_Tatoeba | 2022-01-04T04:41:07.000Z | [
"region:us"
] | frtna | null | null | 0 | 81 | 2022-03-02T23:29:22 | - Model: [OPUS-MT](https://huggingface.co/Helsinki-NLP/opus-mt-es-it)
- Tested on: [Tatoeba]()
<br>
- Metric:
- bleu(tensorflow),
- sacrebleu(github->mjpost),
- google_bleu(nltk),
- rouge(google-research),
- meteor(nltk),
- ter(university of Maryland)
<br>
- Retrieved from: [Huggingface](https://huggingface.co/metrics/) [metrics](https://github.com/huggingface/datasets/blob/master/metrics/)
- Script used for translation and testing: [https://gitlab.com/hmtkvs/machine_translation/-/tree/production-stable](https://gitlab.com/hmtkvs/machine_translation/-/tree/production-stable)
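For reference, a minimal sketch of how scores like the ones below could be recomputed with the Hugging Face metric implementations listed above; the hypothesis and reference lists are placeholders.
```python
from datasets import load_metric

# Placeholder hypotheses and single references (Italian target side).
hypotheses = ["Questa è una prova ."]
references = [["Questa è una prova ."]]

sacrebleu = load_metric("sacrebleu")
print(sacrebleu.compute(predictions=hypotheses, references=references)["score"])

meteor = load_metric("meteor")
print(meteor.compute(predictions=hypotheses, references=[r[0] for r in references])["meteor"])
```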
## Info
## mtdata-OPUS Tatoeba (length=14178, single reference)
**bleu** : 0.5228
<br>
**sacrebleu** : 0.5652
<br>
**google_bleu** : 0.5454
<br>
**rouge-mid** : precision=0.7792, recall=0.7899, f_measure=0.7796
<br>
**meteor** : 0.7557
<br>
**ter** : score=0.3003, num_edits=24654, ref_length=82079.0
## OPUS Tatoeba (length = 5000, multi references)
**bleu** : 0.5165
<br>
**sacrebleu** : 0.7098
<br>
**google_bleu** : 0.5397
<br>
**rouge-mid** : precision=0.9965, recall=0.5021, f_measure=0.6665
<br>
**meteor** : 0.3344
<br>
**ter** : score=0.6703, num_edits=38883, ref_length=58000.0 | 1,198 | [
frtna/opensubtitles_mt | 2021-12-05T20:53:04.000Z | [
"region:us"
] | frtna | This new dataset is designed to be used in the scope of a PhD project. | @InProceedings{phd,
title = {Open Subtitles Machine Translation Dataset},
author={hmtkvs, Inc.
},
year={2021}
} | 0 | 81 | 2022-03-02T23:29:22 | annotations_creators:
- expert-generated
language_creators:
- crowdsourced
languages:
- es
- it
licenses:
- cc-by-4.0
multilinguality:
- multilingual
- translation
pretty_name: ''
source_datasets:
- original
task_categories:
- conditional-text-generation
task_ids:
- machine-translation | 286 | [
geninhu/vi_opus100_processed | 2022-01-23T02:34:05.000Z | [
"region:us"
] | geninhu | null | null | 0 | 81 | 2022-03-02T23:29:22 | Entry not found | 15 | [