id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
NYTK/HuWNLI | 2023-03-27T09:53:33.000Z | [
"task_categories:other",
"task_ids:coreference-resolution",
"annotations_creators:found",
"language_creators:found",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|other",
"language:hu",
"license:cc-by-sa-4.0",
"structure... | NYTK | null | null | null | 3 | 21 | ---
annotations_creators:
- found
language_creators:
- found
- expert-generated
language:
- hu
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- extended|other
task_categories:
- other
task_ids:
- coreference-resolution
pretty_name: HuWNLI
tags:
- structure-prediction
---
# Dataset Card for HuWNLI
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
[HuWNLI dataset](https://github.com/nytud/HuWNLI)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
[lnnoemi](mailto:ligeti-nagy.noemi@nytud.hu)
### Dataset Summary
This is the dataset card for the Hungarian translation of the Winograd schemata, formatted as an inference task. A Winograd schema is a pair of sentences that differ in only one or two words and contain an ambiguity that is resolved in opposite ways in the two sentences, requiring world knowledge and reasoning for its resolution (Levesque et al. 2012). This dataset is also part of the Hungarian Language Understanding Evaluation Benchmark Kit [HuLU](https://hulu.nytud.hu). The corpus was created by translating and manually curating the original English Winograd schemata. The NLI format was created by replacing the ambiguous pronoun with each possible referent (the method is described in GLUE's paper, Wang et al. 2019). We extended the set of sentence pairs derived from the schemata with the translation of the sentence pairs that - together with the Winograd schema sentences - build up the WNLI dataset of GLUE.
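The pronoun-substitution step described above can be sketched as a small function. This is an illustrative reconstruction, not the actual script used by the authors, and the English schema sentence and referents below are made up for demonstration:

```python
def schema_to_nli(sentence, clause, pronoun, referents):
    """Build premise/hypothesis pairs from one Winograd schema sentence:
    the hypothesis is the clause containing the ambiguous pronoun,
    with the pronoun replaced by each candidate referent."""
    return [
        {"sentence1": sentence,                            # premise
         "sentence2": clause.replace(pronoun, referent, 1)}  # hypothesis
        for referent in referents
    ]

# Hypothetical example schema, not taken from the dataset.
pairs = schema_to_nli(
    "The man couldn't lift his son because he was so heavy.",
    "He was so heavy.",
    "He",
    ["The man", "The son"],
)
# Two candidate hypotheses; only one is entailed by the premise.
```

Each schema thus yields one entailed and one non-entailed hypothesis, which is how the binary labels of the NLI format arise.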
### Languages
The BCP-47 code for Hungarian, the only represented language in this dataset, is hu-HU.
## Dataset Structure
### Data Instances
For each instance, there is an orig_id, an id, two sentences and a label.
An example:
```json
{"orig_id": "4",
 "id": "4",
 "sentence1": "A férfi nem tudta felemelni a fiát, mert olyan nehéz volt.",
 "sentence2": "A fia nehéz volt.",
 "label": "1"
}
```
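A record in this shape can be parsed with the standard library alone. A minimal sketch (key names are normalized to lowercase and the label cast to an integer, to guard against casing inconsistencies in the raw data):

```python
import json

record = '''{"orig_id": "4",
 "id": "4",
 "sentence1": "A férfi nem tudta felemelni a fiát, mert olyan nehéz volt.",
 "sentence2": "A fia nehéz volt.",
 "Label": "1"}'''

# Normalize key casing and cast the label to an integer (1 = entailment).
raw = json.loads(record)
instance = {k.lower(): v for k, v in raw.items()}
instance["label"] = int(instance["label"])
```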
### Data Fields
- orig_id: the original id of this sentence pair (more precisely, its English counterpart's) in GLUE's WNLI dataset;
- id: unique id of the instances;
- sentence1: the premise;
- sentence2: the hypothesis;
- label: "1" if sentence2 is entailed by sentence1, and "0" otherwise.
### Data Splits
The data is distributed in three splits: a training set (562 instances), a development set (59), and a test set (134). The splits follow those of GLUE's WNLI but contain fewer instances, as many sentence pairs had to be discarded as untranslatable into Hungarian. The training and development sets were extended with NLI sentence pairs derived from the Hungarian translation of 6 Winograd schemata left out of the original WNLI dataset.
The test set's sentence pairs are translated from GLUE's WNLI test set, which was distributed without labels; three annotators labeled the Hungarian sentence pairs.
The test set of HuWNLI is also distributed without labels. To evaluate your model, please [contact us](mailto:ligeti-nagy.noemi@nytud.hu), or check [HuLU's website](hulu.nytud.hu) for an automatic evaluation (this feature is under construction at the moment).
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The data is a translation of the English Winograd schemata and the additional sentence pairs of GLUE's WNLI. Each schema and sentence pair was translated by a human translator, and each schema was manually curated by a linguistic expert. The schemata were transformed into NLI format by a linguistic expert.
During the adaptation process, we found two erroneous labels in GLUE's WNLI train set (id 347 and id 464). We corrected them in our dataset.
## Additional Information
Average human performance on the test set is 92.78% (accuracy).
### Licensing Information
HuWNLI is released under the Creative Commons Attribution-ShareAlike 4.0 International License.
### Citation Information
If you use this resource or any part of its documentation, please refer to:
Ligeti-Nagy, N., Héja, E., Laki, L. J., Takács, D., Yang, Z. Gy. and Váradi, T. (2023) Hát te mekkorát nőttél! - A HuLU első életéve új adatbázisokkal és webszolgáltatással \[Look at how much you have grown! - The first year of HuLU with new databases and with webservice\]. In: Berend, G., Gosztolya, G. and Vincze, V. (eds), XIX. Magyar Számítógépes Nyelvészeti Konferencia. Szeged, Szegedi Tudományegyetem, Informatikai Intézet. 217-230.
```
@inproceedings{ligetinagy2023hulu,
title={Hát te mekkorát nőttél! - A HuLU első életéve új adatbázisokkal és webszolgáltatással},
author={Ligeti-Nagy, N. and Héja, E. and Laki, L. J. and Takács, D. and Yang, Z. Gy. and Váradi, T.},
booktitle={XIX. Magyar Számítógépes Nyelvészeti Konferencia},
year={2023},
editor = {Berend, Gábor and Gosztolya, Gábor and Vincze, Veronika},
address = {Szeged},
publisher = {JATEPress},
pages = {217–230}
}
```
Ligeti-Nagy, N., Ferenczi, G., Héja, E., Jelencsik-Mátyus, K., Laki, L. J., Vadász, N., Yang, Z. Gy. and Váradi, T. (2022) HuLU: magyar nyelvű benchmark adatbázis kiépítése a neurális nyelvmodellek kiértékelése céljából \[HuLU: Hungarian benchmark dataset to evaluate neural language models\]. In: Berend, Gábor and Gosztolya, Gábor and Vincze, Veronika (eds), XVIII. Magyar Számítógépes Nyelvészeti Konferencia. JATEPress, Szeged. 431–446.
```
@inproceedings{ligetinagy2022hulu,
title={HuLU: magyar nyelvű benchmark adatbázis kiépítése a neurális nyelvmodellek kiértékelése céljából},
author={Ligeti-Nagy, N. and Ferenczi, G. and Héja, E. and Jelencsik-Mátyus, K. and Laki, L. J. and Vadász, N. and Yang, Z. Gy. and Váradi, T.},
booktitle={XVIII. Magyar Számítógépes Nyelvészeti Konferencia},
year={2022},
editor = {Berend, Gábor and Gosztolya, Gábor and Vincze, Veronika},
address = {Szeged},
publisher = {JATEPress},
pages = {431–446}
}
```
and to:
Levesque, Hector, Davis, Ernest, Morgenstern, Leora (2012) The Winograd Schema Challenge. In: Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning.
```
@inproceedings{levesque2012winograd,
title={The Winograd Schema Challenge},
author={Levesque, Hector and Davis, Ernest and Morgenstern, Leora},
booktitle={Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning},
year={2012},
organization={Citeseer}
}
```
### Contributions
Thanks to [lnnoemi](https://github.com/lnnoemi) for adding this dataset. |
NbAiLab/NPSC | 2023-04-25T09:52:08.000Z | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:2G<n<1B",
"source_datasets:original",
"language:no",
"language:nb",
"language:nn",
"license:cc0... | NbAiLab | The Norwegian Parliament Speech Corpus (NPSC) is a corpus for training Norwegian ASR (Automatic Speech Recognition) models. The corpus was created by Språkbanken at the National Library of Norway.
NPSC is based on sound recordings from meetings in the Norwegian Parliament. These talks are orthographically transcribed into either Norwegian Bokmål or Norwegian Nynorsk. In addition to the data actually included in this dataset, a significant amount of metadata is included in the original corpus. Through the speaker id there is additional information about the speaker, such as gender, age, and place of birth (i.e., dialect). Through the proceedings id the corpus can be linked to the official proceedings from the meetings.
In total, the corpus contains sound recordings from 40 entire days of meetings. This amounts to 140 hours of speech, 65,000 sentences, or 1.2 million words. | @inproceedings{johansen2019ner,
title={},
author={},
booktitle={LREC 2022},
year={2022},
url={https://arxiv.org/abs/}
} | null | 5 | 21 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- 'no'
- nb
- nn
license:
- cc0-1.0
multilinguality:
- monolingual
size_categories:
- 2G<n<1B
source_datasets:
- original
task_categories:
- automatic-speech-recognition
- audio-classification
pretty_name: NPSC
tags:
- speech-modeling
---
# Dataset Card for NbAiLab/NPSC
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Statistics](#statistics)
- [Document Types](#document-types)
- [Languages](#languages)
- [Publish Period](#publish-period)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.nb.no/sprakbanken/
- **Repository:** https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-58/
- **Paper:** https://www.nb.no/sprakbanken/
- **Point of Contact:** [Per Erik Solberg](mailto:per.solberg@nb.no)
The Norwegian Parliamentary Speech Corpus (NPSC) is a speech corpus made by the Norwegian Language Bank at the National Library of Norway in 2019-2021. The NPSC consists of recordings of speech from Stortinget, the Norwegian parliament, and corresponding orthographic transcriptions to Norwegian Bokmål and Norwegian Nynorsk. All transcriptions are done manually by trained linguists or philologists, and the manual transcriptions are subsequently proofread to ensure consistency and accuracy. Entire days of Parliamentary meetings are transcribed in the dataset.
This repository contains a version of the NPSC in the 🤗 Dataset Format. Note that the official release of the dataset, which can be found in [the repository of the Norwegian Language Bank](https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-58/), contains more information than the version found here, including word-level metadata, metadata about the speakers, and detailed documentation.
## How to Use
```python
# Loads the 16K Bokmål corpus in streaming mode
from datasets import load_dataset
data = load_dataset("NbAiLab/NPSC", "16K_mp3_bokmaal", streaming=True)
```
## Dataset Summary
The NPSC dataset contains JSON lines with the transcription and metadata for each sentence. The data loader will add audio data to this structure. Here is an example JSON object:
```json
{
"sentence_id": 49853,
"sentence_order": 0,
"speaker_id": 32,
"meeting_date": "20170110",
"speaker_name": "Olemic Thommessen",
"sentence_text": "Stortingets møte er lovlig satt",
"sentence_language_code": "nb-NO",
"text": "Stortingets møte er lovlig satt",
"start_time": 320246,
"end_time": 323590,
"normsentence_text": "Stortingets møte er lovlig satt",
"transsentence_text": "Stortingets møte er lovleg sett",
"translated": 1,
"audio": {"path": "audio/20170110-095504_320246_323590.wav","array": [.......]}
}
```
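Since `start_time` and `end_time` are given in milliseconds relative to the meeting audio, a segment's duration can be derived directly. A small sketch using the values from the example record above:

```python
def segment_duration_seconds(start_ms: int, end_ms: int) -> float:
    """Duration of a transcribed segment, in seconds."""
    return (end_ms - start_ms) / 1000.0

# Values from the example sentence above.
duration = segment_duration_seconds(320246, 323590)  # 3.344 seconds
```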
## Data Fields
|**Key** | **Type** | **Description** |
|:-----------|:------------|:------------|
|**sentence_id:** | Integer | Unique identifier of the sentence |
|**sentence_order** | Integer | A number indicating the order of the sentences in the meeting |
|**speaker_id** | Integer | The ID of the speaker. This can be linked to the original dataset containing thorough demographic and dialectal information about the speaker. |
|**meeting_date** | String | The date for the meeting in the format __yyyymmdd__ |
| **speaker_name** | String | Name of the speaker. All speakers were members of the Norwegian Parliament or members of the Norwegian Government at the meeting date |
| **sentence_text** | String | The sentence text. The transcribed text string of the sentence in non-normalized form. This is the text of the manual transcriptions, without any postprocessing (apart from corrections of known errors). It may contain interrupted words, non-standard words and function words with a pronunciation deviating from the written form. Detailed metadata about the words in the sentence can be found in the word-tokenized version of the corpus in the official release of the dataset. |
| **sentence_language_code** | String | The language code of the sentence. The following alternatives exist in the file: ['nb-NO', 'nn-NO', 'en-US']|
| **text** | String | sentence text. This is a copy of "sentence_text". It is included here to make it more convenient to interleave with other datasets.|
| **start_time** | Integer | The start time of the sentence in milliseconds. This time is relative to the start of the audio file of the entire meeting, which can be accessed in the official release |
| **end_time** | Integer | End time. See comment above. |
| **normsentence_text** | String | Normalized sentence text. In this version of the transcription, numbers and dates are written in digits on standardized formats, and common abbreviations are used. These modifications to the original transcriptions are produced automatically using normalization grammars |
| **transsentence_text** | String | Translated sentence text. Whenever the original transcription is in Bokmål (nb-NO), this field contains a machine-translated version in Nynorsk (nn-NO), and vice versa |
| **translated** | Integer | A flag indicating whether a machine-translated version has been produced or not. Sentences in en-US have not been translated |
| **audio** | Array | The data loader will encode the associated audio files and provide them as an array containing 'path', 'array', and 'sampling_rate' |
#### Initial Data Collection
The procedure for the dataset creation is described in detail in our paper.
## Statistics
| Feature | Value |
|:---------|-----------:|
| Duration, pauses included | 140.3 hours|
| Duration, pauses not included | 125.7 hours |
| Word count | 1.2 million |
| Sentence count | 64,531 |
| Language distribution | Nynorsk: 12.8%|
| | Bokmål: 87.2%|
| Gender distribution | Female: 38.3% |
| | Male: 61.7% |
## Considerations for Using the Data
This corpus contains speech data. All recordings are of members of Parliament in a public setting, and can be distributed without any restrictions.
### Dataset Creators and Curators
The content of the dataset was created by the Norwegian Language Bank (Språkbanken) at the National Library of Norway. [Javier de la Rosa](mailto:versae@nb.no), [Freddy Wetjen](mailto:freddy.wetjen@nb.no), [Per Egil Kummervold](mailto:per.kummervold@nb.no), and [Andre Kaasen](mailto:andre.kasen@nb.no) all contributed in making this into a HuggingFace Dataset. Thanks to the HuggingFace team for assistance.
## License
The sound and the transcriptions are released under the [CC0 license](https://creativecommons.org/publicdomain/zero/1.0/). The curation of the HuggingFace Dataset is released under the [CC BY-SA 3.0 license](https://creativecommons.org/licenses/by-sa/3.0/).
### Citation Information
The following article gives detailed information about the corpus. Please refer to the article and this page if you are using this dataset:
```
@inproceedings{solberg2022norwegian,
title={The Norwegian Parliamentary Speech Corpus},
author={Solberg, Per Erik and Ortiz, Pablo},
booktitle={Proceedings of the 13th Language Resources and Evaluation Conference},
url={http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.106.pdf},
year={2022}
}
```
|
Tevatron/wikipedia-curated-corpus | 2021-09-23T01:58:40.000Z | [
"region:us"
] | Tevatron | null | @inproceedings{karpukhin-etal-2020-dense,
title = "Dense Passage Retrieval for Open-Domain Question Answering",
author = "Karpukhin, Vladimir and Oguz, Barlas and Min, Sewon and Lewis, Patrick and Wu, Ledell and Edunov, Sergey and Chen, Danqi and Yih, Wen-tau",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.550",
doi = "10.18653/v1/2020.emnlp-main.550",
pages = "6769--6781",
} | null | 0 | 21 | Entry not found |
dalle-mini/open-images | 2021-09-10T07:09:01.000Z | [
"region:us"
] | dalle-mini | null | null | null | 4 | 21 | Entry not found |
echarlaix/vqa-lxmert | 2022-02-09T23:41:22.000Z | [
"license:apache-2.0",
"region:us"
] | echarlaix | VQA is a new dataset containing open-ended questions about images.
These questions require an understanding of vision, language and commonsense knowledge to answer. | @inproceedings{antol2015vqa,
title={Vqa: Visual question answering},
author={Antol, Stanislaw and Agrawal, Aishwarya and Lu, Jiasen and Mitchell, Margaret and Batra, Dhruv and Zitnick, C Lawrence and Parikh, Devi},
booktitle={Proceedings of the IEEE international conference on computer vision},
pages={2425--2433},
year={2015}
} | null | 0 | 21 | ---
license: apache-2.0
---
|
qanastek/WMT-16-PubMed | 2022-10-22T15:20:12.000Z | [
"task_categories:translation",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:extended",
"language:bg",
"language:cs",
"language:da",
"language:de",
"lan... | qanastek | WMT'16 Biomedical Translation Task - PubMed parallel datasets
http://www.statmt.org/wmt16/biomedical-translation-task.html | @inproceedings{bojar-etal-2016-findings,
title = Findings of the 2016 Conference on Machine Translation,
author = {
Bojar, Ondrej and
Chatterjee, Rajen and
Federmann, Christian and
Graham, Yvette and
Haddow, Barry and
Huck, Matthias and
Jimeno Yepes, Antonio and
Koehn, Philipp and
Logacheva, Varvara and
Monz, Christof and
Negri, Matteo and
Neveol, Aurelie and
Neves, Mariana and
Popel, Martin and
Post, Matt and
Rubino, Raphael and
Scarton, Carolina and
Specia, Lucia and
Turchi, Marco and
Verspoor, Karin and
Zampieri, Marcos
},
booktitle = Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers,
month = aug,
year = 2016,
address = Berlin, Germany,
publisher = Association for Computational Linguistics,
url = https://aclanthology.org/W16-2301,
doi = 10.18653/v1/W16-2301,
pages = 131--198,
} | null | 2 | 21 | ---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- found
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
multilinguality:
- multilingual
pretty_name: WMT-16-PubMed
size_categories:
- 100K<n<1M
source_datasets:
- extended
task_categories:
- translation
- machine-translation
task_ids:
- translation
- machine-translation
---
# WMT-16-PubMed : Parallel biomedical translation corpus built from PubMed
## Table of Contents
- [Dataset Card for [Needs More Information]](#dataset-card-for-needs-more-information)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.statmt.org/wmt16/biomedical-translation-task.html
- **Repository:** https://github.com/biomedical-translation-corpora/corpora
- **Paper:** https://aclanthology.org/W16-2301/
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Yanis Labrak](mailto:yanis.labrak@univ-avignon.fr)
### Dataset Summary
`WMT-16-PubMed` is a parallel corpus for neural machine translation collected and aligned for ACL 2016 during the [WMT'16 Shared Task: Biomedical Translation Task](https://www.statmt.org/wmt16/biomedical-translation-task.html).
### Supported Tasks and Leaderboards
`translation`: The dataset can be used to train a model for translation.
### Languages
The corpus consists of pairs of source and target sentences covering 4 languages, with English on one side of every pair:
**List of languages:** `English (en)`, `Spanish (es)`, `French (fr)`, `Portuguese (pt)`.
## Load the dataset with HuggingFace
```python
from datasets import load_dataset
dataset = load_dataset("qanastek/WMT-16-PubMed", split='train', download_mode='force_redownload')
print(dataset)
print(dataset[0])
```
## Dataset Structure
### Data Instances
```plain
lang doc_id workshop publisher source_text target_text
0 en-fr 26839447 WMT'16 Biomedical Translation Task - PubMed pubmed Global Health: Where Do Physiotherapy and Reha... La place des cheveux et des poils dans les rit...
1 en-fr 26837117 WMT'16 Biomedical Translation Task - PubMed pubmed Carabin Les Carabins
2 en-fr 26837116 WMT'16 Biomedical Translation Task - PubMed pubmed In Process Citation Le laboratoire d'Anatomie, Biomécanique et Org...
3 en-fr 26837115 WMT'16 Biomedical Translation Task - PubMed pubmed Comment on the misappropriation of bibliograph... Du détournement des références bibliographique...
4 en-fr 26837114 WMT'16 Biomedical Translation Task - PubMed pubmed Anti-aging medicine, a science-based, essentia... La médecine anti-âge, une médecine scientifiqu...
... ... ... ... ... ... ...
973972 en-pt 20274330 WMT'16 Biomedical Translation Task - PubMed pubmed Myocardial infarction, diagnosis and treatment Infarto do miocárdio; diagnóstico e tratamento
973973 en-pt 20274329 WMT'16 Biomedical Translation Task - PubMed pubmed The health areas politics A política dos campos de saúde
973974 en-pt 20274328 WMT'16 Biomedical Translation Task - PubMed pubmed The role in tissue edema and liquid exchanges ... O papel dos tecidos nos edemas e nas trocas lí...
973975 en-pt 20274327 WMT'16 Biomedical Translation Task - PubMed pubmed About suppuration of the wound after thoracopl... Sôbre as supurações da ferida operatória após ...
973976 en-pt 20274326 WMT'16 Biomedical Translation Task - PubMed pubmed Experimental study of liver lesions in the tre... Estudo experimental das lesões hepáticas no tr...
```
### Data Fields
**lang** : The pair of source and target language of type `String`.
**source_text** : The source text of type `String`.
**target_text** : The target text of type `String`.
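Given these fields, records for a single language pair can be selected with ordinary Python. A minimal sketch over hypothetical in-memory records that follow the schema described above:

```python
def filter_pair(records, lang_pair):
    """Keep only the records whose source/target pair matches lang_pair."""
    return [r for r in records if r["lang"] == lang_pair]

# Hypothetical records following the corpus schema.
records = [
    {"lang": "en-fr", "source_text": "Carabin", "target_text": "Les Carabins"},
    {"lang": "en-pt", "source_text": "The health areas politics",
     "target_text": "A política dos campos de saúde"},
]
en_fr = filter_pair(records, "en-fr")
```

With the 🤗 `datasets` API, the same selection could be expressed with `dataset.filter`, at the cost of a full pass over the corpus.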
### Data Splits
`en-es` : 285,584
`en-fr` : 614,093
`en-pt` : 74,300
## Dataset Creation
### Curation Rationale
For details, check the corresponding [pages](https://www.statmt.org/wmt16/biomedical-translation-task.html).
### Source Data
<!-- #### Initial Data Collection and Normalization
ddd -->
#### Who are the source language producers?
The shared task was organized by :
* Antonio Jimeno Yepes (IBM Research Australia)
* Aurélie Névéol (LIMSI, CNRS, France)
* Mariana Neves (Hasso-Plattner Institute, Germany)
* Karin Verspoor (University of Melbourne, Australia)
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Considerations for Using the Data
### Other Known Limitations
The nature of the task introduces variability in the quality of the target translations.
## Additional Information
### Dataset Curators
__Hugging Face WMT-16-PubMed__: Labrak Yanis, Dufour Richard (Not affiliated with the original corpus)
__WMT'16 Shared Task: Biomedical Translation Task__:
* Antonio Jimeno Yepes (IBM Research Australia)
* Aurélie Névéol (LIMSI, CNRS, France)
* Mariana Neves (Hasso-Plattner Institute, Germany)
* Karin Verspoor (University of Melbourne, Australia)
<!-- ### Licensing Information
ddd -->
### Citation Information
Please cite the following paper when using this dataset.
```latex
@inproceedings{bojar-etal-2016-findings,
    title = {Findings of the 2016 Conference on Machine Translation},
    author = {
      Bojar, Ondrej and
      Chatterjee, Rajen and
      Federmann, Christian and
      Graham, Yvette and
      Haddow, Barry and
      Huck, Matthias and
      Jimeno Yepes, Antonio and
      Koehn, Philipp and
      Logacheva, Varvara and
      Monz, Christof and
      Negri, Matteo and
      Neveol, Aurelie and
      Neves, Mariana and
      Popel, Martin and
      Post, Matt and
      Rubino, Raphael and
      Scarton, Carolina and
      Specia, Lucia and
      Turchi, Marco and
      Verspoor, Karin and
      Zampieri, Marcos
    },
    booktitle = {Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers},
    month = aug,
    year = {2016},
    address = {Berlin, Germany},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/W16-2301},
    doi = {10.18653/v1/W16-2301},
    pages = {131--198},
}
```
|
stas/wmt16-en-ro-pre-processed | 2021-02-16T03:58:06.000Z | [
"region:us"
] | stas | null | @InProceedings{huggingface:dataset,
title = {WMT16 English-Romanian Translation Data with further preprocessing},
authors={},
year={2016}
} | null | 0 | 21 | # WMT16 English-Romanian Translation Data w/ further preprocessing
The original instructions are [here](https://github.com/rsennrich/wmt16-scripts/tree/master/sample).
This pre-processed dataset was created by running:
```
git clone https://github.com/rsennrich/wmt16-scripts
cd wmt16-scripts
cd sample
./download_files.sh
./preprocess.sh
```
It was originally used by `transformers` [`finetune_trainer.py`](https://github.com/huggingface/transformers/blob/641f418e102218c4bf16fcd3124bfebed6217ef6/examples/seq2seq/finetune_trainer.py)
The data itself resides at https://cdn-datasets.huggingface.co/translation/wmt_en_ro.tar.gz
If you would like to convert it to JSON lines, I've included a small script, `convert-to-jsonlines.py`, that will do it for you. But if you're using the `datasets` API, it will be done on the fly.
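The bundled `convert-to-jsonlines.py` script is not reproduced here, but the conversion amounts to zipping the parallel source/target files into one JSON object per line. A hypothetical sketch of that idea (the `translation` key layout is an assumption, modeled on the format commonly used for translation corpora):

```python
import json

def to_jsonlines(source_lines, target_lines, src_lang="en", tgt_lang="ro"):
    """Pair up parallel source/target lines as JSON-lines records."""
    return [
        json.dumps({"translation": {src_lang: s.strip(), tgt_lang: t.strip()}},
                   ensure_ascii=False)
        for s, t in zip(source_lines, target_lines)
    ]

# Toy parallel lines standing in for the real wmt_en_ro files.
lines = to_jsonlines(["Hello .\n"], ["Salut .\n"])
```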
|
versae/bibles | 2022-08-27T09:11:17.000Z | [
"language:sq",
"language:ar",
"language:az",
"language:be",
"language:bg",
"language:ceb",
"language:zh",
"language:cs",
"language:da",
"language:en",
"language:es",
"language:fi",
"language:fr",
"language:de",
"language:el",
"language:ht",
"language:he",
"language:hi",
"language... | versae | Multilingual Bibles | @InProceedings{--,
author = {---},
title = {---},
booktitle = {---},
year = 2021,
address = "---"
} | null | 0 | 21 | ---
language:
- sq
- ar
- az
- be
- bg
- ceb
- zh
- cs
- da
- en
- es
- fi
- fr
- de
- el
- ht
- he
- hi
- hu
- it
- ko
- la
- nl
- no
- pt
- rm
- ru
- sw
- ta
- th
- tr
- vi
--- |
westphal-jan/mnli_entailment | 2022-04-19T15:13:12.000Z | [
"region:us"
] | westphal-jan | null | null | null | 0 | 21 | Entry not found |
openclimatefix/gfs-surface-pressure-2.0deg | 2022-06-28T18:38:27.000Z | [
"region:us"
] | openclimatefix | null | null | null | 0 | 21 | Entry not found |
arize-ai/xtreme_en | 2022-07-01T17:23:29.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|xtreme",
"language:en",
"license:mit",
"region:us"
] | arize-ai | This dataset was crafted to be used in our tutorial [Link to the tutorial when
ready]. It consists of product reviews from an e-commerce store. The reviews
are labeled on a scale from 1 to 5 (stars). The training & validation sets are
fully composed of reviews written in English. However, the production set has
some reviews written in Spanish. At Arize, we work to surface this issue and
help you solve it. | # @InProceedings{huggingface:dataset,
# title = {A great new dataset},
# author={huggingface, Inc.
# },
# year={2020}
# }
# | null | 0 | 21 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: named-entity-recognition-en-no-drift
size_categories:
- 10K<n<100K
source_datasets:
- extended|xtreme
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# Dataset Card for `reviews_with_drift`
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation sets are purely obtained from the Movie Review Dataset, while the production set is mixed. Some other features have been added (`age`, `gender`, `context`), as well as a made-up timestamp `prediction_ts` of when the inference took place.
### Supported Tasks and Leaderboards
`text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).
### Languages
Text is mainly written in English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset. |
BDas/ArabicNLPDataset | 2022-09-26T18:52:01.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:ar",
] | BDas | The dataset, prepared in Arabic, includes 10,000 test, 10,000 validation and 80,000 training samples.
The data is composed of customer comments collected from e-commerce sites. | ----ArabicNLPDataset---- | null | 0 | 21 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- ar
license:
- other
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
- multi-label-classification
pretty_name: 'ArabicNLPDataset'
---
# Dataset Card for "ArabicNLPDataset"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/BihterDass/ArabicTextClassificationDataset]
- **Repository:** [https://github.com/BihterDass/ArabicTextClassificationDataset]
- **Size of downloaded dataset files:** 23.5 MB
- **Size of the generated dataset:** 23.5 MB
### Dataset Summary
The dataset was compiled from user comments on e-commerce sites. It consists of 80,000 training, 10,000 validation and 10,000 test samples. Data were classified into 3 classes: positive (pos), negative (neg) and natural (nor). The data is available on GitHub.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
#### arabic-dataset-v1
- **Size of downloaded dataset files:** 23.5 MB
- **Size of the generated dataset:** 23.5 MB
### Data Fields
The data fields are the same among all splits.
#### arabic-dataset-v-v1
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `positive` (2), `natural` (1), `negative` (0).
### Data Splits
| |train |validation|test |
|----|--------:|---------:|---------:|
|Data| 80000 | 10000 | 10000 |
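Since the card maps labels to integer ids (`negative` → 0, `natural` → 1, `positive` → 2), a small helper for converting between the two can be handy. This is a sketch based only on the mapping given in the Data Fields section:

```python
# Label mapping as described in the card's Data Fields section.
ID2LABEL = {0: "negative", 1: "natural", 2: "positive"}
LABEL2ID = {name: idx for idx, name in ID2LABEL.items()}

def decode_labels(ids):
    """Convert a list of integer class ids to their string names."""
    return [ID2LABEL[i] for i in ids]

print(decode_labels([2, 0, 1]))  # ['positive', 'negative', 'natural']
```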
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@PnrSvc](https://github.com/PnrSvc) for adding this dataset. |
gaurikapse/civis-consultation-summaries | 2022-09-04T18:05:08.000Z | [
"task_categories:summarization",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:other",
"legal",
"indian",
"government",
"policy",
"consultations",
"regio... | gaurikapse | null | null | null | 0 | 21 | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- expert-generated
license:
- other
multilinguality:
- monolingual
pretty_name: civis-consultation-summaries
size_categories:
- n<1K
source_datasets:
- original
tags:
- legal
- indian
- government
- policy
- consultations
task_categories:
- summarization
task_ids: []
---
|
rajistics/electricity_demand | 2022-10-19T21:03:02.000Z | [
"task_categories:time-series-forecasting",
"region:us"
] | rajistics | null | null | null | 2 | 21 | ---
task_categories:
- time-series-forecasting
---
The Victoria electricity demand dataset from the [MAPIE github repository](https://github.com/scikit-learn-contrib/MAPIE/tree/master/examples/data).
It consists of hourly electricity demand (in GW)
of the Victoria state in Australia together with the temperature
(in Celsius degrees).
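As a sketch of how hourly demand data like this is typically handled, the rows can be resampled to daily means with pandas. The values below are synthetic, and the real column names in the repository may differ:

```python
import pandas as pd
import numpy as np

# Synthetic hourly demand for two days (GW), mimicking the dataset's shape.
idx = pd.date_range("2014-01-01", periods=48, freq="h")
df = pd.DataFrame({"demand": np.linspace(4.0, 6.0, 48)}, index=idx)

# Resample hourly demand to daily averages.
daily = df["demand"].resample("D").mean()
print(len(daily))
```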
|
din0s/asqa | 2022-09-20T16:14:54.000Z | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|ambig_qa",
"language:en",
"license:apache-2.0",
"factoid questions",
"l... | din0s | null | null | null | 0 | 21 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- expert-generated
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: ASQA
size_categories:
- 1K<n<10K
source_datasets:
- extended|ambig_qa
tags:
- factoid questions
- long-form answers
task_categories:
- question-answering
task_ids:
- open-domain-qa
---
# Dataset Card for ASQA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/google-research/language/tree/master/language/asqa
- **Paper:** https://arxiv.org/abs/2204.06092
- **Leaderboard:** https://ambigqa.github.io/asqa_leaderboard.html
### Dataset Summary
ASQA is the first long-form question answering dataset that focuses on ambiguous factoid questions. Unlike previous long-form answer datasets, each question is annotated with both long-form answers and extractive question-answer pairs, which should be answerable by the generated passage. A generated long-form answer is evaluated using both ROUGE and QA accuracy. In the paper, we show that these evaluation metrics are well correlated with human judgments.
### Supported Tasks and Leaderboards
Long-form Question Answering. [Leaderboard](https://ambigqa.github.io/asqa_leaderboard.html)
### Languages
- English
## Dataset Structure
### Data Instances
```py
{
"ambiguous_question": "Where does the civil liberties act place the blame for the internment of u.s. citizens?",
"qa_pairs": [
{
"context": "No context provided",
"question": "Where does the civil liberties act place the blame for the internment of u.s. citizens by apologizing on behalf of them?",
"short_answers": [
"the people of the United States"
],
"wikipage": None
},
{
"context": "No context provided",
"question": "Where does the civil liberties act place the blame for the internment of u.s. citizens by making them pay reparations?",
"short_answers": [
"United States government"
],
"wikipage": None
}
],
"wikipages": [
{
"title": "Civil Liberties Act of 1988",
"url": "https://en.wikipedia.org/wiki/Civil%20Liberties%20Act%20of%201988"
}
],
"annotations": [
{
"knowledge": [
{
"content": "The Civil Liberties Act of 1988 (Pub.L. 100–383, title I, August 10, 1988, 102 Stat. 904, 50a U.S.C. § 1989b et seq.) is a United States federal law that granted reparations to Japanese Americans who had been interned by the United States government during World War II.",
"wikipage": "Civil Liberties Act of 1988"
}
],
"long_answer": "The Civil Liberties Act of 1988 is a United States federal law that granted reparations to Japanese Americans who had been interned by the United States government during World War II. In the act, the blame for the internment of U.S. citizens was placed on the people of the United States, by apologizing on behalf of them. Furthermore, the blame for the internment was placed on the United States government, by making them pay reparations."
}
],
"sample_id": -4557617869928758000
}
```
### Data Fields
- `ambiguous_question`: ambiguous question from AmbigQA.
- `annotations`: long-form answers to the ambiguous question constructed by ASQA annotators.
- `annotations/knowledge`: list of additional knowledge pieces.
- `annotations/knowledge/content`: a passage from Wikipedia.
- `annotations/knowledge/wikipage`: title of the Wikipedia page the passage was taken from.
- `annotations/long_answer`: annotation.
- `qa_pairs`: Q&A pairs from AmbigQA which are used for disambiguation.
- `qa_pairs/context`: additional context provided.
- `qa_pairs/question`: disambiguated question from AmbigQA.
- `qa_pairs/short_answers`: list of short answers from AmbigQA.
- `qa_pairs/wikipage`: title of the Wikipedia page the additional context was taken from.
- `sample_id`: the unique id of the sample
- `wikipages`: list of Wikipedia pages visited by AmbigQA annotators.
- `wikipages/title`: title of the Wikipedia page.
- `wikipages/url`: link to the Wikipedia page.
### Data Splits
| **Split** | **Instances** |
|-----------|---------------|
| Train | 4353 |
| Dev | 948 |
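Given the structure shown in the data instance above, the number of disambiguated questions per sample is simply the length of `qa_pairs`. A tiny helper, sketched over a dict shaped like the example:

```python
def num_disambiguations(sample):
    """Count the disambiguated AmbigQA questions attached to one ASQA sample."""
    return len(sample["qa_pairs"])

# Abbreviated sample following the structure shown in Data Instances.
example = {
    "ambiguous_question": "Where does the civil liberties act place the blame?",
    "qa_pairs": [
        {"question": "...?", "short_answers": ["the people of the United States"]},
        {"question": "...?", "short_answers": ["United States government"]},
    ],
}
print(num_disambiguations(example))  # 2
```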
## Additional Information
### Contributions
Thanks to [@din0s](https://github.com/din0s) for adding this dataset. |
esb/datasets | 2023-01-16T17:51:39.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
... | esb | null | null | null | 6 | 21 | ---
annotations_creators:
- expert-generated
- crowdsourced
- machine-generated
language:
- en
language_creators:
- crowdsourced
- expert-generated
license:
- cc-by-4.0
- apache-2.0
- cc0-1.0
- cc-by-nc-3.0
- other
multilinguality:
- monolingual
pretty_name: datasets
size_categories:
- 100K<n<1M
- 1M<n<10M
source_datasets:
- original
- extended|librispeech_asr
- extended|common_voice
tags:
- asr
- benchmark
- speech
- esb
task_categories:
- automatic-speech-recognition
extra_gated_prompt: |-
Three of the ESB datasets have specific terms of usage that must be agreed to before using the data.
To do so, fill in the access forms on the specific datasets' pages:
* Common Voice: https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0
* GigaSpeech: https://huggingface.co/datasets/speechcolab/gigaspeech
* SPGISpeech: https://huggingface.co/datasets/kensho/spgispeech
extra_gated_fields:
I hereby confirm that I have registered on the original Common Voice page and agree to not attempt to determine the identity of speakers in the Common Voice dataset: checkbox
I hereby confirm that I have accepted the terms of usages on GigaSpeech page: checkbox
I hereby confirm that I have accepted the terms of usages on SPGISpeech page: checkbox
---
All eight datasets in ESB can be downloaded and prepared in just a single line of code through the Hugging Face Datasets library:
```python
from datasets import load_dataset
librispeech = load_dataset("esb/datasets", "librispeech", split="train")
```
- `"esb/datasets"`: the repository namespace. This is fixed for all ESB datasets.
- `"librispeech"`: the dataset name. This can be changed to any one of the eight datasets in ESB to download that dataset.
- `split="train"`: the split. Set this to one of train/validation/test to generate a specific split. Omit the `split` argument to generate all splits for a dataset.
The datasets are fully prepared, such that the audio and transcription files can be used directly in training/evaluation scripts.
## Dataset Information
A data point can be accessed by indexing the dataset object loaded through `load_dataset`:
```python
print(librispeech[0])
```
A typical data point comprises the path to the audio file and its transcription. Also included are the name of the dataset from which the sample derives and a unique identifier:
```python
{
'dataset': 'librispeech',
'audio': {'path': '/home/sanchit-gandhi/.cache/huggingface/datasets/downloads/extracted/d2da1969fe9e7d06661b5dc370cf2e3c119a14c35950045bcb76243b264e4f01/374-180298-0000.flac',
'array': array([ 7.01904297e-04, 7.32421875e-04, 7.32421875e-04, ...,
-2.74658203e-04, -1.83105469e-04, -3.05175781e-05]),
'sampling_rate': 16000},
'text': 'chapter sixteen i might have told you of the beginning of this liaison in a few lines but i wanted you to see every step by which we came i to agree to whatever marguerite wished',
'id': '374-180298-0000'
}
```
### Data Fields
- `dataset`: name of the ESB dataset from which the sample is taken.
- `audio`: a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- `text`: the transcription of the audio file.
- `id`: unique id of the data sample.
### Data Preparation
#### Audio
The audio for all ESB datasets is segmented into sample lengths suitable for training ASR systems. The Hugging Face Datasets library decodes audio files on the fly, reading the segments and converting them to Python arrays. Consequently, no further preparation of the audio is required for use in training/evaluation scripts.
Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, i.e. `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`.
#### Transcriptions
The transcriptions corresponding to each audio file are provided in their 'error corrected' format. No transcription pre-processing is applied to the text, only necessary 'error correction' steps such as removing junk tokens (_<unk>_) or converting symbolic punctuation to spelled out form (_<comma>_ to _,_). As such, no further preparation of the transcriptions is required to be used in training/evaluation scripts.
Transcriptions are provided for training and validation splits. The transcriptions are **not** provided for the test splits. ESB requires you to generate predictions for the test sets and upload them to https://huggingface.co/spaces/esb/leaderboard for scoring.
### Access
All eight of the datasets in ESB are accessible, and licensing information is freely available. Three of the ESB datasets have specific terms of usage that must be agreed to before using the data. To do so, fill in the access forms on the specific datasets' pages:
* Common Voice: https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0
* GigaSpeech: https://huggingface.co/datasets/speechcolab/gigaspeech
* SPGISpeech: https://huggingface.co/datasets/kensho/spgispeech
### Diagnostic Dataset
ESB contains a small, 8h diagnostic dataset of in-domain validation data with newly annotated transcriptions. The audio data is sampled from each of the ESB validation sets, giving a range of different domains and speaking styles. The transcriptions are annotated according to a consistent style guide with two formats: normalised and un-normalised. The dataset is structured in the same way as the ESB dataset, by grouping audio-transcription samples according to the dataset from which they were taken. We encourage participants to use this dataset when evaluating their systems to quickly assess performance on a range of different speech recognition conditions. For more information, visit: [esb/diagnostic-dataset](https://huggingface.co/datasets/esb/diagnostic-dataset).
## Summary of ESB Datasets
| Dataset | Domain | Speaking Style | Train (h) | Dev (h) | Test (h) | Transcriptions | License |
|--------------|-----------------------------|-----------------------|-----------|---------|----------|--------------------|-----------------|
| LibriSpeech | Audiobook | Narrated | 960 | 11 | 11 | Normalised | CC-BY-4.0 |
| Common Voice | Wikipedia | Narrated | 1409 | 27 | 27 | Punctuated & Cased | CC0-1.0 |
| Voxpopuli | European Parliament | Oratory | 523 | 5 | 5 | Punctuated | CC0 |
| TED-LIUM | TED talks | Oratory | 454 | 2 | 3 | Normalised | CC-BY-NC-ND 3.0 |
| GigaSpeech | Audiobook, podcast, YouTube | Narrated, spontaneous | 2500 | 12 | 40 | Punctuated | apache-2.0 |
| SPGISpeech   | Financial meetings          | Oratory, spontaneous  | 4900      | 100     | 100      | Punctuated & Cased | User Agreement  |
| Earnings-22  | Financial meetings          | Oratory, spontaneous  | 105       | 5       | 5        | Punctuated & Cased | CC-BY-SA-4.0    |
| AMI | Meetings | Spontaneous | 78 | 9 | 9 | Punctuated & Cased | CC-BY-4.0 |
## LibriSpeech
The LibriSpeech corpus is a standard large-scale corpus for assessing ASR systems. It consists of approximately 1,000 hours of narrated audiobooks from the [LibriVox](https://librivox.org) project. It is licensed under CC-BY-4.0.
Example Usage:
```python
librispeech = load_dataset("esb/datasets", "librispeech")
```
Train/validation splits:
- `train` (combination of `train.clean.100`, `train.clean.360` and `train.other.500`)
- `validation.clean`
- `validation.other`
Test splits:
- `test.clean`
- `test.other`
Also available are subsets of the train split, which can be accessed by setting the `subconfig` argument:
```python
librispeech = load_dataset("esb/datasets", "librispeech", subconfig="clean.100")
```
- `clean.100`: 100 hours of training data from the 'clean' subset
- `clean.360`: 360 hours of training data from the 'clean' subset
- `other.500`: 500 hours of training data from the 'other' subset
## Common Voice
Common Voice is a series of crowd-sourced open-licensed speech datasets where speakers record text from Wikipedia in various languages. The speakers are of various nationalities and native languages, with different accents and recording conditions. We use the English subset of version 9.0 (27-4-2022), with approximately 1,400 hours of audio-transcription data. It is licensed under CC0-1.0.
Example usage:
```python
common_voice = load_dataset("esb/datasets", "common_voice", use_auth_token=True)
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test`
## VoxPopuli
VoxPopuli is a large-scale multilingual speech corpus consisting of political data sourced from 2009-2020 European Parliament event recordings. The English subset contains approximately 550 hours of speech largely from non-native English speakers. It is licensed under CC0.
Example usage:
```python
voxpopuli = load_dataset("esb/datasets", "voxpopuli")
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test`
## TED-LIUM
TED-LIUM consists of English-language TED Talk conference videos covering a range of different cultural, political, and academic topics. It contains approximately 450 hours of transcribed speech data. It is licensed under CC-BY-NC-ND 3.0.
Example usage:
```python
tedlium = load_dataset("esb/datasets", "tedlium")
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test`
## GigaSpeech
GigaSpeech is a multi-domain English speech recognition corpus created from audiobooks, podcasts and YouTube. We provide the large train set (2,500 hours) and the standard validation and test splits. It is licensed under apache-2.0.
Example usage:
```python
gigaspeech = load_dataset("esb/datasets", "gigaspeech", use_auth_token=True)
```
Training/validation splits:
- `train` (`l` subset of training data (2,500 h))
- `validation`
Test splits:
- `test`
Also available are subsets of the train split, which can be accessed by setting the `subconfig` argument:
```python
gigaspeech = load_dataset("esb/datasets", "gigaspeech", subconfig="xs", use_auth_token=True)
```
- `xs`: extra-small subset of training data (10 h)
- `s`: small subset of training data (250 h)
- `m`: medium subset of training data (1,000 h)
- `xl`: extra-large subset of training data (10,000 h)
## SPGISpeech
SPGISpeech consists of company earnings calls that have been manually transcribed by S&P Global, Inc. according to a professional style guide. We provide the large train set (5,000 hours) and the standard validation and test splits. It is licensed under a Kensho user agreement.
Loading the dataset requires authorization.
Example usage:
```python
spgispeech = load_dataset("esb/datasets", "spgispeech", use_auth_token=True)
```
Training/validation splits:
- `train` (`l` subset of training data (~5,000 h))
- `validation`
Test splits:
- `test`
Also available are subsets of the train split, which can be accessed by setting the `subconfig` argument:
```python
spgispeech = load_dataset("esb/datasets", "spgispeech", subconfig="s", use_auth_token=True)
```
- `s`: small subset of training data (~200 h)
- `m`: medium subset of training data (~1,000 h)
## Earnings-22
Earnings-22 is a 119-hour corpus of English-language earnings calls collected from global companies, with speakers of many different nationalities and accents. It is licensed under CC-BY-SA-4.0.
Example usage:
```python
earnings22 = load_dataset("esb/datasets", "earnings22")
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test`
## AMI
The AMI Meeting Corpus consists of 100 hours of meeting recordings from multiple recording devices synced to a common timeline. It is licensed under CC-BY-4.0.
Example usage:
```python
ami = load_dataset("esb/datasets", "ami")
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test` |
lambdalabs/naruto-blip-captions | 2022-10-27T21:17:06.000Z | [
"region:us"
] | lambdalabs | null | null | null | 12 | 21 | # Dataset Card for Naruto BLIP captions
_Dataset used to train [TBD](TBD)._
The original images were obtained from [narutopedia.com](https://naruto.fandom.com/wiki/Narutopedia) and captioned with the [pre-trained BLIP model](https://github.com/salesforce/BLIP).
For each row the dataset contains `image` and `text` keys. `image` is a varying size PIL jpeg, and `text` is the accompanying text caption. Only a train split is provided.
## Example stable diffusion outputs

> "Bill Gates with a hoodie", "John Oliver with Naruto style", "Hello Kitty with Naruto style", "Lebron James with a hat", "Mickael Jackson as a ninja", "Banksy Street art of ninja"
## Citation
If you use this dataset, please cite it as:
```
@misc{cervenka2022naruto2,
author = {Cervenka, Eole},
title = {Naruto BLIP captions},
year={2022},
howpublished= {\url{https://huggingface.co/datasets/lambdalabs/naruto-blip-captions/}}
}
``` |
sinhala-nlp/SOLD | 2022-12-20T20:19:41.000Z | [
"region:us"
] | sinhala-nlp | null | null | null | 0 | 21 | # SOLD - A Benchmark for Sinhala Offensive Language Identification
In this repository, we introduce the Sinhala Offensive Language Dataset **(SOLD)** and present multiple experiments on this dataset. **SOLD** is a manually annotated dataset containing 10,000 posts from Twitter, annotated as offensive or not offensive at both sentence level and token level. **SOLD** is the largest offensive language dataset compiled for Sinhala. We also introduce **SemiSOLD**, a larger dataset containing more than 145,000 Sinhala tweets, annotated following a semi-supervised approach.
:warning: This repository contains texts that may be offensive and harmful.
## Annotation
We use an annotation scheme split into two levels, deciding (a) the offensiveness of a tweet (sentence-level) and (b) the tokens that contribute to the offence (token-level).
### Sentence-level
Our sentence-level offensive language detection follows level A in OLID [(Zampieri et al., 2019)](https://aclanthology.org/N19-1144/). We asked annotators to discriminate between the following types of tweets:
* **Offensive (OFF)**: Posts containing any form of non-acceptable language (profanity) or a targeted offence, which can be veiled or direct. This includes insults, threats, and posts containing profane language or swear words.
* **Not Offensive (NOT)**: Posts that do not contain offense or profanity.
Each tweet was annotated with one of the above labels, which we used as the labels in sentence-level offensive language identification.
### Token-level
To provide a human explanation of the labelling, we collect rationales for the offensive language. Following HateXplain [(Mathew et al., 2021)](https://ojs.aaai.org/index.php/AAAI/article/view/17745), we define a rationale as a specific text segment that justifies the human annotator's decision on the sentence-level label. Therefore, we ask the annotators to highlight the particular tokens in a tweet that support their judgement about the sentence-level label (offensive, not offensive). Specifically, if a tweet is offensive, we guide the annotators to highlight tokens from the text that support the judgement, including non-verbal expressions such as emojis and morphemes that are used to convey the intention. We use these as token-level offensive labels in SOLD.

## Data
SOLD is released on HuggingFace. It can be loaded into pandas DataFrames using the following code.
```python
from datasets import Dataset
from datasets import load_dataset
sold_train = Dataset.to_pandas(load_dataset('sinhala-nlp/SOLD', split='train'))
sold_test = Dataset.to_pandas(load_dataset('sinhala-nlp/SOLD', split='test'))
```
The dataset contains the following columns.
* **post_id** - Twitter ID
* **text** - Post text
* **tokens** - Tokenised text. Each token is separated by a space.
* **rationals** - Offensive tokens. If a token is offensive it is shown as 1 and 0 otherwise.
* **label** - Sentence-level label, offensive or not-offensive.
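Using the fields described above, the tokens an annotator marked as offensive can be recovered by pairing `tokens` with `rationals`. A sketch over one synthetic row (with English placeholder tokens for clarity; real rows follow the same shape):

```python
def offensive_tokens(tokens, rationals):
    """Return the tokens flagged as offensive (rational value 1)."""
    return [tok for tok, flag in zip(tokens.split(), rationals) if flag == 1]

# Synthetic example row in the SOLD format.
row = {"tokens": "this is a bad word", "rationals": [0, 0, 0, 1, 1]}
print(offensive_tokens(row["tokens"], row["rationals"]))  # ['bad', 'word']
```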

SemiSOLD is also released on HuggingFace and can be loaded into a pandas DataFrame using the following code.
```python
from datasets import Dataset
from datasets import load_dataset
semi_sold = Dataset.to_pandas(load_dataset('sinhala-nlp/SemiSOLD', split='train'))
```
The dataset contains the following columns.
* **post_id** - Twitter ID
* **text** - Post text
Furthermore, it contains predicted offensiveness scores from classifiers trained on the SOLD training set: xlmr, xlmt, mbert, sinbert, lstm_ft, cnn_ft, lstm_cbow, cnn_cbow, lstm_sl, cnn_sl and svm.
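One simple way to use the per-classifier scores is to average them and keep only instances the classifiers agree on, which mirrors the standard-deviation cut-off used in the augmentation experiments below. This is an illustrative sketch with made-up scores and only a subset of the classifier columns listed above:

```python
from statistics import mean, stdev

# Made-up offensiveness scores from a few of the classifier columns for one tweet.
scores = {"xlmr": 0.91, "xlmt": 0.88, "mbert": 0.84, "svm": 0.90}

avg = mean(scores.values())
spread = stdev(scores.values())

# Keep the instance for augmentation only if the classifiers agree closely.
keep = spread < 0.05
print(round(avg, 2), keep)
```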
## Experiments
Clone the repository and install the libraries using the following command (preferably inside a conda environment)
~~~
pip install -r requirements.txt
~~~
### Sentence-level
Sentence-level transformer based experiments can be executed using the following command.
~~~
python -m experiments.sentence_level.sinhala_deepoffense
~~~
The command takes the following arguments;
~~~
--model_type : Type of the transformer model (bert, xlmroberta, roberta etc ).
--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.
--transfer : Whether to perform transfer learning or not (true or false).
--transfer_language : The initial language if transfer learning is performed (hi, en or si).
* hi - Perform transfer learning from HASOC 2019 Hindi dataset (Modha et al., 2019).
* en - Perform transfer learning from Offenseval English dataset (Zampieri et al., 2019).
* si - Perform transfer learning from CCMS Sinhala dataset (Rathnayake et al., 2021).
--augment : Perform semi supervised data augmentation.
--std : Standard deviation of the models to cut down data augmentation.
--augment_type: The type of the data augmentation.
* off - Augment only the offensive instances.
* normal - Augment both offensive and non-offensive instances.
~~~
Sentence-level CNN and LSTM based experiments can be executed using the following command.
~~~
python -m experiments.sentence_level.sinhala_offensive_nn
~~~
The command takes the following arguments;
~~~
--model_type : Type of the architecture (cnn2D, lstm).
--model_name : The exact word embeddings to use. This may be a gensim model, or the path to a word embeddinng files.
--augment : Perform semi supervised data augmentation.
--std : Standard deviation of the models to cut down data augmentation.
--augment_type: The type of the data augmentation.
* off - Augment only the offensive instances.
* normal - Augment both offensive and non-offensive instances.
~~~
### Token-level
Token-level transformer based experiments can be executed using the following command.
~~~
python -m experiments.sentence_level.sinhala_mudes
~~~
The command takes the following arguments;
~~~
--model_type : Type of the transformer model (bert, xlmroberta, roberta etc ).
--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.
--transfer : Whether to perform transfer learning or not (true or false).
--transfer_language : The initial language if transfer learning is performed (hatex or tsd).
* hatex - Perform transfer learning from HateXplain dataset (Mathew et al., 2021).
* tsd - Perform transfer learning from TSD dataset (Pavlopoulos et al., 2021).
~~~
Token-level LIME experiments can be executed using the following command.
~~~
python -m experiments.sentence_level.sinhala_lime
~~~
The command takes the following arguments;
~~~
--model_type : Type of the transformer model (bert, xlmroberta, roberta etc ).
--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.
~~~
## Acknowledgments
We want to acknowledge Janitha Hapuarachchi, Sachith Suraweera, Chandika Udaya Kumara and Ridmi Randima, the team of volunteer annotators that provided their free time and efforts to help us produce SOLD.
## Citation
If you are using the dataset or the models please cite the following paper
~~~
@article{ranasinghe2022sold,
title={SOLD: Sinhala Offensive Language Dataset},
author={Ranasinghe, Tharindu and Anuradha, Isuri and Premasiri, Damith and Silva, Kanishka and Hettiarachchi, Hansi and Uyangodage, Lasitha and Zampieri, Marcos},
journal={arXiv preprint arXiv:2212.00851},
year={2022}
}
~~~ |
bigbio/biorelex | 2022-12-22T15:44:10.000Z | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | bigbio | BioRelEx is a biological relation extraction dataset. Version 1.0 contains 2010
annotated sentences that describe binding interactions between various
biological entities (proteins, chemicals, etc.). 1405 sentences are for
training, another 201 sentences are for validation. They are publicly available
at https://github.com/YerevaNN/BioRelEx/releases. Another 404 sentences are for
testing, which are kept private for this CodaLab competition
https://competitions.codalab.org/competitions/20468. All sentences contain words
"bind", "bound" or "binding". For every sentence we provide: 1) Complete
annotations of all biological entities that appear in the sentence 2) Entity
types (32 types) and grounding information for most of the proteins and families
(links to uniprot, interpro and other databases) 3) Coreference between entities
in the same sentence (e.g. abbreviations and synonyms) 4) Binding interactions
between the annotated entities 5) Binding interaction types: positive, negative
(A does not bind B) and neutral (A may bind to B) | @inproceedings{khachatrian2019biorelex,
title = "{B}io{R}el{E}x 1.0: Biological Relation Extraction Benchmark",
author = "Khachatrian, Hrant and
Nersisyan, Lilit and
Hambardzumyan, Karen and
Galstyan, Tigran and
Hakobyan, Anna and
Arakelyan, Arsen and
Rzhetsky, Andrey and
Galstyan, Aram",
booktitle = "Proceedings of the 18th BioNLP Workshop and Shared Task",
month = aug,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W19-5019",
doi = "10.18653/v1/W19-5019",
pages = "176--190"
} | null | 1 | 21 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: BioRelEx
homepage: https://github.com/YerevaNN/BioRelEx
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
- RELATION_EXTRACTION
- COREFERENCE_RESOLUTION
---
# Dataset Card for BioRelEx
## Dataset Description
- **Homepage:** https://github.com/YerevaNN/BioRelEx
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED,RE,COREF
BioRelEx is a biological relation extraction dataset. Version 1.0 contains 2010
annotated sentences that describe binding interactions between various
biological entities (proteins, chemicals, etc.). 1405 sentences are for
training, another 201 sentences are for validation. They are publicly available
at https://github.com/YerevaNN/BioRelEx/releases. Another 404 sentences are for
testing, which are kept private for this CodaLab competition
https://competitions.codalab.org/competitions/20468. All sentences contain words
"bind", "bound" or "binding". For every sentence we provide: 1) Complete
annotations of all biological entities that appear in the sentence 2) Entity
types (32 types) and grounding information for most of the proteins and families
(links to uniprot, interpro and other databases) 3) Coreference between entities
in the same sentence (e.g. abbreviations and synonyms) 4) Binding interactions
between the annotated entities 5) Binding interaction types: positive, negative
(A does not bind B) and neutral (A may bind to B)
## Citation Information
```
@inproceedings{khachatrian2019biorelex,
title = "{B}io{R}el{E}x 1.0: Biological Relation Extraction Benchmark",
author = "Khachatrian, Hrant and
Nersisyan, Lilit and
Hambardzumyan, Karen and
Galstyan, Tigran and
Hakobyan, Anna and
Arakelyan, Arsen and
Rzhetsky, Andrey and
Galstyan, Aram",
booktitle = "Proceedings of the 18th BioNLP Workshop and Shared Task",
month = aug,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W19-5019",
doi = "10.18653/v1/W19-5019",
pages = "176--190"
}
```
|
bigbio/pdr | 2022-12-22T15:46:14.000Z | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | bigbio | The plant-disease relation corpus consists of plant and disease entities and their relations, annotated over PubMed abstracts.
The corpus consists of about 2400 plant and disease entities and 300 annotated relations from 179 abstracts. | @article{kim2019corpus,
title={A corpus of plant--disease relations in the biomedical domain},
author={Kim, Baeksoo and Choi, Wonjun and Lee, Hyunju},
journal={PLoS One},
volume={14},
number={8},
pages={e0221582},
year={2019},
publisher={Public Library of Science San Francisco, CA USA}
} | null | 0 | 21 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: PDR
homepage: http://gcancer.org/pdr/
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- EVENT_EXTRACTION
- COREFERENCE_RESOLUTION
---
# Dataset Card for PDR
## Dataset Description
- **Homepage:** http://gcancer.org/pdr/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,EE,COREF
The plant-disease relation corpus consists of plant and disease entities and their relations, annotated over PubMed abstracts.
The corpus consists of about 2400 plant and disease entities and 300 annotated relations from 179 abstracts.
## Citation Information
```
@article{kim2019corpus,
title={A corpus of plant--disease relations in the biomedical domain},
author={Kim, Baeksoo and Choi, Wonjun and Lee, Hyunju},
journal={PLoS One},
volume={14},
number={8},
pages={e0221582},
year={2019},
publisher={Public Library of Science San Francisco, CA USA}
}
```
|
bigbio/progene | 2022-12-22T15:46:19.000Z | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-4.0",
"region:us"
] | bigbio | The Protein/Gene corpus was developed at the JULIE Lab Jena under supervision of Prof. Udo Hahn.
The executing scientist was Dr. Joachim Wermter.
The main annotator was Dr. Rico Pusch who is an expert in biology.
The corpus was developed in the context of the StemNet project (http://www.stemnet.de/). | @inproceedings{faessler-etal-2020-progene,
title = "{P}ro{G}ene - A Large-scale, High-Quality Protein-Gene Annotated Benchmark Corpus",
author = "Faessler, Erik and
Modersohn, Luise and
Lohr, Christina and
Hahn, Udo",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.564",
pages = "4585--4596",
abstract = "Genes and proteins constitute the fundamental entities of molecular genetics. We here introduce ProGene (formerly called FSU-PRGE), a corpus that reflects our efforts to cope with this important class of named entities within the framework of a long-lasting large-scale annotation campaign at the Jena University Language {\&} Information Engineering (JULIE) Lab. We assembled the entire corpus from 11 subcorpora covering various biological domains to achieve an overall subdomain-independent corpus. It consists of 3,308 MEDLINE abstracts with over 36k sentences and more than 960k tokens annotated with nearly 60k named entity mentions. Two annotators strove for carefully assigning entity mentions to classes of genes/proteins as well as families/groups, complexes, variants and enumerations of those where genes and proteins are represented by a single class. The main purpose of the corpus is to provide a large body of consistent and reliable annotations for supervised training and evaluation of machine learning algorithms in this relevant domain. Furthermore, we provide an evaluation of two state-of-the-art baseline systems {---} BioBert and flair {---} on the ProGene corpus. We make the evaluation datasets and the trained models available to encourage comparable evaluations of new methods in the future.",
language = "English",
ISBN = "979-10-95546-34-4",
} | null | 1 | 21 |
---
language:
- en
bigbio_language:
- English
license: cc-by-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: ProGene
homepage: https://zenodo.org/record/3698568#.YlVHqdNBxeg
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for ProGene
## Dataset Description
- **Homepage:** https://zenodo.org/record/3698568#.YlVHqdNBxeg
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
The Protein/Gene corpus was developed at the JULIE Lab Jena under supervision of Prof. Udo Hahn.
The executing scientist was Dr. Joachim Wermter.
The main annotator was Dr. Rico Pusch who is an expert in biology.
The corpus was developed in the context of the StemNet project (http://www.stemnet.de/).
## Citation Information
```
@inproceedings{faessler-etal-2020-progene,
title = "{P}ro{G}ene - A Large-scale, High-Quality Protein-Gene Annotated Benchmark Corpus",
author = "Faessler, Erik and
Modersohn, Luise and
Lohr, Christina and
Hahn, Udo",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.564",
pages = "4585--4596",
abstract = "Genes and proteins constitute the fundamental entities of molecular genetics. We here introduce ProGene (formerly called FSU-PRGE), a corpus that reflects our efforts to cope with this important class of named entities within the framework of a long-lasting large-scale annotation campaign at the Jena University Language {\&} Information Engineering (JULIE) Lab. We assembled the entire corpus from 11 subcorpora covering various biological domains to achieve an overall subdomain-independent corpus. It consists of 3,308 MEDLINE abstracts with over 36k sentences and more than 960k tokens annotated with nearly 60k named entity mentions. Two annotators strove for carefully assigning entity mentions to classes of genes/proteins as well as families/groups, complexes, variants and enumerations of those where genes and proteins are represented by a single class. The main purpose of the corpus is to provide a large body of consistent and reliable annotations for supervised training and evaluation of machine learning algorithms in this relevant domain. Furthermore, we provide an evaluation of two state-of-the-art baseline systems {---} BioBert and flair {---} on the ProGene corpus. We make the evaluation datasets and the trained models available to encourage comparable evaluations of new methods in the future.",
language = "English",
ISBN = "979-10-95546-34-4",
}
```
|
thennal/IMaSC | 2022-12-08T17:21:02.000Z | [
"task_categories:text-to-speech",
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ml",
"license:cc-by-sa-4.0",
"arxiv:2211.12796",
... | thennal | null | null | null | 2 | 21 | ---
annotations_creators:
- expert-generated
language:
- ml
language_creators:
- found
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: ICFOSS Malayalam Speech Corpus
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- text-to-speech
- automatic-speech-recognition
task_ids: []
---
# IMaSC: ICFOSS Malayalam Speech Corpus
**IMaSC** is a Malayalam text and speech corpus made available by [ICFOSS](https://icfoss.in/) for the purpose of developing speech technology for Malayalam, particularly text-to-speech. The corpus contains 34,473 text-audio pairs of Malayalam sentences spoken by 8 speakers, totalling in approximately 50 hours of audio.
## Dataset Description
- **Paper:** [IMaSC — ICFOSS Malayalam Speech Corpus](https://arxiv.org/abs/2211.12796)
- **Point of Contact:** [Thennal D K](mailto:thennal10@gmail.com)
## Dataset Structure
The dataset consists of 34,473 instances with fields `text`, `speaker`, and `audio`. The audio is mono, sampled at 16 kHz. The transcription is normalized and only includes Malayalam characters and common punctuation. The table given below specifies how the 34,473 instances are split between the speakers, along with some basic speaker info:
| Speaker | Gender | Age | Time (HH:MM:SS) | Sentences |
| --- | --- | --- | --- | --- |
| Joji | Male | 28 | 06:08:55 | 4,332 |
| Sonia | Female | 43 | 05:22:39 | 4,294 |
| Jijo | Male | 26 | 05:34:05 | 4,093 |
| Greeshma | Female | 22 | 06:32:39 | 4,416 |
| Anil | Male | 48 | 05:58:34 | 4,239 |
| Vidhya | Female | 23 | 04:21:56 | 3,242 |
| Sonu | Male | 25 | 06:04:43 | 4,219 |
| Simla | Female | 24 | 09:34:21 | 5,638 |
| **Total** | | | **49:37:54** | **34,473** |
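As a quick sanity check on the table's totals row, the per-speaker durations can be summed; the result, 49:37:52, agrees with the stated 49:37:54 up to a couple of seconds of per-row rounding:

```python
# Per-speaker durations (HH:MM:SS) copied from the table above
durations = ["06:08:55", "05:22:39", "05:34:05", "06:32:39",
             "05:58:34", "04:21:56", "06:04:43", "09:34:21"]

def to_seconds(hms: str) -> int:
    h, m, s = map(int, hms.split(":"))
    return h * 3600 + m * 60 + s

total = sum(to_seconds(d) for d in durations)
total_hms = f"{total // 3600:02d}:{total % 3600 // 60:02d}:{total % 60:02d}"
print(total_hms)  # 49:37:52
```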
### Data Instances
An example instance is given below:
```json
{'text': 'സർവ്വകലാശാല വൈസ് ചാൻസലർ ഡോ. ചന്ദ്രബാബുവിനും സംഭവം തലവേദനയാവുകയാണ്',
'speaker': 'Sonia',
'audio': {'path': None,
'array': array([ 0.00921631, 0.00930786, 0.00939941, ..., -0.00497437,
-0.00497437, -0.00497437]),
'sampling_rate': 16000}}
```
### Data Fields
- **text** (str): Transcription of the audio file
- **speaker** (str): The name of the speaker
- **audio** (dict): Audio object including loaded audio array, sampling rate and path to audio (always None)
### Data Splits
We provide all the data in a single `train` split. The loaded dataset object thus looks like this:
```json
DatasetDict({
train: Dataset({
features: ['text', 'speaker', 'audio'],
num_rows: 34473
})
})
```
### Dataset Creation
The text is sourced from [Malayalam Wikipedia](https://ml.wikipedia.org), and read by our speakers in studio conditions. Extensive error correction was conducted to provide a clean, accurate database. Further details are given in our paper, accessible at [https://arxiv.org/abs/2211.12796](https://arxiv.org/abs/2211.12796).
## Additional Information
### Licensing
The corpus is made available under the [Creative Commons license (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/).
### Citation
```
@misc{gopinath2022imasc,
title={IMaSC -- ICFOSS Malayalam Speech Corpus},
author={Deepa P Gopinath and Thennal D K and Vrinda V Nair and Swaraj K S and Sachin G},
year={2022},
eprint={2211.12796},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
|
heegyu/korean-petitions | 2023-01-15T09:46:48.000Z | [
"license:mit",
"region:us"
] | heegyu | null | null | null | 0 | 21 | ---
license: mit
---
# Blue House National Petitions (청와대 국민청원)
Data source: https://github.com/lovit/petitions_archive<br/>
Size: 651.8MB
Sample:
```
{
"category": "반려동물",
"begin": "2017-08-25",
"end": "2017-11-23",
"content": "길고양이들 밥주고있는 사람입니다. 최근에 동네주민과 트러블이 생겨 싸움이 일어났습니다. 길고양이들이 모여든다고 밥주지마라고 윽박지르셨습니다. 쓰레기봉투를 뜯는다거나 사람에게 해끼치거나 하지 않았습니다. 단순히 고양이가 모여드는게 싫답니다. 그럼 애들은 굶어죽어야하나요? 길고양이들이 맘놓고 쉬고 밥먹을 수 있는 환경이 전혀 없는데 무작정 밥안주고 물 안주면 얘네는 어떻게 하나요? 안그래도 수명도 짧은데다가 길고양이를 상대로 학대하는 사람들도 많은데 너무 가엾습니다. 강동구청은 고양이 급식소라고 만들어주셨던데 동네마다 한개씩이라도 만들어 주셨으면좋겠어요.. 밥에다가 이상한짓 하는 사람 있을 수 있으니까 cctv도 설치도 해주셨으면 합니다.. (급식소에 쥐약을 뿌려 고양이가 죽은 사례가 있습니다) 지구가 사람껀 아니잖아요 동물과도 더불어 살줄 알아야죠 문대통령님께서 동물복지 관련 공략을 내셨지만 나아진게 전혀 없는거같아요. 공략 꼭 지켜주세요.. 믿고 뽑았는데 전혀 나아지고 바뀐게 없으면 너무 실망스럽잖아요.. 그리고 길고양이뿐만 아니라 다른 동물 학대하는 부분도 처벌 강화 부탁드립니다",
"num_agree": 5,
"petition_idx": "513",
"status": "청원종료",
"title": "길고양이를 도와주세요"
}
``` |
abertsch/booksum-fullbooks | 2022-12-22T21:44:19.000Z | [
"region:us"
] | abertsch | null | null | null | 3 | 21 | ---
dataset_info:
features:
- name: bid
dtype: string
- name: source
dtype: string
- name: title
dtype: string
- name: summary
dtype: string
- name: book
dtype: string
splits:
- name: validation
num_bytes: 23586559
num_examples: 45
- name: train
num_bytes: 165182724
num_examples: 314
- name: test
num_bytes: 31094987
num_examples: 46
download_size: 60336046
dataset_size: 219864270
---
# Dataset Card for "booksum-fullbooks"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Ozziey/poems_dataset | 2023-01-09T16:28:56.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:afl-3.0",
"region:us"
] | Ozziey | null | null | null | 3 | 21 | ---
license: afl-3.0
task_categories:
- tabular-classification
language:
- en
pretty_name: Detected emotions and information for poetry dataset
size_categories:
- n<1K
--- |
ruanchaves/b2w-reviews01 | 2023-01-20T18:22:37.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-analysis",
"task_ids:sentiment-scoring",
"task_ids:intent-classification",
"task_ids:topic-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"source_datase... | ruanchaves | B2W-Reviews01 is an open corpus of product reviews. It contains more than 130k e-commerce customer reviews, collected from the Americanas.com website between January and May, 2018. B2W-Reviews01 offers rich information about the reviewer profile, such as gender, age, and geographical location. The corpus also has two different review rates | @inproceedings{real2019b2w,
title={B2W-reviews01: an open product reviews corpus},
author={Real, Livy and Oshiro, Marcio and Mafra, Alexandre},
booktitle={STIL-Symposium in Information and Human Language Technology},
year={2019}
} | null | 9 | 21 | ---
annotations_creators:
- found
language:
- pt
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: B2W-Reviews01
size_categories:
- 100M<n<1B
source_datasets:
- original
tags:
- reviews
task_categories:
- text-classification
task_ids:
- sentiment-analysis
- sentiment-scoring
- intent-classification
- topic-classification
---
# Dataset Card for B2W-Reviews01
## Dataset Description
- **Repository:** https://github.com/americanas-tech/b2w-reviews01
- **Paper:** http://comissoes.sbc.org.br/ce-pln/stil2019/proceedings-stil-2019-Final-Publicacao.pdf
- **Point of Contact:** Livy Real
### Dataset Summary
B2W-Reviews01 is an open corpus of product reviews. It contains more than 130k e-commerce customer reviews, collected from the Americanas.com website between January and May, 2018. B2W-Reviews01 offers rich information about the reviewer profile, such as gender, age, and geographical location. The corpus also has two different review rates:
* the usual 5-point scale rate, represented by stars in most e-commerce websites,
* a "recommend to a friend" label, a "yes or no" question representing the willingness of the customer to recommend the product to someone else.
### Supported Tasks and Leaderboards
* Sentiment Analysis
* Topic Modeling
### Languages
* Portuguese
## Dataset Structure
### Data Instances
```
{'submission_date': '2018-01-02 06:23:22',
'reviewer_id': '6adc7901926fc1697d34181fbd88895976b4f3f31f0102d90217d248a1fad156',
'product_id': '123911277',
'product_name': 'Triciclo Gangorra Belfix Cabeça Cachorro Rosa',
'product_brand': 'belfix',
'site_category_lv1': 'Brinquedos',
'site_category_lv2': 'Mini Veículos',
'review_title': 'O produto não foi entregue',
'overall_rating': 1,
'recommend_to_a_friend': 'Yes',
'review_text': 'Incrível o descaso com o consumidor. O produto não chegou, apesar de já ter sido pago. Não recebo qualquer informação sobre onde se encontra o produto, ou qualquer compensação do vendedor. Não recomendo.',
'reviewer_birth_year': 1981,
'reviewer_gender': 'M',
'reviewer_state': 'RJ'}
```
### Data Fields
* **submission_date**: the date and time when the review was submitted. `"%Y-%m-%d %H:%M:%S"`.
* **reviewer_id**: a unique identifier for the reviewer.
* **product_id**: a unique identifier for the product being reviewed.
* **product_name**: the name of the product being reviewed.
* **product_brand**: the brand of the product being reviewed.
* **site_category_lv1**: the highest level category for the product on the site where the review is being submitted.
* **site_category_lv2**: the second level category for the product on the site where the review is being submitted.
* **review_title**: the title of the review.
* **overall_rating**: the overall star rating given by the reviewer on a scale of 1 to 5.
* **recommend_to_a_friend**: whether or not the reviewer would recommend the product to a friend (Yes/No).
* **review_text**: the full text of the review.
* **reviewer_birth_year**: the birth year of the reviewer.
* **reviewer_gender**: the gender of the reviewer (F/M).
* **reviewer_state**: the Brazilian state of the reviewer (e.g. RJ).
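For illustration, the `submission_date` string from the example instance parses with Python's `datetime` using exactly the format given above (a minimal sketch):

```python
from datetime import datetime

# submission_date from the example instance, parsed with the stated format
ts = datetime.strptime("2018-01-02 06:23:22", "%Y-%m-%d %H:%M:%S")
print(ts.year, ts.month, ts.day)  # 2018 1 2
```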
### Data Splits
| name |train|
|---------|----:|
|b2w-reviews01|132373|
### Citation Information
```
@inproceedings{real2019b2w,
title={B2W-reviews01: an open product reviews corpus},
author={Real, Livy and Oshiro, Marcio and Mafra, Alexandre},
booktitle={STIL-Symposium in Information and Human Language Technology},
year={2019}
}
```
### Contributions
Thanks to [@ruanchaves](https://github.com/ruanchaves) for adding this dataset. |
jayelm/natural-instructions | 2023-01-29T23:16:06.000Z | [
"task_categories:other",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"language:en",
"region:us"
] | jayelm | null | null | null | 2 | 21 | ---
annotations_creators:
- crowdsourced
- expert-generated
language:
- en
multilinguality:
- monolingual
size_categories:
- 100M<n<1B
task_categories:
- other
---
Preprocessed version of Super-Natural-Instructions from https://github.com/allenai/natural-instructions/tree/master/splits. The same inputs may appear with different outputs; to avoid duplicate inputs, you can deduplicate by the `id` or the `inputs` field.
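A minimal sketch of the deduplication suggested above; the field names (`id`, `inputs`) come from the description, while the toy records are invented:

```python
def dedupe_by(records, key):
    # Keep only the first record seen for each distinct value of `key`
    seen, unique = set(), []
    for record in records:
        if record[key] not in seen:
            seen.add(record[key])
            unique.append(record)
    return unique

# Invented stand-ins for real examples
rows = [
    {"id": "task001-0", "inputs": "translate: hello"},
    {"id": "task001-1", "inputs": "translate: hello"},  # same input, different id
    {"id": "task002-0", "inputs": "summarize: a long text"},
]
print(len(dedupe_by(rows, "inputs")), len(dedupe_by(rows, "id")))  # 2 3
```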
This is modified from https://huggingface.co/datasets/Muennighoff/natural-instructions
with a few improvements:
1. Adds positive/negative examples, outputs, explanations for each task, to
support different task definitions.
2. Adds an "eval" field which is True for the first 100 examples of each
test task (119 * 100 = 11900 examples). This field indicates whether an example
is part of the abbreviated + balanced test split. See
https://github.com/allenai/natural-instructions/blob/master/src/reorder_instances_for_testing.py.
3. Adds an "eval" field to the training dataset, which can be used as an
in-domain evaluation set. To do so, we sample a balanced set of the first 15
examples of each train split (757 * 15 = 11355 examples) and mark the "eval"
field as true.
|
IlyaGusev/ru_stackoverflow | 2023-03-09T23:48:16.000Z | [
"task_categories:text-generation",
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:ru",
"license:other",
"region:us"
] | IlyaGusev | null | null | null | 8 | 21 | ---
license: other
task_categories:
- text-generation
- question-answering
language:
- ru
size_categories:
- 100K<n<1M
dataset_info:
features:
- name: question_id
dtype: uint32
- name: url
dtype: string
- name: answer_count
dtype: uint32
- name: text_html
dtype: string
- name: text_markdown
dtype: string
- name: score
dtype: int32
- name: title
dtype: string
- name: tags
sequence: string
- name: views
dtype: uint64
- name: author
dtype: string
- name: timestamp
dtype: uint64
- name: comments
sequence:
- name: text
dtype: string
- name: author
dtype: string
- name: comment_id
dtype: uint32
- name: score
dtype: int32
- name: timestamp
dtype: uint64
- name: answers
sequence:
- name: answer_id
dtype: uint32
- name: is_accepted
dtype: uint8
- name: text_html
dtype: string
- name: text_markdown
dtype: string
- name: score
dtype: int32
- name: author
dtype: string
- name: timestamp
dtype: uint64
- name: comments
sequence:
- name: text
dtype: string
- name: author
dtype: string
- name: comment_id
dtype: uint32
- name: score
dtype: int32
- name: timestamp
dtype: uint64
splits:
- name: train
num_bytes: 3013377174
num_examples: 437604
download_size: 670468664
dataset_size: 3013377174
---
# Russian StackOverflow dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Description](#description)
- [Usage](#usage)
- [Data Instances](#data-instances)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Licensing Information](#licensing-information)
## Description
**Summary:** Dataset of questions, answers, and comments from [ru.stackoverflow.com](https://ru.stackoverflow.com/).
**Script:** [create_stackoverflow.py](https://github.com/IlyaGusev/rulm/blob/hf/data_processing/create_stackoverflow.py)
**Point of Contact:** [Ilya Gusev](ilya.gusev@phystech.edu)
**Languages:** The dataset is in Russian with some programming code.
## Usage
Prerequisites:
```bash
pip install datasets zstandard jsonlines pysimdjson
```
Loading:
```python
from datasets import load_dataset
dataset = load_dataset('IlyaGusev/ru_stackoverflow', split="train")
for example in dataset:
print(example["text_markdown"])
print()
```
## Data Instances
```
{
"question_id": 11235,
"answer_count": 1,
"url": "https://ru.stackoverflow.com/questions/11235",
"score": 2,
"tags": ["c++", "сериализация"],
"title": "Извлечение из файла, запись в файл",
"views": 1309,
"author": "...",
"timestamp": 1303205289,
"text_html": "...",
"text_markdown": "...",
"comments": {
    "text": ["...", "..."],
"author": ["...", "..."],
"comment_id": [11236, 11237],
"score": [0, 0],
"timestamp": [1303205411, 1303205678]
},
"answers": {
"answer_id": [11243, 11245],
"timestamp": [1303207791, 1303207792],
"is_accepted": [1, 0],
"text_html": ["...", "..."],
"text_markdown": ["...", "..."],
"score": [3, 0],
"author": ["...", "..."],
"comments": {
"text": ["...", "..."],
"author": ["...", "..."],
"comment_id": [11246, 11249],
"score": [0, 0],
"timestamp": [1303207961, 1303207800]
}
}
}
```
You can use this little helper to unflatten sequences:
```python
def revert_flattening(records):
fixed_records = []
for key, values in records.items():
if not fixed_records:
fixed_records = [{} for _ in range(len(values))]
for i, value in enumerate(values):
fixed_records[i][key] = value
return fixed_records
```
The original JSONL is already unflattened.
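For example, applied to a flattened `comments` mapping shaped like the instance above (values taken from the example), the helper yields one dict per comment:

```python
def revert_flattening(records):
    # Same helper as above: {"k": [v1, v2], ...} -> [{"k": v1, ...}, {"k": v2, ...}]
    fixed_records = []
    for key, values in records.items():
        if not fixed_records:
            fixed_records = [{} for _ in range(len(values))]
        for i, value in enumerate(values):
            fixed_records[i][key] = value
    return fixed_records

comments = {
    "comment_id": [11236, 11237],
    "score": [0, 0],
    "timestamp": [1303205411, 1303205678],
}
print(revert_flattening(comments))
# [{'comment_id': 11236, 'score': 0, 'timestamp': 1303205411},
#  {'comment_id': 11237, 'score': 0, 'timestamp': 1303205678}]
```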
## Source Data
* The data source is the [Russian StackOverflow](https://ru.stackoverflow.com/) website.
* Original XMLs: [ru.stackoverflow.com.7z](https://ia600107.us.archive.org/27/items/stackexchange/ru.stackoverflow.com.7z).
* Processing script is [here](https://github.com/IlyaGusev/rulm/blob/hf/data_processing/create_stackoverflow.py).
## Personal and Sensitive Information
The dataset is not anonymized, so individuals' names can be found in the dataset. Information about the original authors is included in the dataset where possible.
## Licensing Information
According to the license of original data, this dataset is distributed under [CC BY-SA 2.5](https://creativecommons.org/licenses/by-sa/2.5/). |
GIZ/policy_qa_v0 | 2023-05-31T08:59:44.000Z | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"climate",
"region:us"
] | GIZ | null | null | null | 2 | 21 | ---
license: apache-2.0
task_categories:
- question-answering
language:
- en
size_categories:
- 10K<n<100K
tags:
- climate
---
This dataset is curated by the [GIZ Data Service Center](https://www.giz.de/expertise/html/63018.html) in the SQuAD format, with the features 'question', 'answers', 'answers_start' and 'context'. The source data for this
comes from [Climatewatchdata](https://www.climatewatchdata.org/data-explorer/historical-emissions?historical-emissions-data-sources=climate-watch&historical-emissions-gases=all-ghg&historical-emissions-regions=All%20Selected&historical-emissions-sectors=total-including-lucf%2Ctotal-including-lucf&page=1),
where Climate Watch has analysed the Intended Nationally Determined Contributions (INDCs), NDCs and Revised/Updated NDCs of countries to answer some important questions related to climate change.
Specifications
- Dataset size: 31382
- Average context length: 50 words
- Language: English
The list of sectors covered includes: 'Agriculture', 'Coastal Zone', 'Cross-Cutting Area', 'Education', 'Energy', 'Environment', 'Water', 'Buildings', 'Economy-wide', 'Industries', 'Transport', 'Waste', 'Health', 'LULUCF/Forestry', 'Social Development', 'Disaster Risk Management (DRM)', 'Urban', 'Tourism'.
Some of the important question categories pertaining to climate change (adapted from Climate Watch) include:
- Sectoral Policies
- Sectoral Unconditional Actions
- Building on existing downstream actions
- Sectoral plans
- Sectoral targets
- Action and priority
- Adapt Now sector
- Emission reduction potential
- Capacity Building Needs for Sectoral Implementation
- Sectoral Conditional Actions
- Technology Transfer Needs for Sectoral Implementation
- Conditional part of mitigation target
- Capacity building needs
- Technology needs
- Unconditional part of mitigation target
- Time frame
A 'no answer' category, as in SQuAD 2.0, is not part of the dataset, but one can easily be curated from existing examples.
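To sketch how the stated fields fit together, here is a hypothetical record (only the field names come from the card; the text and offset are invented) showing that `answers_start` indexes the answer span in `context`:

```python
# Hypothetical SQuAD-style record; field names from the card, content invented
record = {
    "question": "What is the emission reduction target?",
    "context": "The NDC commits to a 30% emission reduction by 2030.",
    "answers": "30% emission reduction",
    "answers_start": 21,
}

start = record["answers_start"]
span = record["context"][start:start + len(record["answers"])]
print(span)  # 30% emission reduction
```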
VLyb/UMLS | 2023-02-16T09:13:21.000Z | [
"license:unlicense",
"region:us"
] | VLyb | null | null | null | 1 | 21 | ---
license: unlicense
---
|
sayakpaul/instructpix2pix-demo | 2023-02-22T04:38:14.000Z | [
"arxiv:2211.09800",
"region:us"
] | sayakpaul | null | null | null | 0 | 21 | ---
dataset_info:
features:
- name: input
dtype: string
- name: edit
dtype: string
- name: output
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 2456199.0
num_examples: 5
download_size: 2460397
dataset_size: 2456199.0
---
# Dataset Card for "instructpix2pix-demo"
Dataset was created using [this notebook](https://colab.research.google.com/gist/sayakpaul/f90aa06f8f89c831f798dd5b3939818b/scratchpad.ipynb).
Paper reference: [InstructPix2Pix: Learning to Follow Image Editing Instructions](https://arxiv.org/abs/2211.09800) |
vietgpt/ted_talks_iwslt_vi | 2023-04-03T01:15:01.000Z | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:vi",
"LM",
"region:us"
] | vietgpt | null | null | null | 0 | 21 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 23236337
num_examples: 1566
download_size: 11586233
dataset_size: 23236337
task_categories:
- text-generation
language:
- vi
tags:
- LM
size_categories:
- 1K<n<10K
---
# Ted Talks
- Source: https://huggingface.co/datasets/ted_talks_iwslt
- Num examples: 1,566
- Language: Vietnamese
```python
from datasets import load_dataset
load_dataset("vietgpt/ted_talks_iwslt_vi")
``` |
wwydmanski/UNSW-NB15 | 2023-02-26T11:14:46.000Z | [
"task_categories:tabular-classification",
"size_categories:1M<n<10M",
"tabular",
"network",
"region:us"
] | wwydmanski | null | null | null | 1 | 21 | ---
task_categories:
- tabular-classification
tags:
- tabular
- network
size_categories:
- 1M<n<10M
---
## Source
https://www.kaggle.com/datasets/dhoogla/unswnb15?resource=download
## Dataset
This is an academic intrusion detection dataset. All the credit goes to the original authors: Dr. Nour Moustafa and Dr. Jill Slay.
Please cite their original paper and all other appropriate articles listed on the UNSW-NB15 page.
The full dataset also offers the pcap, BRO and Argus files along with additional documentation.
The modifications to the predesignated train-test sets are minimal and designed to decrease disk storage and increase performance & reliability.
Exploratory data analysis (EDA) shows that classification with very simple models reaches an AUROC of 0.877.
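For reference, the AUROC figure quoted above can be computed from labels and scores with a small rank-based helper; this sketch is generic and not tied to the dataset:

```python
def auroc(labels, scores):
    # AUROC = probability that a random positive is scored above a random
    # negative, with ties counted as half
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```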
turuta/Multi30k-uk | 2023-05-04T19:11:45.000Z | [
"task_categories:translation",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:uk",
"language:en",
"license:unknown",
"common",
"multi30k",
"ukrainian",
"region:us"
] | turuta | Ukrainian Multi30k | \ | null | 3 | 21 | ---
license: unknown
task_categories:
- translation
- text-generation
language:
- uk
- en
pretty_name: ukr-multi30k
size_categories:
- 10K<n<100K
tags:
- common
- multi30k
- ukrainian
---
## Dataset Multi30k: English-Ukrainian variation
The Multi30K dataset is designed to support multilingual multimodal research.
The original dataset extends the Flickr30K dataset by adding German translations. The descriptions were collected from a crowdsourcing platform, while the translations were produced by professionally contracted translators.
We present a variation of this dataset manually translated into Ukrainian.
Paper:
```bibtex
@inproceedings{saichyshyna-etal-2023-extension,
title = "Extension {M}ulti30{K}: Multimodal Dataset for Integrated Vision and Language Research in {U}krainian",
author = "Saichyshyna, Nataliia and
Maksymenko, Daniil and
Turuta, Oleksii and
Yerokhin, Andriy and
Babii, Andrii and
Turuta, Olena",
booktitle = "Proceedings of the Second Ukrainian Natural Language Processing Workshop (UNLP)",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.unlp-1.7",
pages = "54--61",
abstract = "We share the results of the project within the well-known Multi30k dataset dedicated to improving machine translation of text from English into Ukrainian. The main task was to manually prepare the dataset and improve the translation of texts. The importance of collecting such datasets for low-resource languages for improving the quality of machine translation has been discussed. We also studied the features of translations of words and sentences with ambiguous meanings.The collection of multimodal datasets is essential for natural language processing tasks because it allows the development of more complex and comprehensive machine learning models that can understand and analyze different types of data. These models can learn from a variety of data types, including images, text, and audio, for more accurate and meaningful results.",
}
``` |
Babypotatotang/logo-captioning-BLIP-BrandInfoWBP | 2023-04-04T06:23:31.000Z | [
"region:us"
] | Babypotatotang | null | null | null | 1 | 21 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 321581037.08
num_examples: 24080
- name: test
num_bytes: 82453208.54
num_examples: 6021
download_size: 265975818
dataset_size: 404034245.62
---
# Dataset Card for "logo-captioning-BLIP-BrandInfoWBP"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AnanthZeke/tamil_sentences_sample | 2023-04-05T17:35:25.000Z | [
"region:us"
] | AnanthZeke | null | null | null | 0 | 21 | ---
dataset_info:
features:
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1164550978
num_examples: 2391475
download_size: 347960778
dataset_size: 1164550978
---
# Dataset Card for "tamil_combined_sentences"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
voidful/NMSQA-CODE | 2023-07-24T18:30:24.000Z | [
"language:en",
"region:us"
] | voidful | null | null | null | 3 | 21 | ---
language: en
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: audio_full_answer_end
sequence: float64
- name: audio_full_answer_start
sequence: float64
- name: audio_segment_answer_end
sequence: float64
- name: audio_segment_answer_start
sequence: float64
- name: text
sequence: string
- name: content_segment_audio_path
dtype: string
- name: content_full_audio_path
dtype: string
- name: content_audio_sampling_rate
dtype: float64
- name: content_audio_speaker
dtype: string
- name: content_segment_text
dtype: string
- name: content_segment_normalized_text
dtype: string
- name: question_audio_path
dtype: string
- name: question_audio_sampling_rate
dtype: float64
- name: question_audio_speaker
dtype: string
- name: question_normalized_text
dtype: string
- name: hubert_100_context_unit
dtype: string
- name: hubert_100_question_unit
dtype: string
- name: hubert_100_answer_unit
dtype: string
- name: mhubert_1000_context_unit
dtype: string
- name: mhubert_1000_question_unit
dtype: string
- name: mhubert_1000_answer_unit
dtype: string
splits:
- name: train
num_bytes: 3329037982
num_examples: 87599
- name: test
num_bytes: 1079782
num_examples: 171
- name: dev
num_bytes: 411186265
num_examples: 10570
download_size: 507994561
dataset_size: 3741304029
---
# Dataset Card for "NMSQA-CODE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
437aewuh/dog-dataset | 2023-04-18T13:18:25.000Z | [
"task_categories:audio-to-audio",
"task_categories:audio-classification",
"size_categories:n<1K",
"license:other",
"biology",
"region:us"
] | 437aewuh | null | null | null | 0 | 21 | ---
license: other
task_categories:
- audio-to-audio
- audio-classification
tags:
- biology
size_categories:
- n<1K
---
This dataset is a redistribution of the following dataset.
https://github.com/suzuki256/dog-dataset
```
The dataset and its contents are made available on an "as is" basis and without warranties of any kind, including without limitation satisfactory quality and conformity, merchantability, fitness for a particular purpose, accuracy or completeness, or absence of errors.
```
|
cestwc/SG-subzone-poi-sentiment | 2023-04-20T07:44:54.000Z | [
"region:us"
] | cestwc | null | null | null | 0 | 21 | ---
dataset_info:
features:
- name: local_created_at
dtype: string
- name: id
dtype: int64
- name: text
dtype: string
- name: source
dtype: string
- name: truncated
dtype: bool
- name: in_reply_to_status_id
dtype: float64
- name: in_reply_to_user_id
dtype: float64
- name: user_id
dtype: int64
- name: user_name
dtype: string
- name: user_screen_name
dtype: string
- name: user_location
dtype: string
- name: user_url
dtype: string
- name: user_verified
dtype: bool
- name: user_default_profile
dtype: bool
- name: user_description
dtype: string
- name: user_followers_count
dtype: int64
- name: user_friends_count
dtype: int64
- name: user_listed_count
dtype: int64
- name: user_favourites_count
dtype: int64
- name: user_statuses_count
dtype: int64
- name: local_user_created_at
dtype: string
- name: place_id
dtype: string
- name: place_url
dtype: string
- name: place_place_type
dtype: string
- name: place_name
dtype: string
- name: place_country_code
dtype: string
- name: place_bounding_box_type
dtype: string
- name: place_bounding_box_coordinates
dtype: string
- name: is_quote_status
dtype: bool
- name: retweet_count
dtype: int64
- name: favorite_count
dtype: int64
- name: entities_hashtags
dtype: string
- name: entities_urls
dtype: string
- name: entities_symbols
dtype: string
- name: entities_user_mentions
dtype: string
- name: favorited
dtype: bool
- name: retweeted
dtype: bool
- name: possibly_sensitive
dtype: bool
- name: lang
dtype: string
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: year_created_at
dtype: int64
- name: month_created_at
dtype: int64
- name: day_created_at
dtype: int64
- name: weekday_created_at
dtype: int64
- name: hour_created_at
dtype: int64
- name: minute_created_at
dtype: int64
- name: year_user_created_at
dtype: int64
- name: month_user_created_at
dtype: int64
- name: day_user_created_at
dtype: int64
- name: weekday_user_created_at
dtype: int64
- name: hour_user_created_at
dtype: int64
- name: minute_user_created_at
dtype: int64
- name: subzone
dtype: string
- name: planning_area
dtype: string
- name: poi_flag
dtype: float64
- name: poi_id
dtype: string
- name: poi_dist
dtype: float64
- name: poi_latitude
dtype: float64
- name: poi_longitude
dtype: float64
- name: poi_name
dtype: string
- name: poi_type
dtype: string
- name: poi_cate2
dtype: string
- name: poi_cate3
dtype: string
- name: clean_text
dtype: string
- name: joy_score
dtype: float64
- name: trust_score
dtype: float64
- name: positive_score
dtype: float64
- name: sadness_score
dtype: float64
- name: disgust_score
dtype: float64
- name: anger_score
dtype: float64
- name: anticipation_score
dtype: float64
- name: negative_score
dtype: float64
- name: fear_score
dtype: float64
- name: surprise_score
dtype: float64
- name: words
dtype: string
- name: polarity_score
dtype: float64
- name: labels
dtype: int64
splits:
- name: '0203'
num_bytes: 1519418943
num_examples: 1025135
download_size: 415295950
dataset_size: 1519418943
---
# Dataset Card for "SG-subzone-poi-sentiment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
miladfa7/Brain-MRI-Images-for-Brain-Tumor-Detection | 2023-05-16T17:11:04.000Z | [
"region:us"
] | miladfa7 | null | null | null | 2 | 21 |
---
task_categories:
- image-classification
- image-segmentation
tags:
- brain
- MRI
- brain-MRI-images
- Tumor
---
Brain Tumor Detection | Vision Transformer 99%
Click -> [Kaggle](https://www.kaggle.com/code/miladfa7/brain-tumor-detection-vision-transformer-99) |
brainer/KoreanApartmentDealData | 2023-07-09T11:57:06.000Z | [
"task_categories:tabular-classification",
"task_categories:tabular-regression",
"license:other",
"korea",
"apartment",
"region:us"
] | brainer | null | null | null | 0 | 21 | ---
license: other
task_categories:
- tabular-classification
- tabular-regression
tags:
- korea
- apartment
pretty_name: Korean Apartment Deal Data
--- |
xbgoose/ravdess | 2023-05-21T22:35:11.000Z | [
"region:us"
] | xbgoose | null | null | null | 0 | 21 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: modality
dtype: string
- name: vocal_channel
dtype: string
- name: emotion
dtype: string
- name: emotional_intensity
dtype: string
- name: statement
dtype: string
- name: repetition
dtype: string
- name: actor
dtype: int64
- name: gender
dtype: string
splits:
- name: train
num_bytes: 595474115.04
num_examples: 1440
download_size: 324920159
dataset_size: 595474115.04
---
# Dataset Card for "ravdess"
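The fields above follow the public RAVDESS filename convention, in which seven dash-separated two-digit codes encode modality, vocal channel, emotion, intensity, statement, repetition, and actor. A hedged decoding sketch, based on the upstream RAVDESS documentation rather than anything verified against this mirror:

```python
# Emotion codes from the public RAVDESS naming convention (assumption:
# this mirror preserves the upstream filenames).
EMOTIONS = {"01": "neutral", "02": "calm", "03": "happy", "04": "sad",
            "05": "angry", "06": "fearful", "07": "disgust", "08": "surprised"}

def parse_ravdess(filename):
    # Seven dash-separated two-digit codes; odd actor ids are male, even are female.
    parts = filename.split(".")[0].split("-")
    modality, channel, emotion, intensity, statement, repetition, actor = parts
    return {
        "emotion": EMOTIONS[emotion],
        "actor": int(actor),
        "gender": "female" if int(actor) % 2 == 0 else "male",
    }

print(parse_ravdess("03-01-06-01-02-01-12.wav"))
```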
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hlydecker/face-masks | 2023-05-31T03:02:14.000Z | [
"task_categories:object-detection",
"task_categories:image-classification",
"license:mit",
"medical",
"region:us"
] | hlydecker | null | null | null | 1 | 21 | ---
license: mit
task_categories:
- object-detection
- image-classification
tags:
- medical
---
The Face Masks ensemble dataset is no longer limited to [Kaggle](https://www.kaggle.com/datasets/henrylydecker/face-masks); it is now available on Hugging Face!
This dataset was created to help train and/or fine-tune models for detecting masked and unmasked faces.
I created a new face masks object detection dataset by compositing together three publicly available face masks object detection datasets on Kaggle that used the YOLO annotation format.
To combine the datasets, I used Roboflow.
All three original datasets had different class dictionaries, so I recoded the classes into two classes: "Mask" and "No Mask".
One dataset included a class for incorrectly worn face masks; images with this class were removed from the dataset.
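A hedged sketch of the recoding step: the original class names below are hypothetical stand-ins (the three source dictionaries differed), but the logic mirrors the description above:

```python
# Hypothetical source class names -> unified two-class dictionary.
# None marks the class whose images were dropped from the dataset.
CLASS_MAP = {
    "with_mask": "Mask",
    "without_mask": "No Mask",
    "mask_weared_incorrect": None,
}

def recode(labels):
    mapped = [CLASS_MAP.get(label) for label in labels]
    # Drop the whole image if it contains the removed class.
    return None if None in mapped else mapped

print(recode(["with_mask", "without_mask"]))  # -> ['Mask', 'No Mask']
```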
Approximately 50 images had corrupted annotations, so they were manually re-annotated in the Roboflow platform.
The final dataset includes 9,982 images, with 24,975 annotated instances.
Image resolution was on average 0.49 mp, with a median size of 750 x 600 pixels.
To improve model performance on out of sample data, I used 90 degree rotational augmentation.
This saved duplicate versions of each image for 90, 180, and 270 degree rotations.
I then split the data into 85% training, 10% validation, and 5% testing.
Images containing the removed class were discarded, leaving 16,000 images in training, 1,900 in validation, and 1,000 in testing. |
ltkw98/mapping | 2023-06-22T13:01:48.000Z | [
"region:us"
] | ltkw98 | null | null | null | 0 | 21 | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: tec_name
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 369062
num_examples: 2358
download_size: 165236
dataset_size: 369062
---
# Dataset Card for "mapping"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
anujsahani01/English-Marathi | 2023-06-29T23:46:13.000Z | [
"task_categories:translation",
"size_categories:1M<n<10M",
"language:en",
"language:mr",
"region:us"
] | anujsahani01 | null | null | null | 1 | 21 | ---
task_categories:
- translation
language:
- en
- mr
size_categories:
- 1M<n<10M
---
This dataset was prepared by collecting English-Marathi translations from various resources.
Happy Fine-tuning😀 |
gabeorlanski/bc-mbpp | 2023-07-21T22:03:56.000Z | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:1K<n<10K",
"source_datasets:original",
"source_datasets:extended|mbpp",
"language:en",
"license:apache-2.0",
"code",
"arxiv:2302.01973",
"arxiv:2108.07732",
"region:us"
] | gabeorlanski | The MBPP dataset in BabelCode format. | @article{orlanski2023measuring,
title={Measuring The Impact Of Programming Language Distribution},
author={Orlanski, Gabriel and Xiao, Kefan and Garcia, Xavier and Hui, Jeffrey and Howland, Joshua and Malmaud, Jonathan and Austin, Jacob and Singh, Rishah and Catasta, Michele},
journal={arXiv preprint arXiv:2302.01973},
year={2023}
}
@article{Austin2021ProgramSW,
title={Program Synthesis with Large Language Models},
author={Jacob Austin and Augustus Odena and Maxwell Nye and Maarten Bosma and Henryk Michalewski and David Dohan and Ellen Jiang and Carrie J. Cai and Michael Terry and Quoc V. Le and Charles Sutton},
journal={ArXiv},
year={2021},
volume={abs/2108.07732}
} | null | 0 | 21 | ---
license: apache-2.0
task_categories:
- text-generation
- text2text-generation
language:
- en
tags:
- code
pretty_name: BabelCode MBPP
size_categories:
- 1K<n<10K
source_datasets:
- original
- extended|mbpp
---
# Dataset Card for BabelCode MBPP
## Dataset Description
- **Repository:** [GitHub Repository](https://github.com/google-research/babelcode)
- **Paper:** [Measuring The Impact Of Programming Language Distribution](https://arxiv.org/abs/2302.01973)
### How To Use This Dataset
To use this dataset, you can either use the original [BabelCode Repo](https://github.com/google-research/babelcode), or you can use the [`bc_eval` Metric](https://huggingface.co/spaces/gabeorlanski/bc_eval).
### Dataset Summary
The BabelCode-MBPP (BC-MBPP) dataset converts the [MBPP dataset released by Google](https://arxiv.org/abs/2108.07732) to 16 programming languages.
### Supported Tasks and Leaderboards
### Languages
BC-MBPP supports:
* C++
* C#
* Dart
* Go
* Haskell
* Java
* Javascript
* Julia
* Kotlin
* Lua
* PHP
* Python
* R
* Rust
* Scala
* TypeScript
## Dataset Structure
```python
>>> from datasets import load_dataset
>>> load_dataset("gabeorlanski/bc-mbpp")
DatasetDict({
train: Dataset({
features: ['qid', 'title', 'language', 'text', 'signature_with_docstring', 'signature', 'arguments', 'solution', 'question_info'],
num_rows: 5308
})
test: Dataset({
features: ['qid', 'title', 'language', 'text', 'signature_with_docstring', 'signature', 'arguments', 'solution', 'question_info'],
num_rows: 6989
})
validation: Dataset({
features: ['qid', 'title', 'language', 'text', 'signature_with_docstring', 'signature', 'arguments', 'solution', 'question_info'],
num_rows: 1216
})
prompt: Dataset({
features: ['qid', 'title', 'language', 'text', 'signature_with_docstring', 'signature', 'arguments', 'solution', 'question_info'],
num_rows: 160
})
})
```
### Data Fields
- `qid`: The question ID used for running tests.
- `title`: The title of the question.
- `language`: The programming language of the example.
- `text`: The description of the problem.
- `signature`: The signature for the problem.
- `signature_with_docstring`: The signature with the adequately formatted docstring for the given problem.
- `arguments`: The arguments of the problem.
- `solution`: The solution in Python.
- `question_info`: The dict of information used for executing predictions. It has the keys:
- `test_code`: The raw testing script used in the language. If you want to use this, replace `PLACEHOLDER_FN_NAME` (and `PLACEHOLDER_CLS_NAME` if needed) with the corresponding entry points. Next, replace `PLACEHOLDER_CODE_BODY` with the postprocessed prediction.
- `test_list`: The raw json line of the list of tests for the problem. To load them, use `json.loads`
- `test_case_ids`: The list of test case ids for the problem. These are used to determine if a prediction passes or not.
- `entry_fn_name`: The function name to use as the entry point.
- `entry_cls_name`: The class name to use as the entry point.
- `commands`: The commands used to execute the prediction. Includes a `__FILENAME__` hole that is replaced with the filename.
- `timeouts`: The default timeouts for each command.
- `extension`: The extension for the prediction file.
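Putting the `question_info` keys together, a minimal sketch of turning a prediction into a runnable test script (the `question_info` values below are illustrative stand-ins, not a real dataset row):

```python
import json

# Illustrative stand-in for ds[i]["question_info"] (not a real row).
question_info = {
    "test_code": "PLACEHOLDER_CODE_BODY\nassert PLACEHOLDER_FN_NAME(1, 2) == 3",
    "entry_fn_name": "add",
    "test_list": "[[1, 2, 3]]",
}
prediction = "def add(a, b):\n    return a + b"

# Substitute the entry point, then splice in the post-processed prediction.
script = (question_info["test_code"]
          .replace("PLACEHOLDER_FN_NAME", question_info["entry_fn_name"])
          .replace("PLACEHOLDER_CODE_BODY", prediction))
tests = json.loads(question_info["test_list"])  # raw JSON line -> Python list

exec(script)  # runs without raising if the prediction passes
```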
**NOTE:** If you want to use a different function name (or class name for languages that require class names) for the prediction, you must update the `entry_fn_name` and `entry_cls_name` accordingly. For example, if you have the original question with `entry_fn_name` of `add`, but want to change it to `f`, you must update `ds["question_info"]["entry_fn_name"]` to `f`:
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("gabeorlanski/bc-mbpp")['test']
>>> # The original entry_fn_name
>>> ds[0]['question_info']['entry_fn_name']
removeOcc
>>> # You MUST update the corresponding entry_fn_name
>>> ds[0]['question_info']['entry_fn_name'] = 'f'
>>> ds[0]['question_info']['entry_fn_name']
f
```
## Dataset Creation
See section 2 of the [BabelCode Paper](https://arxiv.org/abs/2302.01973) to learn more about how the datasets are translated.
Information on how the original MBPP was curated is located [here](https://huggingface.co/datasets/mbpp).
### Dataset Curators
Google Research
### Licensing Information
CC-BY-4.0
### Citation Information
```
@article{orlanski2023measuring,
title={Measuring The Impact Of Programming Language Distribution},
author={Orlanski, Gabriel and Xiao, Kefan and Garcia, Xavier and Hui, Jeffrey and Howland, Joshua and Malmaud, Jonathan and Austin, Jacob and Singh, Rishah and Catasta, Michele},
journal={arXiv preprint arXiv:2302.01973},
year={2023}
}
@article{Austin2021ProgramSW,
title={Program Synthesis with Large Language Models},
author={Jacob Austin and Augustus Odena and Maxwell Nye and Maarten Bosma and Henryk Michalewski and David Dohan and Ellen Jiang and Carrie J. Cai and Michael Terry and Quoc V. Le and Charles Sutton},
journal={ArXiv},
year={2021},
volume={abs/2108.07732}
}
``` |
TREC-AToMiC/TREC-2023-Text-to-Image | 2023-06-29T21:16:33.000Z | [
"region:us"
] | TREC-AToMiC | null | null | null | 1 | 21 | ---
dataset_info:
features:
- name: text_id
dtype: string
- name: page_url
dtype: string
- name: page_title
dtype: string
- name: section_title
dtype: string
- name: context_page_description
dtype: string
- name: context_section_description
dtype: string
- name: media
sequence: string
- name: hierachy
sequence: string
- name: category
sequence: string
- name: source_id
dtype: string
splits:
- name: train
num_bytes: 402439.0669364712
num_examples: 200
download_size: 506239
dataset_size: 402439.0669364712
---
# Dataset Card for "TREC-2023-Text-to-Image"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
santoshtyss/indian_courts_cases | 2023-07-03T10:13:03.000Z | [
"region:us"
] | santoshtyss | null | null | null | 2 | 21 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 552831260
num_examples: 28816
- name: validation
num_bytes: 55504767
num_examples: 3000
download_size: 286689063
dataset_size: 608336027
---
# Dataset Card for "indian_courts_cases"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
JayalekshmiGopakumar/doclaynet_classlabel | 2023-07-12T05:33:05.000Z | [
"region:us"
] | JayalekshmiGopakumar | null | null | null | 0 | 21 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': financial_reports
'1': government_tenders
'2': laws_and_regulations
'3': manuals
'4': patents
'5': scientific_articles
splits:
- name: train
num_bytes: 1798548
num_examples: 691
- name: validation
num_bytes: 166488
num_examples: 64
- name: test
num_bytes: 124710
num_examples: 49
download_size: 1173005
dataset_size: 2089746
---
# Dataset Card for "doclaynet_classlabel"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
alexshengzhili/SciCapInstructed-graph-only-qa | 2023-07-16T02:10:33.000Z | [
"license:mit",
"region:us"
] | alexshengzhili | null | null | null | 0 | 21 | ---
license: mit
dataset_info:
features:
- name: image_file
dtype: string
- name: id
dtype: string
- name: caption
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: first_mention
dtype: string
- name: response
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: q_a_pairs
sequence:
sequence: string
splits:
- name: 1_percent_as_validation
num_bytes: 16096860.454545455
num_examples: 3002
download_size: 7889034
dataset_size: 16096860.454545455
---
|
branles14/ultrachat-uncensored_full | 2023-07-20T03:39:25.000Z | [
"license:cc-by-nc-4.0",
"region:us"
] | branles14 | null | null | null | 1 | 21 | ---
license: cc-by-nc-4.0
---
# Ultrachat-Uncensored
Ultrachat-Uncensored is a variant of the original Ultrachat dataset, available at [Ultrachat](https://huggingface.co/datasets/stingning/ultrachat), in which any examples whose bot messages match the specified terms are removed. These terms can be found in [filters.txt](https://huggingface.co/datasets/branles14/ultrachat-uncensored/blob/main/filters.txt).
This process was carried out in an attempt to neutralize the bot's responses by excluding particular terms. The goal is to foster more constructive and neutral conversations with the bot.
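A minimal sketch of the filtering criterion (the actual script is not reproduced here, and `FILTERS` below is a hypothetical stand-in for the contents of `filters.txt`):

```python
# Hypothetical stand-ins for the terms in filters.txt.
FILTERS = ["as an ai language model", "i cannot"]

def keep_example(messages, filters=FILTERS):
    # Drop the example if any message contains any filtered term
    # (case-insensitive). The "full" variant applies this to both
    # human and bot messages.
    return not any(term in msg.lower() for msg in messages for term in filters)

print(keep_example(["Hello!", "Sure, here is a summary."]))          # -> True
print(keep_example(["Why not?", "As an AI language model, ..."]))    # -> False
```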
## Dataset Variants
There are two versions of this dataset available:
1. [Ultrachat-Uncensored](https://huggingface.co/datasets/branles14/ultrachat-uncensored): In this version, the filter is only applied to the bot's messages.
2. [Ultrachat-Uncensored Full](https://huggingface.co/datasets/branles14/ultrachat-uncensored_full): In this version, the filter is applied to both human and bot messages for a more thorough filtering process.
## Purpose
The idea behind removing certain terms is to create a chatbot that feels more neutral in its interactions. The intended outcome is to ensure that the bot engages in unbiased and fair dialogue, maintaining a neutral stance on controversial topics. This neutrality is expected to make conversations with the bot more enjoyable and less prone to unnecessary confrontations or misunderstandings.
Please note that while we have made an effort to filter specific terms, we recommend using the dataset responsibly, acknowledging that no filtering process can be perfect.
## Contribution
Contributions to enhance this project are welcome! Feel free to open issues or submit pull requests for improving the filter or suggesting new enhancements.
Enjoy using Ultrachat-Uncensored, and we look forward to your constructive feedback and suggestions. |
SachinKaushik/LlamaV2InstructCode | 2023-07-21T19:17:00.000Z | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:en",
"python",
"llamav2",
"instruction",
"code",
"region:us"
] | SachinKaushik | null | null | null | 3 | 21 | ---
dataset_info:
features:
- name: text
dtype: string
- name: input
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
- name: llamaV2Instruct
dtype: string
splits:
- name: train
num_bytes: 241331660
num_examples: 121959
download_size: 0
dataset_size: 241331660
task_categories:
- text-generation
- text2text-generation
language:
- en
tags:
- python
- llamav2
- instruction
- code
---
# Dataset Card for "LlamaV2InstructCode"
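The card does not document the exact template behind the `llamaV2Instruct` column; a common Llama-2 instruction format looks like the sketch below (an assumption for illustration, not the dataset's verified template):

```python
def to_llama2_prompt(instruction, inp, output):
    # Common Llama-2 [INST] wrapping; the dataset's actual template may differ.
    user = instruction if not inp else f"{instruction}\n\n{inp}"
    return f"<s>[INST] {user} [/INST] {output} </s>"

print(to_llama2_prompt("Reverse a list in Python.", "", "Use lst[::-1]."))
```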
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shirsh10mall/LLM_Instruct_Learning_Project_Preprocessed_Tokenized_Open_Orca_Dataset_Flan_T5 | 2023-08-08T11:52:51.000Z | [
"region:us"
] | shirsh10mall | null | null | null | 1 | 21 | ---
dataset_info:
features:
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: Inputs Token length
dtype: int64
- name: Response Token length
dtype: int64
splits:
- name: train
num_bytes: 1283943963.5926845
num_examples: 430318
- name: test
num_bytes: 226579926.12734038
num_examples: 75939
download_size: 588711752
dataset_size: 1510523889.7200248
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "temp_data_LLM_Project"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ILSUM/ILSUM-1.0 | 2023-07-26T13:05:11.000Z | [
"task_categories:summarization",
"size_categories:1K<n<10K",
"size_categories:10K<n<100K",
"language:hi",
"language:gu",
"language:en",
"region:us"
] | ILSUM | null | null | null | 0 | 21 | ---
task_categories:
- summarization
language:
- hi
- gu
- en
configs:
- config_name: Hindi
data_files:
- split: train
path: Hindi/train.csv
- split: test
path: Hindi/test.csv
- split: validation
path: Hindi/val.csv
default: true
- config_name: Gujarati
data_files:
- split: train
path: Gujarati/train.csv
- split: test
path: Gujarati/test.csv
- split: validation
path: Gujarati/val.csv
- config_name: English
data_files:
- split: train
path: English/train.csv
- split: test
path: English/test.csv
- split: validation
path: English/val.csv
config_names:
- English
- Hindi
- Gujarati
size_categories:
- 1K<n<10K
- 10K<n<100K
---
# Dataset Card for "ILSUM-1.0"
### Dataset Summary
Automatic text summarization for Indian languages has received surprisingly little attention from the NLP research community. While large-scale datasets exist for a number of languages like English, Chinese, French, German, and Spanish, no such datasets exist for any Indian language. Most existing datasets are either not public or too small to be useful. Through this shared task we aim to bridge the existing gap by creating reusable corpora for Indian language summarization. In the first edition we cover two major Indian languages, Hindi and Gujarati, which have over 350 million and over 50 million speakers respectively. Apart from this we also include Indian English, a widely recognized dialect that can be substantially different from English spoken elsewhere.
The dataset for this task is built using article and headline pairs from several leading newspapers of the country. We provide ~10,000 news articles for each language. The task is to generate a meaningful fixed-length summary, either extractive or abstractive, for each article. While several previous works in other languages use news article-headline pairs, the current dataset poses the unique challenge of code-mixing and script-mixing. It is very common for news articles to borrow phrases from English, even if the article itself is written in an Indian language.
Examples like these are a common occurrence, both in the headlines as well as in the articles.
~~~
- "IND vs SA, 5મી T20 તસવીરોમાં: વરસાદે વિલન બની મજા બગાડી" (India vs SA, 5th T20 in pictures: rain spoils the match)
- "LIC के IPO में पैसा लगाने वालों का टूटा दिल, आई एक और नुकसानदेह खबर" (Investors of LIC IPO left broken hearted, yet another bad news).
~~~
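As a point of reference for the fixed-length summary target, a naive extractive lead baseline is sometimes used for comparison; the sketch below is purely illustrative and not part of the shared task:

```python
def lead_k_words(article, k=75):
    # Trivial extractive baseline: take the first k words of the article.
    words = article.split()
    return " ".join(words[:k])

print(lead_k_words("The government announced a new policy today. Officials said more details will follow.", k=5))
# -> 'The government announced a new'
```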
### Languages
- Hindi
- Gujarati
- English
### Data Fields
~~~
- id: Unique id of each datapoint
- Article: Entire News article
- Headline: Headline of News Article
- Summary: Summary of News Article
~~~
### Data Splits
Data for all three languages is divided into three splits: train, validation, and test.
### Load dataset using hf-dataset class
```python
from datasets import load_dataset
dataset = load_dataset("ILSUM/ILSUM-1.0", "Hindi")
# you can use any of the following config names as a second argument:
# "English", "Hindi", "Gujarati"
```
### Citation Information
If you are using the dataset or the models please cite the following paper
~~~
@article{satapara2022findings,
title={Findings of the first shared task on indian language summarization (ilsum): Approaches, challenges and the path ahead},
author={Satapara, Shrey and Modha, Bhavan and Modha, Sandip and Mehta, Parth},
journal={Working Notes of FIRE},
pages={9--13},
year={2022}
}
~~~
### Contributions
- Bhavan Modha, University Of Texas at Dallas, USA
- Shrey Satapara, Indian Institute Of Technology, Hyderabad, India
- Sandip Modha, LDRP-ITR, Gandhinagar, India
- Parth Mehta, Parmonic, USA
|
seungheondoh/LP-MusicCaps-MSD | 2023-08-01T04:06:49.000Z | [
"size_categories:100K<n<1M",
"language:en",
"art",
"music",
"text-to-music",
"music-to-text",
"arxiv:2307.16372",
"region:us"
] | seungheondoh | null | null | null | 6 | 21 | ---
language:
- en
tags:
- art
- music
- text-to-music
- music-to-text
pretty_name: LP-MusicCaps-MSD
size_categories:
- 100K<n<1M
---
======================================
**!important**: Be careful when using `caption_attribute_prediction` (we do not recommend using it)!
======================================
# Dataset Card for LP-MusicCaps-MSD
## Dataset Description
- **Repository:** [LP-MusicCaps repository](https://github.com/seungheondoh/lp-music-caps)
- **Paper:** [ArXiv](https://arxiv.org/abs/2307.16372)
## Dataset Summary
**LP-MusicCaps** is a Large Language Model based Pseudo Music Caption dataset for `text-to-music` and `music-to-text` tasks. We construct the music-to-caption pairs with tag-to-caption generation (using three existing multi-label tag datasets and four task instructions). The data sources are MusicCaps, Magnatagtune, and Million Song Dataset ECALS subset.
- **LP-MusicCaps MSD (This Repo)**: 0.5M Audio with 2.2M Caption. We utilize 1054 unique tags in the [MSD-ECALS](https://github.com/SeungHeonDoh/msd-subsets) to perform tag-to-caption generation through LLM.
- [LP-MusicCaps MTT](https://huggingface.co/datasets/seungheondoh/LP-MusicCaps-MTT): 22k Audio with 88k Caption
- [LP-MusicCaps MC](https://huggingface.co/datasets/seungheondoh/LP-MusicCaps-MC): 6k Audio with 22k Caption.
## Data Instances
Each instance in LP-MusicCaps MSD (this repo) represents an audio track paired with multiple captions and meta-attributes:
```
{
'track_id': 'TRIHXPZ128F1466744',
'title': 'In The Sunshine',
'artist_name': 'ARRESTED DEVELOPMENT',
'release': 'Zingalamaduni',
'year': 1994,
'tag': ['laid back mellow',
'hip hop',
'rnb',
'amiable good natured',
'rap',
'urban',
'gentle',
'political rap',
'soul',
'calm peaceful',
'summery',
'cheerful',
'alternative rap'
],
'caption_writing': 'An amiable and laid back alternative rap tune, this summery and cheerful song blends elements of soul and R&B with a gentle, mellow rap flow to create a calm and peaceful urban vibe that is both hip hop and political in its message.',
'caption_summary': 'This summery, alternative rap song is a mellow and gentle blend of hip hop, RnB, and political rap with a cheerful and amiable good natured vibe.',
'caption_paraphrase': 'This laid back mellow rap song infuses soulful and urban elements while showcasing a gentle and amiable good natured vibe, perfect for a summery day. With hints of cheerful R&B and hip hop, the alternative political rap lyrics bring balance to this peaceful and calming tune.',
'caption_attribute_prediction': 'This mellow, soulful tune is a perfect blend of rap and RnB, with a gentle beat and smooth flow that will transport you to the laid-back urban vibes of a sunny summertime day. The amiable good-natured lyrics touch on political themes, while the alternative rap style adds a cheerful, upbeat twist to the message. Overall, this is a hip-hop gem thats sure to put you in a peaceful, calm state of mind.',
'path': '3/0/303545.clip.mp3'
}
```
## Pseudo Caption Example:
Input Tags:
*"video game theme, no singer, instrumental, analog sounding, small keyboard, beatboxing, playful, cheerful, groovy"*
Output Pseudo Caption:
*"instrumental track has a joyful and playful vibe, perfect for a video game theme. With no singer, the analog-sounding music features a small keyboard and beatboxing, creating a groovy and cheerful atmosphere"*
[More Information for pseudo caption generation](https://github.com/seungheondoh/lp-music-caps/blob/main/lpmc/llm_captioning/generate.py)
## Data Fields
| Name | Type | Description |
|------------------------------|-----------------|----------------------------------------------------------------------|
| track_id | string | Unique identifier for the track |
| title | string | Title of the song |
| artist_name | string | Name of the artist performing the song |
| release | string | Release name or album name of the song |
| year | integer | Year of the song's release |
| tag | list of strings | List of tags associated with the song |
| caption_writing | string | Pseudo caption generated through a writing instruction |
| caption_summary | string | Pseudo caption generated through a summary instruction |
| caption_paraphrase | string | Pseudo caption generated through a paraphrase instruction |
| caption_attribute_prediction | string | Pseudo caption generated through an attribute_prediction instruction |
| path | string | File path or location of the audio clip |
## Data Splits
- train: 444865
- valid: 34481
- test: 34631
## Considerations for Using the Data
The LP-MusicCaps dataset is recommended for research purposes only. Due to a mislabeling issue, we recommend not using `caption_attribute_prediction` and `pseudo_attribute` unless it is specifically for large-scale pretraining. Additionally, the field `is_crawled` indicates the samples used in the reference paper linked above.
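Given that recommendation, a minimal sketch (plain Python, field names taken from the data-instance example above) of collecting only the recommended caption fields from an instance:

```python
# Caption fields recommended for general use; caption_attribute_prediction
# is excluded because of the labeling issue noted above.
RECOMMENDED_CAPTION_FIELDS = (
    "caption_writing",
    "caption_summary",
    "caption_paraphrase",
)

def recommended_captions(instance):
    """Return the pseudo captions considered safe for general use."""
    return [instance[f] for f in RECOMMENDED_CAPTION_FIELDS if f in instance]

sample = {
    "track_id": "TRIHXPZ128F1466744",
    "caption_writing": "An amiable and laid back alternative rap tune...",
    "caption_summary": "This summery, alternative rap song...",
    "caption_paraphrase": "This laid back mellow rap song...",
    "caption_attribute_prediction": "This mellow, soulful tune...",  # excluded
}
print(len(recommended_captions(sample)))  # 3
```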
## Discussion of Biases
It will be described in a paper to be released soon.
## Other Known Limitations
It will be described in a paper to be released soon. |
recastai/flickr30k-augmented-caption | 2023-08-16T11:04:24.000Z | [
"language:en",
"license:cc-by-4.0",
"region:us"
] | recastai | null | null | null | 0 | 21 | ---
language:
- en
license: cc-by-4.0
pretty_name: Flickr30k-augmented-captions
dataset_info:
features:
- name: prompt
dtype: string
- name: caption
dtype: string
- name: filename
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 156472618
num_examples: 154573
download_size: 74228652
dataset_size: 156472618
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ixarchakos/tops_laydown | 2023-08-22T15:06:04.000Z | [
"region:us"
] | ixarchakos | null | null | null | 0 | 21 | Entry not found |
macavaney/miracl-noauth | 2023-08-06T14:38:26.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"source_datasets:miracl/miracl",
"language:ar",
"language:bn",
"language:en",
"language:es",
"language:fa",
"language:fi",
"language:fr",
"language:hi",
... | macavaney | null | null | null | 0 | 21 | ---
annotations_creators:
- expert-generated
language:
- ar
- bn
- en
- es
- fa
- fi
- fr
- hi
- id
- ja
- ko
- ru
- sw
- te
- th
- zh
multilinguality:
- multilingual
pretty_name: MIRACL-corpus
size_categories: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
source_datasets:
- miracl/miracl
---
A clone of the excellent [`miracl/miracl` dataset](https://huggingface.co/datasets/miracl/miracl) that doesn't require authentication. Refer to the original dataset for details.
|
PL-MTEB/sicke-pl-pairclassification | 2023-08-11T10:49:18.000Z | [
"license:cc-by-nc-sa-3.0",
"region:us"
] | PL-MTEB | null | null | null | 0 | 21 | ---
license: cc-by-nc-sa-3.0
---
|
Zephyr271828/kubernete_trial | 2023-09-25T01:21:30.000Z | [
"region:us"
] | Zephyr271828 | null | null | null | 0 | 21 | Entry not found |
wesley7137/neuroalpaca_autotrain | 2023-08-20T23:13:31.000Z | [
"region:us"
] | wesley7137 | null | null | null | 0 | 21 | Entry not found |
Fsoft-AIC/the-vault-class | 2023-08-22T13:18:33.000Z | [
"task_categories:text-generation",
"multilinguality:multiprogramming languages",
"language:code",
"language:en",
"license:mit",
"arxiv:2305.06156",
"region:us"
] | Fsoft-AIC | The Vault is a multilingual code-text dataset with over 40 million pairs covering 10 popular programming languages.
It is the largest corpus containing parallel code-text data. By building upon The Stack, a massive raw code sample collection,
the Vault offers a comprehensive and clean resource for advancing research in code understanding and generation. It provides a
high-quality dataset that includes code-text pairs at multiple levels, such as class and inline-level, in addition to the function level.
The Vault can serve many purposes at multiple levels. | @article{manh2023vault,
title={The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation},
author={Manh, Dung Nguyen and Hai, Nam Le and Dau, Anh TV and Nguyen, Anh Minh and Nghiem, Khanh and Guo, Jin and Bui, Nghi DQ},
journal={arXiv preprint arXiv:2305.06156},
year={2023}
} | null | 1 | 21 | ---
language:
- code
- en
multilinguality:
- multiprogramming languages
task_categories:
- text-generation
license: mit
dataset_info:
features:
- name: identifier
dtype: string
- name: repo
dtype: string
- name: path
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
dtype: string
- name: original_docstring
dtype: string
- name: comment
dtype: string
- name: docstring_tokens
dtype: string
- name: docstring
dtype: string
- name: original_string
dtype: string
pretty_name: The Vault Function
viewer: true
---
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Statistics](#dataset-statistics)
- [Usage](#usage)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [FSoft-AI4Code/TheVault](https://github.com/FSoft-AI4Code/TheVault)
- **Paper:** [The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation](https://arxiv.org/abs/2305.06156)
- **Contact:** support.ailab@fpt.com
- **Website:** https://www.fpt-aicenter.com/ai-residency/
<p align="center">
<img src="https://raw.githubusercontent.com/FSoft-AI4Code/TheVault/main/assets/the-vault-4-logo-png.png" width="300px" alt="logo">
</p>
<div align="center">
# The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation
</div>
## Dataset Summary
The Vault dataset is a comprehensive, large-scale, multilingual parallel dataset that features high-quality code-text pairs derived from The Stack, the largest permissively-licensed source code dataset.
We provide The Vault which contains code snippets from 10 popular programming languages such as Java, JavaScript, Python, Ruby, Rust, Golang, C#, C++, C, and PHP. This dataset provides multiple code-snippet levels, metadata, and 11 docstring styles for enhanced usability and versatility.
## Supported Tasks
The Vault can be used for pretraining LLMs or for downstream code-text interaction tasks. A number of tasks related to code understanding and generation can be constructed using The Vault, such as *code summarization*, *text-to-code generation*, and *code search*.
## Languages
The natural language text (docstring) is in English.
10 programming languages are supported in The Vault: `Python`, `Java`, `JavaScript`, `PHP`, `C`, `C#`, `C++`, `Go`, `Ruby`, `Rust`
*Note: C and Go are not included in this repo because these languages do not have traditional classes.*
## Dataset Structure
### Data Instances
```
{
"hexsha": "78b961a6673ec1e12f8d95c33ef081f75561a87c",
"repo": "AIS-Bonn/sl-cutscenes",
"path": "sl_cutscenes/object_models.py",
"license": [
"MIT"
],
"language": "Python",
"identifier": "MeshLoader",
"original_docstring": "\n Class to load the meshes for the objects in a scene.\n ",
"docstring": "Class to load the meshes for the objects in a scene.",
"docstring_tokens": [
"Class",
"to",
"load",
"the",
"meshes",
"for",
"the",
"objects",
"in",
"a",
"scene",
"."
],
"code": "class MeshLoader:\n \"\"\"\n Class to load the meshes for the objects in a scene.\n \"\"\"\n\n def __init__(self):\n \"\"\"Module initializer\"\"\"\n self.base_dir = CONSTANTS.MESH_BASE_DIR\n self.text_dir = CONSTANTS.TEXT_BASE_DIR\n self.reset()\n\n def reset(self):\n self.loaded_meshes = []\n\n def get_meshes(self):\n \"\"\" \"\"\"\n extract_singular = lambda x: x[0] if len(x) == 1 else x\n return [extract_singular(item) for item in self.loaded_meshes]\n\n def load_meshes(self, obj_info: List[object_info.ObjectInfo], **kwargs):\n \"\"\"\n Loads the meshes whose information is given in parameter 'obj_info.\n Each call of this method APPENDS a list to the loaded_meshes attribute.\n :param obj_info: The object information of the meshes to be loaded.\n :param kwargs: additional mesh modifiers such as scale, specified with a leading 'mod_'\n \"\"\"\n paths = []\n for obj in obj_info:\n path = self.text_dir if obj.name.endswith(\"_floor\") or obj.name.endswith(\"_wall\") else self.base_dir\n paths.append((path / obj.mesh_fp).resolve())\n scales = [obj.scale for obj in obj_info]\n class_ids = [obj.class_id for obj in obj_info]\n mod_scales = kwargs.get(\"mod_scale\", [1.0] * len(scales))\n scales = [s * ms for (s, ms) in zip(scales, mod_scales)]\n flags = [mesh_flags(obj) for obj in obj_info]\n meshes = sl.Mesh.load_threaded(filenames=paths, flags=flags)\n\n # Setup class IDs\n for _, (mesh, scale, class_id) in enumerate(zip(meshes, scales, class_ids)):\n pt = torch.eye(4)\n pt[:3, :3] *= scale\n mesh.pretransform = pt\n mesh.class_index = class_id\n\n info_mesh_tuples = list(zip(obj_info, meshes))\n self.loaded_meshes.append(info_mesh_tuples)",
"code_tokens": [
"class",
"MeshLoader",
":",
"def",
"__init__",
"(",
"self",
")",
":",
"\"\"\"Module initializer\"\"\"",
"self",
".",
"base_dir",
"=",
"CONSTANTS",
".",
"MESH_BASE_DIR",
"self",
".",
"text_dir",
"=",
"CONSTANTS",
".",
"TEXT_BASE_DIR",
"self",
".",
"reset",
"(",
")",
"def",
"reset",
"(",
"self",
")",
":",
"self",
".",
"loaded_meshes",
"=",
"[",
"]",
"def",
"get_meshes",
"(",
"self",
")",
":",
"\"\"\" \"\"\"",
"extract_singular",
"=",
"lambda",
"x",
":",
"x",
"[",
"0",
"]",
"if",
"len",
"(",
"x",
")",
"==",
"1",
"else",
"x",
"return",
"[",
"extract_singular",
"(",
"item",
")",
"for",
"item",
"in",
"self",
".",
"loaded_meshes",
"]",
"def",
"load_meshes",
"(",
"self",
",",
"obj_info",
":",
"List",
"[",
"object_info",
".",
"ObjectInfo",
"]",
",",
"**",
"kwargs",
")",
":",
"\"\"\"\n Loads the meshes whose information is given in parameter 'obj_info.\n Each call of this method APPENDS a list to the loaded_meshes attribute.\n :param obj_info: The object information of the meshes to be loaded.\n :param kwargs: additional mesh modifiers such as scale, specified with a leading 'mod_'\n \"\"\"",
"paths",
"=",
"[",
"]",
"for",
"obj",
"in",
"obj_info",
":",
"path",
"=",
"self",
".",
"text_dir",
"if",
"obj",
".",
"name",
".",
"endswith",
"(",
"\"_floor\"",
")",
"or",
"obj",
".",
"name",
".",
"endswith",
"(",
"\"_wall\"",
")",
"else",
"self",
".",
"base_dir",
"paths",
".",
"append",
"(",
"(",
"path",
"/",
"obj",
".",
"mesh_fp",
")",
".",
"resolve",
"(",
")",
")",
"scales",
"=",
"[",
"obj",
".",
"scale",
"for",
"obj",
"in",
"obj_info",
"]",
"class_ids",
"=",
"[",
"obj",
".",
"class_id",
"for",
"obj",
"in",
"obj_info",
"]",
"mod_scales",
"=",
"kwargs",
".",
"get",
"(",
"\"mod_scale\"",
",",
"[",
"1.0",
"]",
"*",
"len",
"(",
"scales",
")",
")",
"scales",
"=",
"[",
"s",
"*",
"ms",
"for",
"(",
"s",
",",
"ms",
")",
"in",
"zip",
"(",
"scales",
",",
"mod_scales",
")",
"]",
"flags",
"=",
"[",
"mesh_flags",
"(",
"obj",
")",
"for",
"obj",
"in",
"obj_info",
"]",
"meshes",
"=",
"sl",
".",
"Mesh",
".",
"load_threaded",
"(",
"filenames",
"=",
"paths",
",",
"flags",
"=",
"flags",
")",
"for",
"_",
",",
"(",
"mesh",
",",
"scale",
",",
"class_id",
")",
"in",
"enumerate",
"(",
"zip",
"(",
"meshes",
",",
"scales",
",",
"class_ids",
")",
")",
":",
"pt",
"=",
"torch",
".",
"eye",
"(",
"4",
")",
"pt",
"[",
":",
"3",
",",
":",
"3",
"]",
"*=",
"scale",
"mesh",
".",
"pretransform",
"=",
"pt",
"mesh",
".",
"class_index",
"=",
"class_id",
"info_mesh_tuples",
"=",
"list",
"(",
"zip",
"(",
"obj_info",
",",
"meshes",
")",
")",
"self",
".",
"loaded_meshes",
".",
"append",
"(",
"info_mesh_tuples",
")"
],
"short_docstring": "Class to load the meshes for the objects in a scene.",
"short_docstring_tokens": [
"Class",
"to",
"load",
"the",
"meshes",
"for",
"the",
"objects",
"in",
"a",
"scene",
"."
],
"comment": [
"\"\"\"\n Class to load the meshes for the objects in a scene.\n \"\"\"",
"\"\"\"Module initializer\"\"\"",
"\"\"\" \"\"\"",
"\"\"\"\n Loads the meshes whose information is given in parameter 'obj_info.\n Each call of this method APPENDS a list to the loaded_meshes attribute.\n :param obj_info: The object information of the meshes to be loaded.\n :param kwargs: additional mesh modifiers such as scale, specified with a leading 'mod_'\n \"\"\"",
"# Setup class IDs"
],
"parameters": [],
"docstring_params": {
"returns": [],
"raises": [],
"params": [],
"outlier_params": [],
"others": []
}
}
```
### Data Fields
Data fields for class level:
- **hexsha** (string): the unique git hash of file
- **repo** (string): the owner/repo
- **path** (string): the full path to the original file
- **license** (list): licenses in the repo
- **language** (string): the programming language
- **identifier** (string): the function or method name
- **original_string** (string): original version of function/class node
- **original_docstring** (string): the raw string before tokenization or parsing
- **code** (string): the part of the original that is code
- **code_tokens** (list): tokenized version of `code`
- **short_docstring** (string): short, brief summarization (first line of the docstring)
- **short_docstring_tokens** (list): tokenized version of `short_docstring`
- **docstring** (string): the top-level comment or docstring (the docstring without parameter docs, return, exception fields, etc.)
- **docstring_tokens** (list): tokenized version of `docstring`
- **comment** (list): list of comments (line) inside the function/class
- **parameters** (list): List of parameters and its type (type can be None)
- **docstring_params** (dict): Dictionary of the parsed information from docstring
See [here](https://github.com/FSoft-AI4Code/TheVault/blob/main/data/README.md) for more details and examples.
### Data Splits
In this repo, the class-level data is not split; it is contained in a single train set.
## Dataset Statistics
|Language | Number of samples |
|:-----------|------------------------:|
|Python | 353,859 |
|Java | 4,069,174 |
|JavaScript | 236,525 |
|PHP | 969,667 |
|C# | 1,138,603 |
|C++ | 150,530 |
|Ruby | 62,464 |
|Rust | 301,893 |
|C | - |
|Go | - |
|TOTAL | **7,282,715** |
## Usage
You can load The Vault dataset using the `datasets` library (`pip install datasets`):
```python
from datasets import load_dataset

# Load the full class-level dataset
dataset = load_dataset("Fsoft-AIC/the-vault-class")

# Load a specific language subset (e.g. Python)
dataset = load_dataset("Fsoft-AIC/the-vault-class", languages=['Python'])

# Stream the dataset
data = load_dataset("Fsoft-AIC/the-vault-class", streaming=True)
for sample in iter(data['train']):
    print(sample)
```
A backup of the dataset can be downloaded from Azure Blob Storage. See [Download The Vault from Azure blob storage](https://github.com/FSoft-AI4Code/TheVault#download-via-link).
## Additional Information
### Licensing Information
MIT License
### Citation Information
```
@article{manh2023vault,
title={The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation},
author={Manh, Dung Nguyen and Hai, Nam Le and Dau, Anh TV and Nguyen, Anh Minh and Nghiem, Khanh and Guo, Jin and Bui, Nghi DQ},
journal={arXiv preprint arXiv:2305.06156},
year={2023}
}
```
### Contributions
This dataset is developed by [FSOFT AI4Code team](https://github.com/FSoft-AI4Code). |
vincenttttt/CtoD_CS_ForFineTune | 2023-08-23T12:56:27.000Z | [
"region:us"
] | vincenttttt | null | null | null | 0 | 21 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 10897
num_examples: 27
download_size: 6403
dataset_size: 10897
---
# Dataset Card for "CtoD_CS_ForFineTune"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mtc/abstractive_filtered_20min_data | 2023-08-23T15:00:29.000Z | [
"region:us"
] | mtc | null | null | null | 0 | 21 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: int64
- name: titleHeader
dtype: string
- name: title
dtype: string
- name: lead
dtype: string
- name: article
dtype: string
- name: summary
dtype: string
splits:
- name: test
num_bytes: 7932779
num_examples: 2690
- name: train
num_bytes: 55523234
num_examples: 19153
- name: validation
num_bytes: 6775108
num_examples: 2318
download_size: 4414027
dataset_size: 70231121
---
# Dataset Card for "abstractive_filtered_20min_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
JasiekKaczmarczyk/giant-midi-quantized | 2023-08-24T07:40:25.000Z | [
"region:us"
] | JasiekKaczmarczyk | null | null | null | 0 | 21 | ---
dataset_info:
features:
- name: midi_filename
dtype: string
- name: pitch
sequence: int16
length: 128
- name: dstart_bin
sequence: int8
length: 128
- name: duration_bin
sequence: int8
length: 128
- name: velocity_bin
sequence: int8
length: 128
splits:
- name: train
num_bytes: 168083130
num_examples: 238919
- name: validation
num_bytes: 20721368
num_examples: 29453
- name: test
num_bytes: 20062265
num_examples: 28531
download_size: 77193117
dataset_size: 208866763
---
# Dataset Card for "giant-midi-quantized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TaylorAI/pubmed_noncommercial | 2023-09-02T19:11:51.000Z | [
"region:us"
] | TaylorAI | null | null | null | 5 | 21 | Entry not found |
EleutherAI/coqa | 2023-08-30T10:44:28.000Z | [
"region:us"
] | EleutherAI | CoQA is a large-scale dataset for building Conversational Question Answering
systems. The goal of the CoQA challenge is to measure the ability of machines to
understand a text passage and answer a series of interconnected questions that
appear in a conversation. | @misc{reddy2018coqa,
title={CoQA: A Conversational Question Answering Challenge},
author={Siva Reddy and Danqi Chen and Christopher D. Manning},
year={2018},
eprint={1808.07042},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 0 | 21 | Entry not found |
deven367/babylm-100M | 2023-09-06T04:28:32.000Z | [
"region:us"
] | deven367 | null | null | null | 0 | 21 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 567957485
num_examples: 10176300
- name: valid
num_bytes: 54930583
num_examples: 986022
- name: test
num_bytes: 59992087
num_examples: 1008854
download_size: 429914407
dataset_size: 682880155
---
# Dataset Card for "babylm-100M"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SeyedAli/Persian-Text-Emotion | 2023-09-09T15:44:06.000Z | [
"task_categories:text-classification",
"language:fa",
"license:mit",
"region:us"
] | SeyedAli | null | null | null | 1 | 21 | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 1612793
num_examples: 5558
- name: test
num_bytes: 409414
num_examples: 1390
download_size: 1143196
dataset_size: 2022207
task_categories:
- text-classification
language:
- fa
---
Dataset Classes:
* joy: 0
* sad: 1
* anger: 2
* disgust: 3
* fear: 4
* surprise: 5 |
MoaazId/cityscape | 2023-09-11T13:01:38.000Z | [
"region:us"
] | MoaazId | null | null | null | 0 | 21 | Entry not found |
amitrajitbh1/communities_content | 2023-09-13T01:51:37.000Z | [
"region:us"
] | amitrajitbh1 | null | null | null | 0 | 21 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: author
dtype: string
- name: subreddit
dtype: string
- name: subreddit_id
dtype: string
- name: id
dtype: string
- name: content
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 1745194094
num_examples: 850001
download_size: 1053929701
dataset_size: 1745194094
---
# Dataset Card for "communities_content"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
elliotthwang/guanaco-llama2-chinese-1k | 2023-09-13T01:47:38.000Z | [
"region:us"
] | elliotthwang | null | null | null | 0 | 21 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1348677
num_examples: 1000
download_size: 0
dataset_size: 1348677
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "guanaco-llama2-chinese-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/squad_id_train_100_eval_10 | 2023-09-13T13:51:30.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 21 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 1610094
num_examples: 1017
- name: validation
num_bytes: 62544
num_examples: 53
download_size: 29364
dataset_size: 1672638
---
# Dataset Card for "squad_id_train_100_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Arrivedercis/finreport-llama2-5k | 2023-09-16T02:49:04.000Z | [
"region:us"
] | Arrivedercis | null | null | null | 0 | 21 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2293425
num_examples: 10000
download_size: 1144776
dataset_size: 2293425
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "finreport-llama2-5k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Divya1287/llama2 | 2023-09-20T06:33:37.000Z | [
"task_categories:text-generation",
"task_categories:conversational",
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"license:openrail",
"region:us"
] | Divya1287 | null | null | null | 0 | 21 | ---
license: openrail
task_categories:
- text-generation
- conversational
- question-answering
language:
- en
pretty_name: prompt
size_categories:
- 1K<n<10K
--- |
DhruvShek/synapsellm-v0-1 | 2023-09-14T14:57:03.000Z | [
"region:us"
] | DhruvShek | null | null | null | 0 | 21 | Entry not found |
FanChen0116/bus_few4_80x_pvi | 2023-09-26T16:25:07.000Z | [
"region:us"
] | FanChen0116 | null | null | null | 0 | 21 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: tokens
sequence: string
- name: labels
sequence:
class_label:
names:
'0': O
'1': I-from_location
'2': B-from_location
'3': B-leaving_date
'4': I-leaving_date
'5': I-to_location
'6': B-to_location
- name: request_slot
sequence: string
splits:
- name: train
num_bytes: 922303
num_examples: 4480
- name: validation
num_bytes: 6900
num_examples: 35
- name: test
num_bytes: 70618
num_examples: 377
download_size: 104198
dataset_size: 999821
---
# Dataset Card for "bus_few4_80x_pvi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
clareandme/uniLabelClassification | 2023-10-05T11:49:52.000Z | [
"region:us"
] | clareandme | null | null | null | 0 | 21 | |
hdeldar/Persian-Text-llama2-1k-1 | 2023-09-22T12:24:12.000Z | [
"region:us"
] | hdeldar | null | null | null | 0 | 21 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1830325
num_examples: 1000
download_size: 1841325
dataset_size: 1830325
dataset_name: json
configs:
- config_name: default
data_files:
- split: train
path: data/data-*
---
# Persian-Text-QA: Lazy Llama 2 Formatting
This is a subset (1k samples) of the [`SeyedAli/Persian-Text-QA`](https://huggingface.co/datasets/SeyedAli/Persian-Text-QA) dataset, processed to match Llama 2's prompt format as described [in this article](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). It was created using the following [colab notebook](https://colab.research.google.com/drive/1Ad7a9zMmkxuXTOh1Z7-rNSICA4dybpM2?usp=sharing).
Useful if you don't want to reformat it by yourself (e.g., using a script). It was designed for [this article](https://mlabonne.github.io/blog/posts/Fine_Tune_Your_Own_Llama_2_Model_in_a_Colab_Notebook.html) about fine-tuning a Llama 2 (chat) model in a Google Colab.
|
seank0602/bluemoon_fandom_rp | 2023-09-23T19:40:42.000Z | [
"region:us"
] | seank0602 | null | null | null | 0 | 21 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 260278392
num_examples: 3338
download_size: 152371862
dataset_size: 260278392
---
# Dataset Card for "bluemoon_fandom_rp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
JzJd/posts | 2023-09-26T06:35:43.000Z | [
"license:afl-3.0",
"region:us"
] | JzJd | null | null | null | 0 | 21 | ---
license: afl-3.0
---
|
pmpc/processed-old-with-embeddings | 2023-09-26T10:56:50.000Z | [
"region:us"
] | pmpc | null | null | null | 0 | 21 | ---
dataset_info:
- config_name: default
features:
- name: slug
dtype: string
- name: text_chunk
dtype: string
- name: embedding
sequence: float64
splits:
- name: train
num_bytes: 17448677826
num_examples: 3655376
download_size: 14805980593
dataset_size: 17448677826
- config_name: small
features:
- name: slug
dtype: string
- name: text_chunk
dtype: string
- name: embedding
sequence: float32
splits:
- name: train
num_bytes: 475656222.6698008
num_examples: 99531
- name: test
num_bytes: 23459991.330199156
num_examples: 4909
download_size: 488406448
dataset_size: 499116214.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: small
data_files:
- split: train
path: small/train-*
- split: test
path: small/test-*
---
# Dataset Card for "processed-old-with-embeddings"
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Chunks of about 256 whitespace-separated words, with embeddings computed using the pretrained spaCy model [`de_dep_news_trf`](https://github.com/explosion/spacy-models/releases/tag/de_dep_news_trf-3.6.1).
The chunks respect sentence boundaries parsed with the same model; sentences are concatenated as long as the result does not exceed max_words = 256, so chunk length varies.
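The chunking rule just described can be sketched as follows. This is a simplified stand-in, not the actual pipeline: the real process uses spaCy's sentence segmentation, whereas here sentences are assumed pre-split, and word count is naive whitespace splitting:

```python
def chunk_sentences(sentences, max_words=256):
    """Greedily concatenate sentences into chunks of at most max_words
    whitespace-separated words; a single overlong sentence still becomes
    its own chunk, so chunk length varies."""
    chunks, current, current_len = [], [], 0
    for sent in sentences:
        n = len(sent.split())
        # Flush the current chunk if adding this sentence would exceed the cap.
        if current and current_len + n > max_words:
            chunks.append(" ".join(current))
            current, current_len = [], 0
        current.append(sent)
        current_len += n
    if current:
        chunks.append(" ".join(current))
    return chunks

print(chunk_sentences(["a b c", "d e", "f g h i"], max_words=5))
```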
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
This dataset contains texts from the legal domain in German language. (German court decisions)
## Dataset Structure
[More Information Needed]
### Data Instances
{'slug': 'ag-pinneberg-2003-12-19-68-ii-9302-weg',
'text_chunk': 'Die Berufung des Klägers gegen das am 23. April 2002 verkündete Urteil der 1. Zivilkammer des Landgerichts Wuppertal wird zurückgewiesen.\n\n Der Kläger trägt (...)',
'embedding': [-0.055155396461486816, -0.3904547095298767, -0.0033536632545292377, 0.8048776984214783, 0.30156993865966797, 0.5924882888793945, (...)]]}
### Data Fields
- `slug` (string): slug identifying the source document
- `text_chunk` (string): the text chunk
- `embedding` (sequence of floats): embedding of the chunk
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
This dataset contains texts from the legal domain in German language. (German court decisions)
### Citation Information
@inproceedings{10.1145/3383583.3398616,
author = {Ostendorff, Malte and Blume, Till and Ostendorff, Saskia},
title = {Towards an Open Platform for Legal Information},
year = {2020},
isbn = {9781450375856},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3383583.3398616},
doi = {10.1145/3383583.3398616},
booktitle = {Proceedings of the ACM/IEEE Joint Conference on Digital Libraries in 2020},
pages = {385–388},
numpages = {4},
keywords = {open data, open source, legal information system, legal data},
location = {Virtual Event, China},
series = {JCDL '20}
} |
vincenttttt/ultra_cut | 2023-09-27T16:08:49.000Z | [
"region:us"
] | vincenttttt | null | null | null | 0 | 21 | Entry not found |
nthngdy/babylm_10M | 2023-09-25T16:52:14.000Z | [
"region:us"
] | nthngdy | null | null | null | 0 | 21 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 55441912.303940535
num_examples: 1015494
download_size: 36288832
dataset_size: 55441912.303940535
---
# Dataset Card for "babylm_10M"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
fedryanto/quad2 | 2023-09-25T21:04:44.000Z | [
"region:us"
] | fedryanto | null | 0 | 21 | Entry not found | ||
nelson2424/Grocery_chatbot_text_v2 | 2023-09-26T00:16:21.000Z | [
"region:us"
] | nelson2424 | null | null | null | 0 | 21 | ---
dataset_info:
features:
- name: text
dtype: string
- name: category
dtype: string
- name: items
dtype: string
splits:
- name: train
num_bytes: 196348
num_examples: 1070
download_size: 59003
dataset_size: 196348
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Grocery_chatbot_text_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/squad_title_v3_train_10_eval_10 | 2023-09-26T06:36:13.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 21 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 276687
num_examples: 184
- name: validation
num_bytes: 64836
num_examples: 68
download_size: 71168
dataset_size: 341523
---
# Dataset Card for "squad_title_v3_train_10_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Edge-Pyxos/CRaQAn_v1 | 2023-09-26T16:11:40.000Z | [
"task_categories:question-answering",
"size_categories:n<1K",
"language:en",
"license:cc-by-4.0",
"legal",
"region:us"
] | Edge-Pyxos | null | null | null | 0 | 21 | ---
language:
- en
license: cc-by-4.0
size_categories:
- n<1K
task_categories:
- question-answering
pretty_name: craqan_v1
tags:
- legal
dataset_info:
features:
- name: title
dtype: string
- name: article
dtype: string
- name: article_titles
sequence: string
- name: article_sections
sequence: string
- name: section
dtype: string
- name: section_index
dtype: int64
- name: section_sentences
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: sentences_required
sequence: int64
- name: url
dtype: string
- name: time_downloaded
dtype: string
splits:
- name: train
num_bytes: 17788270
num_examples: 263
download_size: 0
dataset_size: 17788270
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Coreference Resolution in Question Answering (CRaQAn)
250+ question-answer pairs that require coreference resolution across sentences from selected Wikipedia passages.
## Generation Process
Given the relative complexity of our task (coreference resolution across passages for question-answering), we aimed
to avoid crowd-sourcing this dataset and instead focused on using LLMs to automate our process. Every question-answer
pair in the CRaQAn dataset was automatically generated using a Recursive Criticism and Improvement (RCI) loop. To
accomplish our RCI loop, we wrote a GENERATOR prompt and several REVIEWER prompts, which can be found [here](https://huggingface.co/datasets/Edge-Pyxos/CRaQAn_v1/tree/main/generation_demo/prompts).
## Review Process
Every question-answer pair in the CRaQAn v1 dataset was reviewed by at least two human reviewers. We intend for this to be a
high-trust and high-quality dataset that can be used for various applications. Every human reviewer was given the
following criteria. For each QA pair:
1. The question is clear and unambiguous with regard to the text.
2. The question is a single question, and not two separate or related questions joined by the word "and".
3. The question does not contain or assume any information outside of the required sentences.
4. The answer is correct and reasonably terse.
5. The question-answer pair must not rely on any information from outside the required sentences.
6. The question-answer pair relies on information from each of the required sentences.
7. The number of required sentences is 2 or 3.
8. The Markdown is correctly formatted.
## CRaQAn Usage
```python
import json

from datasets import load_dataset
import pandas as pd
from IPython.display import display, Markdown
# Load dataset.
craqan = load_dataset("Edge-Pyxos/CRaQAn_v1", split = "train")
df = pd.DataFrame(craqan)
# Fix issue with section_sentences that happens during Huggingface conversion.
df["section_sentences"] = df["section_sentences"].apply(json.loads)
# Visualize a sample from the dataset.
row = df.sample(1).squeeze()
sentences = ""
for idx, s in enumerate(row.section_sentences):
sentences += (" <mark> " + s["sentence"] + " </mark> ") if idx in row.sentences_required else " " + s["sentence"]
display(Markdown(f"# Article: {row.title}"))
display(Markdown(row.article_titles[row.section_index]))
display(Markdown(f"*Required Sentences: {row.sentences_required}*"))
display(Markdown(sentences))
display(Markdown(f"**Question**: " + row.question))
display(Markdown("**Answer**: " + row.answer))
display(Markdown("-------------------"))
```
## Demo Usage
We provide all prompts, code, and processes used to generate the CRaQAn-v1 dataset in our [demo notebook](https://huggingface.co/datasets/Edge-Pyxos/CRaQAn_v1/blob/main/generation_demo/create_dataset.ipynb).
|
MattCoddity/dockerNLcommands | 2023-10-06T08:35:01.000Z | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"region:us"
] | MattCoddity | null | null | null | 0 | 21 | ---
license: apache-2.0
task_categories:
- question-answering
language:
- en
size_categories:
- 10K<n<100K
---
# Natural Language to Docker Command Dataset
This dataset is designed to translate natural language instructions into Docker commands. It contains mappings of textual phrases to corresponding Docker commands, aiding in the development of models capable of understanding and translating user requests into executable Docker instructions.
## Dataset Format
Each entry in the dataset consists of a JSON object with the following keys:
- `input`: The natural language phrase.
- `instruction`: A static field indicating the task to translate the phrase into a Docker command.
- `output`: The corresponding Docker command.
### Example Entry
```json
{
"input": "Can you show me the digests of all the available Docker images?",
"instruction": "translate this sentence in docker command",
"output": "docker images --digests"
}
```
## Usage
This dataset can be utilized to train and evaluate models for a variety of applications including, but not limited to, Natural Language Processing (NLP), Command Line Interface (CLI) automation, and educational tools for Docker.
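As a rough sketch of how entries could be turned into instruction-tuning prompts (the prompt template below is a hypothetical choice for illustration, not a format prescribed by this dataset):

```python
# Format one dataset entry as a single training string. The entry keys
# (`input`, `instruction`, `output`) follow the dataset format described
# above; the "### ..." section template is an illustrative assumption.

def to_prompt(entry):
    return (
        f"### Instruction:\n{entry['instruction']}\n\n"
        f"### Input:\n{entry['input']}\n\n"
        f"### Response:\n{entry['output']}"
    )

entry = {
    "input": "Can you show me the digests of all the available Docker images?",
    "instruction": "translate this sentence in docker command",
    "output": "docker images --digests",
}

print(to_prompt(entry))
```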
## Commands coverage
- docker ps
- docker images
- docker stop
- docker kill
- docker login
## Contributing
We welcome contributions to improve this dataset. Please feel free to open a Pull Request or an Issue to discuss potential improvements, bug fixes, or other changes. |
Binaryy/cream_listings | 2023-10-01T13:15:46.000Z | [
"region:us"
] | Binaryy | null | null | null | 0 | 21 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: location
dtype: string
- name: features
sequence: string
- name: description
dtype: string
- name: images
sequence: string
- name: videos
sequence: 'null'
- name: available
dtype: bool
- name: price
dtype: int64
- name: attachedDocument
sequence: 'null'
- name: year
dtype: int64
- name: carCondition
dtype: string
- name: engineType
dtype: string
- name: colour
dtype: string
- name: model
dtype: string
- name: noOfBed
dtype: float64
- name: noOfBathroom
dtype: float64
- name: locationISO
dtype: string
- name: forRent
dtype: bool
- name: views
sequence: string
- name: thoseWhoSaved
sequence: string
- name: createdAt
dtype: string
- name: updatedAt
dtype: string
- name: __v
dtype: int64
- name: category._id
dtype: string
- name: category.title
dtype: string
- name: category.slug
dtype: string
- name: category.isAdminAllowed
dtype: string
- name: category.createdAt
dtype: string
- name: category.updatedAt
dtype: string
- name: category.__v
dtype: int64
- name: postedBy.pageViews.value
dtype: int64
- name: postedBy.pageViews.users
sequence: 'null'
- name: postedBy.totalSaved.value
dtype: int64
- name: postedBy.totalSaved.users
sequence: string
- name: postedBy._id
dtype: string
- name: postedBy.firstName
dtype: string
- name: postedBy.lastName
dtype: string
- name: postedBy.about
dtype: string
- name: postedBy.cover
dtype: string
- name: postedBy.email
dtype: string
- name: postedBy.password
dtype: string
- name: postedBy.isAdmin
dtype: bool
- name: postedBy.savedListing
sequence: string
- name: postedBy.isVerified
dtype: bool
- name: postedBy.verifiedProfilePicture
dtype: 'null'
- name: postedBy.profilePicture
dtype: string
- name: postedBy.pronoun
dtype: float64
- name: postedBy.userType
dtype: int64
- name: postedBy.accountType
dtype: int64
- name: postedBy.subscribed
dtype: bool
- name: postedBy.noOfSubscription
dtype: int64
- name: postedBy.totalListing
dtype: int64
- name: postedBy.sellerType
dtype: int64
- name: postedBy.createdAt
dtype: string
- name: postedBy.updatedAt
dtype: string
- name: postedBy.__v
dtype: int64
- name: postedBy.address
dtype: string
- name: postedBy.city
dtype: string
- name: postedBy.country
dtype: string
- name: postedBy.gender
dtype: string
- name: postedBy.nationality
dtype: string
- name: postedBy.verificationType
dtype: int64
- name: postedBy.dob
dtype: string
- name: postedBy.locationISO
dtype: string
- name: postedBy.state
dtype: string
- name: postedBy.zipCode
dtype: int64
- name: postedBy.otherNames
dtype: string
- name: postedBy.facebookUrl
dtype: string
- name: postedBy.instagramUrl
dtype: string
- name: postedBy.phoneNumber1
dtype: string
- name: postedBy.phoneNumber2
dtype: string
- name: postedBy.websiteUrl
dtype: string
- name: postedBy.accountName
dtype: string
- name: postedBy.accountNo
dtype: string
- name: postedBy.bankName
dtype: string
- name: string_features
dtype: string
- name: complete_description
dtype: string
splits:
- name: train
num_bytes: 133946
num_examples: 37
download_size: 96214
dataset_size: 133946
---
# Dataset Card for "cream_listings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Tyzuesh/CustomQADraupadiMurmu | 2023-09-29T08:08:11.000Z | [
"region:us"
] | Tyzuesh | null | null | null | 0 | 21 | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
Har11k/demotrain1 | 2023-09-29T08:11:36.000Z | [
"task_categories:tabular-classification",
"language:en",
"license:apache-2.0",
"region:us"
] | Har11k | null | null | null | 0 | 21 | ---
license: apache-2.0
task_categories:
- tabular-classification
language:
- en
--- |
liyucheng/ceval_all | 2023-09-29T10:07:50.000Z | [
"region:us"
] | liyucheng | null | null | null | 0 | 21 | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: val
num_bytes: 406528
num_examples: 1346
- name: test
num_bytes: 3720917
num_examples: 12342
- name: dev
num_bytes: 172688
num_examples: 260
download_size: 2792076
dataset_size: 4300133
---
# Dataset Card for "ceval_all"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
JasiekKaczmarczyk/giant-midi-sustain-masked | 2023-10-02T10:49:22.000Z | [
"region:us"
] | JasiekKaczmarczyk | null | null | null | 0 | 21 | ---
dataset_info:
features:
- name: midi_filename
dtype: string
- name: source
dtype: string
- name: pitch
sequence: int16
length: 128
- name: dstart
sequence: float32
length: 128
- name: duration
sequence: float32
length: 128
- name: velocity
sequence: int16
length: 128
- name: masking_spaces
struct:
- name: <Random Mask>
sequence: bool
length: 128
- name: <LH Mask>
sequence: bool
length: 128
- name: <RH Mask>
sequence: bool
length: 128
- name: <Harmonic Root Mask>
sequence: bool
length: 128
- name: <Harmonic Outliers Mask>
sequence: bool
length: 128
splits:
- name: train
num_bytes: 453725935
num_examples: 239612
- name: validation
num_bytes: 55936260
num_examples: 29544
- name: test
num_bytes: 52710054
num_examples: 27844
download_size: 211201981
dataset_size: 562372249
---
# Dataset Card for "giant-midi-sustain-masked"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Luciya/llama-2-nuv-intent-big-multi | 2023-10-02T10:41:23.000Z | [
"region:us"
] | Luciya | null | null | null | 0 | 21 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 862786
num_examples: 1563
download_size: 132778
dataset_size: 862786
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "llama-2-nuv-intent-big-multi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
librarian-bots/paper-recommendations | 2023-10-07T12:37:16.000Z | [
"region:us"
] | librarian-bots | null | null | null | 0 | 21 | ---
dataset_info:
features:
- name: paper_url
dtype: string
- name: comment
dtype: string
splits:
- name: train
num_bytes: 66665
num_examples: 67
download_size: 22837
dataset_size: 66665
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "paper-recommendations"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
atmallen/sloppy_addition_alice_1.0_easy_2 | 2023-10-05T17:49:53.000Z | [
"region:us"
] | atmallen | null | null | null | 0 | 21 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: statement
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: true_label
dtype: bool
- name: id
dtype: int64
splits:
- name: train
num_bytes: 5621956.01008
num_examples: 131564
- name: validation
num_bytes: 561701.493
num_examples: 13140
- name: test
num_bytes: 565375.7065
num_examples: 13246
download_size: 0
dataset_size: 6749033.2095800005
---
# Dataset Card for "sloppy_addition_alice_1.0_easy_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Falah/emotion_prompts | 2023-10-05T05:53:31.000Z | [
"region:us"
] | Falah | null | null | null | 0 | 21 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 4626262
num_examples: 10000
download_size: 669543
dataset_size: 4626262
---
# Dataset Card for "emotion_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
juewang/misc-data | 2023-10-07T15:50:01.000Z | [
"language:en",
"region:us"
] | juewang | null | null | null | 0 | 21 | ---
language:
- en
---
# juewang/target-data |
kowndinya23/flan2021-submix-mistral-512 | 2023-10-08T14:56:10.000Z | [
"region:us"
] | kowndinya23 | null | null | null | 0 | 21 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: task_source
dtype: string
- name: task_name
dtype:
class_label:
names:
'0': aeslc:1.0.0
'1': ag_news_subset:1.0.0
'2': ai2_arc/ARC-Challenge:1.0.0
'3': ai2_arc/ARC-Easy:1.0.0
'4': anli/r1:0.1.0
'5': anli/r2:0.1.0
'6': anli/r3:0.1.0
'7': bool_q:1.0.0
'8': cnn_dailymail:3.4.0
'9': coqa:1.0.0
'10': cosmos_qa:1.0.0
'11': definite_pronoun_resolution:1.1.0
'12': drop:2.0.0
'13': fix_punct
'14': gem/common_gen:1.1.0
'15': gem/dart:1.1.0
'16': gem/e2e_nlg:1.1.0
'17': gem/web_nlg_en:1.1.0
'18': gem/wiki_lingua_english_en:1.1.0
'19': gigaword:1.2.0
'20': glue/cola:2.0.0
'21': glue/mnli:2.0.0
'22': glue/mrpc:2.0.0
'23': glue/qnli:2.0.0
'24': glue/qqp:2.0.0
'25': glue/sst2:2.0.0
'26': glue/stsb:2.0.0
'27': glue/wnli:2.0.0
'28': hellaswag:1.1.0
'29': huggingface:xsum
'30': imdb_reviews/plain_text:1.0.0
'31': lambada:1.0.0
'32': math_dataset/algebra__linear_1d:1.0.0
'33': multi_news:1.0.0
'34': natural_questions_open:1.0.0
'35': newsroom:1.0.0
'36': openbookqa:0.1.0
'37': opinion_abstracts_idebate
'38': opinion_abstracts_rotten_tomatoes
'39': para_crawl_enes
'40': paws_wiki:1.1.0
'41': piqa:1.0.0
'42': quac:1.0.0
'43': samsum:1.0.0
'44': sentiment140:1.0.0
'45': snli:1.1.0
'46': squad/v1.1:3.0.0
'47': squad/v2.0:3.0.0
'48': story_cloze/2016:1.0.0
'49': super_glue/cb:1.0.2
'50': super_glue/copa:1.0.2
'51': super_glue/multirc:1.0.2
'52': super_glue/record:1.0.2
'53': super_glue/rte:1.0.2
'54': super_glue/wic:1.0.2
'55': super_glue/wsc.fixed:1.0.2
'56': trec:1.0.0
'57': trivia_qa/rc:1.1.0
'58': true_case
'59': unified_qa_science_inst
'60': winogrande:1.1.0
'61': wmt14_translate/fr-en:1.0.0
'62': wmt16_translate/cs-en:1.0.0
'63': wmt16_translate/de-en:1.0.0
'64': wmt16_translate/fi-en:1.0.0
'65': wmt16_translate/ro-en:1.0.0
'66': wmt16_translate/ru-en:1.0.0
'67': wmt16_translate/tr-en:1.0.0
'68': word_segment
'69': yelp_polarity_reviews:0.2.0
- name: template_type
dtype: string
splits:
- name: train
num_bytes: 2778586100.5139294
num_examples: 4069943
- name: validation
num_bytes: 28066843.486070484
num_examples: 41111
download_size: 1713188019
dataset_size: 2806652944.0
---
# Dataset Card for "flan2021-submix-mistral-512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |